From liuxinguo at huawei.com Mon Dec 1 00:33:43 2014
From: liuxinguo at huawei.com (liuxinguo)
Date: Mon, 1 Dec 2014 00:33:43 +0000
Subject: [openstack-dev] [devstack] image create mysql error
Message-ID: 

When our CI runs devstack, an error occurs during "image create mysql". The log is pasted below:

22186 2014-11-29 21:11:48.611 | ++ basename /opt/stack/new/devstack/files/mysql.qcow2 .qcow2
22187 2014-11-29 21:11:48.623 | + image_name=mysql
22188 2014-11-29 21:11:48.624 | + disk_format=qcow2
22189 2014-11-29 21:11:48.624 | + container_format=bare
22190 2014-11-29 21:11:48.624 | + is_arch ppc64
22191 2014-11-29 21:11:48.628 | ++ uname -m
22192 2014-11-29 21:11:48.710 | + [[ i686 == \p\p\c\6\4 ]]
22193 2014-11-29 21:11:48.710 | + '[' bare = bare ']'
22194 2014-11-29 21:11:48.710 | + '[' '' = zcat ']'
22195 2014-11-29 21:11:48.710 | + openstack --os-token 5387fe9c6f6d4182b09461fe232501db --os-url http://127.0.0.1:9292 image create mysql --public --container-format=bare --disk-format qcow2
22196 2014-11-29 21:11:57.275 | ERROR: openstack
22197 2014-11-29 21:11:57.275 |
22198 2014-11-29 21:11:57.275 | 401 Unauthorized
22199 2014-11-29 21:11:57.275 |
22200 2014-11-29 21:11:57.275 |
22201 2014-11-29 21:11:57.275 |
22202 2014-11-29 21:11:57.275 | This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required.

22203 2014-11-29 21:11:57.275 |
22204 2014-11-29 21:11:57.276 |
22205 2014-11-29 21:11:57.276 | (HTTP 401)
22206 2014-11-29 21:11:57.344 | + exit_trap

Can anyone give me a hint? Thanks.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gus at inodes.org Mon Dec 1 00:56:57 2014
From: gus at inodes.org (Angus Lees)
Date: Mon, 01 Dec 2014 00:56:57 +0000
Subject: [openstack-dev] [stable] Re: [neutron] the hostname regex pattern fix also changed behaviour :(
References: <547860E4.7080304@redhat.com>
Message-ID: 

On Fri Nov 28 2014 at 10:49:21 PM Ihar Hrachyshka wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA512
>
> On 28/11/14 01:26, Angus Lees wrote:
> > Context: https://review.openstack.org/#/c/135616
> >
> > If we're happy disabling the check for components being all-digits, then
> > a minimal change to the existing regex that could be backported might be
> > something like
> > r'(?=^.{1,254}$)(^(?:[a-zA-Z0-9_](?:[a-zA-Z0-9_-]{,61}[a-zA-Z0-9])\.)*(?:[a-zA-Z]{2,})$)'
> >
> > Alternatively (and clearly preferable for Kilo), Kevin has a replacement
> > underway that rewrites this entirely to conform to modern RFCs in
> > I003cf14d95070707e43e40d55da62e11a28dfa4e
>
> With the change, will existing instances work as before?

Yes, this cuts straight to the heart of the matter: what is the purpose of these validation checks? Specifically, if someone is using an "invalid" hostname that passed the previous check but doesn't pass an improved/updated check, should we continue to allow it?

I figure our role here is either to allow exactly what the relevant standards say, and deliberately reject/break anything that falls outside that - or to be liberal, restrict only to some sort of 'safe' input, and then let the underlying system perform the final validation.
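The backport candidate quoted above can be exercised directly. A minimal sketch (the test hostnames are illustrative; note Python's `re` accepts the `{,61}` bound as `{0,61}`):

```python
import re

# Gus's minimal backportable pattern, as quoted above.
HOSTNAME_RE = re.compile(
    r'(?=^.{1,254}$)'
    r'(^(?:[a-zA-Z0-9_](?:[a-zA-Z0-9_-]{,61}[a-zA-Z0-9])\.)*'
    r'(?:[a-zA-Z]{2,})$)'
)

cases = {
    'example.com': True,       # ordinary FQDN
    '123.example.com': True,   # all-digits component now allowed
    'example.123': False,      # all-digits TLD still rejected
    'myhost': True,            # unqualified, all alpha
    'myhost1': False,          # unqualified with a digit: too strict, as noted
}
for name, expected in cases.items():
    assert bool(HOSTNAME_RE.match(name)) == expected, name
```

The last case shows the unqualified-name restriction discussed below: any name without a '.' falls through to the alpha-only TLD branch.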
I can see plenty of reasons for either approach, but somewhere in the middle doesn't seem to make much sense - and I think the approach chosen also dictates any migration path.

As they currently stand, I think both Kevin's and my alternative above _should_ be more liberal than the original (before the "fix") regex. Specifically, they now allow all-digits hostname components - in line with newer RFCs. However, TLD handling is a little different between the variants:
- Kevin's continues to reject an all-digits TLD (also following RFC guidance)
- mine and the original force TLDs to be all alpha (a-z; no digits or dash/underscore)

The TLD handling is more interesting because an unqualified hostname (with no '.' characters) hits the TLD logic in all variants, but the original has a "\.?" quirk that means an unqualified hostname is forced to end with at least 2 alpha-only chars. As written above, mine is probably too restrictive for unqualified names, and this would need to be fixed.

As the above shows, describing regex patterns in prose is long, boring and inaccurate. Someone who is going to have to approve the change should just dictate what they want here and then we'll go do that :P I suggest they also consider the DoS-fix-backport and the Kilo-and-forwards cases separately.

 - Gus

From alawson at aqorn.com Mon Dec 1 01:28:49 2014
From: alawson at aqorn.com (Adam Lawson)
Date: Sun, 30 Nov 2014 17:28:49 -0800
Subject: [openstack-dev] Board election
In-Reply-To: References: Message-ID: 

Everyone on my team is also seeing the same errors. Will submit a bug, but that's a lot of people to file bugs. ; )

*Adam Lawson*
AQORN, Inc.
427 North Tatnall Street Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext.
101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Sun, Nov 30, 2014 at 8:17 AM, Anne Gentle wrote:
> Hi Gary and Anita,
> We do have a bug tracker for www.openstack.org at:
>
> https://bugs.launchpad.net/openstack-org
>
> Log a bug there to make sure the web devs at the Foundation get it.
> Thanks,
> Anne
>
> On Sun, Nov 30, 2014 at 8:44 AM, Gary Kotton wrote:
>
>> Hi,
>> When I log into the site I am unable to nominate people. Any ideas? I
>> get: "*Your account credentials do not allow you to nominate candidates.*"
>> Any idea how to address that?
>> Thanks
>> Gary
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From juvenboy1987 at gmail.com Mon Dec 1 01:37:51 2014
From: juvenboy1987 at gmail.com (lu jander)
Date: Mon, 1 Dec 2014 09:37:51 +0800
Subject: [openstack-dev] [sahara] sahara Integration tests issue
Message-ID: 

Hi Sahara dev,

I am working on the integration tests in sahara (with nova network). When I run the CDH tests with "tox -e cdh", they fail with the error below; checking the sahara log shows a heat error. This error reminds me of a bug whose fix is not yet merged: https://bugs.launchpad.net/sahara/+bug/1392738 (I checked out this patch, and it still doesn't seem to work), so I manually set auto_security_group to false in test_cdh_gating.py, but I still hit this error and the integration test did not pass.
Below is the integration tests error and the sahara log.

Tests Error:

Traceback (most recent call last):
  File "/opt/stack/sahara/sahara/tests/integration/tests/gating/test_cdh_gating.py", line 305, in test_cdh_plugin_gating
    self._create_cluster()
  File "sahara/tests/integration/tests/base.py", line 49, in wrapper
    ITestCase.print_error_log(message, e)
  File "/opt/stack/sahara/.tox/integration/local/lib/python2.7/site-packages/oslo/utils/excutils.py", line 82, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "sahara/tests/integration/tests/base.py", line 46, in wrapper
    fct(*args, **kwargs)
  File "/opt/stack/sahara/sahara/tests/integration/tests/gating/test_cdh_gating.py", line 191, in _create_cluster
    self.poll_cluster_state(self.cluster_id)
  File "sahara/tests/integration/tests/base.py", line 237, in poll_cluster_state
    self.fail('Cluster state == \'Error\'.')
  File "/opt/stack/sahara/.tox/integration/local/lib/python2.7/site-packages/unittest2/case.py", line 666, in fail
    raise self.failureException(msg)
AssertionError: Cluster state == 'Error'.
Ran 1 tests in 147.204s (+21.220s)
FAILED (id=47, failures=1)
error: testr failed (1)
ERROR: InvocationError: '/opt/stack/sahara/.tox/integration/bin/python setup.py test --slowest --testr-args=cdh'

Sahara Log:

2014-12-01 09:02:44.001 ERROR sahara.service.ops [-] Error during operating cluster 'test-cluster-cdh' (reason: Heat stack failed with status CREATE_FAILED)
2014-12-01 09:02:44.001 TRACE sahara.service.ops Traceback (most recent call last):
2014-12-01 09:02:44.001 TRACE sahara.service.ops   File "/opt/stack/sahara/sahara/service/ops.py", line 141, in wrapper
2014-12-01 09:02:44.001 TRACE sahara.service.ops     f(cluster_id, *args, **kwds)
2014-12-01 09:02:44.001 TRACE sahara.service.ops   File "/opt/stack/sahara/sahara/service/ops.py", line 227, in _provision_cluster
2014-12-01 09:02:44.001 TRACE sahara.service.ops     INFRA.create_cluster(cluster)
2014-12-01 09:02:44.001 TRACE sahara.service.ops   File "/opt/stack/sahara/sahara/service/heat_engine.py", line 57, in create_cluster
2014-12-01 09:02:44.001 TRACE sahara.service.ops     launcher.launch_instances(cluster, target_count)
2014-12-01 09:02:44.001 TRACE sahara.service.ops   File "/opt/stack/sahara/sahara/service/heat_engine.py", line 209, in launch_instances
2014-12-01 09:02:44.001 TRACE sahara.service.ops     heat.wait_stack_completion(stack.heat_stack)
2014-12-01 09:02:44.001 TRACE sahara.service.ops   File "/opt/stack/sahara/sahara/utils/openstack/heat.py", line 60, in wait_stack_completion
2014-12-01 09:02:44.001 TRACE sahara.service.ops
2014-12-01 09:02:44.001 TRACE sahara.service.ops HeatStackException: Heat stack failed with status CREATE_FAILED
2014-12-01 09:02:44.001 TRACE sahara.service.ops

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From anteaya at anteaya.info Mon Dec 1 01:46:02 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Sun, 30 Nov 2014 20:46:02 -0500 Subject: [openstack-dev] Board election In-Reply-To: References: Message-ID: <547BC85A.7090303@anteaya.info> On 11/30/2014 08:28 PM, Adam Lawson wrote: > Everyone on my team are also seeing the same errors. Will submit a bug but > that's a lot of people to file bugs. ; ) Or search by most recently reported bug and add to the bug report for whomever reports first: https://bugs.launchpad.net/openstack-org/+bugs?orderby=-id&start=0 Thanks Adam, Anita. > > > *Adam Lawson* > > AQORN, Inc. > 427 North Tatnall Street > Ste. 58461 > Wilmington, Delaware 19801-2230 > Toll-free: (844) 4-AQORN-NOW ext. 101 > International: +1 302-387-4660 > Direct: +1 916-246-2072 > > > On Sun, Nov 30, 2014 at 8:17 AM, Anne Gentle wrote: > >> Hi Gary and Anita, >> We do have a bug tracker for www.openstack.org at: >> >> https://bugs.launchpad.net/openstack-org >> >> Log a bug there to make sure the web devs at the Foundation get it. >> Thanks, >> Anne >> >> On Sun, Nov 30, 2014 at 8:44 AM, Gary Kotton wrote: >> >>> Hi, >>> When I log into the site I am unable to nominate people. Any ideas? I >>> get: "*Your account credentials do not allow you to nominate candidates.* >>> *?* >>> Any idea how to address that? 
>>> Thanks >>> Gary >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jamielennox at redhat.com Mon Dec 1 01:51:20 2014 From: jamielennox at redhat.com (Jamie Lennox) Date: Mon, 01 Dec 2014 11:51:20 +1000 Subject: [openstack-dev] [keystone][oslo] Handling contexts and policy enforcement in services Message-ID: <1417398680.3087.1.camel@redhat.com> TL;DR: I think we can handle most of oslo.context with some additions to auth_token middleware and simplify policy enforcement (from a service perspective) at the same time. There is currently a push to release oslo.context as a library, for reference: https://github.com/openstack/oslo.context/blob/master/oslo_context/context.py Whilst I love the intent to standardize this functionality I think that many of the requirements in there are incorrect and don't apply to all services. It is my understanding for example that read_only, show_deleted are essentially nova requirements, and the use of is_admin needs to be killed off, not standardized. Currently each service builds a context based on headers made available from auth_token middleware and some additional interpretations based on that user authentication. Each service does this slightly differently based on its needs/when it copied it from nova. I propose that auth_token middleware essentially handle the creation and management of an authentication object that will be passed and used by all services. 
This will standardize so much of the oslo.context library that I'm not sure it will still be needed. I bring this up now as I am wanting to push this way and don't want to change things after everyone has adopted oslo.context.

The current release of auth_token middleware creates and passes to services (via env['keystone.token_auth']) an auth plugin that can be passed to clients to use the current user authentication. My intention here is to expand that object to expose all of the authentication information required for the services to operate.

There are two components to context that I can see:
- The current authentication information that is retrieved from auth_token middleware.
- service specific context added based on that user information eg read_only, show_deleted, is_admin, resource_id

Regarding the first point of current authentication there are three places I can see this used:
- communicating with other services as that user
- associating resources with a user/project
- policy enforcement

Addressing each of the 'current authentication' needs:

- As mentioned, for service to service communication auth_token middleware already provides an auth_plugin that can be used with (at this point) most of the clients. This greatly simplifies reusing an existing token and correctly using the service catalog, as each client would do this differently. In future this plugin will be extended to provide support for concepts such as filling in the X-Service-Token [1] on behalf of the service, managing the request id, and generally standardizing service->service communication without requiring explicit support from every project and client.

- Given that this authentication plugin is built within auth_token middleware, it is a fairly trivial step to provide public properties on this object to give access to the current user_id, project_id and other relevant authentication data that the services can access.
This is fairly well handled today but it means it is done without the service having to fetch all these objects from headers. - With upcoming changes to policy to handle features such as the X-Service-Token the existing context will need to gain a bunch of new entries. With the keystone team looking to wrap policy enforcement into its own standalone library it makes more sense to provide this authentication object directly to policy enforcement. This will allow the keystone team to manipulate policy data from both auth_token and the enforcement side, letting us introduce new features to policy transparent to the services. It will also standardize the naming of variables within these policy files. What is left for a context object after this is managing serialization and deserialization of this auth object and any additional fields (read_only etc) that are generally calculated at context creation time. This would be a very small library. There are still a number of steps to getting there: - Adding enough data to the existing authentication plugin to allow policy enforcement and general usage. - Making the authentication object serializable for transmitting between services. - Extracting policy enforcement into a library. However I think that this approach brings enough benefits to hold off on releasing and standardizing the use of the current context objects. I'd love to hear everyone thoughts on this, and where it would fall down. I see there could be some issues with how the context would fit into nova's versioned objects for example - but I think this would be the same issues that an oslo.context library would face anyway. Jamie [1] This is where service->service communication includes the service token as well as the user token to allow smarter policy and resource access. 
For example, a user can't access certain neutron functions directly, but it should be allowed when nova calls neutron on behalf of a user; or an object that a service made on behalf of a user can only be deleted when the service makes the request on behalf of that user.

From stefano at openstack.org Mon Dec 1 01:55:25 2014
From: stefano at openstack.org (Stefano Maffulli)
Date: Sun, 30 Nov 2014 17:55:25 -0800
Subject: [openstack-dev] Board election
In-Reply-To: References: Message-ID: <547BCA8D.5040900@openstack.org>

On 11/30/2014 06:44 AM, Gary Kotton wrote:
> When I log into the site I am unable to nominate people. Any ideas? I
> get: "*Your account credentials do not allow you to nominate candidates.*"

That means that the account you're using is not the account of an Individual Member of the OpenStack Foundation. Only members of the Foundation can participate in elections.

> Any idea how to address that?

For you and others, the only way to proceed is to file a bug on https://bugs.launchpad.net/openstack-org so the web team can help out. If you want to become a Member of the Foundation (since there is no UI to change an account from simply a registered user of openstack.org to Individual Member), you should provide in the bug report your name and state "I agree to the terms stated on https://www.openstack.org/join/register, I want to register as an Individual Member of the OpenStack Foundation".

HTH
stef

From henry4hly at gmail.com Mon Dec 1 03:04:37 2014
From: henry4hly at gmail.com (henry hly)
Date: Mon, 1 Dec 2014 11:04:37 +0800
Subject: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories
In-Reply-To: References: Message-ID: 

FWaaS is typically classified as L4-L7, but if these services are developed standalone it would be very difficult to implement them in a distributed manner.
For example, with W-E traffic control in DVR mode, we can't rely on an external python client REST API call; the policy execution module must be loaded as an L3 agent extension, or as a separate service-policy agent on the compute node. My suggestion is to start with LB and VPN as a trial, since they can never be distributed. FW is very tightly coupled with L3, so leaving it for discussion some time later may be smoother.

On Wed, Nov 19, 2014 at 6:31 AM, Mark McClain wrote:
> All-
>
> Over the last several months, the members of the Networking Program have
> been discussing ways to improve the management of our program. When the
> Quantum project was initially launched, we envisioned a combined service
> that included all things network related. This vision served us well in the
> early days as the team mostly focused on building out layers 2 and 3;
> however, we've run into growth challenges as the project started building
> out layers 4 through 7. Initially, we thought that development would float
> across all layers of the networking stack, but the reality is that the
> development concentrates around either layer 2 and 3 or layers 4 through 7.
> In the last few cycles, we've also discovered that these concentrations have
> different velocities and a single core team forces one to match the other to
> the detriment of the one forced to slow down.
>
> Going forward we want to divide the Neutron repository into two separate
> repositories led by a common Networking PTL. The current mission of the
> program will remain unchanged [1]. The split would be as follows:
>
> Neutron (Layer 2 and 3)
> - Provides REST service and technology agnostic abstractions for layer 2 and
> layer 3 services.
>
> Neutron Advanced Services Library (Layers 4 through 7)
> - A python library which is co-released with Neutron
> - The advanced service library provides controllers that can be configured to
> manage the abstractions for layer 4 through 7 services.
>
> Mechanics of the split:
> - Both repositories are members of the same program, so the specs repository
> would continue to be shared during the Kilo cycle. The PTL and the drivers
> team will retain approval responsibilities they now share.
> - The split would occur around Kilo-1 (subject to coordination of the Infra
> and Networking teams). The timing is designed to enable the proposed REST
> changes to land around the time of the December development sprint.
> - The core team for each repository will be determined and proposed by Kyle
> Mestery for approval by the current core team.
> - The Neutron Server and the Neutron Adv Services Library would be co-gated
> to ensure that incompatibilities are not introduced.
> - The Advanced Service Library would be an optional dependency of Neutron, so
> integrated cross-project checks would not be required to enable it during
> testing.
> - The split should not adversely impact operators and the Networking program
> should maintain standard OpenStack compatibility and deprecation cycles.
>
> This proposal to divide into two repositories achieved a strong consensus at
> the recent Paris Design Summit and it does not conflict with the current
> governance model or any proposals circulating as part of the 'Big Tent'
> discussion.
>
> Kyle and Mark
>
> [1] https://git.openstack.org/cgit/openstack/governance/plain/reference/programs.yaml

From stendulker at gmail.com Mon Dec 1 03:40:56 2014
From: stendulker at gmail.com (Shivanand Tendulker)
Date: Mon, 1 Dec 2014 09:10:56 +0530
Subject: [openstack-dev] [Ironic] Do we need an IntrospectionInterface?
In-Reply-To: References: <5475F5B4.3080803@redhat.com>
Message-ID: 

+1 for separate interface.
--Shivanand On Fri, Nov 28, 2014 at 7:20 PM, Lucas Alvares Gomes wrote: > Hi, > > Thanks for putting it up Dmitry. I think the idea is fine too, I > understand that people may want to use in-band discovery for drivers like > iLO or DRAC and having those on a separated interface allow us to composite > a driver to do it (which is ur use case 2. ). > > So, +1. > > Lucas > > On Wed, Nov 26, 2014 at 3:45 PM, Imre Farkas wrote: > >> On 11/26/2014 02:20 PM, Dmitry Tantsur wrote: >> >>> Hi all! >>> >>> As our state machine and discovery discussion proceeds, I'd like to ask >>> your opinion on whether we need an IntrospectionInterface >>> (DiscoveryInterface?). Current proposal [1] suggests adding a method for >>> initiating a discovery to the ManagementInterface. IMO it's not 100% >>> correct, because: >>> 1. It's not management. We're not changing anything. >>> 2. I'm aware that some folks want to use discoverd-based discovery [2] >>> even for DRAC and ILO (e.g. for vendor-specific additions that can't be >>> implemented OOB). >>> >>> Any ideas? >>> >>> Dmitry. >>> >>> [1] https://review.openstack.org/#/c/100951/ >>> [2] https://review.openstack.org/#/c/135605/ >>> >>> >> Hi Dmitry, >> >> I see the value in using the composability of our driver interfaces, so I >> vote for having a separate IntrospectionInterface. Otherwise we wouldn't >> allow users to use eg. the DRAC driver with an in-band but more powerful hw >> discovery. >> >> Imre >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pradip.interra at gmail.com Mon Dec 1 04:03:14 2014 From: pradip.interra at gmail.com (Pradip Mukhopadhyay) Date: Mon, 1 Dec 2014 09:33:14 +0530 Subject: [openstack-dev] [All] Programatically re-starting OpenStack services Message-ID: Hello, Are there ways (which pattern might be preferred) by which one can programatically restart different OpenStack services? For example: if one wants to restart cinder-scheduler or heat-cfn? Thanks in advance, Pradip -------------- next part -------------- An HTML attachment was scrubbed... URL: From 11msccssbashir at seecs.edu.pk Mon Dec 1 04:24:40 2014 From: 11msccssbashir at seecs.edu.pk (Sadia Bashir) Date: Mon, 1 Dec 2014 09:24:40 +0500 Subject: [openstack-dev] [All] Programatically re-starting OpenStack services In-Reply-To: References: Message-ID: Hi, What do you mean by programatically? Do you want to restart services via a script or want to orchestrate restarting services from within openstack? If it is via script, you can write a bash script as follows: service cinder-scheduler restart service heat-api-cfn restart On Mon, Dec 1, 2014 at 9:03 AM, Pradip Mukhopadhyay < pradip.interra at gmail.com> wrote: > Hello, > > > Are there ways (which pattern might be preferred) by which one can > programatically restart different OpenStack services? > > For example: if one wants to restart cinder-scheduler or heat-cfn? > > > > Thanks in advance, > Pradip > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From john.griffith8 at gmail.com Mon Dec 1 04:30:01 2014
From: john.griffith8 at gmail.com (John Griffith)
Date: Sun, 30 Nov 2014 21:30:01 -0700
Subject: [openstack-dev] [cinder][nova] proper syncing of cinder volume state
In-Reply-To: <2960F1710CFACC46AF0DBFEE85B103BB231CA461@G9W0723.americas.hpqcorp.net>
References: <2960F1710CFACC46AF0DBFEE85B103BB231CA461@G9W0723.americas.hpqcorp.net>
Message-ID: 

On Fri, Nov 28, 2014 at 11:25 AM, D'Angelo, Scott wrote:
> A Cinder blueprint has been submitted to allow the python-cinderclient to
> involve the back end storage driver in resetting the state of a cinder
> volume:
>
> https://blueprints.launchpad.net/cinder/+spec/reset-state-with-driver
>
> and the spec:
>
> https://review.openstack.org/#/c/134366
>
> This blueprint contains various use cases for a volume that may be listed in
> the Cinder DataBase in state detaching|attaching|creating|deleting.
> The Proposed solution involves augmenting the python-cinderclient command
> 'reset-state', but other options are listed, including those that
> involve Nova, since the state of a volume in the Nova XML found in
> /etc/libvirt/qemu/.xml may also be out-of-sync with the
> Cinder DB or storage back end.
>
> A related proposal for adding a new non-admin API for changing volume status
> from 'attaching' to 'error' has also been proposed:
>
> https://review.openstack.org/#/c/137503/
>
> Some questions have arisen:
> 1) Should the 'reset-state' command be changed at all, since it was originally
> just to modify the Cinder DB?
> 2) Should 'reset-state' be fixed to prevent the naïve admin from changing
> the CinderDB to be out-of-sync with the back end storage?
> 3) Should 'reset-state' be kept the same, but augmented with new options?
> 4) Should a new command be implemented, with possibly a new admin API to
> properly sync state?
> 5) Should Nova be involved? If so, should this be done as a separate body of
> work?
>
> This has proven to be a complex issue and there seems to be a good bit of
> interest. Please provide feedback, comments, and suggestions.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hey Scott,

Thanks for posting this to the ML. I stated my opinion on the spec, but for completeness:

My feeling is that reset-state has morphed into something entirely different than originally intended. That's actually great, nothing wrong there at all. I strongly disagree with the statements that "setting the status in the DB only is almost always the wrong thing to do". The whole point was to allow the state to be changed in the DB so the item could in most cases be deleted. There was never an intent (that I'm aware of) to make this some sort of uber resync and heal API call.

All of that history aside, I think it would be great to add some driver interaction here. I am however very unclear on what that would actually include. For example, would you let a volume's state be changed from "Error-Attaching" to "In-Use" and just run through the process of retrying an attach? To me that seems like a bad idea. I'm much happier with the current state of changing the state from "Error" to "Available" (and NOTHING else) so that an operation can be retried, or the resource can be deleted.

If you start allowing any state transition (which sadly we've started to do) you're almost never going to get things correct. This also covers almost every situation, even though it means you have to explicitly retry operations or steps (I don't think that's a bad thing) and make the code significantly more robust IMO (we have some issues lately with things being robust).
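The "Error to Available and NOTHING else" position above amounts to a small whitelist of permitted transitions plus one driver hook. A hypothetical sketch - the names (ALLOWED_RESETS, make_volume_available) are illustrative, not Cinder's actual API:

```python
# Hypothetical sketch of a restricted reset-state: only whitelisted
# transitions are honoured, and the driver is asked to clean up (e.g. tear
# down stale attachments) before the DB claims the volume is usable again.
ALLOWED_RESETS = {
    ('error', 'available'),
    ('error_deleting', 'available'),
}

class InvalidStateTransition(Exception):
    pass

def reset_state(volume, new_status, driver):
    transition = (volume['status'], new_status)
    if transition not in ALLOWED_RESETS:
        raise InvalidStateTransition('%s -> %s is not a permitted reset'
                                     % transition)
    # Give the backend a chance to verify/clean up first; a failure here
    # leaves the recorded status untouched.
    driver.make_volume_available(volume)
    volume['status'] = new_status
    return volume
```

Anything outside the whitelist (e.g. attaching -> in-use) raises instead of implicitly retrying the operation, matching the "explicitly retry operations" point above.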
My proposal would be to go back to limiting the things you can do with reset-state (basically make it so you can only release the resource back to available) and add the driver interaction to clean up any mess if possible. This could be a simple driver call added like "make_volume_available" whereby the driver just ensures that there are no attachments and.... well; honestly nothing else comes to mind as being something the driver cares about here. The final option then being to add some more power to force-delete.

Is there anything other than attach that matters from a driver? If people are talking error-recovery, that to me is a whole different topic, and frankly I think we need to spend more time preventing errors as opposed to trying to recover from them via new API calls.

Curious to see if any other folks have input here?

John

From taget at linux.vnet.ibm.com Mon Dec 1 04:45:04 2014
From: taget at linux.vnet.ibm.com (Eli Qiao)
Date: Mon, 01 Dec 2014 12:45:04 +0800
Subject: [openstack-dev] [Nova] Gate error
Message-ID: <547BF250.2090208@linux.vnet.ibm.com>

Got a gate error today. How can we ping the infrastructure team to fix it?

HTTP error 404 while getting http://pypi.IAD.openstack.org/packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz#md5=2bf4599ae1f2ccf4603ca02c5d7e798e (from http://pypi.IAD.openstack.org/simple/logilab-common/ )

[1] http://logs.openstack.org/03/120703/24/check/gate-nova-pep8/b745055/console.html

--
Thanks,
Eli (Li Yong) Qiao

From pradip.interra at gmail.com Mon Dec 1 05:00:42 2014
From: pradip.interra at gmail.com (Pradip Mukhopadhyay)
Date: Mon, 1 Dec 2014 10:30:42 +0530
Subject: [openstack-dev] [All] Programatically re-starting OpenStack services
In-Reply-To: References: Message-ID: 

Yeah, I meant from Orchestration. Sorry if the earlier one was not clear.
To be little more specific: from the Life Cycle management functions of the custom Heat resource definition. Thanks, Pradip On Mon, Dec 1, 2014 at 9:54 AM, Sadia Bashir <11msccssbashir at seecs.edu.pk> wrote: > Hi, > > What do you mean by programatically? Do you want to restart services via a > script or want to orchestrate restarting services from within openstack? > > If it is via script, you can write a bash script as follows: > > service cinder-scheduler restart > service heat-api-cfn restart > > > > On Mon, Dec 1, 2014 at 9:03 AM, Pradip Mukhopadhyay < > pradip.interra at gmail.com> wrote: > >> Hello, >> >> >> Are there ways (which pattern might be preferred) by which one can >> programatically restart different OpenStack services? >> >> For example: if one wants to restart cinder-scheduler or heat-cfn? >> >> >> >> Thanks in advance, >> Pradip >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tqtran at us.ibm.com Mon Dec 1 05:17:28 2014 From: tqtran at us.ibm.com (Thai Q Tran) Date: Sun, 30 Nov 2014 21:17:28 -0800 Subject: [openstack-dev] [horizon] REST and Django In-Reply-To: References: Message-ID: I agree that keeping the API layer thin would be ideal. I should add that having discrete API calls would allow dynamic population of table. However, I will make a case where it might be necessary to add additional APIs. Consider that you want to delete 3 items in a given table. 
If you do this on the client side, you would need to perform:

    n * (1 API request + 1 AJAX request)

If you have some logic on the server side that batches delete actions:

    n * (1 API request) + 1 AJAX request

Consider the following:

    n = 1,   client = 2 trips,   server = 2 trips
    n = 3,   client = 6 trips,   server = 4 trips
    n = 10,  client = 20 trips,  server = 11 trips
    n = 100, client = 200 trips, server = 101 trips

As you can see, the client-side approach does not scale very well... something to consider... From: Richard Jones To: "Tripp, Travis S" , OpenStack List Date: 11/27/2014 05:38 PM Subject: Re: [openstack-dev] [horizon] REST and Django On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S wrote: Hi Richard, You are right, we should put this out on the main ML, so copying the thread out to there. ML: FYI that this started after some impromptu IRC discussions about a specific patch led into an impromptu Google hangout discussion with all the people on the thread below. Thanks Travis! As I mentioned in the review[1], Thai and I were mainly discussing the possible performance implications of network hops from client to horizon server and whether or not any aggregation should occur server side. In other words, some views require several APIs to be queried before any data can be displayed, and it would eliminate some extra network requests from client to server if some of the data was first collected on the server side across service APIs. For example, the launch instance wizard will need to collect data from quite a few APIs before even the first step is displayed (I've listed those out in the blueprint [2]). The flip side to that (as you also pointed out) is that if we keep the APIs fine-grained then the wizard will be able to optimize in one place the calls for data as it is needed. For example, the first step may only need half of the API calls. 
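Thai's trip arithmetic above can be reproduced with a short sketch (the function name and shape are purely illustrative, not Horizon code):

```python
def round_trips(n, batch_on_server):
    """Round trips needed to delete n rows from a Horizon table.

    Client-side: each row costs one AJAX call (browser -> Horizon)
    plus one service API call (Horizon -> OpenStack service).
    Server-side batching: a single AJAX call carries all n ids, and
    the Django layer then makes the n service API calls itself.
    """
    ajax = 1 if batch_on_server else n
    api = n
    return ajax + api

# Reproduce the numbers from the email.
for n in (1, 3, 10, 100):
    print(n, round_trips(n, False), round_trips(n, True))
```

Under these assumptions the client-side total grows as 2n while the batched total grows as n + 1, which is the gap the thread is weighing against keeping the REST layer thin.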
It also could lead to perceived performance increases just due to the wizard making calls for different data independently and displaying it as soon as it can. Indeed, looking at the current launch wizard code it seems like you wouldn't need to load all that data for the wizard to be displayed, since only some subset of it would be necessary to display any given panel of the wizard. I tend to lean towards your POV: start with discrete API calls and let the client optimize its calls. If there are performance problems or other reasons, then doing data aggregation on the server side could be considered at that point. I'm glad to hear it. I'm a fan of optimising when necessary, and not beforehand :) Of course if anybody is able to do some performance testing between the two approaches then that could affect the direction taken. I would certainly like to see us take some measurements when performance issues pop up. Optimising without solid metrics is a bad idea :) Richard [1] https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py [2] https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign -Travis From: Richard Jones Date: Wednesday, November 26, 2014 at 11:55 PM To: Travis Tripp , Thai Q Tran/Silicon Valley/IBM < tqtran at us.ibm.com>, David Lyle , Maxime Vidori < maxime.vidori at enovance.com>, "Wroblewski, Szymon" < szymon.wroblewski at intel.com>, "Wood, Matthew David (HP Cloud - Horizon)" , "Chen, Shaoquan" , "Farina, Matt (HP Cloud)" , Cindy Lu/Silicon Valley/IBM < clu at us.ibm.com>, Justin Pomeroy/Rochester/IBM , Neill Cox Subject: Re: REST and Django I'm not sure whether this is the appropriate place to discuss this, or whether I should be posting to the list under [Horizon], but I think we need to have a clear idea of what goes in the REST API and what goes in the client (angular) code. In my mind, the thinner the REST API the better. 
Indeed if we can get away with proxying requests through without touching any *client code, that would be great. Coding additional logic into the REST API means that a developer would need to look in two places, instead of one, to determine what was happening for a particular call. If we keep it thin then the API presented to the client developer is very, very similar to the API presented by the services. Minimum surprise. Your thoughts? Richard On Wed Nov 26 2014 at 2:40:52 PM Richard Jones wrote: Thanks for the great summary, Travis. I've completed the work I pledged this morning, so now the REST API change set has:

- no rest framework dependency
- AJAX scaffolding in openstack_dashboard.api.rest.utils
- code in openstack_dashboard/api/rest/
- renamed the API from "identity" to "keystone" to be consistent
- added a sample of testing, mostly for my own sanity to check things were working

https://review.openstack.org/#/c/136676 Richard On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S wrote: Hello all, Great discussion on the REST urls today! I think that we are on track to come to a common REST API usage pattern. To provide quick summary: We all agreed that going to a straight REST pattern rather than through tables was a good idea. We discussed using direct get / post in Django views like what Max originally used[1][2] and Thai also started[3] with the identity table rework or to go with djangorestframework [5] like what Richard was prototyping with[4]. The main things we would use from Django Rest Framework were built in JSON serialization (avoid boilerplate), better exception handling, and some request wrapping. However, we all weren't sure about the need for a full new framework just for that. At the end of the conversation, we decided that it was a cleaner approach, but Richard would see if he could provide some utility code to do that much for us without requiring the full framework. 
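The "utility code" being discussed (JSON serialization plus exception mapping, without pulling in a whole framework) might look roughly like the sketch below. This is framework-agnostic for illustration: the decorator name, the (status, body) return convention, and the exception mapping are all invented here and are not the actual openstack_dashboard.api.rest.utils code.

```python
import functools
import json


def ajax_view(func):
    """Wrap a view-like callable so it returns (status, JSON body).

    The wrapped callable returns a plain dict, which is serialized
    to JSON for it; exceptions are mapped to an error status and a
    JSON error body instead of leaking a traceback to the browser.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            data = func(*args, **kwargs)
        except KeyError as exc:
            # A missing lookup key stands in for "resource not found".
            return 404, json.dumps({"error": str(exc)})
        except Exception as exc:
            return 500, json.dumps({"error": str(exc)})
        return 200, json.dumps(data)
    return wrapper


@ajax_view
def get_user(users, user_id):
    # The view body only deals in plain data; the decorator supplies
    # the serialization and error-handling boilerplate.
    return {"id": user_id, "name": users[user_id]}
```

The point of the sketch is Richard's: a dozen lines of shared utility code can remove the boilerplate that made djangorestframework attractive, without taking on the whole dependency.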
David voiced that he doesn?t want us building out a whole framework on our own either. So, Richard will do some investigation during his day today and get back to us.? Whatever the case, we?ll get a patch in horizon for the base dependency (framework or Richard?s utilities) that both Thai?s work and the launch instance work is dependent upon.? We?ll build REST style API?s using the same pattern.? We will likely put the rest api?s in horizon/openstack_dashboard/api/rest/. [1] https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/keypair.py [2] https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/launch.py [3] https://review.openstack.org/#/c/133767/8/openstack_dashboard/dashboards/identity/users/views.py [4] https://review.openstack.org/#/c/136676/4/openstack_dashboard/rest_api/identity.py [5]?http://www.django-rest-framework.org/ Thanks, Travis_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From jay.lau.513 at gmail.com Mon Dec 1 05:54:34 2014 From: jay.lau.513 at gmail.com (Jay Lau) Date: Mon, 1 Dec 2014 13:54:34 +0800 Subject: [openstack-dev] [gerrit] Gerrit review problem Message-ID: When I review a patch for OpenStack, after review finished, I want to check more patches for this project and then after click the "Project" content for this patch, it will **not** jump to all patches but project description. I think it is not convenient for a reviewer if s/he wants to review more patches for this project. [image: ???? 1] [image: ???? 2] -- Thanks, Jay -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 42361 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 38385 bytes Desc: not available URL: From jichenjc at cn.ibm.com Mon Dec 1 05:59:52 2014 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Mon, 1 Dec 2014 13:59:52 +0800 Subject: [openstack-dev] [gerrit] Gerrit review problem In-Reply-To: References: Message-ID: +1, I also found this inconvenient point before ,thanks Jay for bring up :) Best Regards! Kevin (Chen) Ji ? ? Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82454158 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: Jay Lau To: OpenStack Development Mailing List Date: 12/01/2014 01:56 PM Subject: [openstack-dev] [gerrit] Gerrit review problem When I review a patch for OpenStack, after review finished, I want to check more patches for this project and then after click the "Project" content for this patch, it will **not** jump to all patches but project description. I think it is not convenient for a reviewer if s/he wants to review more patches for this project. ???? 1 ???? 2 -- Thanks, Jay_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 36912227.gif Type: image/gif Size: 42361 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 36572896.gif Type: image/gif Size: 38385 bytes Desc: not available URL: From coolsvap at gmail.com Mon Dec 1 06:09:39 2014 From: coolsvap at gmail.com (OpenStack Dev) Date: Mon, 01 Dec 2014 06:09:39 +0000 Subject: [openstack-dev] [gerrit] Gerrit review problem References: Message-ID: Hi Jay, Instead you can just put the project name in the search bar to filter for all change sets related to the project. /coolsvap On Mon Dec 01 2014 at 11:30:47 AM Jay Lau wrote: > > When I review a patch for OpenStack, after review finished, I want to > check more patches for this project and then after click the "Project" > content for this patch, it will **not** jump to all patches but project > description. I think it is not convenient for a reviewer if s/he wants to > review more patches for this project. > > > > > > -- > Thanks, > > Jay > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 42361 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 38385 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2014-12-01 11:38:39.png Type: image/png Size: 59679 bytes Desc: not available URL: From stevemar at ca.ibm.com Mon Dec 1 06:13:05 2014 From: stevemar at ca.ibm.com (Steve Martinelli) Date: Mon, 1 Dec 2014 01:13:05 -0500 Subject: [openstack-dev] [gerrit] Gerrit review problem In-Reply-To: References: Message-ID: Clicking on the magnifying glass icon (next to the project name) lists all recent patches for that project (closed and open). 
I have a bookmark that lists all open reviews of a project and always try to keep it open in a tab, and open specific code reviews in new tabs. Steve Chen CH Ji wrote on 12/01/2014 12:59:52 AM: > From: Chen CH Ji > To: "OpenStack Development Mailing List \(not for usage questions\)" > > Date: 12/01/2014 01:09 AM > Subject: Re: [openstack-dev] [gerrit] Gerrit review problem > > +1, I also found this inconvenient point before ,thanks Jay for bring up :) > > Best Regards! > > Kevin (Chen) Ji ? ? > > Engineer, zVM Development, CSTL > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > Phone: +86-10-82454158 > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > District, Beijing 100193, PRC > > [image removed] Jay Lau ---12/01/2014 01:56:48 PM---When I review a > patch for OpenStack, after review finished, I want to check more > patches for this pr > > From: Jay Lau > To: OpenStack Development Mailing List > Date: 12/01/2014 01:56 PM > Subject: [openstack-dev] [gerrit] Gerrit review problem > > > > > When I review a patch for OpenStack, after review finished, I want > to check more patches for this project and then after click the > "Project" content for this patch, it will **not** jump to all > patches but project description. I think it is not convenient for a > reviewer if s/he wants to review more patches for this project. > > [image removed] > > [image removed] > > -- > Thanks, > > Jay_______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bharat.kobagana at redhat.com Mon Dec 1 06:21:22 2014 From: bharat.kobagana at redhat.com (Bharat Kumar) Date: Mon, 01 Dec 2014 11:51:22 +0530 Subject: [openstack-dev] Deploy GlusterFS server Message-ID: <547C08E2.7070402@redhat.com> Hi All, Regarding the patch "Deploy GlusterFS Server" (https://review.openstack.org/#/c/133102/). Submitted this patch long back, this patch also got Code Review +2. I think it is waiting for Workflow approval. Another task is dependent on this patch. Please review (Workflow) this patch and help me to merge this patch. -- Thanks & Regards, Bharat Kumar K From jay.lau.513 at gmail.com Mon Dec 1 06:25:21 2014 From: jay.lau.513 at gmail.com (Jay Lau) Date: Mon, 1 Dec 2014 14:25:21 +0800 Subject: [openstack-dev] [gerrit] Gerrit review problem In-Reply-To: References: Message-ID: Thanks all, yes, there are ways I can get all on-going patches for one project, I was complaining this because I can always direct to the right page before gerrit review upgrade, the upgrade broken this feature which makes not convenient for reviewers... 2014-12-01 14:13 GMT+08:00 Steve Martinelli : > Clicking on the magnifying glass icon (next to the project name) lists > all recent patches for that project (closed and open). > > I have a bookmark that lists all open reviews of a project and > always try to keep it open in a tab, and open specific code reviews in > new tabs. > > Steve > > Chen CH Ji wrote on 12/01/2014 12:59:52 AM: > > > From: Chen CH Ji > > To: "OpenStack Development Mailing List \(not for usage questions\)" > > > > Date: 12/01/2014 01:09 AM > > Subject: Re: [openstack-dev] [gerrit] Gerrit review problem > > > > +1, I also found this inconvenient point before ,thanks Jay for bring up > :) > > > > Best Regards! > > > > Kevin (Chen) Ji ? ? 
> > > > Engineer, zVM Development, CSTL > > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > > Phone: +86-10-82454158 > > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > > District, Beijing 100193, PRC > > > > [image removed] Jay Lau ---12/01/2014 01:56:48 PM---When I review a > > patch for OpenStack, after review finished, I want to check more > > patches for this pr > > > > From: Jay Lau > > To: OpenStack Development Mailing List < > openstack-dev at lists.openstack.org> > > Date: 12/01/2014 01:56 PM > > Subject: [openstack-dev] [gerrit] Gerrit review problem > > > > > > > > > > When I review a patch for OpenStack, after review finished, I want > > to check more patches for this project and then after click the > > "Project" content for this patch, it will **not** jump to all > > patches but project description. I think it is not convenient for a > > reviewer if s/he wants to review more patches for this project. > > > > [image removed] > > > > [image removed] > > > > -- > > Thanks, > > > > Jay_______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Thanks, Jay -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From coolsvap at gmail.com Mon Dec 1 06:35:37 2014 From: coolsvap at gmail.com (OpenStack Dev) Date: Mon, 01 Dec 2014 06:35:37 +0000 Subject: [openstack-dev] [gerrit] Gerrit review problem References: Message-ID: Jay this has been informed & discussed, pre & post the gerrit upgrade :) On Mon Dec 01 2014 at 12:00:02 PM Jay Lau wrote: > Thanks all, yes, there are ways I can get all on-going patches for one > project, I was complaining this because I can always direct to the right > page before gerrit review upgrade, the upgrade broken this feature which > makes not convenient for reviewers... > > 2014-12-01 14:13 GMT+08:00 Steve Martinelli : > >> Clicking on the magnifying glass icon (next to the project name) lists >> all recent patches for that project (closed and open). >> >> I have a bookmark that lists all open reviews of a project and >> always try to keep it open in a tab, and open specific code reviews in >> new tabs. >> >> Steve >> >> Chen CH Ji wrote on 12/01/2014 12:59:52 AM: >> >> > From: Chen CH Ji >> > To: "OpenStack Development Mailing List \(not for usage questions\)" >> > >> > Date: 12/01/2014 01:09 AM >> > Subject: Re: [openstack-dev] [gerrit] Gerrit review problem >> > >> > +1, I also found this inconvenient point before ,thanks Jay for bring >> up :) >> > >> > Best Regards! >> > >> > Kevin (Chen) Ji ? ? 
>> > >> > Engineer, zVM Development, CSTL >> > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com >> > Phone: +86-10-82454158 >> > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian >> > District, Beijing 100193, PRC >> > >> > [image removed] Jay Lau ---12/01/2014 01:56:48 PM---When I review a >> > patch for OpenStack, after review finished, I want to check more >> > patches for this pr >> > >> > From: Jay Lau >> > To: OpenStack Development Mailing List < >> openstack-dev at lists.openstack.org> >> > Date: 12/01/2014 01:56 PM >> > Subject: [openstack-dev] [gerrit] Gerrit review problem >> > >> > >> > >> > >> > When I review a patch for OpenStack, after review finished, I want >> > to check more patches for this project and then after click the >> > "Project" content for this patch, it will **not** jump to all >> > patches but project description. I think it is not convenient for a >> > reviewer if s/he wants to review more patches for this project. >> > >> > [image removed] >> > >> > [image removed] >> > >> > -- >> > Thanks, >> > >> > Jay_______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Thanks, > > Jay > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From academicgareth at gmail.com Mon Dec 1 06:46:17 2014 From: academicgareth at gmail.com (Gareth) Date: Mon, 1 Dec 2014 14:46:17 +0800 Subject: [openstack-dev] [nova] is there a way to simulate thousands or millions of compute nodes? In-Reply-To: <60A3427EF882A54BA0A1971AE6EF0388A536078A@ORD1EXD02.RACKSPACE.CORP> References: <60A3427EF882A54BA0A1971AE6EF0388A536078A@ORD1EXD02.RACKSPACE.CORP> Message-ID: @Michael Okay, focusing on 'thousands' now, I know 'millions' is not good metaphor here. I also know 'cells' functionality is nova's solution for large scale deployment. But it also makes sense to find and re-produce large scale problems in relatively small scale deployment. @Sandy All-in-all, I think you'd be better off load testing each piece independently on a fixed hardware platform and faking out all the incoming/outgoing services.... I understand and this is what I want to know. Is anyone doing the work like this? If yes, I would like to join :) On Fri, Nov 28, 2014 at 8:36 AM, Sandy Walsh wrote: > >From: Michael Still [mikal at stillhq.com] Thursday, November 27, 2014 6:57 > PM > >To: OpenStack Development Mailing List (not for usage questions) > >Subject: Re: [openstack-dev] [nova] is there a way to simulate thousands > or millions of compute nodes? > > > >I would say that supporting millions of compute nodes is not a current > >priority for nova... We are actively working on improving support for > >thousands of compute nodes, but that is via cells (so each nova deploy > >except the top is still in the hundreds of nodes). > > > > Agreed, it wouldn't make much sense to simulate this on a single machine. > > That said, if one *was* to simulate this, there are the well known > bottlenecks: > > 1. the API. How much can one node handle with given hardware specs? Which > operations hit the DB the hardest? > 2. the Scheduler. There's your API bottleneck and big load on the DB for > Create operations. > 3. the Conductor. 
Shouldn't be too bad, essentially just a proxy. > 4. child-to-global-cell updates. Assuming a two-cell deployment. > 5. the virt driver. YMMV. > ... and that's excluding networking, volumes, etc. > > The virt driver should be load tested independently. So FakeDriver would > be fine (with some delays added for common operations as Gareth suggests). > Something like Bees-with-MachineGuns could be used to get a baseline metric > for the API. Then it comes down to DB performance in the scheduler and > conductor (for a single cell). Finally, inter-cell loads. Who blows out the > queue first? > > All-in-all, I think you'd be better off load testing each piece > independently on a fixed hardware platform and faking out all the > incoming/outgoing services. Test the API with fake everything. Test the > Scheduler with fake API calls and fake compute nodes. Test the conductor > with fake compute nodes (not FakeDriver). Test the compute node directly. > > Probably all going to come down to the DB and I think there is some good > performance data around that already? > > But I'm just spit-ballin' ... and I agree, not something I could see the > Nova team taking on in the near term ;) > > -S > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Gareth *Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball* *OpenStack contributor, kun_huang at freenode* *My promise: if you find any spelling or grammar mistakes in my email from Mar 1 2013, notify me * *and I'll donate $1 or ?1 to an open organization you specify.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anant.patil at hp.com Mon Dec 1 07:02:29 2014 From: anant.patil at hp.com (Anant Patil) Date: Mon, 01 Dec 2014 12:32:29 +0530 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> Message-ID: <547C1285.7090909@hp.com> On 27-Nov-14 18:03, Murugan, Visnusaran wrote: > Hi Zane, > > > > At this stage our implementation (as mentioned in wiki > ) achieves your > design goals. > > > > 1. In case of a parallel update, our implementation adjusts graph > according to new template and waits for dispatched resource tasks to > complete. > > 2. Reason for basing our PoC on Heat code: > > a. To solve contention processing parent resource by all dependent > resources in parallel. > > b. To avoid porting issue from PoC to HeatBase. (just to be aware > of potential issues asap) > > 3. Resource timeout would be helpful, but I guess its resource > specific and has to come from template and default values from plugins. > > 4. We see resource notification aggregation and processing next > level of resources without contention and with minimal DB usage as the > problem area. We are working on the following approaches in *parallel.* > > a. Use a Queue per stack to serialize notification. > > b. Get parent ProcessLog (ResourceID, EngineID) and initiate > convergence upon first child notification. Subsequent children who fail > to get parent resource lock will directly send message to waiting parent > task (topic=stack_id.parent_resource_id) > > Based on performance/feedback we can select either or a mashed version. > > > > Advantages: > > 1. Failed Resource tasks can be re-initiated after ProcessLog > table lookup. > > 2. One worker == one resource. > > 3. Supports concurrent updates > > 4. Delete == update with empty stack > > 5. 
Rollback == update to previous know good/completed stack. > > > > Disadvantages: > > 1. Still holds stackLock (WIP to remove with ProcessLog) > > > > Completely understand your concern on reviewing our code, since commits > are numerous and there is change of course at places. Our start commit > is [c1b3eb22f7ab6ea60b095f88982247dd249139bf] though this might not help J > > > > Your Thoughts. > > > > Happy Thanksgiving. > > Vishnu. > > > > *From:*Angus Salkeld [mailto:asalkeld at mirantis.com] > *Sent:* Thursday, November 27, 2014 9:46 AM > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown > > > > On Thu, Nov 27, 2014 at 12:20 PM, Zane Bitter > wrote: > > A bunch of us have spent the last few weeks working independently on > proof of concept designs for the convergence architecture. I think > those efforts have now reached a sufficient level of maturity that > we should start working together on synthesising them into a plan > that everyone can forge ahead with. As a starting point I'm going to > summarise my take on the three efforts; hopefully the authors of the > other two will weigh in to give us their perspective. > > > Zane's Proposal > =============== > > https://github.com/zaneb/heat-convergence-prototype/tree/distributed-graph > > I implemented this as a simulator of the algorithm rather than using > the Heat codebase itself in order to be able to iterate rapidly on > the design, and indeed I have changed my mind many, many times in > the process of implementing it. Its notable departure from a > realistic simulation is that it runs only one operation at a time - > essentially giving up the ability to detect race conditions in > exchange for a completely deterministic test framework. You just > have to imagine where the locks need to be. 
Incidentally, the test > framework is designed so that it can easily be ported to the actual > Heat code base as functional tests so that the same scenarios could > be used without modification, allowing us to have confidence that > the eventual implementation is a faithful replication of the > simulation (which can be rapidly experimented on, adjusted and > tested when we inevitably run into implementation issues). > > This is a complete implementation of Phase 1 (i.e. using existing > resource plugins), including update-during-update, resource > clean-up, replace on update and rollback; with tests. > > Some of the design goals which were successfully incorporated: > - Minimise changes to Heat (it's essentially a distributed version > of the existing algorithm), and in particular to the database > - Work with the existing plugin API > - Limit total DB access for Resource/Stack to O(n) in the number of > resources > - Limit overall DB access to O(m) in the number of edges > - Limit lock contention to only those operations actually contending > (i.e. no global locks) > - Each worker task deals with only one resource > - Only read resource attributes once > > > Open questions: > - What do we do when we encounter a resource that is in progress > from a previous update while doing a subsequent update? Obviously we > don't want to interrupt it, as it will likely be left in an unknown > state. Making a replacement is one obvious answer, but in many cases > there could be serious down-sides to that. How long should we wait > before trying it? What if it's still in progress because the engine > processing the resource already died? > > > > Also, how do we implement resource level timeouts in general? 
> > > > > Micha?'s Proposal > ================= > > https://github.com/inc0/heat-convergence-prototype/tree/iterative > > Note that a version modified by me to use the same test scenario > format (but not the same scenarios) is here: > > https://github.com/zaneb/heat-convergence-prototype/tree/iterative-adapted > > This is based on my simulation framework after a fashion, but with > everything implemented synchronously and a lot of handwaving about > how the actual implementation could be distributed. The central > premise is that at each step of the algorithm, the entire graph is > examined for tasks that can be performed next, and those are then > started. Once all are complete (it's synchronous, remember), the > next step is run. Keen observers will be asking how we know when it > is time to run the next step in a distributed version of this > algorithm, where it will be run and what to do about resources that > are in an intermediate state at that time. All of these questions > remain unanswered. > > > > Yes, I was struggling to figure out how it could manage an IN_PROGRESS > state as it's stateless. So you end up treading on the other action's toes. > > Assuming we use the resource's state (IN_PROGRESS) you could get around > that. Then you kick off a converge when ever an action completes (if > there is nothing new to be > > done then do nothing). > > > > > A non-exhaustive list of concerns I have: > - Replace on update is not implemented yet > - AFAIK rollback is not implemented yet > - The simulation doesn't actually implement the proposed architecture > - This approach is punishingly heavy on the database - O(n^2) or worse > > > > Yes, re-reading the state of all resources when ever run a new converge > is worrying, but I think Michal had some ideas to minimize this. 
> > > > - A lot of phase 2 is mixed in with phase 1 here, making it > difficult to evaluate which changes need to be made first and > whether this approach works with existing plugins > - The code is not really based on how Heat works at the moment, so > there would be either a major redesign required or lots of radical > changes in Heat or both > > I think there's a fair chance that given another 3-4 weeks to work > on this, all of these issues and others could probably be resolved. > The question for me at this point is not so much "if" but "why". > > Micha? believes that this approach will make Phase 2 easier to > implement, which is a valid reason to consider it. However, I'm not > aware of any particular issues that my approach would cause in > implementing phase 2 (note that I have barely looked into it at all > though). In fact, I very much want Phase 2 to be entirely > encapsulated by the Resource class, so that the plugin type (legacy > vs. convergence-enabled) is transparent to the rest of the system. > Only in this way can we be sure that we'll be able to maintain > support for legacy plugins. So a phase 1 that mixes in aspects of > phase 2 is actually a bad thing in my view. > > I really appreciate the effort that has gone into this already, but > in the absence of specific problems with building phase 2 on top of > another approach that are solved by this one, I'm ready to call this > a distraction. > > > > In it's defence, I like the simplicity of it. The concepts and code are > easy to understand - tho' part of this is doesn't implement all the > stuff on your list yet. > > > > > > Anant & Friends' Proposal > ========================= > > First off, I have found this very difficult to review properly since > the code is not separate from the huge mass of Heat code and nor is > the commit history in the form that patch submissions would take > (but rather includes backtracking and iteration on the design). 
As a > result, most of the information here has been gleaned from > discussions about the code rather than direct review. I have > repeatedly suggested that this proof of concept work should be done > using the simulator framework instead, unfortunately so far to no avail. > > The last we heard on the mailing list about this, resource clean-up > had not yet been implemented. That was a major concern because that > is the more difficult half of the algorithm. Since then there have > been a lot more commits, but it's not yet clear whether resource > clean-up, update-during-update, replace-on-update and rollback have > been implemented, though it is clear that at least some progress has > been made on most or all of them. Perhaps someone can give us an update. > > > https://github.com/anantpatil/heat-convergence-poc > > > > AIUI this code also mixes phase 2 with phase 1, which is a concern. > For me the highest priority for phase 1 is to be sure that it works > with existing plugins. Not only because we need to continue to > support them, but because converting all of our existing > 'integration-y' unit tests to functional tests that operate in a > distributed system is virtually impossible in the time frame we have > available. So the existing test code needs to stick around, and the > existing stack create/update/delete mechanisms need to remain in > place until such time as we have equivalent functional test coverage > to begin eliminating existing unit tests. (We'll also, of course, > need to have unit tests for the individual elements of the new > distributed workflow, functional tests to confirm that the > distributed workflow works in principle as a whole - the scenarios > from the simulator can help with _part_ of this - and, not least, an > algorithm that is as similar as possible to the current one so that > our existing tests remain at least somewhat representative and don't > require too many major changes themselves.) 
> > Speaking of tests, I gathered that this branch included tests, but I > don't know to what extent there are automated end-to-end functional > tests of the algorithm? > > From what I can gather, the approach seems broadly similar to the > one I eventually settled on also. The major difference appears to be > in how we merge two or more streams of execution (i.e. when one > resource depends on two or more others). In my approach, the > dependencies are stored in the resources and each joining of streams > creates a database row to track it, which is easily locked with > contention on the lock extending only to those resources which are > direct dependencies of the one waiting. In this approach, both the > dependencies and the progress through the graph are stored in a > database table, necessitating (a) reading of the entire table (as it > relates to the current stack) on every resource operation, and (b) > locking of the entire table (which is hard) when marking a resource > operation complete. > > I chatted to Anant about this today and he mentioned that they had > solved the locking problem by dispatching updates to a queue that is > read by a single engine per stack. > > My approach also has the neat side-effect of pushing the data > required to resolve get_resource and get_attr (without having to > reload the resources again and query them) as well as to update > dependencies (e.g. because of a replacement or deletion) along with > the flow of triggers. I don't know if anything similar is at work here. > > It's entirely possible that the best design might combine elements > of both approaches. > > The same open questions I detailed under my proposal also apply to > this one, if I understand correctly. > > > I'm certain that I won't have represented everyone's work fairly > here, so I encourage folks to dive in and correct any errors about > theirs and ask any questions you might have about mine.
(In case you > have been living under a rock, note that I'll be out of the office > for the rest of the week due to Thanksgiving so don't expect > immediate replies.) > > I also think this would be a great time for the wider Heat community > to dive in and start asking questions and suggesting ideas. We need > to, ahem, converge on a shared understanding of the design so we can > all get to work delivering it for Kilo. > > > > Agree, we need to get moving on this. > > -Angus > > > > cheers, > Zane. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >

Thanks, Zane, for your e-mail and for summarizing everyone's work.

The design goals mentioned above look more like performance goals and constraints to me. I understand that it is unacceptable to have a poorly performing engine or broken Resource plug-ins. The convergence spec clearly mentions that the existing Resource plugins should not be changed. IMHO, and my team's HO, the design goals of convergence would be:

1. Stability: No transient failures, either in OpenStack/external services or resources themselves, should fail the stack. Therefore, we need to have Observers to check for divergence and converge a resource if needed, to bring it back to a stable state.

2. Resiliency: Heat engines should be able to take up tasks in case of failures/restarts.

3. Backward compatibility: "We don't break the user space." No existing stacks should break.

We started the PoC with these goals in mind; any performance optimization would be a plus point for us. Note that I am not neglecting the performance goal, just that it should be next in the pipeline.
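To make design goal 1 concrete, the Observer idea can be sketched as a reconcile loop that compares the desired state recorded for each resource with the state actually observed in the backing service, and converges only what has diverged. This is a hypothetical illustration: the names observe()/converge() and the dict-based store are invented for the example and are not Heat's API.

```python
# Toy reconcile loop for the "Observer" idea: desired vs. observed state.
# All names here are illustrative, not Heat's actual interfaces.

DESIRED = {"server-1": "ACTIVE", "volume-1": "AVAILABLE"}

def observe(resource_id, observed_states):
    # Stand-in for querying the real backing service (nova, cinder, ...).
    return observed_states.get(resource_id, "UNKNOWN")

def check_divergence(desired, observed_states):
    # Return the resources whose observed state no longer matches the goal.
    return [rid for rid, want in desired.items()
            if observe(rid, observed_states) != want]

def converge(resource_id, observed_states, desired):
    # Stand-in for the action that brings a resource back to its goal state.
    observed_states[resource_id] = desired[resource_id]

observed = {"server-1": "ACTIVE", "volume-1": "ERROR"}  # transient failure
for rid in check_divergence(DESIRED, observed):
    converge(rid, observed, DESIRED)
print(check_divergence(DESIRED, observed))  # prints []
```

In a real system the observer would poll or subscribe to notifications from the backing services, and converge() would enqueue work for a Heat engine rather than mutate state in place.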
The questions we should ask ourselves are: are we storing enough data (the state of the stack) in the DB to enable resiliency? Are we distributing the load evenly to all Heat engines? Does our notification mechanism provide us with some form of guarantee or acknowledgement?

In retrospect, we had to struggle a lot to understand the existing Heat engine. We couldn't have done justice by just creating another project in GitHub without any concrete understanding of the existing state of affairs. We are not on the same page with the Heat core members; we are novices and the cores are experts. I am glad that we experimented with the Heat engine directly.

The current Heat engine is not resilient and the messaging also lacks reliability. We (my team, and I guess the cores also) understand that async message passing would be the way to go, as synchronous RPC calls simply wouldn't scale. But with async message passing there has to be some mechanism of ACKing back, which I think is lacking in the current infrastructure. How could we provide stable user-defined stacks if the underlying Heat core lacks it? Convergence is all about stable stacks. To make the current Heat core stable we need to have, at the least:

1. Some mechanism to ACK back messages over AMQP, or some other solid mechanism of message passing.

2. Some mechanism for fault tolerance in the Heat engine using external tools/infrastructure like Celery/ZooKeeper. Without an external infrastructure/tool we will end up bloating the Heat engine with a lot of boilerplate code to achieve this. We had recommended Celery in our previous e-mail (from Vishnu).

It was due to our experiments with the Heat engine for this PoC that we could come up with the above recommendations.

State of our PoC
----------------

On GitHub: https://github.com/anantpatil/heat-convergence-poc

Our current implementation of the PoC locks the stack after each notification to mark the graph as traversed and produce the next level of resources for convergence.
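The ACK requirement in point 1 above can be illustrated with a toy queue that redelivers any message not explicitly acknowledged, which is the at-least-once behaviour that AMQP consumer acknowledgements provide. This is a sketch for illustration only, not oslo.messaging or any real broker client:

```python
# Toy model of ACK-based delivery: a message stays "unacked" after delivery
# and is redelivered if the consumer dies before acknowledging it.
import collections

class AckQueue:
    def __init__(self):
        self._ready = collections.deque()
        self._unacked = {}
        self._next_tag = 0

    def publish(self, msg):
        self._ready.append(msg)

    def get(self):
        # Deliver a message; it is held as unacked until ack() is called.
        msg = self._ready.popleft()
        self._next_tag += 1
        self._unacked[self._next_tag] = msg
        return self._next_tag, msg

    def ack(self, tag):
        # Consumer confirms the work is done; the message is now gone.
        del self._unacked[tag]

    def requeue_unacked(self):
        # Simulate an engine crash: everything not acked is redelivered.
        for msg in self._unacked.values():
            self._ready.append(msg)
        self._unacked.clear()
```

Without the ack/requeue step a fire-and-forget cast loses the converge request when an engine dies mid-task, which is exactly the reliability gap described above.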
We are facing challenges in removing/minimizing these locks. We also have two different schools of thought for solving this lock issue, as mentioned above in Vishnu's e-mail. I will describe these in detail in the Wiki. There will be different branches in our GitHub repo for these two approaches.

There are a few concerns, like the huge mass of code and bulky changes; I am currently addressing this issue. The README at GitHub for this PoC will be helpful for you to set things up quickly and play around with a dummy resource I have created, which also provides control over simulating update-in-place. I am in the process of cleaning up all the unneeded code. Only the code needed to create/update/delete a stack will be there, along with the dummy resource. Most of the changes are in the service.py, stack.py and resource.py files. As Heat cores, I expect that you would be in a better position to review these. I will send out a separate e-mail once I am done.

We learned the hard way that the graph table with True/False was not enough for handling concurrent updates. We are currently testing it with the graph table having a state rather than a flag. Resource versions are very handy for rolling back and for handling concurrent updates. We have solved the delete-after-update, update-on-update and rollback issues (not in the master branch though, I will publish the branch soon) and we are testing it. A resource timeout will be needed, and I agree with Vishnu that the default value has to come from the plug-in. Single resource provisioning should not cause the entire stack to time out. It was hard learning for me, but we are on the verge of finishing this up.

I agree that there are no functional tests which you could run and see, but we do have sample templates to test the PoC from outside. Our changes are also mostly limited to the three files I mentioned above. Nevertheless, I have asked my team to spend effort on writing tests for our PoC and they will be ready soon. I do understand your concern here.
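The graph table with an explicit state per node (rather than a True/False traversed flag) can be sketched like this, where a node becomes eligible only once every dependency is COMPLETE. A dict stands in for the real database table, and the state names are illustrative:

```python
# Toy model of a dependency graph with per-node state instead of a
# traversed flag. Purely illustrative; the PoC keeps this in DB tables.

WAITING, IN_PROGRESS, COMPLETE = "WAITING", "IN_PROGRESS", "COMPLETE"

class Graph:
    def __init__(self, deps):
        self.deps = deps                       # node -> set of dependencies
        self.state = {n: WAITING for n in deps}

    def ready(self):
        # Nodes not yet started whose dependencies are all COMPLETE.
        return [n for n, d in self.deps.items()
                if self.state[n] == WAITING
                and all(self.state[p] == COMPLETE for p in d)]

    def start(self, node):
        self.state[node] = IN_PROGRESS

    def complete(self, node):
        # Marking a node done unblocks the next candidates to converge.
        self.state[node] = COMPLETE
        return self.ready()
```

The IN_PROGRESS state is what distinguishes this from a boolean flag: a concurrent update can see that a node is being worked on, rather than only whether it has been traversed.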
As I said, performance was our next goal, so we are addressing this issue. The current stack loads all the resources from the DB; this can simply be avoided by loading only those which need to be converged, along with their children when realizing a resource definition that has a get_attr dependency on them. Apart from this, only the stack lock seems to be a problem.

It was difficult, for me personally, to completely understand Zane's PoC and how it would lay the foundation for the aforementioned design goals. It would be very helpful to have Zane's explanation here. I could understand that there are ideas like async message passing and notifying the parent, which we also subscribe to. In a lot of places we have a common approach, as far as I could understand. I agree with Zane that there is a lot of scope for us to collaborate. We do need the core team's help in making Heat stable and scalable. Also, I have appealed to my team members to be more active on IRC to answer questions.

- Anant

From jay.lau.513 at gmail.com Mon Dec 1 07:16:33 2014 From: jay.lau.513 at gmail.com (Jay Lau) Date: Mon, 1 Dec 2014 15:16:33 +0800 Subject: [openstack-dev] [gerrit] Gerrit review problem In-Reply-To: References: Message-ID:

I'm OK if all reviewers agree on this proposal, I may need to bookmark the projects that I want to review. ;-)

2014-12-01 14:35 GMT+08:00 OpenStack Dev : > Jay this has been informed & discussed, pre & post the gerrit upgrade :) > > On Mon Dec 01 2014 at 12:00:02 PM Jay Lau wrote: >> Thanks all, yes, there are ways I can get all on-going patches for one >> project. I was complaining because I could always get to the right >> page before the gerrit review upgrade; the upgrade broke this feature, which >> makes it not convenient for reviewers... >> >> 2014-12-01 14:13 GMT+08:00 Steve Martinelli : >> >>> Clicking on the magnifying glass icon (next to the project name) lists >>> all recent patches for that project (closed and open).
>>> >>> I have a bookmark that lists all open reviews of a project and >>> always try to keep it open in a tab, and open specific code reviews in >>> new tabs. >>> >>> Steve >>> >>> Chen CH Ji wrote on 12/01/2014 12:59:52 AM: >>> >>> > From: Chen CH Ji >>> > To: "OpenStack Development Mailing List \(not for usage questions\)" >>> > >>> > Date: 12/01/2014 01:09 AM >>> > Subject: Re: [openstack-dev] [gerrit] Gerrit review problem >>> > >>> > +1, I also found this inconvenient point before ,thanks Jay for bring >>> up :) >>> > >>> > Best Regards! >>> > >>> > Kevin (Chen) Ji ? ? >>> > >>> > Engineer, zVM Development, CSTL >>> > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com >>> > Phone: +86-10-82454158 >>> > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian >>> > District, Beijing 100193, PRC >>> > >>> > [image removed] Jay Lau ---12/01/2014 01:56:48 PM---When I review a >>> > patch for OpenStack, after review finished, I want to check more >>> > patches for this pr >>> > >>> > From: Jay Lau >>> > To: OpenStack Development Mailing List < >>> openstack-dev at lists.openstack.org> >>> > Date: 12/01/2014 01:56 PM >>> > Subject: [openstack-dev] [gerrit] Gerrit review problem >>> > >>> > >>> > >>> > >>> > When I review a patch for OpenStack, after review finished, I want >>> > to check more patches for this project and then after click the >>> > "Project" content for this patch, it will **not** jump to all >>> > patches but project description. I think it is not convenient for a >>> > reviewer if s/he wants to review more patches for this project. 
>>> > >>> > [image removed] >>> > >>> > [image removed] >>> > >>> > -- >>> > Thanks, >>> > >>> > Jay_______________________________________________ >>> > OpenStack-dev mailing list >>> > OpenStack-dev at lists.openstack.org >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> > _______________________________________________ >>> > OpenStack-dev mailing list >>> > OpenStack-dev at lists.openstack.org >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> -- >> Thanks, >> >> Jay >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Thanks, Jay -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Mon Dec 1 07:43:20 2014 From: aj at suse.com (Andreas Jaeger) Date: Mon, 01 Dec 2014 08:43:20 +0100 Subject: [openstack-dev] [gerrit] Gerrit review problem In-Reply-To: References: Message-ID: <547C1C18.1010807@suse.com> On 12/01/2014 08:16 AM, Jay Lau wrote: > I'm OK if all reviewers agree on this proposal, I may need to bookmark > the projects that I want to review. ;-) Add the projects you want to review to your watch list (via Settings->Watched Projects) and see all open patches in your "watched changes" list: https://review.openstack.org/#/q/is:watched+status:open,n,z You can also create a personal dashboard using http://git.openstack.org/cgit/stackforge/gerrit-dash-creator/ . 
Several projects have their own dashboards, see for example: http://is.gd/openstackdocsreview Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From thierry at openstack.org Mon Dec 1 08:51:31 2014 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 01 Dec 2014 09:51:31 +0100 Subject: [openstack-dev] [gerrit] Gerrit review problem In-Reply-To: References: Message-ID: <547C2C13.60102@openstack.org> Jay Lau wrote: > > When I review a patch for OpenStack, after review finished, I want to > check more patches for this project and then after click the "Project" > content for this patch, it will **not** jump to all patches but project > description. I think it is not convenient for a reviewer if s/he wants > to review more patches for this project.
> > I usually click on the name of the branch ("master") just below to > work around that UI issue. That gives me the list of patches to review in > same branch, same project, which is generally a good approximation of > what I want. > > -- > Thierry Carrez (ttx) > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanks, Jay -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvineetmenon at gmail.com Mon Dec 1 09:13:20 2014 From: mvineetmenon at gmail.com (Vineet Menon) Date: Mon, 1 Dec 2014 10:13:20 +0100 Subject: [openstack-dev] [devstack] image create mysql error In-Reply-To: References: Message-ID:

Hi,

Looks like the password supplied wasn't correct. Have you changed the password given in local.conf after a devstack installation? If yes, you need to purge the mysql credentials stored on your machine.

Regards, Vineet Menon

On 1 December 2014 at 01:33, liuxinguo wrote: > When our CI runs devstack, it hits an error when running "image create > mysql".
Log is pasted as following: > > > > 22186 2014-11-29 21:11:48.611 | ++ basename > /opt/stack/new/devstack/files/mysql.qcow2 .qcow2 > > 22187 2014-11-29 21:11:48.623 | + image_name=mysql > > 22188 2014-11-29 21:11:48.624 | + disk_format=qcow2 > > 22189 2014-11-29 21:11:48.624 | + container_format=bare > > 22190 2014-11-29 21:11:48.624 | + is_arch ppc64 > > 22191 2014-11-29 21:11:48.628 | ++ uname -m > > 22192 2014-11-29 21:11:48.710 | + [[ i686 == \p\p\c\6\4 ]] > > 22193 2014-11-29 21:11:48.710 | + '[' bare = bare ']' > > 22194 2014-11-29 21:11:48.710 | + '[' '' = zcat ']' > > 22195 2014-11-29 21:11:48.710 | + openstack --os-token > 5387fe9c6f6d4182b09461fe232501db --os-url http://127.0.0.1:9292 image > create mysql --public --container-format=bare --disk-format qcow2 > > 22196 2014-11-29 21:11:57.275 | ERROR: openstack > > 22197 2014-11-29 21:11:57.275 | > > 22198 2014-11-29 21:11:57.275 | 401 Unauthorized > > 22199 2014-11-29 21:11:57.275 | > > 22200 2014-11-29 21:11:57.275 | > > 22201 2014-11-29 21:11:57.275 |

> > 401 Unauthorized

> > 22202 2014-11-29 21:11:57.275 | This server could not verify that you > are authorized to access the document you requested. Either you supplied > the wrong credentials (e.g., bad password), or your browser does not > understand how to supply the credentials required.

> > 22203 2014-11-29 21:11:57.275 | > > 22204 2014-11-29 21:11:57.276 | > > 22205 2014-11-29 21:11:57.276 | (HTTP 401) > > 22206 2014-11-29 21:11:57.344 | + exit_trap > > > > * Any one can give me some hint? > > * Thanks. > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Mon Dec 1 09:31:05 2014 From: aj at suse.com (Andreas Jaeger) Date: Mon, 01 Dec 2014 10:31:05 +0100 Subject: [openstack-dev] [Nova][all] Gate error In-Reply-To: <547BF250.2090208@linux.vnet.ibm.com> References: <547BF250.2090208@linux.vnet.ibm.com> Message-ID: <547C3559.1020907@suse.com> On 12/01/2014 05:45 AM, Eli Qiao wrote: > Got gate error today. > how can we kick infrastructure team to fix it? > > HTTP error 404 while getting > http://pypi.IAD.openstack.org/packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz#md5=2bf4599ae1f2ccf4603ca02c5d7e798e > (from > http://pypi.IAD.openstack.org/simple/logilab-common/ > ) > > [1]http://logs.openstack.org/03/120703/24/check/gate-nova-pep8/b745055/console.html This looks like a problem with our pypi mirror that affects other projects as well. ;( Sergey just checked and tried to fix it but couldn't. I hope that Jeremy or another infra admin can fix it once they are awake (US time). For now, please do not issue rechecks - let's wait until the issue is fixed, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi SUSE LINUX GmbH, Maxfeldstr.
5, 90409 Nürnberg, Germany GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From gilmeir at mellanox.com Mon Dec 1 10:18:53 2014 From: gilmeir at mellanox.com (Gil Meir) Date: Mon, 1 Dec 2014 10:18:53 +0000 Subject: [openstack-dev] [Fuel] [Mellanox] [5.1.1] Critical bugs found by Mellanox for v5.1.1 Message-ID:

We have found 3 critical bugs for Fuel v5.1.1 here in Mellanox:

1. https://bugs.launchpad.net/fuel/+bug/1397891 This is related to https://bugs.launchpad.net/fuel/+bug/1396020 The kernel fix there is working, but there is a missing OVS service restart, since restarting the Mellanox driver openibd requires an OVS restart. I will push a puppet fix.

2. https://bugs.launchpad.net/fuel/+bug/1397895 On our side it looks like MOS-cinder added a patch to the cinder package which has a mistake in the ISER part. We investigated it here and found the cause; the solution is a small fix in the ISERTgtAdm class, which affects only Mellanox. A patch was attached to the LP bug.

3. https://bugs.launchpad.net/fuel/+bug/1397891 This was reproduced twice. Looks like it's not related specifically to the Mellanox flow, but is a general MOS issue - mismatching tenant owners for floating-IP and port (services/admin).

Regards, Gil Meir SW Cloud Solutions Mellanox Technologies -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Mon Dec 1 10:19:10 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 01 Dec 2014 11:19:10 +0100 Subject: [openstack-dev] sqlalchemy-migrate call for reviews In-Reply-To: <20141129232815.GB2497@yuggoth.org> References: <20141129232815.GB2497@yuggoth.org> Message-ID: <547C409E.2040306@redhat.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Indeed, the review queue is non-responsive.
There are other patches in the queue that have bit-rotted there: https://review.openstack.org/#/q/status:open+project:stackforge/sqlalchemy-migrate,n,z

I guess since no one with a +2 hammer systematically monitors patches there, users are on their own and had better fork if blocked. Sad but true. (BTW, technically, monitoring is not that difficult: gerrit allows subscribing to specific projects, and this one does not look time-consuming from a reviewer's perspective.)

/Ihar

On 30/11/14 00:28, Jeremy Stanley wrote: > To anyone who reviews sqlalchemy-migrate changes, there are people > talking to themselves on GitHub about long-overdue bug fixes > because the Gerrit review queue for it is sluggish and they > apparently don't realize the SQLAM reviewers don't look at Google > Code issues[1] and GitHub pull request comments[2]. > > [1] > https://code.google.com/p/sqlalchemy-migrate/issues/detail?id=171 > [2] https://github.com/stackforge/sqlalchemy-migrate/pull/5 >

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUfECeAAoJEC5aWaUY1u57cx8H/2d8urszdd3RIsU+3JyrnVg6
I92WtoCS84HdOEE7DjM5m/tgFGjIp9Gh4lovEft5JYDcnHACfd4gdhUunt+PAvVO
2usFuPdR9IJvbKc28FJAqZeXJpvMc0KSMN4j8t1dtgu6Cv4TaFZEN77G6vrV9jem
b56npPlmpIaDpGP49XtFBHMcbU0pVJ0AQCWUd0wOX+NQl4EfF0stlvxd/1LWn9xf
rZCzatEqyRItlAB+ATpI0TlGSgvVv0PKqrV+TnoZ4OU/TZINNoCjZELB7NkmfDMz
9rJgviCmCHRyWs+VwsbEeGKDI3nBLjX7UEk5K2f93VsZQWYpW3q6Z2rrpmH977Y=
=gT/r
-----END PGP SIGNATURE-----

From priyanka.chopra at tcs.com Mon Dec 1 10:29:49 2014 From: priyanka.chopra at tcs.com (Priyanka Chopra) Date: Mon, 1 Dec 2014 15:59:49 +0530 Subject: [openstack-dev] Fw: #Personal# Ref: L3 service integration with service framework Message-ID:

Hi Kyle,

Gentle reminder. Please suggest a good time when we can discuss the blueprint - L3 router Service Type Framework. I am looking for an L3 driver and plugin to enable neutron L3 calls from OS to ODL.
Best Regards Priyanka ----- Forwarded by Priyanka Chopra/TVM/TCS on 12/01/2014 03:44 PM ----- From: Priyanka Chopra/TVM/TCS To: mestery at mestery.com Cc: kwatsen at juniper.net, Partha Datta/DEL/TCS at TCS, Deepankar Gupta/DEL/TCS at TCS Date: 11/27/2014 11:36 AM Subject: Re: [openstack-dev] #Personal# Ref: L3 service integration with service framework

Hi Kyle,

Can we set up a call to understand the current state and future developments in detail? Please suggest a good time. Will share webex/bridge details.

Best Regards Priyanka

From: Kyle Mestery To: "OpenStack Development Mailing List (not for usage questions)" Date: 11/27/2014 01:06 AM Subject: Re: [openstack-dev] #Personal# Ref: L3 service integration with service framework

There is already an out-of-tree L3 plugin, and as part of the plugin decomposition work, I'm planning to use this as the base for the new ODL driver in Kilo. Before you file specs and BPs, we should talk a bit more. Thanks, Kyle

[1] https://github.com/dave-tucker/odl-neutron-drivers

On Wed, Nov 26, 2014 at 12:53 PM, Kevin Benton wrote: > +1. In the ODL case you would just want a completely separate L3 plugin. > > On Wed, Nov 26, 2014 at 7:29 AM, Mathieu Rohon > wrote: >> >> Hi, >> >> you can still add your own service plugin, as a mixin of >> L3RouterPlugin (have a look at brocade's code). >> AFAIU the service framework would manage the coexistence of several >> implementations of a single service plugin. >> >> This is currently not prioritized by neutron. This kind of work might >> restart in the advanced_services project. >> >> On Wed, Nov 26, 2014 at 2:28 PM, Priyanka Chopra >> wrote: >> > Hi Gary, All, >> > >> > >> > This is with reference to blueprint - L3 router Service Type Framework >> > and >> > corresponding development at the github repo. >> > >> > I noticed that the patch was abandoned due to inactivity. Wanted to know >> > if >> > there is a specific reason for which the development was put on hold?
>> > >> > I am working on a Use-case to enable neutron calls (L2 and L3) from >> > OpenStack to OpenDaylight neutron. However presently ML2 forwards the L2 >> > calls to ODL neutron, but not the L3 calls (router and FIP). >> > With this blueprint submission the L3 Service framework (that includes >> > L3 >> > driver, agent and plugin) will be completed and hence L3 calls from >> > OpenStack can be redirected to any controller platform. Please suggest >> > in >> > case anyone else is working on the same or if we can do the enhancements >> > required and submit the code to enable such a usecase. >> > >> > >> > Best Regards >> > Priyanka >> > >> > =====-----=====-----===== >> > Notice: The information contained in this e-mail >> > message and/or attachments to it may contain >> > confidential or privileged information. If you are >> > not the intended recipient, any dissemination, use, >> > review, distribution, printing or copying of the >> > information contained in this e-mail message >> > and/or attachments to it are strictly prohibited. If >> > you have received this communication in error, >> > please notify us by reply e-mail or telephone and >> > immediately and permanently delete the message >> > and any attachments. 
Thank you >> > >> > >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Kevin Benton > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From eglynn at redhat.com Mon Dec 1 10:36:57 2014 From: eglynn at redhat.com (Eoghan Glynn) Date: Mon, 1 Dec 2014 05:36:57 -0500 (EST) Subject: [openstack-dev] [api] Counting resources In-Reply-To: <18E25F50-F987-48AF-AF41-D030E2377DD5@rackspace.com> References: <953956350.6024904.1416521203060.JavaMail.zimbra@redhat.com> <18E25F50-F987-48AF-AF41-D030E2377DD5@rackspace.com> Message-ID: <1506408519.11752229.1417430217629.JavaMail.zimbra@redhat.com> > detail=concise is not a media type and looking at the grammar in the RFC it > wouldn't be valid.
See > the last line in the definition of the "media-range" nonterminal in the > grammar (copied below for convenience): > Accept = "Accept" ":" > #( media-range [ accept-params ] ) > media-range = ( "*/*" > | ( type "/" "*" ) > | ( type "/" subtype ) > ) *( ";" parameter ) > accept-params = ";" "q" "=" qvalue *( accept-extension ) > accept-extension = ";" token [ "=" ( token | quoted-string ) ] > The grammar does not define the "parameter" nonterminal but there is an > example in the same section that seems to suggest what it could look like: > Accept: text/*, text/html, text/html;level=1, */* > Shaunak > On Nov 26, 2014, at 2:03 PM, Everett Toews < everett.toews at RACKSPACE.COM > > wrote: > > > > > On Nov 20, 2014, at 4:06 PM, Eoghan Glynn < eglynn at redhat.com > wrote: > > > > How about allowing the caller to specify what level of detail > they require via the Accept header? > > ? GET // > Accept: application/json; detail=concise > > "The Accept request-header field can be used to specify certain media types > which are acceptable for the response.? [1] > > detail=concise is not a media type and looking at the grammar in the RFC it > wouldn?t be valid. It?s not appropriate for the Accept header. Well it's not a media type for sure, as it's intended to be an "accept-extension". 
(which is allowed by the spec to be specified in the Accept header, in addition to media types & q-values) Cheers, Eoghan > Everett > > [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1 > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From majopela at redhat.com Mon Dec 1 10:42:45 2014 From: majopela at redhat.com (=?utf-8?Q?Miguel_=C3=81ngel_Ajo?=) Date: Mon, 1 Dec 2014 11:42:45 +0100 Subject: [openstack-dev] [neutron] force_gateway_on_subnet, please don't deprecate Message-ID: <95DCE311E0DE421D8C881E5CCD80E5D8@redhat.com>

My proposal here is: _let's not deprecate this setting_, as it's a valid use case of a gateway configuration, and let's provide it on the reference implementation.

TL;DR

I was looking at this yesterday, during a test deployment on a site where they provide external connectivity with the gateway outside the subnet. And I needed to switch it off to actually be able to have any external connectivity.

https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L121

This is handled by providing an on-link route to the gateway first, and then adding the default gateway.

It looks to me very interesting (not only because it's the only way to work on that specific site [2][3][4]), because you can dynamically wire RIPE blocks to your server, without needing to use a specific IP for external routing or broadcast purposes, and instead use the full block in openstack.

I have a tiny patch to support this on the neutron l3-agent [1]. I still need to add the logic to check "gateway outside subnet", then add the "onlink" route.
[1] diff --git a/neutron/agent/linux/interface.py b/neutron/agent/linux/interface.py index 538527b..5a9f186 100644 --- a/neutron/agent/linux/interface.py +++ b/neutron/agent/linux/interface.py @@ -116,15 +116,16 @@ class LinuxInterfaceDriver(object): namespace=namespace, ip=ip_cidr) - if gateway: - device.route.add_gateway(gateway) - new_onlink_routes = set(s['cidr'] for s in extra_subnets) + if gateway: + new_onlink_routes.update([gateway]) existing_onlink_routes = set(device.route.list_onlink_routes()) for route in new_onlink_routes - existing_onlink_routes: device.route.add_onlink_route(route) for route in existing_onlink_routes - new_onlink_routes: device.route.delete_onlink_route(route) + if gateway: + device.route.add_gateway(gateway) def delete_conntrack_state(self, root_helper, namespace, ip): """Delete conntrack state associated with an IP address. [2] http://www.soyoustart.com/ (http://www.soyoustart.com/en/essential-servers/) [3] http://www.ovh.co.uk/ (http://www.ovh.co.uk/dedicated_servers/) [4] http://www.kimsufi.com/ (http://www.kimsufi.com/uk/) Miguel Ángel Ajo -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpkshetty at gmail.com Mon Dec 1 10:44:24 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Mon, 1 Dec 2014 16:14:24 +0530 Subject: [openstack-dev] [devstack] Need reviews for "Deploy GlusterFS server" patch Message-ID: Just correcting the tag and subject line in $subject, so that it gets the attention of the right group of folks (from devstack). thanx, deepak On Mon, Dec 1, 2014 at 11:51 AM, Bharat Kumar wrote: > Hi All, > > Regarding the patch "Deploy GlusterFS Server" ( > https://review.openstack.org/#/c/133102/). > I submitted this patch a while back, and it has already got a Code Review +2. > > I think it is waiting for Workflow approval. Another task is dependent on > this patch. > Please review (Workflow) this patch and help me to merge it. 
> > -- > Thanks & Regards, > Bharat Kumar K > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gilmeir at mellanox.com Mon Dec 1 11:09:35 2014 From: gilmeir at mellanox.com (Gil Meir) Date: Mon, 1 Dec 2014 11:09:35 +0000 Subject: [openstack-dev] [Fuel] [Mellanox] [5.1.1] Critical bugs found by Mellanox for v5.1.1 In-Reply-To: References: Message-ID: <3a137143b24944289f7f623284c67c17@AMSPR05MB343.eurprd05.prod.outlook.com> I've mistakenly put the issue #1 link in issue #3, the correct link for the floating IPs issue is: https://bugs.launchpad.net/fuel/+bug/1397907 Gil From: Gil Meir [mailto:gilmeir at mellanox.com] Sent: Monday, December 01, 2014 12:19 To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [Fuel] [Mellanox] [5.1.1] Critical bugs found by Mellanox for v5.1.1 We have found 3 critical bugs for Fuel v5.1.1 here in Mellanox: 1. https://bugs.launchpad.net/fuel/+bug/1397891 This is related to https://bugs.launchpad.net/fuel/+bug/1396020 The kernel fix there is working, but there is a missing OVS service restart, since restarting the Mellanox driver openibd requires an OVS restart. I will push a puppet fix. 2. https://bugs.launchpad.net/fuel/+bug/1397895 On our side it looks like MOS-cinder added a patch to the cinder package which has a mistake in the ISER part. We investigated it here and found the cause; the solution is a small fix in the ISERTgtAdm class, which affects only Mellanox. A patch was attached to the LP bug. 3. https://bugs.launchpad.net/fuel/+bug/1397891 This was reproduced twice. Looks like it's not related specifically to the Mellanox flow, but is a general MOS issue - mismatching tenant owners for floating-IP and port (services/admin). 
Regards, Gil Meir SW Cloud Solutions Mellanox Technologies -------------- next part -------------- An HTML attachment was scrubbed... URL: From slukjanov at mirantis.com Mon Dec 1 11:12:53 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Mon, 1 Dec 2014 15:12:53 +0400 Subject: [openstack-dev] [Nova][all] Gate error In-Reply-To: <547C3559.1020907@suse.com> References: <547BF250.2090208@linux.vnet.ibm.com> <547C3559.1020907@suse.com> Message-ID: I've made a temp fix for it by removing logilab-common 0.63.2 from our mirror and starting a full re-sync of the mirrors; hopefully it'll fix everything. Thanks to lifeless for help. On Mon, Dec 1, 2014 at 12:31 PM, Andreas Jaeger wrote: > On 12/01/2014 05:45 AM, Eli Qiao wrote: > > Got a gate error today. > > How can we get the infrastructure team to fix it? > > > > HTTP error 404 while getting > > > http://pypi.IAD.openstack.org/packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz#md5=2bf4599ae1f2ccf4603ca02c5d7e798e > > < http://pypi.iad.openstack.org/packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz#md5=2bf4599ae1f2ccf4603ca02c5d7e798e > >(from > > http://pypi.IAD.openstack.org/simple/logilab-common/ > > ) > > > > [1] > http://logs.openstack.org/03/120703/24/check/gate-nova-pep8/b745055/console.html > > This looks like a problem with our pypi mirror that affects other > projects as well. ;( > > Sergey just checked and tried to fix it but couldn't. I hope that > Jeremy or another infra admin can fix it once they are awake (US time). > > For now, please do not issue rechecks - let's wait until the issue is > fixed, > > Andreas > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany > GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkaminski at mirantis.com Mon Dec 1 11:14:52 2014 From: pkaminski at mirantis.com (Przemyslaw Kaminski) Date: Mon, 01 Dec 2014 12:14:52 +0100 Subject: [openstack-dev] [Fuel] [Nailgun] Unit tests improvement meeting minutes In-Reply-To: <54789FA6.7050200@mirantis.com> References: <54789FA6.7050200@mirantis.com> Message-ID: <547C4DAC.1010906@mirantis.com> On 11/28/2014 05:15 PM, Ivan Kliuk wrote: > Hi, team! > > Let me please present the ideas collected during the unit tests > improvement meeting: > 1) Rename class ``Environment`` to something more descriptive > 2) Remove hardcoded self.clusters[0], etc. from ``Environment``. > Let's use parameters instead > 3) run_tests.sh should invoke an alternate syncdb() for cases where we > don't need to test the migration procedure, i.e. create_db_schema() > 4) Consider usage of a custom fixture provider. The main functionality > should combine loading from a YAML/JSON source and support fixture > inheritance > 5) The project needs a document (policy) which describes: > - Tests creation technique; > - Test categorization (integration/unit) and approaches to testing > different code bases > - > 6) Review the tests and refactor unit tests as described in the test > policy > 7) Mimic the Nailgun module structure in unit tests > 8) Explore the Swagger tool Swagger is a great tool, we used it in my previous job. 
We used Tornado, attached some hand-crafted code to the RequestHandler class so that it inspected all its subclasses (i.e. the different endpoints with REST methods), generated a swagger file and presented the Swagger UI (https://github.com/swagger-api/swagger-ui) under some /docs/ URL. What this gave us is that we could just add a YAML specification directly to the docstring of the handler method and it would automatically appear in the UI. It's worth noting that the UI provides an interactive form for sending requests to the API so that tinkering with the API is easy [1]. [1] https://www.dropbox.com/s/y0nuxull9mxm5nm/Swagger%20UI%202014-12-01%2012-13-06.png?dl=0 P. > -- > Sincerely yours, > Ivan Kliuk > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From slukjanov at mirantis.com Mon Dec 1 11:15:18 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Mon, 1 Dec 2014 15:15:18 +0400 Subject: [openstack-dev] logilab-common 404 in jobs Message-ID: Hey, there was a pypi mirrors issue with downloading logilab-common 0.63.2 from our mirrors: HTTP error 404 while getting http://pypi.IAD.openstack.org/packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz#md5=2bf4599ae1f2ccf4603ca02c5d7e798e (from http://pypi.IAD.openstack.org/simple/logilab-common/) Example: http://logs.openstack.org/03/120703/24/check/gate-nova-pep8/b745055/console.html I've fixed it by removing the 0.63.2 version from the index and started a full re-sync of the mirror, which should download the new version. Thanks. -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean at dague.net Mon Dec 1 11:25:06 2014 From: sean at dague.net (Sean Dague) Date: Mon, 01 Dec 2014 06:25:06 -0500 Subject: [openstack-dev] [api] Counting resources In-Reply-To: <1506408519.11752229.1417430217629.JavaMail.zimbra@redhat.com> References: <953956350.6024904.1416521203060.JavaMail.zimbra@redhat.com> <18E25F50-F987-48AF-AF41-D030E2377DD5@rackspace.com> <1506408519.11752229.1417430217629.JavaMail.zimbra@redhat.com> Message-ID: <547C5012.5030300@dague.net> On 12/01/2014 05:36 AM, Eoghan Glynn wrote: > > >> detail=concise is not a media type and looking at the grammar in the RFC it >> wouldn't be valid. >> I think the grammar would allow for "application/json; detail=concise". See >> the last line in the definition of the "media-range" nonterminal in the >> grammar (copied below for convenience): >> Accept = "Accept" ":" >> #( media-range [ accept-params ] ) >> media-range = ( "*/*" >> | ( type "/" "*" ) >> | ( type "/" subtype ) >> ) *( ";" parameter ) >> accept-params = ";" "q" "=" qvalue *( accept-extension ) >> accept-extension = ";" token [ "=" ( token | quoted-string ) ] >> The grammar does not define the "parameter" nonterminal but there is an >> example in the same section that seems to suggest what it could look like: >> Accept: text/*, text/html, text/html;level=1, */* >> Shaunak >> On Nov 26, 2014, at 2:03 PM, Everett Toews < everett.toews at RACKSPACE.COM > >> wrote: >> >> >> >> >> On Nov 20, 2014, at 4:06 PM, Eoghan Glynn < eglynn at redhat.com > wrote: >> >> >> >> How about allowing the caller to specify what level of detail >> they require via the Accept header? >> >> GET // >> Accept: application/json; detail=concise >> >> "The Accept request-header field can be used to specify certain media types >> which are acceptable for the response." [1] >> >> detail=concise is not a media type and looking at the grammar in the RFC it >> wouldn't be valid. It's not appropriate for the Accept header. 
> > Well it's not a media type for sure, as it's intended to be an > "accept-extension". > > (which is allowed by the spec to be specified in the Accept header, > in addition to media types & q-values) Please let's not use the Accept field for this. It doesn't really fit into the philosophy of content negotiation. -Sean -- Sean Dague http://dague.net From slukjanov at mirantis.com Mon Dec 1 11:29:37 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Mon, 1 Dec 2014 15:29:37 +0400 Subject: [openstack-dev] logilab-common 404 in jobs In-Reply-To: References: Message-ID: Launchpad issue - https://bugs.launchpad.net/openstack-ci/+bug/1397931 Elastic search query: http://logstash.openstack.org/#eyJzZWFyY2giOiJ0YWdzOlwiY29uc29sZVwiIEFORCBtZXNzYWdlOlwiSFRUUCBlcnJvciA0MDQgd2hpbGUgZ2V0dGluZ1wiIEFORCBtZXNzYWdlOiBcImxvZ2lsYWJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiODY0MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDE3NDMzMjc0MDY1fQ== Elastic-recheck CR: https://review.openstack.org/#/c/138040/ On Mon, Dec 1, 2014 at 2:15 PM, Sergey Lukjanov wrote: > Hey, > > there was a pypi mirrors issue with downloading logilab-common 0.63.2 from > our mirrors: > > HTTP error 404 > while getting > http://pypi.IAD.openstack.org/packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz#md5=2bf4599ae1f2ccf4603ca02c5d7e798e > (from http://pypi.IAD.openstack.org/simple/logilab-common/) > > Example: > http://logs.openstack.org/03/120703/24/check/gate-nova-pep8/b745055/console.html > > I've fixed it by removing the 0.63.2 version from the index and started the > full re-sync of the mirror, which should download the new version. > > Thanks. > > -- > Sincerely yours, > Sergey Lukjanov > Sahara Technical Lead > (OpenStack Data Processing) > Principal Software Engineer > Mirantis Inc. > -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From tengqim at linux.vnet.ibm.com Mon Dec 1 11:41:10 2014 From: tengqim at linux.vnet.ibm.com (Qiming Teng) Date: Mon, 1 Dec 2014 19:41:10 +0800 Subject: [openstack-dev] [Heat] Rework auto-scaling support in Heat In-Reply-To: <5C451B3D6BCE1443888323FE7B0AA3060A3F60E5@IRSMSX108.ger.corp.intel.com> References: <20141128073257.GA7165@localhost> <5C451B3D6BCE1443888323FE7B0AA3060A3F60E5@IRSMSX108.ger.corp.intel.com> Message-ID: <20141201114108.GA16648@localhost> On Fri, Nov 28, 2014 at 02:33:10PM +0000, Jastrzebski, Michal wrote: > > > > -----Original Message----- > > From: Qiming Teng [mailto:tengqim at linux.vnet.ibm.com] > > Sent: Friday, November 28, 2014 8:33 AM > > To: openstack-dev at lists.openstack.org > > Subject: [openstack-dev] [Heat] Rework auto-scaling support in Heat > > > > Dear all, > > > > Auto-Scaling is an important feature supported by Heat and needed by many > > users we talked to. There are two flavors of AutoScalingGroup resources in > > Heat today: the AWS-based one and the Heat native one. As more requests > > coming in, the team has proposed to separate auto-scaling support into a > > separate service so that people who are interested in it can jump onto it. At > > the same time, Heat engine (especially the resource type code) will be > > drastically simplified. The separated AS service could move forward more > > rapidly and efficiently. > > > > This work was proposed a while ago with the following wiki and blueprints > > (mostly approved during Havana cycle), but the progress is slow. A group of > > developers now volunteer to take over this work and move it forward. 
> > > > wiki: https://wiki.openstack.org/wiki/Heat/AutoScaling > > BPs: > > - https://blueprints.launchpad.net/heat/+spec/as-lib-db > > - https://blueprints.launchpad.net/heat/+spec/as-lib > > - https://blueprints.launchpad.net/heat/+spec/as-engine-db > > - https://blueprints.launchpad.net/heat/+spec/as-engine > > - https://blueprints.launchpad.net/heat/+spec/autoscaling-api > > - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-client > > - https://blueprints.launchpad.net/heat/+spec/as-api-group-resource > > - https://blueprints.launchpad.net/heat/+spec/as-api-policy-resource > > - https://blueprints.launchpad.net/heat/+spec/as-api-webhook-trigger- > > resource > > - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources > > > > Once this whole thing lands, Heat engine will talk to the AS engine in terms of > > ResourceGroup, ScalingPolicy, Webhooks. Heat engine won't care how auto- > > scaling is implemented although the AS engine may in turn ask Heat to > > create/update stacks for scaling's purpose. In theory, AS engine can > > create/destroy resources by directly invoking other OpenStack services. This > > new AutoScaling service may eventually have its own DB, engine, API, api- > > client. We can definitely aim high while working hard on real code. > > > > After reviewing the BPs/Wiki and some communication, we have two options > > to push this forward. I'm writing this to solicit ideas and comments from the > > community. > > > > Option A: Top-Down Quick Split > > ------------------------------ > > Do you want to drop support of AS from heat altogether? Many people would disagree with dropping AS (even dropping HARestarter is a problem). We don't really want to support duplicate systems, so having 2 engines of autoscaling would be wrong. > That being said, I can see a big gap which heat (or services around it) could fill - intelligent orchestration. By that I mean autohealing, auto-redeploying, autoscaling and pretty much auto-whatever. 
Clouds are fluid; we could provide a framework for that. Heat would be a great tool for that because it has the context of the whole stack, and in fact all we do would be a stack update. Michael, no. Heat will continue to support AS as it does today, though the underlying implementation will be drastically changed. There won't be two engines of AUTOSCALING, but one Heat engine focusing on orchestration and another AS engine focusing on autoscaling. > > This means we will follow a roadmap shown below, which is not 100% > > accurate yet and very rough: > > > > 1) Get the separated REST service in place and working > > 2) Switch Heat resources to use the new REST service > > > > Pros: > > - Separate code base means faster review/commit cycle > > - Less code churn in Heat > > Cons: > > - A new service needs to be installed/configured/launched > > - Need commitments from dedicated, experienced developers from the very > > beginning > > > > Option B: Bottom-Up Slow Growth > > ------------------------------- > > Personally I'd be an advocate of fixing what we have instead of making a new thing. Maybe we should make it a separate process (as long as we'll try to keep a consistent api)? Maybe add a place for new logic (autohealing?), but still keep that inside heat. > One thing - we'll need to make concurrent updates really robust when we want to make the whole thing automatic (I'm talking about convergence). So basically, you opt for option B, if I'm understanding this correctly. :) I believe the convergence work (especially the parallelization chapter) is orthogonal to autoscaling. > > The roadmap is more conservative, with many (yes, many) incremental > > patches to migrate things carefully. > > > > 1) Separate some of the autoscaling logic into libraries in Heat > > 2) Augment heat-engine with new AS RPCs > > 3) Switch AS related resource types to use the new RPCs > > 4) Add new REST service that also talks to the same RPC > > (create new GIT repo, API endpoint and client lib...) 
> > > > Pros: > > - Less risk of breaking user lands, with each revision well tested > > - Smoother transition for users in terms of upgrades > > > > Cons: > > - A lot of churn within the Heat code base, which means long review cycles > > - Still need commitments from cores to supervise the whole process > > > > There could be option C, D... but the two above are what we came up with > > during the discussion. > > > > Another important thing we talked about is the open discussion on > > this. OpenStack Wiki seems a good place to document settled designs but > > not for interactive discussions. Probably we should leverage etherpad and > > the mailing list when moving forward. Suggestions on this are also welcomed. > > Wouldn't a series of specs be better, plus a wiki/etherpad to rule them all? > > > Thanks. > > > > Regards, > > Qiming > > From vkuklin at mirantis.com Mon Dec 1 12:07:49 2014 From: vkuklin at mirantis.com (Vladimir Kuklin) Date: Mon, 1 Dec 2014 16:07:49 +0400 Subject: [openstack-dev] [Fuel] [Mellanox] [5.1.1] Critical bugs found by Mellanox for v5.1.1 In-Reply-To: <3a137143b24944289f7f623284c67c17@AMSPR05MB343.eurprd05.prod.outlook.com> References: <3a137143b24944289f7f623284c67c17@AMSPR05MB343.eurprd05.prod.outlook.com> Message-ID: Gil, I wrote a fix for 1397907. Please test and review. On Mon, Dec 1, 2014 at 2:09 PM, Gil Meir wrote: > I've mistakenly put the issue #1 link in issue #3, > > the correct link for the floating IPs issue is: > https://bugs.launchpad.net/fuel/+bug/1397907 > > > > Gil > > > > *From:* Gil Meir [mailto:gilmeir at mellanox.com] > *Sent:* Monday, December 01, 2014 12:19 > *To:* openstack-dev at lists.openstack.org > *Subject:* [openstack-dev] [Fuel] [Mellanox] [5.1.1] Critical bugs found > by Mellanox for v5.1.1 > > > > We have found 3 critical bugs for Fuel v5.1.1 here in Mellanox: > > > > 1. 
https://bugs.launchpad.net/fuel/+bug/1397891 > > This is related to https://bugs.launchpad.net/fuel/+bug/1396020 > > The kernel fix there is working, but there is a missing OVS service > restart, since restarting the Mellanox driver openibd requires an OVS restart. > > I will push a puppet fix. > > > > 2. https://bugs.launchpad.net/fuel/+bug/1397895 > > On our side it looks like MOS-cinder added a patch to the cinder package > which has a mistake in the ISER part. > > We investigated it here and found the cause; the solution is a small fix > in the ISERTgtAdm class, which affects only Mellanox. A patch was attached > to the LP bug. > > > > 3. https://bugs.launchpad.net/fuel/+bug/1397891 > > This was reproduced twice. Looks like it's not related specifically to > the Mellanox flow, but is a general MOS issue - mismatching tenant owners for > floating-IP and port (services/admin). > > > > > > > > Regards, > > > > *Gil Meir* > *SW Cloud Solutions* > > *Mellanox Technologies* > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Yours Faithfully, Vladimir Kuklin, Fuel Library Tech Lead, Mirantis, Inc. +7 (495) 640-49-04 +7 (926) 702-39-68 Skype kuklinvv 45bk3, Vorontsovskaya Str. Moscow, Russia, www.mirantis.com www.mirantis.ru vkuklin at mirantis.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amuller at redhat.com Mon Dec 1 12:12:57 2014 From: amuller at redhat.com (Assaf Muller) Date: Mon, 1 Dec 2014 07:12:57 -0500 (EST) Subject: [openstack-dev] [neutron] force_gateway_on_subnet, please don't deprecate In-Reply-To: <95DCE311E0DE421D8C881E5CCD80E5D8@redhat.com> References: <95DCE311E0DE421D8C881E5CCD80E5D8@redhat.com> Message-ID: <1384913819.11781239.1417435977062.JavaMail.zimbra@redhat.com> ----- Original Message ----- > > My proposal here is: _let's not deprecate this setting_, as it's a valid use > case of a gateway configuration, and let's provide it in the reference > implementation. I agree. As long as the reference implementation works with the setting off there's no need to deprecate it. I still think the default should be set to True though. Keep in mind that the DHCP agent will need changes as well. > > TL;DR > > I've been looking at this yesterday, during a test deployment > on a site where they provide external connectivity with the > gateway outside the subnet. > > And I needed to switch it off to actually be able to have any external > connectivity. > > https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L121 > > This is handled by providing an on-link route to the gateway first, > and then adding the default gateway. > > It looks very interesting to me (not only because it's the only way to work > on that specific site [2][3][4]), because you can dynamically wire RIPE > blocks to your server, without needing to use a specific IP for external > routing or broadcast purposes, and instead use the full block in OpenStack. > > > I have a tiny patch to support this on the neutron l3-agent [1]; I still need to > add the logic to check "gateway outside subnet", then add the "onlink" > route. 
> > > [1] > > diff --git a/neutron/agent/linux/interface.py > b/neutron/agent/linux/interface.py > index 538527b..5a9f186 100644 > --- a/neutron/agent/linux/interface.py > +++ b/neutron/agent/linux/interface.py > @@ -116,15 +116,16 @@ class LinuxInterfaceDriver(object): > namespace=namespace, > ip=ip_cidr) > > - if gateway: > - device.route.add_gateway(gateway) > - > new_onlink_routes = set(s['cidr'] for s in extra_subnets) > + if gateway: > + new_onlink_routes.update([gateway]) > existing_onlink_routes = set(device.route.list_onlink_routes()) > for route in new_onlink_routes - existing_onlink_routes: > device.route.add_onlink_route(route) > for route in existing_onlink_routes - new_onlink_routes: > device.route.delete_onlink_route(route) > + if gateway: > + device.route.add_gateway(gateway) > > def delete_conntrack_state(self, root_helper, namespace, ip): > """Delete conntrack state associated with an IP address. > > [2] http://www.soyoustart.com/ > [3] http://www.ovh.co.uk/ > [4] http://www.kimsufi.com/ > > > Miguel Ángel Ajo > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From rpodolyaka at mirantis.com Mon Dec 1 12:28:18 2014 From: rpodolyaka at mirantis.com (Roman Podoliaka) Date: Mon, 1 Dec 2014 16:28:18 +0400 Subject: [openstack-dev] [nova] Integration with Ceph In-Reply-To: References: Message-ID: Hi Sergey, AFAIU, the problem is that when Nova was designed initially, it had no notion of shared storage (e.g. Ceph), so all the resources were considered to be local to compute nodes. In that case each total value was a sum of values per node. But as we see now, that doesn't work well with Ceph, when the storage is actually shared and doesn't belong to any particular node. 
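A toy illustration of the summing problem, with hypothetical numbers rather than actual Nova code: with N nodes all backed by the same shared pool, the naive per-node sum inflates the total N-fold.

```python
def naive_total_gb(per_node_reports):
    # What summing local_gb over all nodes effectively does: the shared
    # pool gets counted once per node.
    return sum(r["local_gb"] for r in per_node_reports)

def shared_total_gb(per_node_reports):
    # With a shared backend every node sees the same pool, so it should
    # be counted exactly once.
    return per_node_reports[0]["local_gb"] if per_node_reports else 0

# Three compute nodes, all backed by one 300 GB Ceph pool.
reports = [{"local_gb": 300}, {"local_gb": 300}, {"local_gb": 300}]
print(naive_total_gb(reports))   # 900 -- inflated three-fold
print(shared_total_gb(reports))  # 300 -- the real cluster capacity
```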
It seems we've got two different, but related problems here: 1) resource tracking is incorrect, as nodes shouldn't report info about storage when shared storage is used (fixing this by reporting e.g. 0 values would require changes to nova-scheduler) 2) total storage is calculated incorrectly as we just sum the values reported by each node From my point of view, in order to fix both, it might make sense for nova-api/nova-scheduler to actually know whether shared storage is used and access Ceph directly (otherwise it's not clear which compute node we should ask for this data, and what exactly we should ask, as we don't actually know if the storage is shared in the context of nova-api/nova-scheduler processes). Thanks, Roman On Mon, Nov 24, 2014 at 3:45 PM, Sergey Nikitin wrote: > Hi, > As you know we can use Ceph as ephemeral storage in nova. But we have some > problems with its integration. First of all, the total storage of compute nodes > is calculated incorrectly (more details here > https://bugs.launchpad.net/nova/+bug/1387812). I want to fix this problem. > Now the total storage is only a sum of the storage of all compute nodes. And > information about the total storage is taken directly from the db > (https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L663-L691). > To fix the problem we should check the type of storage in use. If the type of storage > is RBD we should get information about total storage directly from the Ceph > storage. > I proposed a patch (https://review.openstack.org/#/c/132084/) which should > fix this problem, but I got the fair comment that we shouldn't check the type of > storage on the API layer. > > The other problem is that the information about the size of each compute node is incorrect > too. Now the size of each node is equal to the size of the whole Ceph cluster. > > On one hand it is good not to check the type of storage on the API layer; on > the other hand there are some reasons to check it on the API layer: > 1. 
It would be useful for live migration because now a user has to send > information about storage with the API request. > 2. It helps to fix the problem with total storage. > 3. It helps to fix the problem with the size of compute nodes. > > So I want to ask you: "Is it a good idea to get information about the type of > storage on the API layer? If not, are there any ideas to get correct > information about Ceph storage?" > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mathieu.rohon at gmail.com Mon Dec 1 12:43:01 2014 From: mathieu.rohon at gmail.com (Mathieu Rohon) Date: Mon, 1 Dec 2014 13:43:01 +0100 Subject: [openstack-dev] [Neutron] Edge-VPN and Edge-Id In-Reply-To: References: Message-ID: Hi, On Sun, Nov 30, 2014 at 8:35 AM, Ian Wells wrote: > On 27 November 2014 at 12:11, Mohammad Hanif wrote: >> >> Folks, >> >> Recently, as part of the L2 gateway thread, there was some discussion on >> BGP/MPLS/Edge VPN and how to bridge any overlay networks to the neutron >> network. Just to update everyone in the community, Ian and I have >> separately submitted specs which make an attempt to address the cloud edge >> connectivity. Below are the links describing it: >> >> Edge-Id: https://review.openstack.org/#/c/136555/ >> Edge-VPN: https://review.openstack.org/#/c/136929 . This is a resubmit of >> https://review.openstack.org/#/c/101043/ for the kilo release under the >> "Edge VPN" title. "Inter-datacenter connectivity orchestration" was just >> too long and just too generic of a title to continue discussing about :-( > > > Per the summit discussions, the difference is one of approach. > > The Edge-VPN case addresses MPLS attachments via a set of APIs to be added > to the core of Neutron. Those APIs are all new objects and don't really > change the existing API so much as extend it. 
There's talk of making it a > 'service plugin' but if it were me I would simply argue for a new service > endpoint. Keystone's good at service discovery, endpoints are pretty easy > to create and I don't see why you need to fold it in. > > The edge-id case says 'Neutron doesn't really care about what happens > outside of the cloud at this point in time, there are loads of different > edge termination types, and so the best solution would be one where the > description of the actual edge datamodel does not make its way into core > Neutron'. This avoids us folding in the information about edges in the same > way that we folded in the information about services and later regretted it. > The notable downside is that this method would work with an external network > controller such as ODL, but probably will never make its way into the > inbuilt OVS/ML2 network controller if it's implemented as described > (explicitly *because* it's designed in such a way as to keep the > functionality out of core Neutron). Basically, it's not completely > incompatible with the datamodel that the Edge-VPN change describes, but > pushes that datamodel out to an independent service which would have its own > service endpoint to avoid complicating the Neutron API with information > that, likely, Neutron itself could probably only ever validate, store and > pass on to an external controller. This is not entirely true, since a reference implementation based on existing Neutron components (L2 agent/L3 agent...) could exist. But even if it were true, this could at least give a standardized API to operators that want to connect their Neutron networks to external VPNs, without coupling their cloud solution to any particular SDN controller. And to me, this is the main issue that we want to solve by proposing some neutron specs. 
> Also, the Edge-VPN case is specified for only MPLS VPNs, and doesn't > consider other edge cases such as Kevin's switch-based edges in > https://review.openstack.org/#/c/87825/ . The edge-ID one is agnostic of > termination types (since it absolves Neutron of all of that responsibility) > and would leave the edge type description to the determination of an > external service. > > Obviously, I'm biased, having written the competing spec; but I prefer the > simple change that pushes complexity out of the core to the larger but > comprehensive change that keeps it as a part of Neutron. And in fact if you > look at the two specs with that in mind, they do go together; the Edge-VPN > model is almost precisely what you need to describe an endpoint that you > could then associate with an Edge-ID to attach it to Neutron. > -- > Ian. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sandy.walsh at RACKSPACE.COM Mon Dec 1 13:29:43 2014 From: sandy.walsh at RACKSPACE.COM (Sandy Walsh) Date: Mon, 1 Dec 2014 13:29:43 +0000 Subject: [openstack-dev] Where should Schema files live? In-Reply-To: References: <60A3427EF882A54BA0A1971AE6EF0388A535151D@ORD1EXD02.RACKSPACE.CORP> <547644AA.1010207@gmail.com> <60A3427EF882A54BA0A1971AE6EF0388A5360749@ORD1EXD02.RACKSPACE.CORP>, Message-ID: <60A3427EF882A54BA0A1971AE6EF0388A5362028@ORD1EXD02.RACKSPACE.CORP> >From: Duncan Thomas [duncan.thomas at gmail.com] >Sent: Sunday, November 30, 2014 5:40 AM >To: OpenStack Development Mailing List >Subject: Re: [openstack-dev] Where should Schema files live? > >Duncan Thomas >On Nov 27, 2014 10:32 PM, "Sandy Walsh" wrote: >> >> We were thinking each service API would expose their schema via a new /schema resource (or something). Nova would expose its schema. Glance its own. etc. This would also work well for installations still using older deployments. 
>This feels like externally exposing info that need not be external (since the notifications are not external to the deploy) and it sounds like it will potentially leak fine detailed version and maybe deployment config details that you don't want to make public - either for commercial reasons or to make targeted attacks harder > Yep, good point. Makes a good case for standing up our own service or just relying on the tarballs being in a well-known place. Thanks for the feedback. -S -------------- next part -------------- An HTML attachment was scrubbed... URL: From duncan.thomas at gmail.com Mon Dec 1 13:30:25 2014 From: duncan.thomas at gmail.com (Duncan Thomas) Date: Mon, 1 Dec 2014 15:30:25 +0200 Subject: [openstack-dev] [cinder][nova] proper syncing of cinder volume state In-Reply-To: References: <2960F1710CFACC46AF0DBFEE85B103BB231CA461@G9W0723.americas.hpqcorp.net> Message-ID: John: States that the driver can/should do some cleanup work during the transition: attaching -> available or error detaching -> available or error error -> available or error deleting -> deleted or error_deleting Also possibly wanted in future but much harder: backing_up -> available or error (need to make sure the backup service copes) restoring -> error (need to make sure the backup service copes) I haven't looked at the entire state space yet, these are the obvious ones off the top of my head On 1 December 2014 at 06:30, John Griffith wrote: > On Fri, Nov 28, 2014 at 11:25 AM, D'Angelo, Scott > wrote: > > A Cinder blueprint has been submitted to allow the python-cinderclient to > > involve the back end storage driver in resetting the state of a cinder > > volume: > > > > https://blueprints.launchpad.net/cinder/+spec/reset-state-with-driver > > > > and the spec: > > > > https://review.openstack.org/#/c/134366 > > > > > > > > This blueprint contains various use cases for a volume that may be > listed in > > the Cinder DataBase in state detaching|attaching|creating|deleting.
> > > > The Proposed solution involves augmenting the python-cinderclient command > > 'reset-state', but other options are listed, including those that > > > > involve Nova, since the state of a volume in the Nova XML found in > > /etc/libvirt/qemu/.xml may also be out-of-sync with the > > > > Cinder DB or storage back end. > > > > > > > > A related proposal for adding a new non-admin API for changing volume status > > from 'attaching' to 'error' has also been proposed: > > > > https://review.openstack.org/#/c/137503/ > > > > > > > > Some questions have arisen: > > > > 1) Should 'reset-state' command be changed at all, since it was > originally > > just to modify the Cinder DB? > > > > 2) Should 'reset-state' be fixed to prevent the naïve admin from changing > > the CinderDB to be out-of-sync with the back end storage? > > > > 3) Should 'reset-state' be kept the same, but augmented with new options? > > > > 4) Should a new command be implemented, with possibly a new admin API to > > properly sync state? > > > > 5) Should Nova be involved? If so, should this be done as a separate > body of > > work? > > > > > > > > This has proven to be a complex issue and there seems to be a good bit of > > interest. Please provide feedback, comments, and suggestions. > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > Hey Scott, > > Thanks for posting this to the ML, I stated my opinion on the spec, > but for completeness: > My feeling is that reset-state has morphed into something entirely > different than originally intended. That's actually great, nothing > wrong there at all. I strongly disagree with the statements that > "setting the status in the DB only is almost always the wrong thing to > do". The whole point was to allow the state to be changed in the DB > so the item could in most cases be deleted.
There was never an intent > (that I'm aware of) to make this some sort of uber resync and heal API > call. > > All of that history aside, I think it would be great to add some > driver interaction here. I am however very unclear on what that would > actually include. For example, would you let a Volume's state be > changed from "Error-Attaching" to "In-Use" and just run through the > process of retrying an attach? To me that seems like a bad idea. I'm > much happier with the current state of changing the state from "Error" > to "Available" (and NOTHING else) so that an operation can be retried, > or the resource can be deleted. If you start allowing any state > transition (which sadly we've started to do) you're almost never going > to get things correct. This also covers almost every situation even > though it means you have to explicitly retry operations or steps (I > don't think that's a bad thing) and make the code significantly more > robust IMO (we have some issues lately with things being robust). > > My proposal would be to go back to limiting the things you can do with > reset-state (basically make it so you can only release the resource back > to available) and add the driver interaction to clean up any mess if > possible. This could be a simple driver call added like > "make_volume_available" whereby the driver just ensures that there are > no attachments and.... well; honestly nothing else comes to mind as > being something the driver cares about here. The final option then > being to add some more power to force-delete. > > Is there anything other than attach that matters from a driver? If > people are talking error-recovery that to me is a whole different > topic and frankly I think we need to spend more time preventing errors > as opposed to trying to recover from them via new API calls. > > Curious to see if any other folks have input here?
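A minimal sketch of the restricted reset-state John describes: a whitelist that only allows releasing a resource back to "available", with an optional driver cleanup hook. Note that `make_volume_available` is the hypothetical driver call John floats above, not an existing Cinder interface.

```python
# Sketch of a restricted reset-state: only error -> available is allowed,
# and the backend driver gets a chance to clean up any mess first.
# "make_volume_available" is hypothetical (John's suggestion), not a real
# Cinder driver method.

ALLOWED_TRANSITIONS = {("error", "available")}

class InvalidTransition(Exception):
    pass

def reset_state(volume, new_status, driver=None):
    transition = (volume["status"], new_status)
    if transition not in ALLOWED_TRANSITIONS:
        raise InvalidTransition("refusing %s -> %s" % transition)
    if driver is not None:
        # e.g. let the backend ensure there are no stale attachments
        driver.make_volume_available(volume)
    volume["status"] = new_status

vol = {"id": "vol-1", "status": "error"}
reset_state(vol, "available")
print(vol["status"])  # -> available
```

Any other transition (say, "attaching" -> "in-use") raises instead of silently rewriting the DB, which is the point of John's proposal.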
> > John > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Duncan Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Mon Dec 1 14:02:55 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Mon, 1 Dec 2014 20:02:55 +0600 Subject: [openstack-dev] [mistral] Team meeting - 12/01/2014 Message-ID: <8DEBECB2-5B9C-43AE-84A3-8D34409176FB@mirantis.com> Hi, This is a reminder about the team meeting that we'll have today at #openstack-meeting at 16.00 UTC. Agenda: Review action items Current status (progress, issues, roadblocks, further plans) Release "Kilo-1" progress Open discussion (see [0] for the agenda as well as the meeting archive) [0] https://wiki.openstack.org/wiki/Meetings/MistralAgenda Renat Akhmerov @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Dec 1 14:09:33 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 1 Dec 2014 09:09:33 -0500 Subject: [openstack-dev] [keystone][oslo] Handling contexts and policy enforcement in services In-Reply-To: <1417398680.3087.1.camel@redhat.com> References: <1417398680.3087.1.camel@redhat.com> Message-ID: <8E09079C-B047-4319-9660-B1118FD9DC00@doughellmann.com> On Nov 30, 2014, at 8:51 PM, Jamie Lennox wrote: > TL;DR: I think we can handle most of oslo.context with some additions to > auth_token middleware and simplify policy enforcement (from a service > perspective) at the same time. > > There is currently a push to release oslo.context as a > library, for reference: > https://github.com/openstack/oslo.context/blob/master/oslo_context/context.py > > Whilst I love the intent to standardize this > functionality I think that many of the requirements in there > are incorrect and don't apply to all services.
It is my > understanding for example that read_only, show_deleted are > essentially nova requirements, and the use of is_admin needs > to be killed off, not standardized. > > Currently each service builds a context based on headers > made available from auth_token middleware and some > additional interpretations based on that user > authentication. Each service does this slightly differently > based on its needs/when it copied it from nova. > > I propose that auth_token middleware essentially handle the > creation and management of an authentication object that > will be passed and used by all services. This will > standardize so much of the oslo.context library that I'm not > sure it will be still needed. I bring this up now as I am > wanting to push this way and don't want to change things > after everyone has adopted oslo.context. We put the context class in its own library because both oslo.messaging and oslo.log [1] need to have input into its API. If the middleware wants to get involved in adding to the API that's fine, but context is not only used for authentication so the middleware can't own it. [1] https://review.openstack.org/132551 > > The current release of auth_token middleware creates and > passes to services (via env['keystone.token_auth']) an auth > plugin that can be passed to clients to use the current user > authentication. My intention here is to expand that object > to expose all of the authentication information required for > the services to operate. > > There are two components to context that I can see: > > - The current authentication information that is retrieved > from auth_token middleware.
> - service specific context added based on that user > information eg read_only, show_deleted, is_admin, > resource_id > > Regarding the first point of current authentication there > are three places I can see this used: > > - communicating with other services as that user > - associating resources with a user/project > - policy enforcement One of the specific logging requests we've had from operators is to have the logs show the authentication context clearly and consistently (i.e., the same format whether domains are used in the deployment or not). That's an aspect of the spec linked above. > > Addressing each of the 'current authentication' needs: > > - As mentioned for service to service communication > auth_token middleware already provides an auth_plugin > that can be used with (at this point most) of the > clients. This greatly simplifies reusing an existing > token and correctly using the service catalog as each > client would do this differently. In future this plugin > will be extended to provide support for concepts such as > filling in the X-Service-Token [1] on behalf of the > service, managing the request id, and generally > standardizing service->service communication without > requiring explicit support from every project and client. > > - Given that this authentication plugin is built within > auth_token middleware it is a fairly trivial step to > provide public properties on this object to give access > to the current user_id, project_id and other relevant > authentication data that the services can access. This is > fairly well handled today but it means it is done without > the service having to fetch all these objects from > headers. That sounds like a good source of data to populate the context object.
With the keystone team > looking to wrap policy enforcement into its own > standalone library it makes more sense to provide this > authentication object directly to policy enforcement. > This will allow the keystone team to manipulate policy > data from both auth_token and the enforcement side, > letting us introduce new features to policy transparent > to the services. It will also standardize the naming of > variables within these policy files. > > What is left for a context object after this is managing > serialization and deserialization of this auth object and > any additional fields (read_only etc) that are generally > calculated at context creation time. This would be a very > small library. That's not all it will do, but it will be small. As I mentioned above, we isolated it in its own library to control dependencies because several aspects of the system want to add to the API. > > There are still a number of steps to getting there: > > - Adding enough data to the existing authentication plugin > to allow policy enforcement and general usage. > - Making the authentication object serializable for > transmitting between services. > - Extracting policy enforcement into a library. > > However I think that this approach brings enough benefits to > hold off on releasing and standardizing the use of the > current context objects. > > I'd love to hear everyone's thoughts on this, and where it > would fall down. I see there could be some issues with how > the context would fit into nova's versioned objects for > example - but I think this would be the same issues that an > oslo.context library would face anyway. > > Jamie > > > [1] This is where service->service communication includes > the service token as well as the user token to allow smarter > policy and resource access.
For example, a user can't access > certain neutron functions directly however it should be > allowed when nova calls neutron on behalf of a user, or an > object that a service made on behalf of a user can only be > deleted when the service makes the request on behalf of that > user. > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Mon Dec 1 14:19:26 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 1 Dec 2014 09:19:26 -0500 Subject: [openstack-dev] [oslo] change to Oslo release process to support stable branch requirement caps Message-ID: <07A9234D-90F5-4131-9FA0-1A5D6BCF119C@doughellmann.com> As part of setting up version caps for Oslo and client libraries in the stable branches, we discovered that the fact that we do not always create stable branches of the Oslo libraries means the check-requirements-integration-dsvm job in stable branches is actually looking at the requirements in master branches of the libraries, and failing in a lot of cases. With the move away from alpha version numbers toward version caps and patch releases, we're going to change the Oslo release processes so that at the end of a cycle we always create a stable branch from the final version of each library released in the cycle. Thanks to Clark and Thierry for helping by creating stable/icehouse and stable/juno branches for most of the libraries, though we've discovered one or two that we missed so we're still working on a few cases. For Kilo, we will branch all of the library repositories at the end of the cycle, probably following the same process as is used for the other projects (though the details remain to be worked out).
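In a stable branch's requirements file, the version caps Doug describes would look something like this (a sketch only; the version numbers are illustrative, not the actual Juno caps):

```
# stable/juno requirements (illustrative versions only): allow patch
# releases cut from the library's stable branch, but never the next
# feature release from master
oslo.config>=1.4.0,<1.5.0
oslo.messaging>=1.4.0,<1.5.0
```

With caps like these in place, always having a stable branch to cut patch releases from is what keeps the capped range from going stale.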
Thanks, Doug From ihrachys at redhat.com Mon Dec 1 14:31:14 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 01 Dec 2014 15:31:14 +0100 Subject: [openstack-dev] [oslo] change to Oslo release process to support stable branch requirement caps In-Reply-To: <07A9234D-90F5-4131-9FA0-1A5D6BCF119C@doughellmann.com> References: <07A9234D-90F5-4131-9FA0-1A5D6BCF119C@doughellmann.com> Message-ID: <547C7BB2.6080009@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 Are we going to have stable releases for those branches? On 01/12/14 15:19, Doug Hellmann wrote: > As part of setting up version caps for Oslo and client libraries in > the stable branches, we discovered that the fact that we do not > always create stable branches of the Oslo libraries means the > check-requirements-integration-dsvm job in stable branches is > actually looking at the requirements in master branches of the > libraries, and failing in a lot of cases. With the move away from > alpha version numbers toward version caps and patch releases, we're > going to change the Oslo release processes so that at the end of a > cycle we always create a stable branch from the final version of > each library released in the cycle. > > Thanks to Clark and Thierry for helping by creating stable/icehouse > and stable/juno branches for most of the libraries, though we've > discovered one or two that we missed so we're still working on a > few cases. > > For Kilo, we will branch all of the library repositories at the end > of the cycle, probably following the same process as is used for > the other projects (though the details remain to be worked out).
> > Thanks, Doug _______________________________________________ > OpenStack-dev mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUfHuyAAoJEC5aWaUY1u57yVAH/Al27yWpaEFSuRMuni+ItdTW +avoRoVgpeYLR8kJqo/P2YBdht12ddVjU/JZh1VBvcN7KHClvd6gyBBpAXlq66aQ lRG2uNSy6+ufcaE7UTyt/beEmyNpZvW/yyaknvwmAZaU1+h/9ZnFByf6WgtuwsYr o3N02GzUJkUF0MNGj14eWKGTTTO1M/xbj20ZttKLn1fifPp+pjg2dLrnXYqXdEVW rzenlcCifPLWhHRh4PPJmPrgJYjFgLp5FYEbxMAEhxOdPHR+UiLDmifrYYLzMcMy hDBjp38ej35LyfHzxxHUkm7Km4P/p/Cuib4Zw77FIVnspTBORNnPovOh/Lf/yJ8= =GJIn -----END PGP SIGNATURE----- From doug at doughellmann.com Mon Dec 1 14:37:51 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 1 Dec 2014 09:37:51 -0500 Subject: [openstack-dev] [oslo] change to Oslo release process to support stable branch requirement caps In-Reply-To: <547C7BB2.6080009@redhat.com> References: <07A9234D-90F5-4131-9FA0-1A5D6BCF119C@doughellmann.com> <547C7BB2.6080009@redhat.com> Message-ID: <3B0EFE04-B1A7-4C45-8C18-073A670318D9@doughellmann.com> On Dec 1, 2014, at 9:31 AM, Ihar Hrachyshka wrote: > Signed PGP part > Are we going to have stable releases for those branches? We will possibly have point or patch releases based on those branches, as fixes end up needing to be backported. We have already done this in a few cases, which is why some libraries had stable branches already. > > On 01/12/14 15:19, Doug Hellmann wrote: > > As part of setting up version caps for Oslo and client libraries in > > the stable branches, we discovered that the fact that we do not > > always create stable branches of the Oslo libraries means the > > check-requirements-integration-dsvm job in stable branches is > > actually looking at the requirements in master branches of the > > libraries, and failing in a lot of cases. 
With the move away from > > alpha version numbers toward version caps and patch releases, we're > > going to change the Oslo release processes so that at the end of a > > cycle we always create a stable branch from the final version of > > each library released in the cycle. > > > > Thanks to Clark and Thierry for helping by creating stable/icehouse > > and stable/juno branches for most of the libraries, though we've > > discovered one or two that we missed so we're still working on a > > few cases. > > > > For Kilo, we will branch all of the library repositories at the end > > of the cycle, probably following the same process as is used for > > the other projects (though the details remain to be worked out). > > > > Thanks, Doug _______________________________________________ > > OpenStack-dev mailing list OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kragniz at gmail.com Mon Dec 1 14:37:58 2014 From: kragniz at gmail.com (Louis Taylor) Date: Mon, 1 Dec 2014 14:37:58 +0000 Subject: [openstack-dev] [glance] Deprecating osprofiler option 'enabled' in favour of 'profiler_enabled' Message-ID: <20141201143756.GA1814@gmail.com> Hi all, In order to enable or disable osprofiler in Glance, we currently have an option: [profiler] # If False fully disable profiling feature. enabled = False However, all other services with osprofiler integration use a similar option named profiler_enabled. For consistency, I'm proposing we deprecate this option's name in favour of profiler_enabled. This should make it easier for someone to configure osprofiler across projects with less confusion. Does anyone have any thoughts or concerns about this?
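The fallback logic such a rename implies can be sketched as below, using stdlib configparser purely for illustration; an actual Glance change would go through oslo.config's option-deprecation support rather than hand-rolled parsing.

```python
# Sketch of reading the renamed option with backward compatibility:
# prefer "profiler_enabled", fall back to the deprecated "enabled".
# Uses stdlib configparser for illustration only.
import configparser

def profiler_enabled(conf_text):
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    section = cfg["profiler"] if "profiler" in cfg else {}
    if "profiler_enabled" in section:
        return cfg.getboolean("profiler", "profiler_enabled")
    if "enabled" in section:
        # deprecated spelling, honoured for a deprecation period
        return cfg.getboolean("profiler", "enabled")
    return False

old_style = "[profiler]\nenabled = True\n"
new_style = "[profiler]\nprofiler_enabled = True\n"
print(profiler_enabled(old_style), profiler_enabled(new_style))  # True True
```

Existing configs keep working during the deprecation period, while new configs can use the consistent name.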
Thanks, Louis -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: Digital signature URL: From thierry at openstack.org Mon Dec 1 14:57:09 2014 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 01 Dec 2014 15:57:09 +0100 Subject: [openstack-dev] [oslo] change to Oslo release process to support stable branch requirement caps In-Reply-To: <3B0EFE04-B1A7-4C45-8C18-073A670318D9@doughellmann.com> References: <07A9234D-90F5-4131-9FA0-1A5D6BCF119C@doughellmann.com> <547C7BB2.6080009@redhat.com> <3B0EFE04-B1A7-4C45-8C18-073A670318D9@doughellmann.com> Message-ID: <547C81C5.4030701@openstack.org> Doug Hellmann wrote: > On Dec 1, 2014, at 9:31 AM, Ihar Hrachyshka wrote: > >> Are we going to have stable releases for those branches? > > We will possibly have point or patch releases based on those branches, as fixes end up needing to be backported. We have already done this in a few cases, which is why some libraries had stable branches already. We won't do stable coordinated point releases though, since those libraries are on their own semver versioning scheme. Cheers, -- Thierry Carrez (ttx) From dpyzhov at mirantis.com Mon Dec 1 14:58:44 2014 From: dpyzhov at mirantis.com (Dmitry Pyzhov) Date: Mon, 1 Dec 2014 18:58:44 +0400 Subject: [openstack-dev] [Fuel] rpm packages versions Message-ID: Just FYI. We have updated versions of nailgun-related packages in 6.0: https://review.openstack.org/#/c/137886/ https://review.openstack.org/#/c/137887/ https://review.openstack.org/#/c/137888/ https://review.openstack.org/#/c/137889/ We need it in order to support package updates both in the current version and in stable releases. I've updated our HCF template, so we will not forget to update it in the next releases. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gkotton at vmware.com Mon Dec 1 15:12:44 2014 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 1 Dec 2014 15:12:44 +0000 Subject: [openstack-dev] [nova] Integration with Ceph In-Reply-To: References: Message-ID: Hi, One additional issue to take into account here is that the maximum free disk space reported by the backend may not be contiguous. Nova should be made aware of that too. Thanks Gary On 12/1/14, 2:28 PM, "Roman Podoliaka" wrote: >Hi Sergey, > >AFAIU, the problem is that when Nova was designed initially, it had no >notion of shared storage (e.g. Ceph), so all the resources were >considered to be local to compute nodes. In that case each total value >was a sum of values per node. But as we see now, that doesn't work >well with Ceph, when the storage is actually shared and doesn't belong >to any particular node. > >It seems we've got two different, but related problems here: > >1) resource tracking is incorrect, as nodes shouldn't report info >about storage when shared storage is used (fixing this by reporting >e.g. 0 values would require changes to nova-scheduler) > >2) total storage is calculated incorrectly as we just sum the values >reported by each node > >From my point of view, in order to fix both, it might make sense for >nova-api/nova-scheduler to actually know if shared storage is used >and access Ceph directly (otherwise, it's not clear which compute >node we should ask for this data, and what exactly we should ask, as >we don't actually know if the storage is shared in the context of >nova-api/nova-scheduler processes). > >Thanks, >Roman > >On Mon, Nov 24, 2014 at 3:45 PM, Sergey Nikitin >wrote: >> Hi, >> As you know we can use Ceph as ephemeral storage in nova. But we have >>some >> problems with its integration. First of all, total storage of compute >>nodes >> is calculated incorrectly. (more details here >> https://bugs.launchpad.net/nova/+bug/1387812). I want to fix this >>problem.
>> Now size of total storage is only a sum of storage of all compute >> nodes. And >> information about the total storage is taken directly from the db >> (https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L663-L691). >> To fix the problem we should check the type of storage being used. If the type of storage >> is RBD we should get information about total storage directly from the Ceph >> storage. >> I proposed a patch (https://review.openstack.org/#/c/132084/) which should >> fix this problem, but I got the fair comment that we shouldn't check the type of >> storage on the API layer. >> >> The other problem is that information about the size of a compute node is incorrect >> too. Now the size of each node is equal to the size of the whole Ceph cluster. >> >> On one hand it is good not to check the type of storage on the API >> layer, on >> the other hand there are some reasons to check it on the API layer: >> 1. It would be useful for live migration because now a user has to send >> information about storage with the API request. >> 2. It helps to fix the problem with total storage. >> 3. It helps to fix the problem with the size of compute nodes. >> >> So I want to ask you: "Is this a good idea to get information about the type of >> storage on the API layer? If not - are there any ideas to get correct >> information about Ceph storage?"
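Querying the cluster directly, as Sergey suggests, could be sketched like this. The stats dict shape mirrors what python-rados' `Rados.get_cluster_stats()` returns (`kb`, `kb_used`, `kb_avail`); the conversion is kept as a pure function so no live cluster is needed, and the sample numbers are made up for illustration.

```python
# Sketch: derive total/free capacity for the whole Ceph cluster from
# cluster-wide stats, instead of summing per-compute-node values.
# The dict shape mirrors rados.Rados.get_cluster_stats(); numbers are
# invented for the example.

def cluster_capacity_gb(stats):
    to_gb = lambda kb: kb // (1024 * 1024)  # KiB -> GiB
    return {
        "total_gb": to_gb(stats["kb"]),
        "used_gb": to_gb(stats["kb_used"]),
        "free_gb": to_gb(stats["kb_avail"]),
    }

# Against a live cluster the stats would come from something like
# (assumption, not exercised here):
#   cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
#   cluster.connect()
#   stats = cluster.get_cluster_stats()
stats = {"kb": 3 * 1024**3, "kb_used": 1024**3, "kb_avail": 2 * 1024**3}
print(cluster_capacity_gb(stats))
# -> {'total_gb': 3072, 'used_gb': 1024, 'free_gb': 2048}
```

Every compute node backed by the same cluster would otherwise report these same totals, which is exactly the double-counting described above.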
>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From matt.wood at hp.com Mon Dec 1 15:31:25 2014 From: matt.wood at hp.com (Wood, Matthew David (HP Cloud - Horizon)) Date: Mon, 1 Dec 2014 15:31:25 +0000 Subject: [openstack-dev] [horizon] REST and Django In-Reply-To: References: Message-ID: In theory, for many cases, the service(s) will allow this to happen with ~1 REST call. I THINK this was a big part of the batch action code, at least at the beginning, though I think we've (unfortunately) started moving away from that idea. -- Matthew Wood HP Cloud Services Full-Stack Engineer Python Lover matt.wood at hp.com 303.818.7497 From: Thai Q Tran Reply: OpenStack Development Mailing List (not for usage questions) > Date: November 30, 2014 at 10:20:29 PM To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [horizon] REST and Django I agree that keeping the API layer thin would be ideal. I should add that having discrete API calls would allow dynamic population of tables. However, I will make a case where it might be necessary to add additional APIs. Consider that you want to delete 3 items in a given table. If you do this on the client side, you would need to perform: n * (1 API request + 1 AJAX request) If you have some logic on the server side that batches delete actions: n * (1 API request) + 1 AJAX request Consider the following: n = 1, client = 2 trips, server = 2 trips n = 3, client = 6 trips, server = 4 trips n = 10, client = 20 trips, server = 11 trips n = 100, client = 200 trips, server = 101 trips As you can see, this does not scale very well.... something to consider...
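Thai's round-trip arithmetic above can be checked in a few lines: fanning out from the browser costs 2n trips (n AJAX + n API), while server-side batching costs n + 1 (1 AJAX + n API).

```python
# Verify the round-trip counts quoted above for n deletions.

def client_side_trips(n):
    return 2 * n       # n AJAX requests + n API requests

def server_side_trips(n):
    return n + 1       # 1 batched AJAX request + n API requests

for n in (1, 3, 10, 100):
    print(n, client_side_trips(n), server_side_trips(n))
# -> 1 2 2
# -> 3 6 4
# -> 10 20 11
# -> 100 200 101
```

So batching roughly halves the total round trips for large n, though only the browser-to-Horizon leg is saved; the Horizon-to-service calls remain linear in n either way.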
From: Richard Jones To: "Tripp, Travis S" , OpenStack List Date: 11/27/2014 05:38 PM Subject: Re: [openstack-dev] [horizon] REST and Django ________________________________ On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S wrote: Hi Richard, You are right, we should put this out on the main ML, so copying thread out to there. ML: FYI that this started after some impromptu IRC discussions about a specific patch led into an impromptu google hangout discussion with all the people on the thread below. Thanks Travis! As I mentioned in the review[1], Thai and I were mainly discussing the possible performance implications of network hops from client to horizon server and whether or not any aggregation should occur server side. In other words, some views require several APIs to be queried before any data can be displayed and it would eliminate some extra network requests from client to server if some of the data was first collected on the server side across service APIs. For example, the launch instance wizard will need to collect data from quite a few APIs before even the first step is displayed (I've listed those out in the blueprint [2]). The flip side to that (as you also pointed out) is that if we keep the APIs fine grained then the wizard will be able to optimize in one place the calls for data as it is needed. For example, the first step may only need half of the API calls. It also could lead to perceived performance increases just due to the wizard making a call for different data independently and displaying it as soon as it can. Indeed, looking at the current launch wizard code it seems like you wouldn't need to load all that data for the wizard to be displayed, since only some subset of it would be necessary to display any given panel of the wizard.
I tend to lean towards your POV of starting with discrete API calls and letting the client optimize calls. If there are performance problems or other reasons then doing data aggregation on the server side could be considered at that point. I'm glad to hear it. I'm a fan of optimising when necessary, and not beforehand :) Of course if anybody is able to do some performance testing between the two approaches then that could affect the direction taken. I would certainly like to see us take some measurements when performance issues pop up. Optimising without solid metrics is a bad idea :) Richard [1] https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py [2] https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign -Travis
I've completed the work I pledged this morning, so now the REST API change set has: - no rest framework dependency - AJAX scaffolding in openstack_dashboard.api.rest.utils - code in openstack_dashboard/api/rest/ - renamed the API from "identity" to "keystone" to be consistent - added a sample of testing, mostly for my own sanity to check things were working https://review.openstack.org/#/c/136676 Richard On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S > wrote: Hello all, Great discussion on the REST urls today! I think that we are on track to come to a common REST API usage pattern. To provide quick summary: We all agreed that going to a straight REST pattern rather than through tables was a good idea. We discussed using direct get / post in Django views like what Max originally used[1][2] and Thai also started[3] with the identity table rework or to go with djangorestframework [5] like what Richard was prototyping with[4]. The main things we would use from Django Rest Framework were built in JSON serialization (avoid boilerplate), better exception handling, and some request wrapping. However, we all weren?t sure about the need for a full new framework just for that. At the end of the conversation, we decided that it was a cleaner approach, but Richard would see if he could provide some utility code to do that much for us without requiring the full framework. David voiced that he doesn?t want us building out a whole framework on our own either. So, Richard will do some investigation during his day today and get back to us. Whatever the case, we?ll get a patch in horizon for the base dependency (framework or Richard?s utilities) that both Thai?s work and the launch instance work is dependent upon. We?ll build REST style API?s using the same pattern. We will likely put the rest api?s in horizon/openstack_dashboard/api/rest/. 
[1] https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/keypair.py [2] https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/launch.py [3] https://review.openstack.org/#/c/133767/8/openstack_dashboard/dashboards/identity/users/views.py [4] https://review.openstack.org/#/c/136676/4/openstack_dashboard/rest_api/identity.py [5] http://www.django-rest-framework.org/ Thanks, Travis_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: 1__=07BBF732DF9D58788f9e8a93df938 at us.ibm Type: application/octet-stream Size: 105 bytes Desc: 1__=07BBF732DF9D58788f9e8a93df938 at us.ibm URL: From ian.cordasco at RACKSPACE.COM Mon Dec 1 15:43:21 2014 From: ian.cordasco at RACKSPACE.COM (Ian Cordasco) Date: Mon, 1 Dec 2014 15:43:21 +0000 Subject: [openstack-dev] [glance] Deprecating osprofiler option 'enabled' in favour of 'profiler_enabled' In-Reply-To: <20141201143756.GA1814@gmail.com> Message-ID: On 12/1/14, 08:37, "Louis Taylor" wrote: >Hi all, > >In order to enable or disable osprofiler in Glance, we currently have an >option: > > [profiler] > # If False fully disable profiling feature. > enabled = False > >However, all other services with osprofiler integration use a similar >option >named profiler_enabled. > >For consistency, I'm proposing we deprecate this option's name in favour >of >profiler_enabled. This should make it easier for someone to configure >osprofiler across projects with less confusion. Does anyone have any >thoughts >or concerns about this? > >Thanks, >Louis We *just* introduced this if I remember the IRC discussion from last month. 
I'm not sure how many people will be immediately making use of it. I'm in favor of consistency where possible and while this would require a deprecation, I think it's a worthwhile change. +1 from me -- Ian From ijw.ubuntu at cack.org.uk Mon Dec 1 15:46:02 2014 From: ijw.ubuntu at cack.org.uk (Ian Wells) Date: Mon, 1 Dec 2014 07:46:02 -0800 Subject: [openstack-dev] [Neutron] Edge-VPN and Edge-Id In-Reply-To: References: Message-ID: On 1 December 2014 at 04:43, Mathieu Rohon wrote: > This is not entirely true, as soon as a reference implementation, > based on existing Neutron components (L2agent/L3agent...) can exist. > The specific thing I was saying is that that's harder with an edge-id mechanism than one incorporated into Neutron, because the point of the edge-id proposal is to make tunnelling explicitly *not* a responsibility of Neutron. So how do you get the agents to terminate tunnels when Neutron doesn't know anything about tunnels and the agents are a part of Neutron? Conversely, you can add a mechanism to the OVS subsystem so that you can tap an L2 bridge into a network, which would probably be more straightforward. But even if it were true, this could at least give a standardized API > to Operators that want to connect their Neutron networks to external > VPNs, without coupling their cloud solution with whatever SDN > controller. And to me, this is the main issue that we want to solve by > proposing some neutron specs. > So the issue I worry about here is that if we start down the path of adding the MPLS datamodels to Neutron we have to add Kevin's switch control work. And the L2VPN descriptions for GRE, L2TPv3, VxLAN, and EVPN. And whatever else comes along. And we get back to 'that's a lot of big changes that aren't interesting to 90% of Neutron users' - difficult to get in and a lot of overhead to maintain for the majority of Neutron developers who don't want or need it. -- Ian. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Mon Dec 1 16:15:21 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 1 Dec 2014 16:15:21 +0000 Subject: [openstack-dev] [Nova] Gate error In-Reply-To: <547BF250.2090208@linux.vnet.ibm.com> References: <547BF250.2090208@linux.vnet.ibm.com> Message-ID: <20141201161521.GC2497@yuggoth.org> On 2014-12-01 12:45:04 +0800 (+0800), Eli Qiao wrote: [...] > HTTP error 404 while getting http://pypi.IAD.openstack.org/ > packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz [...] This was reported as https://launchpad.net/bugs/1397931 and handled a few hours ago. -- Jeremy Stanley From fungi at yuggoth.org Mon Dec 1 16:18:32 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 1 Dec 2014 16:18:32 +0000 Subject: [openstack-dev] [gerrit] Gerrit review problem In-Reply-To: References: <547C2C13.60102@openstack.org> Message-ID: <20141201161832.GD2497@yuggoth.org> On 2014-12-01 17:04:23 +0800 (+0800), Jay Lau wrote: > Cool, Thierry! I see. This is really what I want ;-) Also note that we're not running a modified version of Gerrit, so if a behavior change is desired it likely needs to be reported at http://code.google.com/p/gerrit/issues instead. -- Jeremy Stanley From thierry at openstack.org Mon Dec 1 16:39:45 2014 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 01 Dec 2014 17:39:45 +0100 Subject: [openstack-dev] Cross-Project meeting tomorrow, Tue December 2nd, 21:00 UTC Message-ID: <547C99D1.3020907@openstack.org> Dear PTLs, cross-project liaisons and anyone else interested, We'll have a cross-project meeting tomorrow at 21:00 UTC, with the following agenda: * Convergence on specs process (johnthetubaguy) * Incompatible rework of client libraries (notmyname, morganfainberg) * New work in openstack-sdk, keep python client library for compatibility and CLI * 2014.1.2 point release status * Open discussion & announcements See you there ! 
NB: This meeting replaces the previous Project/Release meeting with a more obviously cross-project agenda. For more details, please see: https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting -- Thierry Carrez (ttx) From lucasagomes at gmail.com Mon Dec 1 16:44:17 2014 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 1 Dec 2014 16:44:17 +0000 Subject: [openstack-dev] [Ironic] A mascot for Ironic In-Reply-To: <546BA197.9040605@redhat.com> References: <546BA197.9040605@redhat.com> Message-ID: Hi all, I'm sorry for the long delay on this I've been dragged into some other stuff :) But anyway, now it's time!!!! I've asked the core Ironic team to narrow down the name options (we had too many, thanks to everyone that contributed) the list of finalists is in the poll right here: http://doodle.com/9h4ncgx4etkyfgdw. So please vote and help us choose the best name for the new mascot! Cheers, Lucas On Tue, Nov 18, 2014 at 7:44 PM, Nathan Kinder wrote: > > > On 11/16/2014 10:51 AM, David Shrewsbury wrote: > > > >> On Nov 16, 2014, at 8:57 AM, Chris K >> > wrote: > >> > >> How cute. > >> > >> maybe we could call him bear-thoven. > >> > >> Chris > >> > > > > I like Blaze Bearly, lead singer for Ironic Maiden. :) > > > > https://en.wikipedia.org/wiki/Blaze_Bayley > > Good call! I never thought I'd see a Blaze Bayley reference on this > list. :) Just watch out for imposters... > > http://en.wikipedia.org/wiki/Slow_Riot_for_New_Zer%C3%B8_Kanada#BBF3 > > > > > > >> > >> On Sun, Nov 16, 2014 at 5:14 AM, Lucas Alvares Gomes > >> > wrote: > >> > >> Hi Ironickers, > >> > >> I was thinking this weekend: All the cool projects does have a > mascot > >> so I thought that we could have one for Ironic too. > >> > >> The idea about what the mascot would be was easy because the RAX > guys > >> put "bear metal" their presentation[1] and that totally rocks! So I > >> drew a bear. 
It also needed an instrument, at first I thought about > a > >> guitar, but drums is actually my favorite instrument so I drew a > pair > >> of drumsticks instead. > >> > >> The drawing thing wasn't that hard, the problem was to digitalize > it. > >> So I scanned the thing and went to youtube to watch some tutorials > >> about gimp and inkspace to learn how to vectorize it. Magic, it > >> worked! > >> > >> Attached in the email there's the original draw, the vectorized > >> version without colors and the final version of it (with colors). > >> > >> Of course, I know some people does have better skills than I do, so > I > >> also attached the inkspace file of the final version in case people > >> want to tweak it :) > >> > >> So, what you guys think about making this little drummer bear the > >> mascot of the Ironic project? > >> > >> Ahh he also needs a name. So please send some suggestions and we can > >> vote on the best name for him. > >> > >> [1] http://www.youtube.com/watch?v=2Oi2T2pSGDU#t=90 > >> > >> Lucas > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lucasagomes at gmail.com Mon Dec 1 16:47:26 2014 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 1 Dec 2014 16:47:26 +0000 Subject: [openstack-dev] [Ironic] A mascot for Ironic In-Reply-To: References: <546BA197.9040605@redhat.com> Message-ID: Ah forgot to say, Please add your launchpad ID on the Name Field. And I will close the poll on Wednesday at 18:00 UTC (I think it's enough time to everyone take a look at it) Cheers, Lucas On Mon, Dec 1, 2014 at 4:44 PM, Lucas Alvares Gomes wrote: > Hi all, > > I'm sorry for the long delay on this I've been dragged into some other > stuff :) But anyway, now it's time!!!! > > I've asked the core Ironic team to narrow down the name options (we had > too many, thanks to everyone that contributed) the list of finalists is in > the poll right here: http://doodle.com/9h4ncgx4etkyfgdw. So please vote > and help us choose the best name for the new mascot! > > Cheers, > Lucas > > On Tue, Nov 18, 2014 at 7:44 PM, Nathan Kinder wrote: > >> >> >> On 11/16/2014 10:51 AM, David Shrewsbury wrote: >> > >> >> On Nov 16, 2014, at 8:57 AM, Chris K > >> > wrote: >> >> >> >> How cute. >> >> >> >> maybe we could call him bear-thoven. >> >> >> >> Chris >> >> >> > >> > I like Blaze Bearly, lead singer for Ironic Maiden. :) >> > >> > https://en.wikipedia.org/wiki/Blaze_Bayley >> >> Good call! I never thought I'd see a Blaze Bayley reference on this >> list. :) Just watch out for imposters... >> >> http://en.wikipedia.org/wiki/Slow_Riot_for_New_Zer%C3%B8_Kanada#BBF3 >> >> > >> > >> >> >> >> On Sun, Nov 16, 2014 at 5:14 AM, Lucas Alvares Gomes >> >> > wrote: >> >> >> >> Hi Ironickers, >> >> >> >> I was thinking this weekend: All the cool projects does have a >> mascot >> >> so I thought that we could have one for Ironic too. >> >> >> >> The idea about what the mascot would be was easy because the RAX >> guys >> >> put "bear metal" their presentation[1] and that totally rocks! So I >> >> drew a bear. 
It also needed an instrument, at first I thought >> about a >> >> guitar, but drums is actually my favorite instrument so I drew a >> pair >> >> of drumsticks instead. >> >> >> >> The drawing thing wasn't that hard, the problem was to digitalize >> it. >> >> So I scanned the thing and went to youtube to watch some tutorials >> >> about gimp and inkspace to learn how to vectorize it. Magic, it >> >> worked! >> >> >> >> Attached in the email there's the original draw, the vectorized >> >> version without colors and the final version of it (with colors). >> >> >> >> Of course, I know some people does have better skills than I do, >> so I >> >> also attached the inkspace file of the final version in case people >> >> want to tweak it :) >> >> >> >> So, what you guys think about making this little drummer bear the >> >> mascot of the Ironic project? >> >> >> >> Ahh he also needs a name. So please send some suggestions and we >> can >> >> vote on the best name for him. >> >> >> >> [1] http://www.youtube.com/watch?v=2Oi2T2pSGDU#t=90 >> >> >> >> Lucas >> >> >> >> _______________________________________________ >> >> OpenStack-dev mailing list >> >> OpenStack-dev at lists.openstack.org >> >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> _______________________________________________ >> >> OpenStack-dev mailing list >> >> OpenStack-dev at lists.openstack.org >> >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mathieu.rohon at gmail.com Mon Dec 1 17:01:01 2014 From: mathieu.rohon at gmail.com (Mathieu Rohon) Date: Mon, 1 Dec 2014 18:01:01 +0100 Subject: [openstack-dev] [Neutron] Edge-VPN and Edge-Id In-Reply-To: References: Message-ID: On Mon, Dec 1, 2014 at 4:46 PM, Ian Wells wrote: > On 1 December 2014 at 04:43, Mathieu Rohon wrote: >> >> This is not entirely true, as soon as a reference implementation, >> based on existing Neutron components (L2agent/L3agent...) can exist. > > > The specific thing I was saying is that that's harder with an edge-id > mechanism than one incorporated into Neutron, because the point of the > edge-id proposal is to make tunnelling explicitly *not* a responsibility of > Neutron. So how do you get the agents to terminate tunnels when Neutron > doesn't know anything about tunnels and the agents are a part of Neutron? by having modular agents that can drive the dataplane with pluggable components that would be part of any advanced service. This is a way to move forward on splitting out advanced services. > Conversely, you can add a mechanism to the OVS subsystem so that you can tap > an L2 bridge into a network, which would probably be more straightforward. This is an alternative that would say: you want an advanced service for your VM, please stretch your l2 network to this external component, that is driven by an external controller, and make your traffic go to this component to benefit from this advanced service. This is a valid alternative of course, but distributing the service directly to each compute node is much more valuable, as soon as it is doable. >> But even if it were true, this could at least give a standardized API >> to Operators that want to connect their Neutron networks to external >> VPNs, without coupling their cloud solution with whatever SDN >> controller. And to me, this is the main issue that we want to solve by >> proposing some neutron specs. 
> > So the issue I worry about here is that if we start down the path of adding > the MPLS datamodels to Neutron we have to add Kevin's switch control work. > And the L2VPN descriptions for GRE, L2TPv3, VxLAN, and EVPN. And whatever > else comes along. And we get back to 'that's a lot of big changes that > aren't interesting to 90% of Neutron users' - difficult to get in and a lot > of overhead to maintain for the majority of Neutron developers who don't > want or need it. This shouldn't be a lot of big changes, once the interfaces between advanced services and neutron core services are cleaner. The description of the interconnection has to be done somewhere, and neutron and its advanced services are a good candidate for that. > -- > Ian. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From rakhmerov at mirantis.com Mon Dec 1 17:04:37 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Mon, 1 Dec 2014 23:04:37 +0600 Subject: [openstack-dev] [mistral] Team meeting minutes/log - 12/01/2014 Message-ID: Thanks for joining our team meeting today! Meeting minutes: http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-12-01-16.00.html Meeting log: http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-12-01-16.00.log.html The next meeting is scheduled for Dec 8 at the same time. Renat Akhmerov @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at nemebean.com Mon Dec 1 17:25:10 2014 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 01 Dec 2014 11:25:10 -0600 Subject: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023) In-Reply-To: References: <54755DE7.2070107@redhat.com> <547607C3.3090604@nemebean.com> Message-ID: <547CA476.8080203@nemebean.com> Okay, boiling my thoughts down further: James's (valid, IMHO) concerns aside, I want to see one of two things before I'm anything but -1 on this change: 1) A specific reason SHELLOPTS can't be used. Nobody has given me one besides hand-wavy "it might not work" stuff. FTR, as I noted in my previous message, the set -e thing can be easily addressed if we think it necessary so I don't consider that a valid answer here. Also, http://stackoverflow.com/questions/4325444/bash-recursive-xtrace 2) A specific use case that can only be addressed via this implementation. I don't personally have one, but if someone does then I'd like to hear it. I'm all for improving in this area, but before we make an intrusive change with an ongoing cost that won't work with anything not explicitly enabled for it, I want to make sure it's the right thing to do. As yet I'm not convinced. -Ben On 11/27/2014 12:29 PM, Sullivan, Jon Paul wrote: >> -----Original Message----- >> From: Ben Nemec [mailto:openstack at nemebean.com] >> Sent: 26 November 2014 17:03 >> To: OpenStack Development Mailing List (not for usage questions) >> Subject: Re: [openstack-dev] [diskimage-builder] Tracing levels for >> scripts (119023) >> >> On 11/25/2014 10:58 PM, Ian Wienand wrote: >>> Hi, >>> >>> My change [1] to enable a consistent tracing mechanism for the many >>> scripts diskimage-builder runs during its build seems to have hit a >>> stalemate. >>> >>> I hope we can agree that the current situation is not good. 
When >>> trying to develop with diskimage-builder, I find myself constantly >>> going and fiddling with "set -x" in various scripts, requiring me >>> re-running things needlessly as I try and trace what's happening. >>> Conversely some scripts set -x all the time and give output when you >>> don't want it. >>> >>> Now nodepool is using d-i-b more, it would be even nicer to have >>> consistency in the tracing so relevant info is captured in the image >>> build logs. >>> >>> The crux of the issue seems to be some disagreement between reviewers >>> over having a single "trace everything" flag or a more fine-grained >>> approach, as currently implemented after it was asked for in reviews. >>> >>> I must be honest, I feel a bit silly calling out essentially a >>> four-line patch here. >> >> My objections are documented in the review, but basically boil down to >> the fact that it's not a four line patch, it's a 500+ line patch that >> does essentially the same thing as: >> >> set +e >> set -x >> export SHELLOPTS > > I don't think this is true, as there are many more things in SHELLOPTS than just xtrace. I think it is wrong to call the two approaches equivalent. > >> >> in disk-image-create. You do lose set -e in disk-image-create itself on >> debug runs because that's not something we can safely propagate, >> although we could work around that by unsetting it before calling hooks. >> FWIW I've used this method locally and it worked fine. > > So this does say that your alternative implementation has a difference from the proposed one. And that the difference has a negative impact. > >> >> The only drawback is it doesn't allow the granularity of an if block in >> every script, but I don't personally see that as a particularly useful >> feature anyway. 
I would like to hear from someone who requested that >> functionality as to what their use case is and how they would define the >> different debug levels before we merge an intrusive patch that would >> need to be added to every single new script in dib or tripleo going >> forward. > So currently we have boilerplate to be added to all new elements, and that boilerplate is: > > set -eux > set -o pipefail > > This patch would change that boilerplate to: > > if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then > set -x > fi > set -eu > set -o pipefail > > So it's adding 3 lines. It doesn't seem onerous, especially as most people creating a new element will either copy an existing one or copy/paste the header anyway. > > I think that giving control over what is effectively debug or non-debug output is a desirable feature. > > We have a patch that implements that desirable feature. > > I don't see a compelling technical reason to reject that patch. > >> >> -Ben >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > Thanks, > Jon-Paul Sullivan - Cloud Services - @hpcloud > > Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway. > Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's Quay, Dublin 2. > Registered Number: 361933 > > The contents of this message and any attachments to it are confidential and may be legally privileged. If you have received this message in error you should delete it from your system immediately and advise the sender. > > To any recipient of this message within HP, unless otherwise stated, you should consider this message and attachments as "HP CONFIDENTIAL". 
> From victor.lowther at gmail.com Mon Dec 1 17:35:11 2014 From: victor.lowther at gmail.com (Victor Lowther) Date: Mon, 1 Dec 2014 11:35:11 -0600 Subject: [openstack-dev] [Ironic] Do we need an IntrospectionInterface? In-Reply-To: References: <5475F5B4.3080803@redhat.com> Message-ID: +9001 for introspection On Sun, Nov 30, 2014 at 9:40 PM, Shivanand Tendulker wrote: > +1 for separate interface. > > --Shivanand > > On Fri, Nov 28, 2014 at 7:20 PM, Lucas Alvares Gomes < > lucasagomes at gmail.com> wrote: > >> Hi, >> >> Thanks for putting it up, Dmitry. I think the idea is fine too, I >> understand that people may want to use in-band discovery for drivers like >> iLO or DRAC and having those on a separate interface allows us to compose >> a driver to do it (which is your use case 2). >> >> So, +1. >> >> Lucas >> >> On Wed, Nov 26, 2014 at 3:45 PM, Imre Farkas wrote: >> >>> On 11/26/2014 02:20 PM, Dmitry Tantsur wrote: >>> >>>> Hi all! >>>> >>>> As our state machine and discovery discussion proceeds, I'd like to ask >>>> your opinion on whether we need an IntrospectionInterface >>>> (DiscoveryInterface?). Current proposal [1] suggests adding a method for >>>> initiating a discovery to the ManagementInterface. IMO it's not 100% >>>> correct, because: >>>> 1. It's not management. We're not changing anything. >>>> 2. I'm aware that some folks want to use discoverd-based discovery [2] >>>> even for DRAC and ILO (e.g. for vendor-specific additions that can't be >>>> implemented OOB). >>>> >>>> Any ideas? >>>> >>>> Dmitry. >>>> >>>> [1] https://review.openstack.org/#/c/100951/ >>>> [2] https://review.openstack.org/#/c/135605/ >>>> >>>> >>> Hi Dmitry, >>> >>> I see the value in using the composability of our driver interfaces, so >>> I vote for having a separate IntrospectionInterface. Otherwise we wouldn't >>> allow users to use eg. the DRAC driver with an in-band but more powerful hw >>> discovery. 
>>> >>> Imre >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Dec 1 18:15:30 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 1 Dec 2014 13:15:30 -0500 Subject: [openstack-dev] [oslo] change to Oslo release process to support stable branch requirement caps In-Reply-To: <547C81C5.4030701@openstack.org> References: <07A9234D-90F5-4131-9FA0-1A5D6BCF119C@doughellmann.com> <547C7BB2.6080009@redhat.com> <3B0EFE04-B1A7-4C45-8C18-073A670318D9@doughellmann.com> <547C81C5.4030701@openstack.org> Message-ID: <135BC36F-C8A5-49EA-B7B7-FEC238019C07@doughellmann.com> On Dec 1, 2014, at 9:57 AM, Thierry Carrez wrote: > Doug Hellmann wrote: >> On Dec 1, 2014, at 9:31 AM, Ihar Hrachyshka wrote: >> >>> Are we going to have stable releases for those branches? >> >> We will possibly have point or patch releases based on those branches, as fixes end up needing to be backported. We have already done this in a few cases, which is why some libraries had stable branches already. > > We won't do stable coordinated point releases though, since those > libraries are on their own semver versioning scheme. Yes, good point. We would release as needed, from the stable branches, bumping the least significant value in the version number. 
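[Editorial note: for readers unfamiliar with the scheme Doug describes, releasing a fix from a stable branch by "bumping the least significant value" means turning, say, 1.4.0 into 1.4.1. A toy sketch for illustration only, not any actual release tooling:]

```python
def bump_patch(version):
    """Increment the last numeric component of an X.Y.Z version string,
    as a stable-branch point release would."""
    parts = version.split(".")
    parts[-1] = str(int(parts[-1]) + 1)
    return ".".join(parts)

print(bump_patch("1.4.0"))   # 1.4.1
print(bump_patch("0.9.12"))  # 0.9.13
```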
Doug > > Cheers, > > -- > Thierry Carrez (ttx) > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Mon Dec 1 18:19:17 2014 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 01 Dec 2014 12:19:17 -0600 Subject: [openstack-dev] How to debug test using pdb In-Reply-To: References: Message-ID: <547CB125.5060007@nemebean.com> I don't personally use the debugger much, but there is a helper script that is supposed to allow debugging: https://github.com/openstack/oslotest/blob/master/tools/oslo_debug_helper On 11/30/2014 05:08 AM, Saju M wrote: > Hi, > > How to debug test using pdb > > I want to debug tests and tried following methods, but didn't work > (could not see pdb> console). > I could see only the message "Tests running..." and command got stuck. > > I tried this with python-neutronclient, that does not have run_test.sh > > Method-1: > #source .tox/py27/bin/activate > #.tox/py27/bin/python -m testtools.run > neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON.test_create_network > > Method-2: > #testr list-tests '(CLITestV20NetworkJSON.test_create_network)' > my-list > #python -m testtools.run discover --load-list my-list > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mbayer at redhat.com Mon Dec 1 18:24:43 2014 From: mbayer at redhat.com (Mike Bayer) Date: Mon, 1 Dec 2014 13:24:43 -0500 Subject: [openstack-dev] sqlalchemy-migrate call for reviews In-Reply-To: <547C409E.2040306@redhat.com> References: <20141129232815.GB2497@yuggoth.org> <547C409E.2040306@redhat.com> Message-ID: <97A9EA75-2A73-4641-975D-97BD497852C1@redhat.com> I can +2 whichever patches are needed by Openstack projects, or that are critically needed in general, 
that you can point me towards directly. Overall I'm not the "maintainer" of sqlalchemy-migrate, I've only volunteered to have a +2 role for critically needed issues, so in the absence of someone willing to take on a real maintainer role (bug triage, etc.), for users on the outside of immediate Openstack use cases I'd prefer if they can continue working towards moving to Alembic; the major features I've introduced in Alembic including the SQLite support are intended to make transition much more feasible. > On Dec 1, 2014, at 5:19 AM, Ihar Hrachyshka wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > Indeed, the review queue is non-responsive. There are other patches in > the queue that bit rot there: > > https://review.openstack.org/#/q/status:open+project:stackforge/sqlalchemy-migrate,n,z > > I guess since no one with a +2 hammer systematically monitors patches > there, users are on their own and better fork if blocked. Sad but true. > > (btw technically monitoring is not that difficult: gerrit allows to > subscribe to specific projects, and this one does not look like time > consuming from reviewer perspective.) > > /Ihar > > On 30/11/14 00:28, Jeremy Stanley wrote: >> To anyone who reviews sqlalchemy-migrate changes, there are people >> talking to themselves on GitHub about long-overdue bug fixes >> because the Gerrit review queue for it is sluggish and they >> apparently don't realize the SQLAM reviewers don't look at Google >> Code issues[1] and GitHub pull request comments[2]. 
>> >> [1] >> https://code.google.com/p/sqlalchemy-migrate/issues/detail?id=171 >> [2] https://github.com/stackforge/sqlalchemy-migrate/pull/5 >> > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > > iQEcBAEBCgAGBQJUfECeAAoJEC5aWaUY1u57cx8H/2d8urszdd3RIsU+3JyrnVg6 > I92WtoCS84HdOEE7DjM5m/tgFGjIp9Gh4lovEft5JYDcnHACfd4gdhUunt+PAvVO > 2usFuPdR9IJvbKc28FJAqZeXJpvMc0KSMN4j8t1dtgu6Cv4TaFZEN77G6vrV9jem > b56npPlmpIaDpGP49XtFBHMcbU0pVJ0AQCWUd0wOX+NQl4EfF0stlvxd/1LWn9xf > rZCzatEqyRItlAB+ATpI0TlGSgvVv0PKqrV+TnoZ4OU/TZINNoCjZELB7NkmfDMz > 9rJgviCmCHRyWs+VwsbEeGKDI3nBLjX7UEk5K2f93VsZQWYpW3q6Z2rrpmH977Y= > =gT/r > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From stevemar at ca.ibm.com Mon Dec 1 18:32:05 2014 From: stevemar at ca.ibm.com (Steve Martinelli) Date: Mon, 1 Dec 2014 13:32:05 -0500 Subject: [openstack-dev] How to debug test using pdb In-Reply-To: <547CB125.5060007@nemebean.com> References: <547CB125.5060007@nemebean.com> Message-ID: Link to the docs for the debugger script: http://docs.openstack.org/developer/oslotest/features.html#debugging-with-oslo-debug-helper We have support for it in some of the Keystone related projects and folks seem to find it useful. It may not fit your needs, but as Ben suggested, give it a whirl, it should help. 
Steve Ben Nemec wrote on 12/01/2014 01:19:17 PM: > From: Ben Nemec > To: "OpenStack Development Mailing List (not for usage questions)" > > Date: 12/01/2014 01:25 PM > Subject: Re: [openstack-dev] How to debug test using pdb > > I don't personally use the debugger much, but there is a helper script > that is supposed to allow debugging: > https://github.com/openstack/oslotest/blob/master/tools/oslo_debug_helper > > On 11/30/2014 05:08 AM, Saju M wrote: > > Hi, > > > > How to debug test using pdb > > > > I want to debug tests and tried following methods, but didn't work > > (could not see pdb> console). > > I could see only the message "Tests running..." and command got stuck. > > > > I tried this with python-neutronclient, that does not have run_test.sh > > > > Method-1: > > #source .tox/py27/bin/activate > > #.tox/py27/bin/python -m testtools.run > > > neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON.test_create_network > > > > Method-2: > > #testr list-tests '(CLITestV20NetworkJSON.test_create_network)' > my-list > > #python -m testtools.run discover --load-list my-list > > > > > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dshulyak at mirantis.com Mon Dec 1 19:00:51 2014 From: dshulyak at mirantis.com (Dmitriy Shulyak) Date: Mon, 1 Dec 2014 21:00:51 +0200 Subject: [openstack-dev] [Fuel] [Nailgun] Unit tests improvement meeting minutes In-Reply-To: <547C4DAC.1010906@mirantis.com> References: <54789FA6.7050200@mirantis.com> <547C4DAC.1010906@mirantis.com> Message-ID: Swagger is not related to test improvement, but we started to discuss it here so.. @Przemyslaw, how hard it will be to integrate it with nailgun rest api (web.py and handlers hierarchy)? Also is there any way to use auth with swagger? On Mon, Dec 1, 2014 at 1:14 PM, Przemyslaw Kaminski wrote: > > On 11/28/2014 05:15 PM, Ivan Kliuk wrote: > > Hi, team! > > Let me please present ideas collected during the unit tests improvement > meeting: > 1) Rename class ``Environment`` to something more descriptive > 2) Remove hardcoded self.clusters[0], e.t.c from ``Environment``. Let's > use parameters instead > 3) run_tests.sh should invoke alternate syncdb() for cases where we don't > need to test migration procedure, i.e. create_db_schema() > 4) Consider usage of custom fixture provider. The main functionality > should combine loading from YAML/JSON source and support fixture inheritance > 5) The project needs in a document(policy) which describes: > - Tests creation technique; > - Test categorization (integration/unit) and approaches of testing > different code base > - > 6) Review the tests and refactor unit tests as described in the test policy > 7) Mimic Nailgun module structure in unit tests > 8) Explore Swagger tool > > > Swagger is a great tool, we used it in my previous job. We used Tornado, > attached some hand-crafted code to RequestHandler class so that it > inspected all its subclasses (i.e. different endpoint with REST methods), > generated swagger file and presented the Swagger UI ( > https://github.com/swagger-api/swagger-ui) under some /docs/ URL. 
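A rough sketch of the subclass-inspection trick Przemyslaw describes might look like the following. All class, path, and method names here are invented; the real Nailgun handlers (and any web.py integration) would look different, and a real version would parse the docstring YAML into a proper Swagger document rather than keeping raw text.

```python
import inspect

class Handler(object):
    """Hypothetical base class; each subclass maps to one REST endpoint."""
    path = None

class NodeHandler(Handler):
    path = "/nodes"

    def get(self):
        """List all nodes.

        responses: {200: node list}
        """

    def post(self):
        """Create a node.

        responses: {201: created node}
        """

def build_spec(base):
    # Walk every registered handler subclass and collect per-method
    # docstrings into a Swagger-style structure, the way the hand-crafted
    # Tornado code reportedly did.
    spec = {"paths": {}}
    for cls in base.__subclasses__():
        ops = {}
        for verb in ("get", "post", "put", "delete"):
            method = getattr(cls, verb, None)
            if method is not None and method.__doc__:
                ops[verb] = inspect.cleandoc(method.__doc__)
        spec["paths"][cls.path] = ops
    return spec

spec = build_spec(Handler)
```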
> What this gave us is that we could just add a YAML specification directly to > the docstring of the handler method and it would automatically appear in > the UI. It's worth noting that the UI provides an interactive form for > sending requests to the API so that tinkering with the API is easy [1]. > > [1] > https://www.dropbox.com/s/y0nuxull9mxm5nm/Swagger%20UI%202014-12-01%2012-13-06.png?dl=0 > > P. > > -- > Sincerely yours, > Ivan Kliuk > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyz at princessleia.com Mon Dec 1 19:20:41 2014 From: lyz at princessleia.com (Elizabeth K. Joseph) Date: Mon, 1 Dec 2014 11:20:41 -0800 Subject: [openstack-dev] [Infra] Meeting Tuesday December 2nd at 19:00 UTC Message-ID: Hi everyone, The OpenStack Infrastructure (Infra) team is hosting our weekly meeting on Tuesday December 2nd, at 19:00 UTC in #openstack-meeting Meeting agenda available here: https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is welcome to add agenda items) Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.
And in case you missed it, meeting log and minutes from the last meeting are available here: Minutes: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-11-25-19.01.html Minutes (text): http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-11-25-19.01.txt Log: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-11-25-19.01.log.html -- Elizabeth Krumbach Joseph || Lyz || pleia2 From mbayer at redhat.com Mon Dec 1 19:35:38 2014 From: mbayer at redhat.com (Mike Bayer) Date: Mon, 1 Dec 2014 14:35:38 -0500 Subject: [openstack-dev] [neutron] alembic 0.7.1 will break neutron's "heal" feature which assumes a fixed set of potential autogenerate types Message-ID: hey neutron - Just an FYI, I've added https://review.openstack.org/#/c/137989/ / https://launchpad.net/bugs/1397796 to refer to an issue in neutron's "heal" script that is going to start failing when I put out Alembic 0.7.1, which is potentially later today / this week. The issue is pretty straightforward: Alembic 0.7.1 is adding foreign key autogenerate (and really, could add more types of autogenerate at any time), and as these new commands are revealed within the execute_alembic_command(), they are not accounted for, so it fails. I'd recommend folks try to push this one through or otherwise decide how this issue (which should be expected to occur many more times) should be handled. Just a heads up in case you start seeing builds failing! - mike From jaypipes at gmail.com Mon Dec 1 19:40:45 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 01 Dec 2014 14:40:45 -0500 Subject: [openstack-dev] [Fuel] Please enable everyone to see patches and reviews on http://review.fuel-infra.org Message-ID: <547CC43D.4090008@gmail.com> Hi Fuel Devs, I'm not entirely sure why we are running our own infrastructure Gerrit for Fuel, as opposed to using the main review.openstack.org site that all OpenStack and Stackforge projects use (including Fuel repositories on stackforge...).
Could someone please advise on why we are doing that? In the meantime, can we please have access to view review.fuel-infra.org code reviews and patches? I went today to track down a bug [1] and clicked on a link in the bug report [2] and after signing in with my Launchpad SSO account, got a permission denied page. Please advise, -jay [1] https://bugs.launchpad.net/mos/+bug/1378081 [2] https://review.fuel-infra.org/#/c/940/ From fungi at yuggoth.org Mon Dec 1 19:44:08 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 1 Dec 2014 19:44:08 +0000 Subject: [openstack-dev] sqlalchemy-migrate call for reviews In-Reply-To: <97A9EA75-2A73-4641-975D-97BD497852C1@redhat.com> References: <20141129232815.GB2497@yuggoth.org> <547C409E.2040306@redhat.com> <97A9EA75-2A73-4641-975D-97BD497852C1@redhat.com> Message-ID: <20141201194407.GE2497@yuggoth.org> On 2014-12-01 13:24:43 -0500 (-0500), Mike Bayer wrote: [...] > for users on the outside of immediate Openstack use cases I'd > prefer if they can continue working towards moving to Alembic; the > major features I've introduced in Alembic including the SQLite > support are intended to make the transition much more feasible. Agreed, perhaps the documentation for sqlalchemy-migrate needs to make that statement visible (if it doesn't already). That's also a reasonable message for someone to pass along to those other non-OpenStack users discussing it in external forums... "this is deprecated, on limited life support until OpenStack can complete its transition to Alembic, and other developers using it should look hard at performing similar updates to their software." -- Jeremy Stanley From sross at redhat.com Mon Dec 1 19:52:27 2014 From: sross at redhat.com (Solly Ross) Date: Mon, 1 Dec 2014 14:52:27 -0500 (EST) Subject: [openstack-dev] [nova] How should libvirt pools work with distributed storage drivers?
In-Reply-To: References: <2146525823.5845403.1417033570759.JavaMail.zimbra@redhat.com> Message-ID: <202973990.8208444.1417463547361.JavaMail.zimbra@redhat.com> Hi Peter, > Right. So just one more question now - seeing as the plan is to > deprecate non-libvirt-pool drivers in Kilo and then drop them entirely > in L, would it still make sense for me to submit a spec today for a > driver that would keep the images in our own proprietary distributed > storage format? It would certainly seem to make sense for us and for > our customers right now and in the upcoming months - a bird in the > hand and so on; and we would certainly prefer it to be upstreamed in > OpenStack, since subclassing imagebackend.Backend is a bit difficult > right now without modifying the installed imagebackend.py (and of > course I meant Backend when I spoke about subclassing DiskImage in my > previous message). So is there any chance that such a spec would be > accepted for Kilo? It doesn't hurt to try submitting a spec. On the one hand, the driver would "come into life" (so to speak) as deprecated, which seems kind of silly (if there's no libvirt support at all for your driver, you couldn't just subclass the libvirt storage pool backend). On the other hand, it's preferable to have code be upstream, and since you don't have a libvirt storage driver yet, the only way to have support is to use a legacy-style driver. Personally, I wouldn't mind having a new legacy driver as long as you're committed to getting your storage driver into libvirt, so that we don't have to do extra work when the time comes to remove the legacy drivers. If you do end up submitting a spec, keep in mind is that, for ease of migration to the libvirt storage pool driver, you should have volume names of '{instance_uuid}_{disk_name}' (similarly to the way that LVM does it). If you have a spec or some code, I'd be happy to give some feedback, if you'd like (post it on Gerrit as WIP, or something like that). 
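The volume-naming convention mentioned above is simple enough to pin down in a few lines. The helper name below is invented for illustration; in nova the convention is applied inside the libvirt image backend, and the LVM driver is the existing example of it.

```python
def legacy_volume_name(instance_uuid, disk_name):
    # '{instance_uuid}_{disk_name}', as the LVM backend does, so that a
    # later switch to the libvirt storage pool driver can locate volumes.
    return "{0}_{1}".format(instance_uuid, disk_name)

name = legacy_volume_name("9c120b0a-6fa3-4f16-b589-9f4d93d39e4f",
                          "disk.local")
```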
Best Regards, Solly Ross From anteaya at anteaya.info Mon Dec 1 20:07:05 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Mon, 01 Dec 2014 15:07:05 -0500 Subject: [openstack-dev] [third-party]Time for Additional Meeting for third-party Message-ID: <547CCA69.4000909@anteaya.info> One of the actions from the Kilo Third-Party CI summit session was to start up an additional meeting for CI operators to participate from non-North American time zones. Please reply to this email with times/days that would work for you. The current third party meeting is on Mondays at 1800 utc which works well since Infra meetings are on Tuesdays. If we could find a time that works for Europe and APAC that is also on Monday that would be ideal. Josh Hesketh has said he will try to be available for these meetings, he is in Australia. Let's get a sense of what days and timeframes work for those interested and then we can narrow it down and pick a channel. Thanks everyone, Anita. From clint at fewbar.com Mon Dec 1 20:19:16 2014 From: clint at fewbar.com (Clint Byrum) Date: Mon, 01 Dec 2014 12:19:16 -0800 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <547C1285.7090909@hp.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> Message-ID: <1417464757-sup-2646@fewbar.com> Excerpts from Anant Patil's message of 2014-11-30 23:02:29 -0800: > On 27-Nov-14 18:03, Murugan, Visnusaran wrote: > > Hi Zane, > > > > > > > > At this stage our implementation (as mentioned in wiki > > ) achieves your > > design goals. > > > > > > > > 1. In case of a parallel update, our implementation adjusts graph > > according to new template and waits for dispatched resource tasks to > > complete. > > > > 2. Reason for basing our PoC on Heat code: > > > > a. To solve contention processing parent resource by all dependent > > resources in parallel. > > > > b. 
To avoid porting issue from PoC to HeatBase. (just to be aware > > of potential issues asap) > > > > 3. Resource timeout would be helpful, but I guess its resource > > specific and has to come from template and default values from plugins. > > > > 4. We see resource notification aggregation and processing next > > level of resources without contention and with minimal DB usage as the > > problem area. We are working on the following approaches in *parallel.* > > > > a. Use a Queue per stack to serialize notification. > > > > b. Get parent ProcessLog (ResourceID, EngineID) and initiate > > convergence upon first child notification. Subsequent children who fail > > to get parent resource lock will directly send message to waiting parent > > task (topic=stack_id.parent_resource_id) > > > > Based on performance/feedback we can select either or a mashed version. > > > > > > > > Advantages: > > > > 1. Failed Resource tasks can be re-initiated after ProcessLog > > table lookup. > > > > 2. One worker == one resource. > > > > 3. Supports concurrent updates > > > > 4. Delete == update with empty stack > > > > 5. Rollback == update to previous know good/completed stack. > > > > > > > > Disadvantages: > > > > 1. Still holds stackLock (WIP to remove with ProcessLog) > > > > > > > > Completely understand your concern on reviewing our code, since commits > > are numerous and there is change of course at places. Our start commit > > is [c1b3eb22f7ab6ea60b095f88982247dd249139bf] though this might not help J > > > > > > > > Your Thoughts. > > > > > > > > Happy Thanksgiving. > > > > Vishnu. 
> > > > > > > > *From:*Angus Salkeld [mailto:asalkeld at mirantis.com] > > *Sent:* Thursday, November 27, 2014 9:46 AM > > *To:* OpenStack Development Mailing List (not for usage questions) > > *Subject:* Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown > > > > > > > > On Thu, Nov 27, 2014 at 12:20 PM, Zane Bitter > > wrote: > > > > A bunch of us have spent the last few weeks working independently on > > proof of concept designs for the convergence architecture. I think > > those efforts have now reached a sufficient level of maturity that > > we should start working together on synthesising them into a plan > > that everyone can forge ahead with. As a starting point I'm going to > > summarise my take on the three efforts; hopefully the authors of the > > other two will weigh in to give us their perspective. > > > > > > Zane's Proposal > > =============== > > > > https://github.com/zaneb/heat-convergence-prototype/tree/distributed-graph > > > > I implemented this as a simulator of the algorithm rather than using > > the Heat codebase itself in order to be able to iterate rapidly on > > the design, and indeed I have changed my mind many, many times in > > the process of implementing it. Its notable departure from a > > realistic simulation is that it runs only one operation at a time - > > essentially giving up the ability to detect race conditions in > > exchange for a completely deterministic test framework. You just > > have to imagine where the locks need to be. Incidentally, the test > > framework is designed so that it can easily be ported to the actual > > Heat code base as functional tests so that the same scenarios could > > be used without modification, allowing us to have confidence that > > the eventual implementation is a faithful replication of the > > simulation (which can be rapidly experimented on, adjusted and > > tested when we inevitably run into implementation issues). > > > > This is a complete implementation of Phase 1 (i.e. 
using existing > > resource plugins), including update-during-update, resource > > clean-up, replace on update and rollback; with tests. > > > > Some of the design goals which were successfully incorporated: > > - Minimise changes to Heat (it's essentially a distributed version > > of the existing algorithm), and in particular to the database > > - Work with the existing plugin API > > - Limit total DB access for Resource/Stack to O(n) in the number of > > resources > > - Limit overall DB access to O(m) in the number of edges > > - Limit lock contention to only those operations actually contending > > (i.e. no global locks) > > - Each worker task deals with only one resource > > - Only read resource attributes once > > > > > > Open questions: > > - What do we do when we encounter a resource that is in progress > > from a previous update while doing a subsequent update? Obviously we > > don't want to interrupt it, as it will likely be left in an unknown > > state. Making a replacement is one obvious answer, but in many cases > > there could be serious down-sides to that. How long should we wait > > before trying it? What if it's still in progress because the engine > > processing the resource already died? > > > > > > > > Also, how do we implement resource level timeouts in general? > > > > > > > > > > Micha?'s Proposal > > ================= > > > > https://github.com/inc0/heat-convergence-prototype/tree/iterative > > > > Note that a version modified by me to use the same test scenario > > format (but not the same scenarios) is here: > > > > https://github.com/zaneb/heat-convergence-prototype/tree/iterative-adapted > > > > This is based on my simulation framework after a fashion, but with > > everything implemented synchronously and a lot of handwaving about > > how the actual implementation could be distributed. 
The central > > premise is that at each step of the algorithm, the entire graph is > > examined for tasks that can be performed next, and those are then > > started. Once all are complete (it's synchronous, remember), the > > next step is run. Keen observers will be asking how we know when it > > is time to run the next step in a distributed version of this > > algorithm, where it will be run and what to do about resources that > > are in an intermediate state at that time. All of these questions > > remain unanswered. > > > > > > > > Yes, I was struggling to figure out how it could manage an IN_PROGRESS > > state as it's stateless. So you end up treading on the other action's toes. > > > > Assuming we use the resource's state (IN_PROGRESS) you could get around > > that. Then you kick off a converge when ever an action completes (if > > there is nothing new to be > > > > done then do nothing). > > > > > > > > > > A non-exhaustive list of concerns I have: > > - Replace on update is not implemented yet > > - AFAIK rollback is not implemented yet > > - The simulation doesn't actually implement the proposed architecture > > - This approach is punishingly heavy on the database - O(n^2) or worse > > > > > > > > Yes, re-reading the state of all resources when ever run a new converge > > is worrying, but I think Michal had some ideas to minimize this. > > > > > > > > - A lot of phase 2 is mixed in with phase 1 here, making it > > difficult to evaluate which changes need to be made first and > > whether this approach works with existing plugins > > - The code is not really based on how Heat works at the moment, so > > there would be either a major redesign required or lots of radical > > changes in Heat or both > > > > I think there's a fair chance that given another 3-4 weeks to work > > on this, all of these issues and others could probably be resolved. > > The question for me at this point is not so much "if" but "why". > > > > Micha? 
believes that this approach will make Phase 2 easier to > > implement, which is a valid reason to consider it. However, I'm not > > aware of any particular issues that my approach would cause in > > implementing phase 2 (note that I have barely looked into it at all > > though). In fact, I very much want Phase 2 to be entirely > > encapsulated by the Resource class, so that the plugin type (legacy > > vs. convergence-enabled) is transparent to the rest of the system. > > Only in this way can we be sure that we'll be able to maintain > > support for legacy plugins. So a phase 1 that mixes in aspects of > > phase 2 is actually a bad thing in my view. > > > > I really appreciate the effort that has gone into this already, but > > in the absence of specific problems with building phase 2 on top of > > another approach that are solved by this one, I'm ready to call this > > a distraction. > > > > > > > > In it's defence, I like the simplicity of it. The concepts and code are > > easy to understand - tho' part of this is doesn't implement all the > > stuff on your list yet. > > > > > > > > > > > > Anant & Friends' Proposal > > ========================= > > > > First off, I have found this very difficult to review properly since > > the code is not separate from the huge mass of Heat code and nor is > > the commit history in the form that patch submissions would take > > (but rather includes backtracking and iteration on the design). As a > > result, most of the information here has been gleaned from > > discussions about the code rather than direct review. I have > > repeatedly suggested that this proof of concept work should be done > > using the simulator framework instead, unfortunately so far to no avail. > > > > The last we heard on the mailing list about this, resource clean-up > > had not yet been implemented. That was a major concern because that > > is the more difficult half of the algorithm. 
Since then there have > > been a lot more commits, but it's not yet clear whether resource > > clean-up, update-during-update, replace-on-update and rollback have > > been implemented, though it is clear that at least some progress has > > been made on most or all of them. Perhaps someone can give us an update. > > > > > > https://github.com/anantpatil/heat-convergence-poc > > > > > > > > AIUI this code also mixes phase 2 with phase 1, which is a concern. > > For me the highest priority for phase 1 is to be sure that it works > > with existing plugins. Not only because we need to continue to > > support them, but because converting all of our existing > > 'integration-y' unit tests to functional tests that operate in a > > distributed system is virtually impossible in the time frame we have > > available. So the existing test code needs to stick around, and the > > existing stack create/update/delete mechanisms need to remain in > > place until such time as we have equivalent functional test coverage > > to begin eliminating existing unit tests. (We'll also, of course, > > need to have unit tests for the individual elements of the new > > distributed workflow, functional tests to confirm that the > > distributed workflow works in principle as a whole - the scenarios > > from the simulator can help with _part_ of this - and, not least, an > > algorithm that is as similar as possible to the current one so that > > our existing tests remain at least somewhat representative and don't > > require too many major changes themselves.) > > > > Speaking of tests, I gathered that this branch included tests, but I > > don't know to what extent there are automated end-to-end functional > > tests of the algorithm? > > > > From what I can gather, the approach seems broadly similar to the > > one I eventually settled on also. The major difference appears to be > > in how we merge two or more streams of execution (i.e. when one > > resource depends on two or more others). 
In my approach, the > > dependencies are stored in the resources and each joining of streams > > creates a database row to track it, which is easily locked with > > contention on the lock extending only to those resources which are > > direct dependencies of the one waiting. In this approach, both the > > dependencies and the progress through the graph are stored in a > > database table, necessitating (a) reading of the entire table (as it > > relates to the current stack) on every resource operation, and (b) > > locking of the entire table (which is hard) when marking a resource > > operation complete. > > > > I chatted to Anant about this today and he mentioned that they had > > solved the locking problem by dispatching updates to a queue that is > > read by a single engine per stack. > > > > My approach also has the neat side-effects of pushing the data > > required to resolve get_resource and get_att (without having to > > reload the resources again and query them) as well as to update > > dependencies (e.g. because of a replacement or deletion) along with > > the flow of triggers. I don't know if anything similar is at work here. > > > > It's entirely possible that the best design might combine elements > > of both approaches. > > > > The same open questions I detailed under my proposal also apply to > > this one, if I understand correctly. > > > > > > I'm certain that I won't have represented everyone's work fairly > > here, so I encourage folks to dive in and correct any errors about > > theirs and ask any questions you might have about mine. (In case you > > have been living under a rock, note that I'll be out of the office > > for the rest of the week due to Thanksgiving so don't expect > > immediate replies.) > > > > I also think this would be a great time for the wider Heat community > > to dive in and start asking questions and suggesting ideas. 
We need > > to, ahem, converge on a shared understanding of the design so we can > > all get to work delivering it for Kilo. > > > > > > > > Agree, we need to get moving on this. > > > > -Angus > > > > > > > > cheers, > > Zane. > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > Thanks Zane for your e-mail Zane and summarizing everyone's work. > > The design goals mentioned above looks more of performance goals and > constraints to me. I understand that it is unacceptable to have a poorly > performing engine and Resource plug-ins broken. Convergence spec clearly > mentions that the existing Resource plugins should not be changed. > > IMHO, and my teams' HO, the design goals of convergence would be: > 1. Stability: No transient failures, either in Openstack/external > services or resources themselves, should fail the stack. Therefore, we > need to have Observers to check for divergence and converge a resource > if needed, to bring back to stable state. > 2. Resiliency: Heat engines should be able to take up tasks in case of > failures/restarts. > 3. Backward compatibility: "We don't break the user space." No existing > stacks should break. > > We started the PoC with these goals in mind, any performance > optimization would be a plus point for us. Note than I am neglecting the > performance goal, just that it should be next in the pipeline. The > questions we should ask ourselves is whether we are storing enough data > (state of stack) in DB to enable resiliency? Are we distributing the > load evenly to all Heat engines? 
Does our notification mechanism > provide us some form of guarantee or acknowledgement? > > In retrospect, we had to struggle a lot to understand the existing > Heat engine. We couldn't have done justice by just creating another > project in GitHub without any concrete understanding of the existing > state of affairs. We are not on the same page with Heat core members; we > are novices and the cores are experts. > > I am glad that we experimented with the Heat engine directly. The > current Heat engine is not resilient and the messaging also lacks > reliability. We (my team and I guess cores also) understand that async > message passing would be the way to go, as synchronous RPC calls simply > wouldn't scale. But with async message passing there has to be some > mechanism of ACKing back, which I think is lacking in the current infrastructure. > > How could we provide stable user-defined stacks if the underlying Heat > core lacks it? Convergence is all about stable stacks. To make the > current Heat core stable we need to have, at the least: > 1. Some mechanism to ACK back messages over AMQP, or some other solid > mechanism of message passing. > 2. Some mechanism for fault tolerance in the Heat engine using external > tools/infrastructure like Celery/Zookeeper. Without external > infrastructure/tools we will end up bloating the Heat engine with a lot of > boilerplate code to achieve this. We had recommended Celery in our > previous e-mail (from Vishnu.) > > It was due to our experiments with Heat engines for this PoC that we could > come up with the above recommendations. > > State of our PoC > ---------------- > > On GitHub: https://github.com/anantpatil/heat-convergence-poc > > Our current implementation of the PoC locks the stack after each > notification to mark the graph as traversed and produce the next level of > resources for convergence. We are facing challenges in > removing/minimizing these locks. We also have two different schools of > thought for solving this lock issue, as mentioned above in Vishnu's > e-mail. I will describe these in detail in the Wiki. There will be different > branches in our GitHub repository for these two approaches. > It would be helpful if you explained why you need to _lock_ the stack. MVCC in the database should be enough here. Basically you need to: begin transaction update traversal information select resolvable nodes {in code not sql -- send converge commands into async queue} commit Any failure inside this transaction should roll back the transaction and retry. It is OK to have duplicate converge commands for a resource. This should be the single point of synchronization between workers that are resolving resources. Or perhaps this is the lock you meant? Either way, this isn't avoidable if you want to make sure everything is attempted at least once without having to continuously poll and re-poll the stack to look for unresolved resources. That is an option, but not one that I think is going to be as simple as the transactional method. From mestery at mestery.com Mon Dec 1 20:19:31 2014 From: mestery at mestery.com (Kyle Mestery) Date: Mon, 1 Dec 2014 14:19:31 -0600 Subject: [openstack-dev] [neutron] force_gateway_on_subnet, please don't deprecate In-Reply-To: <1384913819.11781239.1417435977062.JavaMail.zimbra@redhat.com> References: <95DCE311E0DE421D8C881E5CCD80E5D8@redhat.com> <1384913819.11781239.1417435977062.JavaMail.zimbra@redhat.com> Message-ID: On Mon, Dec 1, 2014 at 6:12 AM, Assaf Muller wrote: > > ----- Original Message ----- >> >> My proposal here is: _let's not deprecate this setting_, as it's a valid use >> case of a gateway configuration, and let's provide it on the reference >> implementation. > > I agree. As long as the reference implementation works with the setting off > there's no need to deprecate it. I still think the default should be set to True > though.
> > Keep in mind that the DHCP agent will need changes as well. > ++ to both suggestions Assaf. Thanks for bringing this up Miguel! Kyle >> >> TL;DR >> >> I've been looking at this yesterday, during a test deployment >> on a site where they provide external connectivity with the >> gateway outside the subnet. >> >> And I needed to switch it off, to actually be able to have any external >> connectivity. >> >> https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L121 >> >> This is handled by providing an on-link route to the gateway first, >> and then adding the default gateway. >> >> It looks very interesting to me (not only because it's the only way to work >> on that specific site [2][3][4]), because you can dynamically wire RIPE >> blocks to your server, without needing to use a specific IP for external >> routing or broadcast purposes, and instead use the full block in OpenStack. >> >> >> I have a tiny patch to support this on the neutron l3-agent [1]; I still need to >> add the logic to check 'gateway outside subnet', then add the 'onlink' >> route.
>> >> [1] >> >> diff --git a/neutron/agent/linux/interface.py >> b/neutron/agent/linux/interface.py >> index 538527b..5a9f186 100644 >> --- a/neutron/agent/linux/interface.py >> +++ b/neutron/agent/linux/interface.py >> @@ -116,15 +116,16 @@ class LinuxInterfaceDriver(object): >> namespace=namespace, >> ip=ip_cidr) >> >> - if gateway: >> - device.route.add_gateway(gateway) >> - >> new_onlink_routes = set(s['cidr'] for s in extra_subnets) >> + if gateway: >> + new_onlink_routes.update([gateway]) >> existing_onlink_routes = set(device.route.list_onlink_routes()) >> for route in new_onlink_routes - existing_onlink_routes: >> device.route.add_onlink_route(route) >> for route in existing_onlink_routes - new_onlink_routes: >> device.route.delete_onlink_route(route) >> + if gateway: >> + device.route.add_gateway(gateway) >> >> def delete_conntrack_state(self, root_helper, namespace, ip): >> """Delete conntrack state associated with an IP address. >> >> [2] http://www.soyoustart.com/ >> [3] http://www.ovh.co.uk/ >> [4] http://www.kimsufi.com/ >> >> >> Miguel Ángel Ajo >> >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Mon Dec 1 21:05:42 2014 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 01 Dec 2014 16:05:42 -0500 Subject: [openstack-dev] [Heat] Using Job Queues for timeout ops In-Reply-To: <1415902420-sup-7384@fewbar.com> References: <4641310AFBEE10419D0A020273367C140C9F27AA@G2W2436.americas.hpqcorp.net> <5464B7FB.4050900@redhat.com> <1415889808-sup-7496@fewbar.com> <5464F09F.8090107@redhat.com> <1415902420-sup-7384@fewbar.com> Message-ID: <547CD826.6010204@redhat.com> On 13/11/14 13:59, Clint Byrum
wrote: > I'm not sure we have the same understanding of AMQP, so hopefully we can > clarify here. This stackoverflow answer echoes my understanding: > > http://stackoverflow.com/questions/17841843/rabbitmq-does-one-consumer-block-the-other-consumers-of-the-same-queue > > Not ack'ing just means they might get retransmitted if we never ack. It > doesn't block other consumers. And as the link above quotes from the > AMQP spec, when there are multiple consumers, FIFO is not guaranteed. > Other consumers get other messages. Thanks, obviously my recollection of how AMQP works was coloured too much by oslo.messaging. > So just add the ability for a consumer to read, work, ack to > oslo.messaging, and this is mostly handled via AMQP. Of course that > also likely means no zeromq for Heat without accepting that messages > may be lost if workers die. > > Basically we need to add something that is not "RPC" but instead > "jobqueue" that mimics this: > > http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/rpc/dispatcher.py#n131 > > I've always been suspicious of this bit of code, as it basically means > that if anything fails between that call, and the one below it, we have > lost contact, but as long as clients are written to re-send when there > is a lack of reply, there shouldn't be a problem. But, for a job queue, > there is no reply, and so the worker would dispatch, and then > acknowledge after the dispatched call had returned (including having > completed the step where new messages are added to the queue for any > newly-possible children). I'm curious how people are deploying Rabbit at the moment. Are they setting up multiple brokers and writing messages to disk before accepting them? I assume yes on the former but no on the latter, since there's no particular point in having e.g. 5 nines durability in the queue when the overall system is as weak as your flakiest node. 
OTOH if we were to add what you're proposing, then we would need folks to deploy Rabbit that way (at least for Heat), since waiting for Acks on receipt is insufficient to make messaging reliable if the broker can easily outright lose the message. I think all of the proposed approaches would benefit from this feature, but I'm concerned about any increased burden on deployers too. cheers, Zane. From ayoung at redhat.com Mon Dec 1 21:54:52 2014 From: ayoung at redhat.com (Adam Young) Date: Mon, 01 Dec 2014 16:54:52 -0500 Subject: [openstack-dev] [Keystone] Functional Testing Message-ID: <547CE3AC.1060907@redhat.com> Keystone doesn't depend on other services, but every other service in OpenStack depends on Keystone. So, I propose we treat all of the Admin aspects of the Keystone server just like any other consumer of auth token middleware, but only for testing. It means that we would be using a different middleware for /v3/auth than we use for, say /v3/users and /v3/policy. /auth would use the mechanism that Keystone uses today (Middleware inside of Keystone) but the rest of Keystone would use keystonemiddleware.auth_token. Ok, so we would probably need to work around fetching certificates, too. The idea is that Keystone should be run just like any other service, and perform all of the same checks on tokens and policy for anything that is an administrative task; creating users etc. In doing so, we would provide a framework for functional tests. Doing this would require a bit of reworking of the paste pipeline, but that is stuff we've considered doing before. Does this fit in with the vision of each project doing our own functional testing?
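In pipeline terms, Adam's proposal amounts to wiring keystonemiddleware into Keystone's own paste configuration for everything except /v3/auth. A hypothetical keystone-paste.ini fragment sketching the idea (the auth_token filter factory is real, but the pipeline name and its other members are invented here for illustration, not Keystone's actual ones):

```ini
# Sketch only: /v3/auth keeps the in-tree middleware Keystone uses
# today, while admin paths are validated by keystonemiddleware just
# like any other OpenStack service's API.
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[pipeline:admin_api_v3]
pipeline = sizelimit authtoken json_body service_v3
```

As Adam notes, this would also mean Keystone fetching its own signing certificates through the same path as every other auth_token consumer.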
From zbitter at redhat.com Mon Dec 1 22:15:15 2014 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 01 Dec 2014 17:15:15 -0500 Subject: [openstack-dev] [Heat] Rework auto-scaling support in Heat In-Reply-To: <20141128073257.GA7165@localhost> References: <20141128073257.GA7165@localhost> Message-ID: <547CE873.1020205@redhat.com> On 28/11/14 02:33, Qiming Teng wrote: > Dear all, > > Auto-Scaling is an important feature supported by Heat and needed by > many users we talked to. There are two flavors of AutoScalingGroup > resources in Heat today: the AWS-based one and the Heat native one. As > more requests coming in, the team has proposed to separate auto-scaling > support into a separate service so that people who are interested in it > can jump onto it. At the same time, Heat engine (especially the resource > type code) will be drastically simplified. The separated AS service > could move forward more rapidly and efficiently. > > This work was proposed a while ago with the following wiki and > blueprints (mostly approved during Havana cycle), but the progress is > slow. A group of developers now volunteer to take over this work and > move it forward. Thank you! 
> wiki: https://wiki.openstack.org/wiki/Heat/AutoScaling > BPs: > - https://blueprints.launchpad.net/heat/+spec/as-lib-db > - https://blueprints.launchpad.net/heat/+spec/as-lib > - https://blueprints.launchpad.net/heat/+spec/as-engine-db > - https://blueprints.launchpad.net/heat/+spec/as-engine > - https://blueprints.launchpad.net/heat/+spec/autoscaling-api > - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-client > - https://blueprints.launchpad.net/heat/+spec/as-api-group-resource > - https://blueprints.launchpad.net/heat/+spec/as-api-policy-resource > - https://blueprints.launchpad.net/heat/+spec/as-api-webhook-trigger-resource > - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources > > Once this whole thing lands, Heat engine will talk to the AS engine in > terms of ResourceGroup, ScalingPolicy, Webhooks. Heat engine won't care > how auto-scaling is implemented although the AS engine may in turn ask > Heat to create/update stacks for scaling's purpose. In theory, AS > engine can create/destroy resources by directly invoking other OpenStack > services. This new AutoScaling service may eventually have its own DB, > engine, API, api-client. We can definitely aim high while work hard on > real code. > > After reviewing the BPs/Wiki and some communication, we get two options > to push forward this. I'm writing this to solicit ideas and comments > from the community. 
> > Option A: Top-Down Quick Split > ------------------------------ > > This means we will follow a roadmap shown below, which is not 100% > accurate yet and very rough: > > 1) Get the separated REST service in place and working > 2) Switch Heat resources to use the new REST service > > Pros: > - Separate code base means faster review/commit cycle > - Less code churn in Heat > Cons: > - A new service needs to be installed/configured/launched > - Need commitments from dedicated, experienced developers from very > beginning Anything that involves a kind of flag-day switchover like this (maintaining the implementation in two different places) will be very hard to land, and if by some miracle it does will likely cause a lot of user-breaking bugs. > Option B: Bottom-Up Slow Growth > ------------------------------- > > The roadmap is more conservative, with many (yes, many) incremental > patches to migrate things carefully. > > 1) Separate some of the autoscaling logic into libraries in Heat > 2) Augment heat-engine with new AS RPCs > 3) Switch AS related resource types to use the new RPCs > 4) Add new REST service that also talks to the same RPC > (create new GIT repo, API endpoint and client lib...) > > Pros: > - Less risk breaking user lands with each revision well tested > - More smooth transition for users in terms of upgrades > > Cons: > - A lot of churn within Heat code base, which means long review cycles > - Still need commitments from cores to supervise the whole process I vote for option B (surprise!), and I will sign up right now to as many nagging emails as you care to send when you need reviews if you will take on this work :) > There could be option C, D... but the two above are what we came up with > during the discussion. > > Another important thing we talked about is about the open discussion on > this. OpenStack Wiki seems a good place to document settled designs but > not for interactive discussions.
Probably we should leverage etherpad > and the mailinglist when moving forward. Suggestions on this are also > welcomed. +1 cheers, Zane. From xarses at gmail.com Mon Dec 1 22:26:50 2014 From: xarses at gmail.com (Andrew Woodward) Date: Mon, 1 Dec 2014 14:26:50 -0800 Subject: [openstack-dev] [Fuel] Please enable everyone to see patches and reviews on http://review.fuel-infra.org In-Reply-To: <547CC43D.4090008@gmail.com> References: <547CC43D.4090008@gmail.com> Message-ID: Jay, AFAIU review.fuel-infra.org is only for packages and specs being built to include in the 'distro' packages. This is done so that we don't have to pollute review.openstack.org / stackforge with every package we need to rebuild (There are more than 925 repos). As to the link you attempted to follow, it looks like it's applied against keystone. My understanding is that the only private repos on the site are the openstack code ones. On Mon, Dec 1, 2014 at 11:40 AM, Jay Pipes wrote: > Hi Fuel Devs, > > I'm not entirely sure why we are running our own infrastructure Gerrit for > Fuel, as opposed to using the main review.openstack.org site that all > OpenStack and Stackforge projects use (including Fuel repositories on > stackforge...). Could someone please advise on why we are doing that? > > In the meantime, can we please have access to view review.fuel-infra.org > code reviews and patches? I went today to track down a bug [1] and clicked > on a link in the bug report [2] and after signing in with my Launchpad SSO > account, got a permission denied page. 
> > Please advise, > -jay > > [1] https://bugs.launchpad.net/mos/+bug/1378081 > [2] https://review.fuel-infra.org/#/c/940/ > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Andrew Mirantis Ceph community From openstack at nemebean.com Mon Dec 1 22:38:11 2014 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 01 Dec 2014 16:38:11 -0600 Subject: [openstack-dev] oslo.serialization 1.1.0 released Message-ID: <547CEDD3.6080402@nemebean.com> The Oslo team is pleased to announce the release of oslo.serialization 1.1.0. This is primarily a bug fix and requirements update release. Further details of the changes included are below. For more details, please see the git log history below and https://launchpad.net/oslo.serialization/+milestone/1.1.0 Please report issues through launchpad: https://launchpad.net/oslo.serialization openstack/oslo.serialization 1.0.0..HEAD a7bade1 Add pbr to installation requirements 9701670 Updated from global requirements f3aa93c Fix pep8, docs, requirements issues in jsonutils and tests d4e3609 Remove extraneous vim editor configuration comments ce89925 Support building wheels (PEP-427) 472e6c9 Fix coverage testing ddde5a5 Updated from global requirements 0929bde Support 'built-in' datetime module 9498865 Add history/changelog to docs diffstat (except docs and test files): oslo/__init__.py | 2 -- oslo/serialization/jsonutils.py | 20 +++++++++++--------- requirements.txt | 4 +++- setup.cfg | 3 +++ test-requirements.txt | 11 ++++++----- tests/test_jsonutils.py | 5 +++-- 8 files changed, 27 insertions(+), 20 deletions(-) Requirements updates: diff --git a/requirements.txt b/requirements.txt index 2dc5dea..176ce3c 100644 --- a/requirements.txt +++ b/requirements.txt @@ -3,0 +4,2 @@ + +pbr>=0.6,!=0.7,<1.0 @@ -9 +11 @@ iso8601>=0.1.9 -oslo.utils>=0.3.0 # Apache-2.0 +oslo.utils>=1.0.0 # Apache-2.0 diff --git 
a/test-requirements.txt b/test-requirements.txt index a0ed1c5..f4c82b9 100644 --- a/test-requirements.txt +++ b/test-requirements.txt @@ -4 +4 @@ -hacking>=0.5.6,<0.8 +hacking>=0.9.2,<0.10 @@ -9,2 +9,2 @@ netaddr>=0.7.12 -sphinx>=1.1.2,!=1.2.0,<1.3 -oslosphinx>=2.2.0.0a2 +sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 +oslosphinx>=2.2.0 # Apache-2.0 @@ -12 +12 @@ oslosphinx>=2.2.0.0a2 -oslotest>=1.1.0.0a2 +oslotest>=1.2.0 # Apache-2.0 @@ -14 +14,2 @@ simplejson>=2.2.0 -oslo.i18n>=0.3.0 # Apache-2.0 +oslo.i18n>=1.0.0 # Apache-2.0 +coverage>=3.6 From zigo at debian.org Mon Dec 1 22:40:51 2014 From: zigo at debian.org (Thomas Goirand) Date: Tue, 02 Dec 2014 06:40:51 +0800 Subject: [openstack-dev] sqlalchemy-migrate call for reviews In-Reply-To: <547C409E.2040306@redhat.com> References: <20141129232815.GB2497@yuggoth.org> <547C409E.2040306@redhat.com> Message-ID: <547CEE73.4000200@debian.org> On 12/01/2014 06:19 PM, Ihar Hrachyshka wrote: > Indeed, the review queue is non-responsive. There are other patches in > the queue that bit rot there: > > https://review.openstack.org/#/q/status:open+project:stackforge/sqlalchemy-migrate,n,z I did +2 some of the patches which I thought were totally harmless, but none passed the gate. Now, it looks like the Python 3.3 gate for it is broken... :( Where are those "module not found" issues coming from? Thomas From sorlando at nicira.com Mon Dec 1 22:43:20 2014 From: sorlando at nicira.com (Salvatore Orlando) Date: Mon, 1 Dec 2014 23:43:20 +0100 Subject: [openstack-dev] [neutron] alembic 0.7.1 will break neutron's "heal" feature which assumes a fixed set of potential autogenerate types In-Reply-To: References: Message-ID: Thanks Mike! I've left some comments on the patch. Just out of curiosity, since now alembic can autogenerate foreign keys, are we able to remove the logic for identifying foreign keys to add/remove [1]?
Salvatore [1] http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/migration/alembic_migrations/heal_script.py#n205 On 1 December 2014 at 20:35, Mike Bayer wrote: > hey neutron - > > Just an FYI, I've added https://review.openstack.org/#/c/137989/ / > https://launchpad.net/bugs/1397796 to refer to an issue in neutron's > 'heal' script that is going to start failing when I put out Alembic 0.7.1, > which is potentially later today / this week. > > The issue is pretty straightforward, Alembic 0.7.1 is adding foreign key > autogenerate (and really, could add more types of autogenerate at any > time), and as these new commands are revealed within the > execute_alembic_command(), they are not accounted for, so it fails. I'd > recommend folks try to push this one through or otherwise decide how this > issue (which should be expected to occur many more times) should be handled. > > Just a heads up in case you start seeing builds failing! > > - mike > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clint at fewbar.com Mon Dec 1 22:58:58 2014 From: clint at fewbar.com (Clint Byrum) Date: Mon, 01 Dec 2014 14:58:58 -0800 Subject: [openstack-dev] [TripleO] mid-cycle details final draft Message-ID: <1417474177-sup-8420@fewbar.com> Hello! I've received confirmation that our venue, the HP offices in downtown Seattle, will be available for the most-often-preferred least-often-cannot week of Feb 16 - 20. Our venue has a maximum of 20 participants, but I only have 16 possible attendees now. Please add yourself to that list _now_ if you will be joining us. I've asked our office staff to confirm Feb 18 - 20 (Wed-Fri). When they do, I will reply to this thread to let everyone know so you can all start to book travel.
See the etherpad for travel details. https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup From jsbryant at electronicjungle.net Mon Dec 1 23:28:33 2014 From: jsbryant at electronicjungle.net (Jay S. Bryant) Date: Mon, 01 Dec 2014 17:28:33 -0600 Subject: [openstack-dev] [stable] New config options, no default change In-Reply-To: <54748DC0.50400@redhat.com> References: <20141111103053.GD7366@redhat.com> <5461FC66.6060103@openstack.org> <5468F85B.10901@electronicjungle.net> <54748DC0.50400@redhat.com> Message-ID: <547CF9A1.4070009@electronicjungle.net> On 11/25/2014 08:10 AM, Ihar Hrachyshka wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > On 16/11/14 20:17, Jay S. Bryant wrote: >> All, >> >> This is a question I have been struggling with for Cinder recently. >> Where do we draw the line on backports? How do we handle config changes? >> >> One thing for Cinder I am also considering, in addition to whether it >> changes the default functionality, is whether it is specific to the >> submitter's driver. If it is only going to affect the driver I am more >> likely to consider it than something that is going to impact all of Cinder. >> >> What is the text that should be included in the commit messages to make >> sure that it is picked up for release notes? I want to make sure that >> people use that. > I'm not sure anyone tracks commit messages to create release notes. A > better way to handle this is to create a draft, post it in review > comments, and copy to release notes draft right before/after pushing the > patch into gate. Ihar, Forgive me, I think my question is more basic, then. Where are the release notes for a stable branch located to make such changes? Thanks! > >> Thanks! >> Jay >> >> >> On 11/11/2014 06:50 AM, Alan Pevec wrote: >>>> New config options may not change behavior (if default value preserves >>>> behavior), they still make documentation more incomplete (doc, books, >>>> and/or blogposts about Juno won't mention that option).
>>> That's why we definitely need such changes described clearly in stable >>> release notes. >>> I also lean to accept this as an exception for stable/juno, I'll >>> request relnote text in the review. >>> >>> Cheers, >>> Alan >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > > iQEcBAEBCgAGBQJUdI3AAAoJEC5aWaUY1u57YDkH/1LL/Y0gL2egRs1F6pgoaVbk > bEREvavZMyV6JFrojfJz2HKVxeo//0AdCcr3+W7KH5pXtposhg3Xf5v6bhF+n0gO > NX8u23z2zBLh6xdYcJHiRtMz1zhXT66xDhZso4bMNAL98glGOv1rrbkmkj43pR2L > TKSgRyes75nEBOlvPi79Co+2Ti3Z60HbS1NwgqCTGb9yRV3o0JDMZ3+zdFKlrTTf > 0ZkrqEHtDaS0wEJmi7vqDAflNBPPn4lo8mAcju9k80lwrCs7g6VdqYJec0Nb/1gJ > Foj6vWRPHDH1ftph3am4yhY6Gs+dXQ1nmhEK0zFucDeLXz01Gql3vKX0xK18Rho= > =6ABy > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From asalkeld at mirantis.com Mon Dec 1 23:34:45 2014 From: asalkeld at mirantis.com (Angus Salkeld) Date: Tue, 2 Dec 2014 09:34:45 +1000 Subject: [openstack-dev] [Heat] Rework auto-scaling support in Heat In-Reply-To: <547CE873.1020205@redhat.com> References: <20141128073257.GA7165@localhost> <547CE873.1020205@redhat.com> Message-ID: On Tue, Dec 2, 2014 at 8:15 AM, Zane Bitter wrote: > On 28/11/14 02:33, Qiming Teng wrote: > >> Dear all, >> >> Auto-Scaling is an important feature supported by Heat and needed by >> many users we talked to. There are two flavors of AutoScalingGroup >> resources in Heat today: the AWS-based one and the Heat native one. 
As >> more requests coming in, the team has proposed to separate auto-scaling >> support into a separate service so that people who are interested in it >> can jump onto it. At the same time, Heat engine (especially the resource >> type code) will be drastically simplified. The separated AS service >> could move forward more rapidly and efficiently. >> >> This work was proposed a while ago with the following wiki and >> blueprints (mostly approved during Havana cycle), but the progress is >> slow. A group of developers now volunteer to take over this work and >> move it forward. >> > > Thank you! > > > wiki: https://wiki.openstack.org/wiki/Heat/AutoScaling >> BPs: >> - https://blueprints.launchpad.net/heat/+spec/as-lib-db >> - https://blueprints.launchpad.net/heat/+spec/as-lib >> - https://blueprints.launchpad.net/heat/+spec/as-engine-db >> - https://blueprints.launchpad.net/heat/+spec/as-engine >> - https://blueprints.launchpad.net/heat/+spec/autoscaling-api >> - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-client >> - https://blueprints.launchpad.net/heat/+spec/as-api-group-resource >> - https://blueprints.launchpad.net/heat/+spec/as-api-policy-resource >> - https://blueprints.launchpad.net/heat/+spec/as-api-webhook- >> trigger-resource >> - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources >> >> Once this whole thing lands, Heat engine will talk to the AS engine in >> terms of ResourceGroup, ScalingPolicy, Webhooks. Heat engine won't care >> how auto-scaling is implemented although the AS engine may in turn ask >> Heat to create/update stacks for scaling's purpose. In theory, AS >> engine can create/destroy resources by directly invoking other OpenStack >> services. This new AutoScaling service may eventually have its own DB, >> engine, API, api-client. We can definitely aim high while work hard on >> real code. >> >> After reviewing the BPs/Wiki and some communication, we get two options >> to push forward this. 
I'm writing this to solicit ideas and comments >> from the community. >> >> Option A: Top-Down Quick Split >> ------------------------------ >> >> This means we will follow a roadmap shown below, which is not 100% >> accurate yet and very rough: >> >> 1) Get the separated REST service in place and working >> 2) Switch Heat resources to use the new REST service >> >> Pros: >> - Separate code base means faster review/commit cycle >> - Less code churn in Heat >> Cons: >> - A new service need to be installed/configured/launched >> - Need commitments from dedicated, experienced developers from very >> beginning >> > > Anything that involves a kind of flag-day switchover like this > (maintaining the implementation in two different places) will be very hard > to land, and if by some miracle it does will likely cause a lot of > user-breaking bugs. > Well we can use the environment to provide the two options for a cycle (like the cloud watch lite) and the operator can switch when they feel comfortable. The reason I'd like to keep the door somewhat open to this is the huge burden of work we will put on Qiming and his team for option B (and the load on the core team). As you know this has been thought of before and fizzled out, I don't want that to happen again. If we can make this more manageable for the team doing this, then I think that is a good thing. We could implement the guts of the AS in a library and import it from both places (to prevent duplicate implementations). > > Option B: Bottom-Up Slow Growth >> ------------------------------- >> >> The roadmap is more conservative, with many (yes, many) incremental >> patches to migrate things carefully. >> >> 1) Separate some of the autoscaling logic into libraries in Heat >> 2) Augment heat-engine with new AS RPCs >> 3) Switch AS related resource types to use the new RPCs >> 4) Add new REST service that also talks to the same RPC >> (create new GIT repo, API endpoint and client lib...) 
>> >> Pros: >> - Less risk breaking user lands with each revision well tested >> - More smooth transition for users in terms of upgrades >> >> I think this is only true up until "4)", at that point it's the same pain as option A (the operator needs a new REST endpoint, daemons to run, etc) - so delayed pain. > Cons: >> - A lot of churn within Heat code base, which means long review cycles >> - Still need commitments from cores to supervise the whole process >> > > I vote for option B (surprise!), and I will sign up right now to as many > nagging emails as you care to send when you need reviews if you will take > on this work :) > > There could be option C, D... but the two above are what we came up with >> during the discussion. >> > I'd suggest a combination between A and B. 1) Separate some of the autoscaling logic into libraries in Heat 2) Get the separated REST service in place and working (using the above heat library) 3) Add an environment option to be able to switch Heat resources to use the new REST service 4) after a cycle remove the internal support within Heat (open to other suggestions tho') - I am not convinced of the usefulness of the RPC step when that can be bypassed. -Angus >> Another important thing we talked about is about the open discussion on >> this. OpenStack Wiki seems a good place to document settled designs but >> not for interactive discussions. Probably we should leverage etherpad >> and the mailinglist when moving forward. Suggestions on this are also >> welcomed. >> > > +1 > > cheers, > Zane. > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gus at inodes.org Mon Dec 1 23:52:46 2014 From: gus at inodes.org (Angus Lees) Date: Mon, 01 Dec 2014 23:52:46 +0000 Subject: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories References: Message-ID: On Mon Dec 01 2014 at 2:06:18 PM henry hly wrote: > My suggestion is that starting with LB and VPN as a trial, which can > never be distributed. > .. Sure they can! Loadbalancing in particular _should_ be distributed if both the clients and backends are in the same cluster... (I agree with your suggestion to start with LB+VPN btw, just not your reasoning ;) - Gus -------------- next part -------------- An HTML attachment was scrubbed... URL: From clint at fewbar.com Tue Dec 2 00:05:28 2014 From: clint at fewbar.com (Clint Byrum) Date: Mon, 01 Dec 2014 16:05:28 -0800 Subject: [openstack-dev] [Heat] Using Job Queues for timeout ops In-Reply-To: <547CD826.6010204@redhat.com> References: <4641310AFBEE10419D0A020273367C140C9F27AA@G2W2436.americas.hpqcorp.net> <5464B7FB.4050900@redhat.com> <1415889808-sup-7496@fewbar.com> <5464F09F.8090107@redhat.com> <1415902420-sup-7384@fewbar.com> <547CD826.6010204@redhat.com> Message-ID: <1417478279-sup-9084@fewbar.com> Excerpts from Zane Bitter's message of 2014-12-01 13:05:42 -0800: > On 13/11/14 13:59, Clint Byrum wrote: > > I'm not sure we have the same understanding of AMQP, so hopefully we can > > clarify here. This stackoverflow answer echoes my understanding: > > > > http://stackoverflow.com/questions/17841843/rabbitmq-does-one-consumer-block-the-other-consumers-of-the-same-queue > > > > Not ack'ing just means they might get retransmitted if we never ack. It > > doesn't block other consumers. And as the link above quotes from the > > AMQP spec, when there are multiple consumers, FIFO is not guaranteed. > > Other consumers get other messages. > > Thanks, obviously my recollection of how AMQP works was coloured too > much by oslo.messaging. 
> > So just add the ability for a consumer to read, work, ack to > > oslo.messaging, and this is mostly handled via AMQP. Of course that > > also likely means no zeromq for Heat without accepting that messages > > may be lost if workers die. > > > > Basically we need to add something that is not "RPC" but instead > > "jobqueue" that mimics this: > > > > http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/rpc/dispatcher.py#n131 > > > > I've always been suspicious of this bit of code, as it basically means > > that if anything fails between that call, and the one below it, we have > > lost contact, but as long as clients are written to re-send when there > > is a lack of reply, there shouldn't be a problem. But, for a job queue, > > there is no reply, and so the worker would dispatch, and then > > acknowledge after the dispatched call had returned (including having > > completed the step where new messages are added to the queue for any > > newly-possible children). > > I'm curious how people are deploying Rabbit at the moment. Are they > setting up multiple brokers and writing messages to disk before > accepting them? I assume yes on the former but no on the latter, since > there's no particular point in having e.g. 5 nines durability in the > queue when the overall system is as weak as your flakiest node. > Usually the pseudo-code should be:

    msg = queue.read()
    do_something_idempotent_with(msg.payload)
    msg.ack()

The idea is to ack only after you've done _everything_ with the payload, but to not freak out if somebody already did _some_ of what you did with the payload. > OTOH if we were to add what you're proposing, then we would need folks > to deploy Rabbit that way (at least for Heat), since waiting for Acks on > receipt is insufficient to make messaging reliable if the broker can > easily outright lose the message. >
If your broker is in a cluster, it makes sure it's written into _many_ queue storages. Currently if you deploy TripleO w/ 3 controllers, you get a clustered RabbitMQ and sufficient durability for the pattern I cited. Users may not be deploying this way, but they should be. I'm sort of assuming qpid's clustering works the same. 0mq will likely not work at all for this. Other options are feasible too, like a simple redis queue that you abuse as a job queue. > I think all of the proposed approaches would benefit from this feature, > but I'm concerned about any increased burden on deployers too. Right now they have the burden of supporting coarse timeouts which seems like it will fail often. That seems worse in my head. From ijw.ubuntu at cack.org.uk Tue Dec 2 00:35:16 2014 From: ijw.ubuntu at cack.org.uk (Ian Wells) Date: Mon, 1 Dec 2014 16:35:16 -0800 Subject: [openstack-dev] [Neutron] Edge-VPN and Edge-Id In-Reply-To: References: Message-ID: On 1 December 2014 at 09:01, Mathieu Rohon wrote: This is an alternative that would say : you want an advanced service > for your VM, please stretch your l2 network to this external > component, that is driven by an external controller, and make your > traffic goes to this component to take benefit of this advanced > service. This is a valid alternative of course, but distributing the > service directly to each compute node is much more valuable, ASA it is > doable. > Right, so a lot rides on the interpretation of 'advanced service' here, and also 'attachment'. Firstly, the difference between this and the 'advanced services' (including the L3 functionality, though it's not generally considered an 'advanced service') is that advanced services that exist today attach via an addressed port. This bridges in. That's quite a signifcant difference, which is to an extent why I've avoided lumping the two together and haven't called this an advanced service itself, although it's clearly similar. 
Secondly, 'attachment' has historically meant a connection to that port. But in DVRs, it can be a multipoint connection to the network - manifested on several hosts - all through the auspices of a single port. In the edge-id proposal you'll note that I've carefully avoided defining what an attachment is, largely because I have a natural tendency to want to see the interface at the API level before I worry about the backend, I admit. Your point about distributed services is well taken, and I think would be addressed by one of these distributed attachment types. > So the issue I worry about here is that if we start down the path of > adding > > the MPLS datamodels to Neutron we have to add Kevin's switch control > work. > > And the L2VPN descriptions for GRE, L2TPv3, VxLAN, and EVPN. And > whatever > > else comes along. And we get back to 'that's a lot of big changes that > > aren't interesting to 90% of Neutron users' - difficult to get in and a > lot > > of overhead to maintain for the majority of Neutron developers who don't > > want or need it. > > This shouldn't be a lot of big changes, once interfaces between > advanced services and neutron core services will be cleaner. Well, incorporating a lot of models into Neutron *is*, clearly, quite a bit of change, for starters. The edge-id concept says 'the data models live outside neutron in a separate system' and there, yes, absolutely, this proposes a clean model for edge/Neutron separation in the way you're alluding to with advanced services. I think your primary complaint is that it doesn't define that interface for an OVS driver based system. The edge-vpn concept says 'the data models exists within neutron in an integrated fashion' and, if you agree that separation is the way to go, this seems to me to be exactly the wrong approach to be using. 
It's the way advanced services are working - for now - but that's because we believe it would be hard to pull them out because the interfaces between service and Neutron don't currently exist. The argument for this seems to be 'we should incorporate it so that we can pull it out at the same time as advanced services' but it feels like that's making more work now so that we can do even more work in the future. For an entirely new thing that is in many respects not like a service I would prefer not to integrate it in the first place, thus skipping over that whole question of how to break it out in the future. It's an open question whether the work to make it play nicely with the existing ML2 model is worth the effort or not, because I didn't study that. It's not relevant to my needs, but if you're interested then we could talk about what other specs would be required. -- Ian. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dklyle0 at gmail.com Tue Dec 2 00:49:20 2014 From: dklyle0 at gmail.com (David Lyle) Date: Mon, 1 Dec 2014 17:49:20 -0700 Subject: [openstack-dev] [Horizon] Reminder Meeting moved to Wed Message-ID: We recently changed the Horizon meeting times to be friendlier for more timezones. This week the Horizon team meeting will be at 2000 UTC on Wed in #openstack-meeting-3 https://wiki.openstack.org/wiki/Meetings/Horizon David -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Tue Dec 2 01:27:10 2014 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 02 Dec 2014 12:27:10 +1100 Subject: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023) In-Reply-To: <547CA476.8080203@nemebean.com> References: <54755DE7.2070107@redhat.com> <547607C3.3090604@nemebean.com> <547CA476.8080203@nemebean.com> Message-ID: <547D156E.6080401@redhat.com> On 12/02/2014 04:25 AM, Ben Nemec wrote: > 1) A specific reason SHELLOPTS can't be used. 
IMO leave this alone as it changes global behaviour at a low level and that is a vector for unintended side-effects. Some thoughts:
- We don't want tracing output of various well-known scripts that might run from /bin.
- SHELLOPTS is read-only, so you have to "set -x; export SHELLOPTS", which means to turn it on for children you have to start tracing yourself. It's unintuitive and a bit weird.
- Following from that, "DIB_DEBUG_TRACE=n disk-image-create" is the same as "disk-image-create -x", which is consistent. This can be useful for CI wrappers.
- Pretty sure SHELLOPTS doesn't survive sudo, which might add another layer of complication for users.
- A known env variable can be usefully overloaded to signal to scripts not in bash, rather than parsing SHELLOPTS.
> I'm all for improving in this area, but before we make an intrusive > change with an ongoing cost that won't work with anything not > explicitly enabled for it, I want to make sure it's the right thing > to do. As yet I'm not convinced. For "ongoing cost" -- I've rebased this about 15 times and there just isn't that much change in practice. In reality everyone copy-pastes another script to get started, so at least they'll copy-paste something consistent. That and dib-lint barfs if they don't. This makes "disk-image-create -x" do something actually useful by standardising the inconsistent existing de facto headers in all files. How is this *worse* than the status quo? -i From dannchoi at cisco.com Tue Dec 2 02:04:27 2014 From: dannchoi at cisco.com (Danny Choi (dannchoi)) Date: Tue, 2 Dec 2014 02:04:27 +0000 Subject: [openstack-dev] [qa] Should it be allowed to attach 2 interfaces from the same subnet to a VM? Message-ID: Hi, When I attach 2 interfaces from the same subnet to a VM, there is no error returned and both interfaces come up.
lab at tme211:/opt/stack/logs$ nova interface-attach --net-id e38dba4a-74ed-4312-ba21-2a04b5c5a5b5 cirros-1
lab at tme211:/opt/stack/logs$ nova list
+--------------------------------------+----------+--------+------------+-------------+-------------------+
| ID                                   | Name     | Status | Task State | Power State | Networks          |
+--------------------------------------+----------+--------+------------+-------------+-------------------+
| 9d88d0b5-2453-4657-8058-987980ec7744 | cirros-1 | ACTIVE | -          | Running     | private=10.0.0.10 |
+--------------------------------------+----------+--------+------------+-------------+-------------------+
lab at tme211:/opt/stack/logs$ nova interface-attach --net-id e38dba4a-74ed-4312-ba21-2a04b5c5a5b5 cirros-1
lab at tme211:/opt/stack/logs$ nova list
+--------------------------------------+----------+--------+------------+-------------+------------------------------+
| ID                                   | Name     | Status | Task State | Power State | Networks                     |
+--------------------------------------+----------+--------+------------+-------------+------------------------------+
| 9d88d0b5-2453-4657-8058-987980ec7744 | cirros-1 | ACTIVE | -          | Running     | private=10.0.0.10, 10.0.0.11 |
+--------------------------------------+----------+--------+------------+-------------+------------------------------+
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:92:2D:2B
          inet addr:10.0.0.10  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe92:2d2b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:514 errors:0 dropped:0 overruns:0 frame:0
          TX packets:307 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:48342 (47.2 KiB)  TX bytes:41750 (40.7 KiB)
eth1      Link encap:Ethernet  HWaddr FA:16:3E:EF:55:BC
          inet addr:10.0.0.11  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:feef:55bc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:49 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3556 (3.4 KiB)  TX bytes:1120 (1.0 KiB)
Should this operation be allowed? Thanks, Danny -------------- next part -------------- An HTML attachment was scrubbed... URL: From legiangthanh at gmail.com Tue Dec 2 02:26:05 2014 From: legiangthanh at gmail.com (thanh le giang) Date: Tue, 2 Dec 2014 09:26:05 +0700 Subject: [openstack-dev] [CI]Setup CI system behind proxy Message-ID: Dear all I have set up a CI system successfully with direct access to the internet. Now I have another requirement which requires setting up a CI system behind a proxy, but I can't find any way to configure zuul to connect to gerrit through the proxy. Any advice is appreciated. Thanks and Regards -- L.G.Thanh Email: legiangthan at gmail.com lgthanh at fit.hcmus.edu.vn -------------- next part -------------- An HTML attachment was scrubbed... URL: From m4d.coder at gmail.com Tue Dec 2 03:10:53 2014 From: m4d.coder at gmail.com (W Chan) Date: Mon, 1 Dec 2014 19:10:53 -0800 Subject: [openstack-dev] [Mistral] Event Subscription Message-ID: Renat, To clarify on the shortcut solution, are you saying 1) we add an ad hoc event subscription to the workflow spec OR 2) add a one-time event subscription to the workflow execution OR both? I propose a separate worker/executor to process the events so as not to disrupt the workflow executions. What if there are a lot of subscribers? What if one or more subscribers are offline? Do we retry and how many times? These activities will likely disrupt the throughput of the workflows and I'd rather handle these activities separately. Winson -------------- next part -------------- An HTML attachment was scrubbed...
URL: From steven at wedontsleep.org Tue Dec 2 03:55:27 2014 From: steven at wedontsleep.org (Steve Kowalik) Date: Tue, 02 Dec 2014 14:55:27 +1100 Subject: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023) In-Reply-To: <547D156E.6080401@redhat.com> References: <54755DE7.2070107@redhat.com> <547607C3.3090604@nemebean.com> <547CA476.8080203@nemebean.com> <547D156E.6080401@redhat.com> Message-ID: <547D382F.3080201@wedontsleep.org> On 02/12/14 12:27, Ian Wienand wrote: > - pretty sure SHELLOPTS doesn't survive sudo, which might add another > layer of complication for users sudo is well-known to strip out all but a well-defined list of environment variables when you use it. sudo -E turns that off, but the configuration can still prohibit certain variables from propagating. Cheers, -- Steve "Why does everyone say 'Relax' when they're about to do something terrible?" - Ensign Harry Kim, USS Voyager From daya_k at yahoo.com Tue Dec 2 04:01:05 2014 From: daya_k at yahoo.com (daya kamath) Date: Tue, 2 Dec 2014 04:01:05 +0000 (UTC) Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: <547CCA69.4000909@anteaya.info> References: <547CCA69.4000909@anteaya.info> Message-ID: <333765003.850985.1417492865541.JavaMail.yahoo@jws10710.mail.gq1.yahoo.com> hi anita, i am located in india, so would prefer something slightly earlier, maybe 1-2 hours. thanks for checking. From: Anita Kuno To: openstack Development Mailing List ; "openstack-infra at lists.openstack.org" Sent: Tuesday, December 2, 2014 1:37 AM Subject: [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party One of the actions from the Kilo Third-Party CI summit session was to start up an additional meeting for CI operators to participate from non-North American time zones. Please reply to this email with times/days that would work for you.
The current third party meeting is on Mondays at 1800 utc which works well since Infra meetings are on Tuesdays. If we could find a time that works for Europe and APAC that is also on Monday that would be ideal. Josh Hesketh has said he will try to be available for these meetings, he is in Australia. Let's get a sense of what days and timeframes work for those interested and then we can narrow it down and pick a channel. Thanks everyone, Anita. _______________________________________________ OpenStack-Infra mailing list OpenStack-Infra at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -------------- next part -------------- An HTML attachment was scrubbed... URL: From joshua.hesketh at rackspace.com Tue Dec 2 04:13:57 2014 From: joshua.hesketh at rackspace.com (Joshua Hesketh) Date: Tue, 2 Dec 2014 15:13:57 +1100 Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: <547CCA69.4000909@anteaya.info> References: <547CCA69.4000909@anteaya.info> Message-ID: <547D3C85.9050106@rackspace.com> Hey, So to suit most APAC business hours we'd be looking somewhere between 00:00am -> 09:00am UTC. So we have a starting point, how would 00:00am Tuesday UTC suit people? For China that would be 8am, for Australia 11am (when in daylight savings). Cheers, Josh Rackspace Australia On 12/2/14 7:07 AM, Anita Kuno wrote: > One of the actions from the Kilo Third-Party CI summit session was to > start up an additional meeting for CI operators to participate from > non-North American time zones. > > Please reply to this email with times/days that would work for you. The > current third party meeting is on Mondays at 1800 utc which works well > since Infra meetings are on Tuesdays. If we could find a time that works > for Europe and APAC that is also on Monday that would be ideal. > > Josh Hesketh has said he will try to be available for these meetings, he > is in Australia. 
> > Let's get a sense of what days and timeframes work for those interested > and then we can narrow it down and pick a channel. > > Thanks everyone, > Anita. > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra From lzy.dev at gmail.com Tue Dec 2 04:16:44 2014 From: lzy.dev at gmail.com (Zhi Yan Liu) Date: Tue, 2 Dec 2014 12:16:44 +0800 Subject: [openstack-dev] [glance] Deprecating osprofiler option 'enabled' in favour of 'profiler_enabled' In-Reply-To: References: <20141201143756.GA1814@gmail.com> Message-ID: Why not change other services instead of glance? I see one reason is "glance is the only one service use this option name", but to me one reason to keep it as-is in glance is that the original name makes more sense due to the option already being under the "profiler" group; adding a "profiler" prefix to it is really redundant, imo, and in other existing config groups no one goes this naming way. Then in the code we can just use a clear way: CONF.profiler.enabled instead of: CONF.profiler.profiler_enabled thanks, zhiyan On Mon, Dec 1, 2014 at 11:43 PM, Ian Cordasco wrote: > On 12/1/14, 08:37, "Louis Taylor" wrote: > >>Hi all, >> >>In order to enable or disable osprofiler in Glance, we currently have an >>option: >> >> [profiler] >> # If False fully disable profiling feature. >> enabled = False >> >>However, all other services with osprofiler integration use a similar >>option >>named profiler_enabled. >> >>For consistency, I'm proposing we deprecate this option's name in favour >>of >>profiler_enabled. This should make it easier for someone to configure >>osprofiler across projects with less confusion. Does anyone have any >>thoughts >>or concerns about this? >> >>Thanks, >>Louis > > We *just* introduced this if I remember the IRC discussion from last > month. I'm not sure how many people will be immediately making use of it.
> I'm in favor of consistency where possible and while this would require a > deprecation, I think it's a worthwhile change. > > +1 from me > > - > > Ian > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From clint at fewbar.com Tue Dec 2 04:46:14 2014 From: clint at fewbar.com (Clint Byrum) Date: Mon, 01 Dec 2014 20:46:14 -0800 Subject: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023) In-Reply-To: References: <54755DE7.2070107@redhat.com> <547607C3.3090604@nemebean.com> Message-ID: <1417495283-sup-444@fewbar.com> Excerpts from James Slagle's message of 2014-11-28 11:27:20 -0800: > On Thu, Nov 27, 2014 at 1:29 PM, Sullivan, Jon Paul > wrote: > >> -----Original Message----- > >> From: Ben Nemec [mailto:openstack at nemebean.com] > >> Sent: 26 November 2014 17:03 > >> To: OpenStack Development Mailing List (not for usage questions) > >> Subject: Re: [openstack-dev] [diskimage-builder] Tracing levels for > >> scripts (119023) > >> > >> On 11/25/2014 10:58 PM, Ian Wienand wrote: > >> > Hi, > >> > > >> > My change [1] to enable a consistent tracing mechanism for the many > >> > scripts diskimage-builder runs during its build seems to have hit a > >> > stalemate. > >> > > >> > I hope we can agree that the current situation is not good. When > >> > trying to develop with diskimage-builder, I find myself constantly > >> > going and fiddling with "set -x" in various scripts, requiring me > >> > re-running things needlessly as I try and trace what's happening. > >> > Conversely some scripts set -x all the time and give output when you > >> > don't want it. > >> > > >> > Now nodepool is using d-i-b more, it would be even nicer to have > >> > consistency in the tracing so relevant info is captured in the image > >> > build logs.
> >> > > >> > The crux of the issue seems to be some disagreement between reviewers > >> > over having a single "trace everything" flag or a more fine-grained > >> > approach, as currently implemented after it was asked for in reviews. > >> > > >> > I must be honest, I feel a bit silly calling out essentially a > >> > four-line patch here. > >> > >> My objections are documented in the review, but basically boil down to > >> the fact that it's not a four line patch, it's a 500+ line patch that > >> does essentially the same thing as: > >> > >> set +e > >> set -x > >> export SHELLOPTS > > > > I don't think this is true, as there are many more things in SHELLOPTS than just xtrace. I think it is wrong to call the two approaches equivalent. > > > >> > >> in disk-image-create. You do lose set -e in disk-image-create itself on > >> debug runs because that's not something we can safely propagate, > >> although we could work around that by unsetting it before calling hooks. > >> FWIW I've used this method locally and it worked fine. > > > > So this does say that your alternative implementation has a difference from the proposed one. And that the difference has a negative impact. > > > >> > >> The only drawback is it doesn't allow the granularity of an if block in > >> every script, but I don't personally see that as a particularly useful > >> feature anyway. I would like to hear from someone who requested that > >> functionality as to what their use case is and how they would define the > >> different debug levels before we merge an intrusive patch that would > >> need to be added to every single new script in dib or tripleo going > >> forward. > > > > So currently we have boilerplate to be added to all new elements, and that boilerplate is: > > > > set -eux > > set -o pipefail > > > > This patch would change that boilerplate to: > > > > if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then > > set -x > > fi > > set -eu > > set -o pipefail > > > > So it's adding 3 lines. 
It doesn't seem onerous, especially as most people creating a new element will either copy an existing one or copy/paste the header anyway. > > I think that giving control over what is effectively debug or non-debug output is a desirable feature. > I don't think it's debug vs non-debug. I think script writers that > have explicitly used set -x previously have then operated under the > assumption that they don't need to add any useful logging since it's > running -x. In that case, this patch is actually harmful. > I believe James has hit the nail squarely on the head with the paragraph above. I propose a way forward for this: 1) Conform all o-r-c scripts to the logging standards we have in OpenStack, or write new standards for diskimage-builder and conform them to those standards. Abolish non-conditional xtrace in any script conforming to the standards. 2) Once that is done, implement optional -x. I rather prefer the explicit conditional set -x implementation over SHELLOPTS. As somebody else pointed out, it feels like asking for unintended side-effects. But the "how" is far less important than the "what" in this case, which step 1 will better define. Anyone else have a better plan? From russell.sim at gmail.com Tue Dec 2 05:14:08 2014 From: russell.sim at gmail.com (Russell Sim) Date: Tue, 02 Dec 2014 16:14:08 +1100 Subject: [openstack-dev] [docs] Older Developer documentation Icehouse? Havana? Message-ID: <87a9363a73.fsf@sparky.home> Hi, From what I can see it seems like the developer documentation available on the OpenStack website is generated from the git repositories. http://docs.openstack.org/developer/openstack-projects.html Are older versions of this documentation currently generated and hosted somewhere? Or is it possible to generate versions of this developer documentation for each release and host it on the same website?
-- Cheers, Russell From mhanif at brocade.com Tue Dec 2 05:26:09 2014 From: mhanif at brocade.com (Mohammad Hanif) Date: Tue, 2 Dec 2014 05:26:09 +0000 Subject: [openstack-dev] [Neutron] Edge-VPN and Edge-Id In-Reply-To: References: Message-ID: <119A1974-380C-461E-9937-65B9763E39E6@brocade.com> I hope we all understand how edge VPN works and what interactions are introduced as part of this spec. I see references to a neutron-network mapping to the tunnel, which is not at all the case and the edge-VPN spec doesn't propose it. At a very high level, there are two main concepts:
1. Creation of a per-tenant VPN "service" on a PE (physical router) which has connectivity to other PEs using some tunnel (not known to the tenant, or tenant-facing). An attachment circuit for this VPN service is also created which carries a "list" of tenant networks (the list is initially empty).
2. The tenant "updates" the list of tenant networks in the attachment circuit, which essentially allows the VPN "service" to add or remove the network from being part of that VPN.
A service plugin implements what is described in (1) and provides an API which is called by what is described in (2). The Neutron driver only "updates" the attachment circuit using an API (the attachment circuit is also part of the service plugin's data model). I don't see where we are introducing large data model changes to Neutron. How else does one introduce a network service in OpenStack if not through a service plugin? As we can see, the tenant needs to communicate (explicitly or otherwise) to add/remove its networks to/from the VPN. There has to be a channel and the APIs to achieve this. Thanks, -Hanif.
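To make the two-step model described above easier to follow, here is a deliberately tiny sketch of the relationships involved. Every name in it is hypothetical (this is not the edge-VPN spec's API): a per-tenant VPN "service" object holds one attachment circuit, and the tenant only ever updates that circuit's list of networks.

```python
# Hypothetical illustration of the two-step edge-VPN model described
# above; none of these class or method names come from the actual spec.

class AttachmentCircuit:
    """Holds the (initially empty) list of tenant networks in the VPN."""

    def __init__(self):
        self.networks = []

    def update(self, add=(), remove=()):
        # Step 2: the tenant adds/removes networks; the service plugin
        # would react by joining or withdrawing those networks from the VPN.
        for net in add:
            if net not in self.networks:
                self.networks.append(net)
        self.networks = [n for n in self.networks if n not in remove]


class EdgeVpnService:
    """Step 1: per-tenant VPN 'service' on a PE, with its circuit."""

    def __init__(self, tenant_id):
        self.tenant_id = tenant_id
        self.circuit = AttachmentCircuit()


svc = EdgeVpnService("tenant-a")            # created once; list starts empty
svc.circuit.update(add=["net-1", "net-2"])  # tenant joins two networks
svc.circuit.update(remove=["net-1"])        # tenant withdraws one
print(svc.circuit.networks)                 # ['net-2']
```

The point of the sketch is only that step (1) is a one-time service-side action, while step (2), updating the circuit, is the sole tenant-facing operation.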
From: Ian Wells > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Monday, December 1, 2014 at 4:35 PM To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id On 1 December 2014 at 09:01, Mathieu Rohon > wrote: This is an alternative that would say : you want an advanced service for your VM, please stretch your l2 network to this external component, that is driven by an external controller, and make your traffic goes to this component to take benefit of this advanced service. This is a valid alternative of course, but distributing the service directly to each compute node is much more valuable, ASA it is doable. Right, so a lot rides on the interpretation of 'advanced service' here, and also 'attachment'. Firstly, the difference between this and the 'advanced services' (including the L3 functionality, though it's not generally considered an 'advanced service') is that advanced services that exist today attach via an addressed port. This bridges in. That's quite a significant difference, which is to an extent why I've avoided lumping the two together and haven't called this an advanced service itself, although it's clearly similar. Secondly, 'attachment' has historically meant a connection to that port. But in DVRs, it can be a multipoint connection to the network - manifested on several hosts - all through the auspices of a single port. In the edge-id proposal you'll note that I've carefully avoided defining what an attachment is, largely because I have a natural tendency to want to see the interface at the API level before I worry about the backend, I admit. Your point about distributed services is well taken, and I think would be addressed by one of these distributed attachment types. > So the issue I worry about here is that if we start down the path of adding > the MPLS datamodels to Neutron we have to add Kevin's switch control work.
> And the L2VPN descriptions for GRE, L2TPv3, VxLAN, and EVPN. And whatever > else comes along. And we get back to 'that's a lot of big changes that > aren't interesting to 90% of Neutron users' - difficult to get in and a lot > of overhead to maintain for the majority of Neutron developers who don't > want or need it. This shouldn't be a lot of big changes, once interfaces between advanced services and neutron core services will be cleaner. Well, incorporating a lot of models into Neutron *is*, clearly, quite a bit of change, for starters. The edge-id concept says 'the data models live outside neutron in a separate system' and there, yes, absolutely, this proposes a clean model for edge/Neutron separation in the way you're alluding to with advanced services. I think your primary complaint is that it doesn't define that interface for an OVS driver based system. The edge-vpn concept says 'the data models exists within neutron in an integrated fashion' and, if you agree that separation is the way to go, this seems to me to be exactly the wrong approach to be using. It's the way advanced services are working - for now - but that's because we believe it would be hard to pull them out because the interfaces between service and Neutron don't currently exist. The argument for this seems to be 'we should incorporate it so that we can pull it out at the same time as advanced services' but it feels like that's making more work now so that we can do even more work in the future. For an entirely new thing that is in many respects not like a service I would prefer not to integrate it in the first place, thus skipping over that whole question of how to break it out in the future. It's an open question whether the work to make it play nicely with the existing ML2 model is worth the effort or not, because I didn't study that. It's not relevant to my needs, but if you're interested then we could talk about what other specs would be required. -- Ian. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From m4d.coder at gmail.com Tue Dec 2 05:26:19 2014 From: m4d.coder at gmail.com (W Chan) Date: Mon, 1 Dec 2014 21:26:19 -0800 Subject: [openstack-dev] [Mistral] Event Subscription Message-ID: Renat, Alternately, what do you think if mistral just post the events to given exchange(s) on the same transport backend and let the subscribers decide how to consume the events (i.e post to webhook, etc.) from these exchanges? This will simplify implementation somewhat. The engine can just take care of publishing the events to the exchanges and call it done. Winson -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Tue Dec 2 05:39:24 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Tue, 02 Dec 2014 05:39:24 +0000 Subject: [openstack-dev] [horizon] REST and Django References: Message-ID: On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran wrote: > I agree that keeping the API layer thin would be ideal. I should add that > having discrete API calls would allow dynamic population of table. However, > I will make a case where it *might* be necessary to add additional APIs. > Consider that you want to delete 3 items in a given table. > > If you do this on the client side, you would need to perform: n * (1 API > request + 1 AJAX request) > If you have some logic on the server side that batch delete actions: n * > (1 API request) + 1 AJAX request > > Consider the following: > n = 1, client = 2 trips, server = 2 trips > n = 3, client = 6 trips, server = 4 trips > n = 10, client = 20 trips, server = 11 trips > n = 100, client = 200 trips, server 101 trips > > As you can see, this does not scale very well.... something to consider... > Yep, though in the above cases the client is still going to be hanging, waiting for those server-backend calls, with no feedback until it's all done. 
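As a quick sanity check on the figures Thai quotes, the two costs can be written out directly (a throwaway sketch, not Horizon code): client-side looping pays one AJAX hop plus one API hop per item, while a server-side batch pays one AJAX hop total plus one API hop per item.

```python
# Round trips to delete n items, per the comparison quoted above.

def client_side_trips(n):
    # each item: 1 AJAX request to Horizon + 1 API request onward
    return n * (1 + 1)

def server_side_trips(n):
    # one batched AJAX request, then 1 API request per item
    return 1 + n

for n in (1, 3, 10, 100):
    print(n, client_side_trips(n), server_side_trips(n))
# prints: 1 2 2 / 3 6 4 / 10 20 11 / 100 200 101
```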
I would hope that the client-server call overhead is minimal, but I guess that's probably wishful thinking when in the land of random Internet users hitting some provider's Horizon :) So yeah, having mulled it over myself I agree that it's useful to have batch operations implemented in the POST handler, the most common operation being DELETE. Maybe one day we could transition to a batch call with user feedback using a websocket connection. Richard > From: Richard Jones > To: "Tripp, Travis S" , OpenStack List < > openstack-dev at lists.openstack.org> > Date: 11/27/2014 05:38 PM > Subject: Re: [openstack-dev] [horizon] REST and Django > ------------------------------ > > > > > On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S wrote: > > Hi Richard, > > You are right, we should put this out on the main ML, so copying > thread out to there. ML: FYI that this started after some impromptu IRC > discussions about a specific patch led into an impromptu google hangout > discussion with all the people on the thread below. > > > Thanks Travis! > > > > As I mentioned in the review[1], Thai and I were mainly discussing the > possible performance implications of network hops from client to horizon > server and whether or not any aggregation should occur server side. In > other words, some views require several APIs to be queried before any data > can be displayed and it would eliminate some extra network requests from > client to server if some of the data was first collected on the server side > across service APIs. For example, the launch instance wizard will need to > collect data from quite a few APIs before even the first step is displayed > (I've listed those out in the blueprint [2]).
> > The flip side to that (as you also pointed out) is that if we keep the > API's fine grained then the wizard will be able to optimize in one place > the calls for data as it is needed. For example, the first step may only > need half of the API calls. It also could lead to perceived performance > increases just due to the wizard making a call for different data > independently and displaying it as soon as it can. > > Indeed, looking at the current launch wizard code it seems like you > wouldn't need to load all that data for the wizard to be displayed, since > only some subset of it would be necessary to display any given panel of the > wizard. > > > > I tend to lean towards your POV and starting with discrete API calls > and letting the client optimize calls. If there are performance problems > or other reasons then doing data aggregation on the server side could be > considered at that point. > > > I'm glad to hear it. I'm a fan of optimising when necessary, and not > beforehand :) > > > > Of course if anybody is able to do some performance testing between > the two approaches then that could affect the direction taken. > > I would certainly like to see us take some measurements when performance > issues pop up.
Optimising without solid metrics is a bad idea :) > > Richard > > > > > [1] > *https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py* > > [2] > *https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign* > > -Travis *From: *Richard Jones <*r1chardj0n3s at gmail.com* > > > * Date: *Wednesday, November 26, 2014 at 11:55 PM * To: *Travis Tripp <*travis.tripp at hp.com* >, Thai Q > Tran/Silicon Valley/IBM <*tqtran at us.ibm.com* >, > David Lyle <*dklyle0 at gmail.com* >, Maxime Vidori < > *maxime.vidori at enovance.com* >, > "Wroblewski, Szymon" <*szymon.wroblewski at intel.com* > >, "Wood, Matthew David (HP Cloud - > Horizon)" <*matt.wood at hp.com* >, "Chen, Shaoquan" < > *sean.chen2 at hp.com* >, "Farina, Matt (HP Cloud)" < > *matthew.farina at hp.com* >, Cindy Lu/Silicon > Valley/IBM <*clu at us.ibm.com* >, Justin > Pomeroy/Rochester/IBM <*jpomero at us.ibm.com* >, > Neill Cox <*neill.cox at ingenious.com.au* > > * Subject: *Re: REST and Django > > I'm not sure whether this is the appropriate place to discuss this, or > whether I should be posting to the list under [Horizon] but I think we need > to have a clear idea of what goes in the REST API and what goes in the > client (angular) code. > > In my mind, the thinner the REST API the better. Indeed if we can get > away with proxying requests through without touching any *client code, that > would be great. > > Coding additional logic into the REST API means that a developer would > need to look in two places, instead of one, to determine what was happening > for a particular call. If we keep it thin then the API presented to the > client developer is very, very similar to the API presented by the > services. Minimum surprise. > > Your thoughts? > > > Richard > > > On Wed Nov 26 2014 at 2:40:52 PM Richard Jones < > *r1chardj0n3s at gmail.com* > wrote: > > > Thanks for the great summary, Travis.
> > I've completed the work I pledged this morning, so now the REST API > change set has: > > - no rest framework dependency > - AJAX scaffolding in openstack_dashboard.api.rest.utils > - code in openstack_dashboard/api/rest/ > - renamed the API from "identity" to "keystone" to be consistent > - added a sample of testing, mostly for my own sanity to check > things were working > > *https://review.openstack.org/#/c/136676* > > > > Richard > > On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S < > *travis.tripp at hp.com* > wrote: > > > Hello all, > > Great discussion on the REST urls today! I think that we are on > track to come to a common REST API usage pattern. To provide a quick summary: > > We all agreed that going to a straight REST pattern rather than > through tables was a good idea. We discussed using direct get / post in > Django views like what Max originally used[1][2] and Thai also started[3] > with the identity table rework or to go with djangorestframework [5] like > what Richard was prototyping with[4]. > > The main things we would use from Django Rest Framework were > built in JSON serialization (avoid boilerplate), better exception handling, > and some request wrapping. However, we all weren't sure about the need for > a full new framework just for that. At the end of the conversation, we > decided that it was a cleaner approach, but Richard would see if he could > provide some utility code to do that much for us without requiring the full > framework. David voiced that he doesn't want us building out a whole > framework on our own either. > > So, Richard will do some investigation during his day today and > get back to us. Whatever the case, we'll get a patch in horizon for the > base dependency (framework or Richard's utilities) that both Thai's work > and the launch instance work is dependent upon. We'll build REST style > API's using the same pattern. We will likely put the rest api's in > horizon/openstack_dashboard/api/rest/.
> > [1] > *https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/keypair.py* > > [2] > *https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/launch.py* > > [3] > *https://review.openstack.org/#/c/133767/8/openstack_dashboard/dashboards/identity/users/views.py* > > [4] > *https://review.openstack.org/#/c/136676/4/openstack_dashboard/rest_api/identity.py* > > [5] *http://www.django-rest-framework.org/* > > > Thanks, > > > Travis_______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From mbirru at gmail.com Tue Dec 2 06:21:17 2014 From: mbirru at gmail.com (Murali B) Date: Tue, 2 Dec 2014 11:51:17 +0530 Subject: [openstack-dev] SRIOV failures error- Message-ID: Hi, we are trying to bring up SR-IOV on our setup and are facing the below error when we try to create a VM. *Still during creating instance ERROR appears .PciDeviceRequestFailed: PCI device request ({'requests': [InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=58584ee1-8a41-4979-9905-4d18a3df3425,spec=[{physical_network='physnet1'}])], 'code': 500}equests)s failed* We followed the steps from https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking. Please help us to get rid of this error.
Please let us know if any hardware-side configuration is required for this to work properly. Thanks -Murali -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald.d.dugger at intel.com Tue Dec 2 06:53:30 2014 From: donald.d.dugger at intel.com (Dugger, Donald D) Date: Tue, 2 Dec 2014 06:53:30 +0000 Subject: [openstack-dev] [gantt] Scheduler sub-group meeting agenda 12/2 Message-ID: <6AF484C0160C61439DE06F17668F3BCB5343DCA3@ORSMSX114.amr.corp.intel.com> Meeting on #openstack-meeting at 1500 UTC (8:00AM MST) 1) Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo -- Don Dugger "Censeo Toto nos in Kansa esse decisse." - D. Gale Ph: 303/443-3786 -------------- next part -------------- An HTML attachment was scrubbed... URL: From itzikb at redhat.com Tue Dec 2 07:18:37 2014 From: itzikb at redhat.com (Itzik Brown) Date: Tue, 02 Dec 2014 09:18:37 +0200 Subject: [openstack-dev] SRIOV failures error- In-Reply-To: References: Message-ID: <547D67CD.20605@redhat.com> Hi, Seems like you don't have available devices for allocation. What's the output of: #echo 'use nova;select hypervisor_hostname,pci_stats from compute_nodes;' | mysql -u root Itzik On 12/02/2014 08:21 AM, Murali B wrote: > Hi > > we are trying to bring-up the SRIOV on set-up. > > facing the below error when we tried create the vm. > * > Still during creating instance ERROR appears . > PciDeviceRequestFailed: PCI device request ({'requests': > [InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=58584ee1-8a41-4979-9905-4d18a3df3425,spec=[{physical_network='physnet1'}])], > 'code': 500}equests)s failed* > > followed the steps from the > https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking > > Please help us to get rid this error. let us know if any configuration > is required at hardware in order to work properly.
> > Thanks > -Murali > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkaminski at mirantis.com Tue Dec 2 07:31:11 2014 From: pkaminski at mirantis.com (Przemyslaw Kaminski) Date: Tue, 02 Dec 2014 08:31:11 +0100 Subject: [openstack-dev] [Fuel] [Nailgun] Unit tests improvement meeting minutes In-Reply-To: References: <54789FA6.7050200@mirantis.com> <547C4DAC.1010906@mirantis.com> Message-ID: <547D6ABF.5060003@mirantis.com> As I mentioned, we just discovered all handlers along with their urls (Tornado pre-processed url regexps, so this was a bit easier) and all models (we used Django). Based on this we set up a handler that generated a Swagger JSON file according to this schema https://github.com/swagger-api/swagger-spec/blob/master/versions/2.0.md Swagger UI was told to use the URL of this handler as the feed of the spec, and that's basically all. We didn't implement auth since we wanted the docs to be widely visible. We did have Basic Auth though, but that was on the nginx side, which stood in front of the whole API. P. On 12/01/2014 08:00 PM, Dmitriy Shulyak wrote: > Swagger is not related to test improvement, but we started to discuss > it here so.. > > @Przemyslaw, how hard it will be to integrate it with nailgun rest api > (web.py and handlers hierarchy)? > Also is there any way to use auth with swagger? > > On Mon, Dec 1, 2014 at 1:14 PM, Przemyslaw Kaminski > > wrote: > > > On 11/28/2014 05:15 PM, Ivan Kliuk wrote: >> Hi, team! >> >> Let me please present ideas collected during the unit tests >> improvement meeting: >> 1) Rename class ``Environment`` to something more descriptive >> 2) Remove hardcoded self.clusters[0], e.t.c from ``Environment``.
>> Let's use parameters instead >> 3) run_tests.sh should invoke alternate syncdb() for cases where >> we don't need to test migration procedure, i.e. create_db_schema() >> 4) Consider usage of custom fixture provider. The main >> functionality should combine loading from YAML/JSON source and >> support fixture inheritance >> 5) The project needs in a document(policy) which describes: >> - Tests creation technique; >> - Test categorization (integration/unit) and approaches of >> testing different code base >> - >> 6) Review the tests and refactor unit tests as described in the >> test policy >> 7) Mimic Nailgun module structure in unit tests >> 8) Explore Swagger tool > > Swagger is a great tool, we used it in my previous job. We used > Tornado, attached some hand-crafted code to RequestHandler class > so that it inspected all its subclasses (i.e. different endpoint > with REST methods), generated swagger file and presented the > Swagger UI (https://github.com/swagger-api/swagger-ui) under some > /docs/ URL. > What this gave us is that we could just add YAML specification > directly to the docstring of the handler method and it would > automatically appear in the UI. It's worth noting that the UI > provides an interactive form for sending requests to the API so > that tinkering with the API is easy [1]. > > [1] > https://www.dropbox.com/s/y0nuxull9mxm5nm/Swagger%20UI%202014-12-01%2012-13-06.png?dl=0 > > P. 
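The handler-inspection trick described above (walk the URL/handler table, emit a Swagger 2.0 `paths` object, serve it from a docs handler) can be sketched in a few lines. This is a generic illustration with made-up handler names, not Nailgun's web.py code or the Tornado-era implementation:

```python
import re


class NodeHandler:
    """Hypothetical REST handler, mimicking Tornado's (url_regex, handler) table."""
    def get(self):
        """Return a single node."""
    def post(self):
        """Create a node."""


HANDLERS = [(r"/api/nodes/(?P<node_id>\d+)", NodeHandler)]


def build_swagger_spec(handlers):
    """Turn discovered handlers into a minimal Swagger 2.0 document."""
    paths = {}
    for pattern, handler in handlers:
        # Rewrite the regex's named groups into Swagger-style {param} templates.
        path = re.sub(r"\(\?P<(\w+)>[^)]*\)", r"{\1}", pattern)
        paths[path] = {
            method: {"summary": (getattr(handler, method).__doc__ or "").strip()}
            for method in ("get", "post", "put", "delete")
            if callable(getattr(handler, method, None))
        }
    return {"swagger": "2.0",
            "info": {"title": "API", "version": "1.0"},
            "paths": paths}
```

Serving the JSON-dumped result of `build_swagger_spec()` from a `/docs/spec` handler and pointing Swagger UI at that URL is essentially the whole integration; method docstrings (or YAML embedded in them) become the displayed documentation.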
> >> -- >> Sincerely yours, >> Ivan Kliuk >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Tue Dec 2 07:45:44 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Tue, 2 Dec 2014 13:45:44 +0600 Subject: [openstack-dev] [Mistral] Event Subscription In-Reply-To: References: Message-ID: <57E85143-D90B-4888-A22B-938C37646BF8@mirantis.com> Hi Winson, > On 02 Dec 2014, at 09:10, W Chan wrote: > > To clarify on the shortcut solution, are you saying 1) we add an adhoc event subscription to the workflow spec OR 2) add a one time event subscription to the workflow execution OR both? Not sure what you mean by "adhoc" here. What I meant is that we should have 2 options: Have an individual REST endpoint to be able to subscribe for certain types of events any time. For example, "Notifications about all workflow events for workflow name 'my_workflow'" or "Notifications about switching to state 'ERROR' for workflow 'my_workflow'". Using this endpoint we can also unsubscribe from these events. When we start a workflow ("mistral execution-create" in CLI and start_workflow() method in engine) we can configure the same subscription and pass it along with "start workflow" request. For such purposes, engine method "start_workflow" has keyword parameter "**params"
that can take any kind of additional parameters needed for workflow start (for example, when we start a reverse workflow we pass "task_name"). This way we can start a workflow and configure our subscription with a single request. In the first approach we would have to make two requests. > I propose a separate worker/executor to process the events so to not disrupt the workflow executions. What if there are a lot of subscribers? What if one or more subscribers are offline? Do we retry and how many times? These activities will likely disrupt the throughput of the workflows and I rather handle these activities separately. Yeah, I now tend to agree with you here. Although it still bothers me because of that performance overhead that we'll have most of the time. But generally, yes, you're right. Do you think we should use same executors to process these notifications or introduce a new type of entity for that? Thanks Renat Akhmerov @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Tue Dec 2 08:31:44 2014 From: apevec at gmail.com (Alan Pevec) Date: Tue, 2 Dec 2014 09:31:44 +0100 Subject: [openstack-dev] [stable] New config options, no default change In-Reply-To: <547CF9A1.4070009@electronicjungle.net> References: <20141111103053.GD7366@redhat.com> <5461FC66.6060103@openstack.org> <5468F85B.10901@electronicjungle.net> <54748DC0.50400@redhat.com> <547CF9A1.4070009@electronicjungle.net> Message-ID: >>> What is the text that should be included in the commit messages to make sure that it is picked up for release notes? >> I'm not sure anyone tracks commit messages to create release notes. Let's use existing DocImpact tag, I'll add check for this in the release scripts.
But I prefer if you could directly include the proposed text in the draft release notes (link below) >> better way to handle this is to create a draft, post it in review >> comments, and copy to release notes draft right before/after pushing the >> patch into gate. > > Forgive me, I think my question is more basic then. Where are the release > notes for a stable branch located to make such changes? https://wiki.openstack.org/wiki/ReleaseNotes/2014.2.1#Known_Issues_and_Limitations Cheers, Alan From trinath.somanchi at freescale.com Tue Dec 2 08:32:07 2014 From: trinath.somanchi at freescale.com (trinath.somanchi at freescale.com) Date: Tue, 2 Dec 2014 08:32:07 +0000 Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: <547CCA69.4000909@anteaya.info> References: <547CCA69.4000909@anteaya.info> Message-ID: Hi- It's nice to have CI operators meetings. I'm from India; 05:00 AM UTC on Tuesdays is okay for me. -- Trinath Somanchi - B39208 trinath.somanchi at freescale.com | extn: 4048 -----Original Message----- From: Anita Kuno [mailto:anteaya at anteaya.info] Sent: Tuesday, December 02, 2014 1:37 AM To: openstack Development Mailing List; openstack-infra at lists.openstack.org Subject: [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party One of the actions from the Kilo Third-Party CI summit session was to start up an additional meeting for CI operators to participate from non-North American time zones. Please reply to this email with times/days that would work for you. The current third party meeting is on Mondays at 1800 utc which works well since Infra meetings are on Tuesdays. If we could find a time that works for Europe and APAC that is also on Monday that would be ideal. Josh Hesketh has said he will try to be available for these meetings, he is in Australia. Let's get a sense of what days and timeframes work for those interested and then we can narrow it down and pick a channel.
Thanks everyone, Anita. _______________________________________________ OpenStack-Infra mailing list OpenStack-Infra at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra From apevec at gmail.com Tue Dec 2 08:39:40 2014 From: apevec at gmail.com (Alan Pevec) Date: Tue, 2 Dec 2014 09:39:40 +0100 Subject: [openstack-dev] [stable] Re: [neutron] the hostname regex pattern fix also changed behaviour :( In-Reply-To: References: <547860E4.7080304@redhat.com> Message-ID: >> With the change, will existing instances work as before? > > Yes, this cuts straight to the heart of the matter: What's the purpose of > these validation checks? Specifically, if someone is using an "invalid" > hostname that passed the previous check but doesn't pass an improved/updated > check, should we continue to allow it? ...snip... > I suggest they also consider the DoS-fix-backport and the Kilo-and-forwards > cases separately. If we don't have a solution for stable/juno yet, I need someone to propose a release note for 2014.2.1 (which is already frozen) at https://wiki.openstack.org/wiki/ReleaseNotes/2014.2.1#Known_Issues_and_Limitations Thanks, Alan From nuritv at mellanox.com Tue Dec 2 08:46:56 2014 From: nuritv at mellanox.com (Nurit Vilosny) Date: Tue, 2 Dec 2014 08:46:56 +0000 Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: References: <547CCA69.4000909@anteaya.info> Message-ID: Hi, Thanks Anita for pushing it. We would be able to be much more involved if the meetings were earlier. We're located in Israel, so Mondays anytime between 8:00-16:00 will be ideal for us.
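Checking a candidate slot against availability windows like these is easy to do mechanically. A small sketch using fixed standard-time offsets (the offsets are assumptions; DST is deliberately ignored, so treat this as purely illustrative and use the IANA tz database for real scheduling):

```python
from datetime import datetime, timezone, timedelta

# Rough standard-time offsets for the regions mentioned in this thread
# (assumed values for illustration only).
OFFSETS = {
    "India": timedelta(hours=5, minutes=30),
    "Israel": timedelta(hours=2),
    "Australia (Sydney)": timedelta(hours=11),
    "US Mountain": timedelta(hours=-7),
}


def local_times(utc_hour, utc_minute=0):
    """Show what a proposed UTC meeting time looks like in each region."""
    # 2014-12-02 was a Tuesday, matching the 05:00 UTC Tuesday proposal.
    base = datetime(2014, 12, 2, utc_hour, utc_minute, tzinfo=timezone.utc)
    return {region: base.astimezone(timezone(offset)).strftime("%a %H:%M")
            for region, offset in OFFSETS.items()}
```

For the 05:00 UTC Tuesday proposal this yields roughly 10:30 in India, 07:00 in Israel and 16:00 in Sydney, which is why it suits Europe and APAC better than the existing 1800 UTC Monday slot.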
Nurit -----Original Message----- From: trinath.somanchi at freescale.com [mailto:trinath.somanchi at freescale.com] Sent: Tuesday, December 02, 2014 10:32 AM To: Anita Kuno; openstack Development Mailing List; openstack-infra at lists.openstack.org Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party Hi- Its nice to have CI operators meetings. I'm from India, Its okay for me for 05:00AM UTC on Tuesdays. -- Trinath Somanchi - B39208 trinath.somanchi at freescale.com | extn: 4048 -----Original Message----- From: Anita Kuno [mailto:anteaya at anteaya.info] Sent: Tuesday, December 02, 2014 1:37 AM To: openstack Development Mailing List; openstack-infra at lists.openstack.org Subject: [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party One of the actions from the Kilo Third-Party CI summit session was to start up an additional meeting for CI operators to participate from non-North American time zones. Please reply to this email with times/days that would work for you. The current third party meeting is on Mondays at 1800 utc which works well since Infra meetings are on Tuesdays. If we could find a time that works for Europe and APAC that is also on Monday that would be ideal. Josh Hesketh has said he will try to be available for these meetings, he is in Australia. Let's get a sense of what days and timeframes work for those interested and then we can narrow it down and pick a channel. Thanks everyone, Anita. 
_______________________________________________ OpenStack-Infra mailing list OpenStack-Infra at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From m_mizutani at jp.fujitsu.com Tue Dec 2 09:10:01 2014 From: m_mizutani at jp.fujitsu.com (Mizutani, Michiyuki) Date: Tue, 2 Dec 2014 09:10:01 +0000 Subject: [openstack-dev] [Keystone] creadential API example BUG? Message-ID: <80A659C414C82B449AF9133A7B26351B39A90440@G01JPEXMBKW04> Hi all, I want to ask about the examples of the credential API described at the following URL. http://docs.openstack.org/api/openstack-identity-service/3/content/create-credential-post-credentials.html I tried to send a request to our keystone server using the request example described in the reference. However, we got the following error. {"error": {"message": "Invalid blob in credential", "code": 400, "title": "Bad Request"}} I think the reason for this error is that the request is not using a JSON string in "blob". Is my thinking correct? Has this problem already been registered? Do I need to file a bug report if it is not yet registered? Thank you. From shardy at redhat.com Tue Dec 2 09:29:17 2014 From: shardy at redhat.com (Steven Hardy) Date: Tue, 2 Dec 2014 09:29:17 +0000 Subject: [openstack-dev] [Keystone] creadential API example BUG? In-Reply-To: <80A659C414C82B449AF9133A7B26351B39A90440@G01JPEXMBKW04> References: <80A659C414C82B449AF9133A7B26351B39A90440@G01JPEXMBKW04> Message-ID: <20141202092916.GB28442@t430slt.redhat.com> On Tue, Dec 02, 2014 at 09:10:01AM +0000, Mizutani, Michiyuki wrote: > > Hi all, > > I want to confirm that about the examples of the credential api that described at the following URL.
> > http://docs.openstack.org/api/openstack-identity-service/3/content/create-credential-post-credentials.html > > I tried to request to our keystone server using the request example described in the reference. > However we got a following error. > > {"error": {"message": "Invalid blob in credential", "code": 400, "title": "Bad Request"}} > > I think the reason of this error is because request is not using JSON string in "blob". > Is my thinking correct? > > Has this problem been already registered? > Do I need regist to BUG report if it not yet registered? It's a documentation bug IMO - if you use the "ec2" type then yes the data blob does need to be a json serialized dict with specific keys (the access/secret of the ec2 keypair). If you pass some other type (e.g. not "ec2") then I think you can pass whatever data you want in the blob. Here's the validation code: https://github.com/openstack/keystone/blob/master/keystone/credential/controllers.py#L39 Here are some historical bugs explaining why we need this format for the ec2 type: https://bugs.launchpad.net/keystone/+bug/1259584 https://bugs.launchpad.net/keystone/+bug/1269637 Here's an example of using the interface: https://github.com/openstack/heat/blob/master/heat/common/heat_keystoneclient.py#L585 Hope that helps. Steve From flavio at redhat.com Tue Dec 2 09:44:50 2014 From: flavio at redhat.com (Flavio Percoco) Date: Tue, 2 Dec 2014 10:44:50 +0100 Subject: [openstack-dev] [glance] Deprecating osprofiler option 'enabled' in favour of 'profiler_enabled' In-Reply-To: References: <20141201143756.GA1814@gmail.com> Message-ID: <20141202094450.GF8984@redhat.com> On 02/12/14 12:16 +0800, Zhi Yan Liu wrote: >Why not change other services instead of glance?
I see one reason is >"glance is the only one service use this option name", but to me one >reason to keep it as-it in glance is that original name makes more >sense due to the option already under "profiler" group, adding >"profiler" prefix to it is really redundant, imo, and in other >existing config group there's no one go this naming way. Then in the >code we can just use a clear way: > > CONF.profiler.enabled > >instead of: > > CONF.profiler.profiler_enabled I'm with Zhi Yan on this one. Adding profiler sounds redundant. Cheers, Flavio > >thanks, >zhiyan > >On Mon, Dec 1, 2014 at 11:43 PM, Ian Cordasco > wrote: >> On 12/1/14, 08:37, "Louis Taylor" wrote: >> >>>Hi all, >>> >>>In order to enable or disable osprofiler in Glance, we currently have an >>>option: >>> >>> [profiler] >>> # If False fully disable profiling feature. >>> enabled = False >>> >>>However, all other services with osprofiler integration use a similar >>>option >>>named profiler_enabled. >>> >>>For consistency, I'm proposing we deprecate this option's name in favour >>>of >>>profiler_enabled. This should make it easier for someone to configure >>>osprofiler across projects with less confusion. Does anyone have any >>>thoughts >>>or concerns about this? >>> >>>Thanks, >>>Louis >> >> We *just* introduced this if I remember the IRC discussion from last >> month. I'm not sure how many people will be immediately making use of it. >> I'm in favor of consistency where possible and while this would require a >> deprecation, I think it's a worthwhile change. >> >> +1 from me >> >> ?
>> >> Ian >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From ihrachys at redhat.com Tue Dec 2 10:01:57 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 02 Dec 2014 11:01:57 +0100 Subject: [openstack-dev] sqlalchemy-migrate call for reviews In-Reply-To: <547CEE73.4000200@debian.org> References: <20141129232815.GB2497@yuggoth.org> <547C409E.2040306@redhat.com> <547CEE73.4000200@debian.org> Message-ID: <547D8E15.9060107@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 It's weird: we run python33 job for gate but not checks. Adding Cyril Roelandt who ported the library to py3 to CC. On 01/12/14 23:40, Thomas Goirand wrote: > On 12/01/2014 06:19 PM, Ihar Hrachyshka wrote: >> Indeed, the review queue is non-responsive. There are other >> patches in the queue that bit rot there: >> >> https://review.openstack.org/#/q/status:open+project:stackforge/sqlalchemy-migrate,n,z > >> > I did +2 some of the patches which I thought were totally > unharmful, but none passed the gate. Now, it looks like the Python > 3.3 gate for it is broken... :( Where are those "module not found" > issues from? 
> > Thomas > > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUfY4VAAoJEC5aWaUY1u57HmsIAJku4Zt/QK2HsV8RwGfgSRSW +WpET+Kqlhcx4WJEqaUApF/+N40NwMYrOEP4Tj1R1RAtT38ABQexvi3uvbQTEb71 XIVL+fzVP99GHqHKROYDWplXZCWfxlQJN2d43c0/JgEWWExZwlGNZbjA70vuCjjt eW3IpjcwKXvmeUTqHfQXTqttzDRePh9tuHxTooZAZ+hm+HOd0WUrtHI44KnxXGt2 1LAynY848I2bxxUrD3PgZWb/N9vwQ5+gZWv4KMRHtDvZJaTa1Oyycflqn+55HBn5 VShaWyVR9zvAKKL6FxIrtnO59EZdtb12YyDecskmIYCg07ywYZGQAUxKuIZK9oI= =bULf -----END PGP SIGNATURE----- From m_mizutani at jp.fujitsu.com Tue Dec 2 10:17:46 2014 From: m_mizutani at jp.fujitsu.com (Mizutani, Michiyuki) Date: Tue, 2 Dec 2014 10:17:46 +0000 Subject: [openstack-dev] [Keystone] creadential API example BUG? In-Reply-To: <20141202092916.GB28442@t430slt.redhat.com> References: <80A659C414C82B449AF9133A7B26351B39A90440@G01JPEXMBKW04> <20141202092916.GB28442@t430slt.redhat.com> Message-ID: <80A659C414C82B449AF9133A7B26351B39A909E7@G01JPEXMBKW04> Hi Steven, Thank you for your quick response. The following document is correct as the credential API of ec2tokens, isn't it? http://developer.openstack.org/api-ref-identity-v3.html So, I was confused about how to specify the request body of the credential API. Thanks again! > -----Original Message----- > From: Steven Hardy [mailto:shardy at redhat.com] > Sent: Tuesday, December 02, 2014 6:29 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [Keystone] creadential API example BUG? > > On Tue, Dec 02, 2014 at 09:10:01AM +0000, Mizutani, Michiyuki wrote: > > > > Hi all, > > > > I want to confirm that about the examples of the credential api that > described at the following URL.
> > > > > http://docs.openstack.org/api/openstack-identity-service/3/content/cre > > ate-credential-post-credentials.html > > > > I tried to request to our keystone server using the request example > described in the reference. > > However we got a following error. > > > > {"error": {"message": "Invalid blob in credential", "code": 400, > > "title": "Bad Request"}} > > > > I think the reason of this error is because request is not using JSON > string in "blob". > > Is my thinking correct? > > > > Has this problem been already registered? > > Do I need regist to BUG report if it not yet registered? > > It's a documentation bug IMO - if you use the "ec2" type then yes the data > blob does need to be a json serialized dict with specific keys (the > access/secret of the ec2 keypair). > > If you pass some other type (e.g not "ec2") then I think you can pass whatever > data you want in the blob. > > Here's the validation code: > > https://github.com/openstack/keystone/blob/master/keystone/credential/ > controllers.py#L39 > > Here's some historical bugs explaining why we need this format for ec2 > type: > > https://bugs.launchpad.net/keystone/+bug/1259584 > https://bugs.launchpad.net/keystone/+bug/1269637 > > Here's an example of using the interface: > > https://github.com/openstack/heat/blob/master/heat/common/heat_keyston > eclient.py#L585 > > Hope that helps. > > Steve > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From thierry at openstack.org Tue Dec 2 10:25:06 2014 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 02 Dec 2014 11:25:06 +0100 Subject: [openstack-dev] [TripleO] mid-cycle details final draft In-Reply-To: <1417474177-sup-8420@fewbar.com> References: <1417474177-sup-8420@fewbar.com> Message-ID: <547D9382.10702@openstack.org> Clint Byrum wrote: > Hello! 
I've received confirmation that our venue, the HP offices in > downtown Seattle, will be available for the most-often-preferred > least-often-cannot week of Feb 16 - 20. > > Our venue has a maximum of 20 participants, but I only have 16 possible > attendees now. Please add yourself to that list _now_ if you will be > joining us. > > I've asked our office staff to confirm Feb 18 - 20 (Wed-Fri). When they > do, I will reply to this thread to let everyone know so you can all > start to book travel. See the etherpad for travel details. > > https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup Could you add it to https://wiki.openstack.org/wiki/Sprints ? Cheers, -- Thierry Carrez (ttx) From slukjanov at mirantis.com Tue Dec 2 10:40:57 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Tue, 2 Dec 2014 14:40:57 +0400 Subject: [openstack-dev] [sahara] client 0.7.6 release Message-ID: Sahara folks, we have a python-saharaclient version 0.7.6 release. The main changes are: * support for server-side API filtering added to the client * client now checks both data-processing and data_processing endpoint types More info: https://launchpad.net/python-saharaclient/0.7.x/0.7.6 Thanks. -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From raghavendra.lad at accenture.com Tue Dec 2 10:41:07 2014 From: raghavendra.lad at accenture.com (raghavendra.lad at accenture.com) Date: Tue, 2 Dec 2014 10:41:07 +0000 Subject: [openstack-dev] [Murano] - Install Guide Message-ID: <90a10d18dc28419ab7ba4dfae97fd709@BY2PR42MB101.048d.mgd.msft.net> Hi All, I am new to Murano and trying to integrate it with OpenStack Juno. Any build guides or help would be appreciated.
Warm Regards, Raghavendra Lad IDC-IC-P-Capability Infrastructure Consulting, Infrastructure Services - Accenture Operations Mobile - +91 9880040919 ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. ______________________________________________________________________________________ www.accenture.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From kuvaja at hp.com Tue Dec 2 10:43:16 2014 From: kuvaja at hp.com (Kuvaja, Erno) Date: Tue, 2 Dec 2014 10:43:16 +0000 Subject: [openstack-dev] [glance] Deprecating osprofiler option 'enabled' in favour of 'profiler_enabled' In-Reply-To: <20141202094450.GF8984@redhat.com> References: <20141201143756.GA1814@gmail.com> <20141202094450.GF8984@redhat.com> Message-ID: > -----Original Message----- > From: Flavio Percoco [mailto:flavio at redhat.com] > Sent: 02 December 2014 09:45 > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [glance] Deprecating osprofiler option 'enabled' > in favour of 'profiler_enabled' > > On 02/12/14 12:16 +0800, Zhi Yan Liu wrote: > >Why not change other services instead of glance? 
I see one reason is > >"glance is the only one service use this option name", but to me one > >reason to keep it as-it in glance is that original name makes more > >sense due to the option already under "profiler" group, adding > >"profiler" prefix to it is really redundant, imo, and in other existing > >config group there's no one go this naming way. Then in the code we can > >just use a clear way: > > > > CONF.profiler.enabled > > > >instead of: > > > > CONF.profiler.profiler_enabled > > I'm with Zhi Yan on this one. Adding profiler sounds redundant. +1, The reasoning makes sense to keep it as it is. - Erno > > Cheers, > Flavio > > > > >thanks, > >zhiyan > > > >On Mon, Dec 1, 2014 at 11:43 PM, Ian Cordasco > > wrote: > >> On 12/1/14, 08:37, "Louis Taylor" wrote: > >> > >>>Hi all, > >>> > >>>In order to enable or disable osprofiler in Glance, we currently have > >>>an > >>>option: > >>> > >>> [profiler] > >>> # If False fully disable profiling feature. > >>> enabled = False > >>> > >>>However, all other services with osprofiler integration use a similar > >>>option named profiler_enabled. > >>> > >>>For consistency, I'm proposing we deprecate this option's name in > >>>favour of profiler_enabled. This should make it easier for someone to > >>>configure osprofiler across projects with less confusion. Does anyone > >>>have any thoughts or concerns about this? > >>> > >>>Thanks, > >>>Louis > >> > >> We *just* introduced this if I remember the IRC discussion from last > >> month. I'm not sure how many people will be immediately making use of > it. > >> I'm in favor of consistency where possible and while this would > >> require a deprecation, I think it's a worthwhile change. > >> > >> +1 from me > >> > >> ?
> >> > >> Ian > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > >_______________________________________________ > >OpenStack-dev mailing list > >OpenStack-dev at lists.openstack.org > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > @flaper87 > Flavio Percoco From andrea.frittoli at gmail.com Tue Dec 2 10:44:57 2014 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Tue, 2 Dec 2014 10:44:57 +0000 Subject: [openstack-dev] [qa] Should it be allowed to attach 2 interfaces from the same subnet to a VM? In-Reply-To: References: Message-ID: Hello Danny, I think so. Any special concern with a VM using more than one port on a subnet? andrea On 2 December 2014 at 02:04, Danny Choi (dannchoi) wrote: > Hi, > > When I attach 2 interfaces from the same subnet to a VM, there is no error > returned and > both interfaces come up. 
> > lab at tme211:/opt/stack/logs$ nova interface-attach --net-id > e38dba4a-74ed-4312-ba21-2a04b5c5a5b5 cirros-1 > > lab at tme211:/opt/stack/logs$ nova list > > +--------------------------------------+----------+--------+------------+-------------+-------------------+ > > | ID | Name | Status | Task State | > Power State | Networks | > > +--------------------------------------+----------+--------+------------+-------------+-------------------+ > > | 9d88d0b5-2453-4657-8058-987980ec7744 | cirros-1 | ACTIVE | - | > Running | private=10.0.0.10 | > > +--------------------------------------+----------+--------+------------+-------------+-------------------+ > > lab at tme211:/opt/stack/logs$ nova interface-attach --net-id > e38dba4a-74ed-4312-ba21-2a04b5c5a5b5 cirros-1 > > lab at tme211:/opt/stack/logs$ nova list > > +--------------------------------------+----------+--------+------------+-------------+------------------------------+ > > | ID | Name | Status | Task State | > Power State | Networks | > > +--------------------------------------+----------+--------+------------+-------------+------------------------------+ > > | 9d88d0b5-2453-4657-8058-987980ec7744 | cirros-1 | ACTIVE | - | > Running | private=10.0.0.10, 10.0.0.11 | > > +--------------------------------------+----------+--------+------------+-------------+------------------------------+ > > > $ ifconfig > > eth0 Link encap:Ethernet HWaddr FA:16:3E:92:2D:2B > > inet addr:10.0.0.10 Bcast:10.0.0.255 Mask:255.255.255.0 > > inet6 addr: fe80::f816:3eff:fe92:2d2b/64 Scope:Link > > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 > > RX packets:514 errors:0 dropped:0 overruns:0 frame:0 > > TX packets:307 errors:0 dropped:0 overruns:0 carrier:0 > > collisions:0 txqueuelen:1000 > > RX bytes:48342 (47.2 KiB) TX bytes:41750 (40.7 KiB) > > > eth1 Link encap:Ethernet HWaddr FA:16:3E:EF:55:BC > > inet addr:10.0.0.11 Bcast:10.0.0.255 Mask:255.255.255.0 > > inet6 addr: fe80::f816:3eff:feef:55bc/64 Scope:Link > > UP 
BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>
> RX packets:49 errors:0 dropped:0 overruns:0 frame:0
>
> TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
>
> collisions:0 txqueuelen:1000
>
> RX bytes:3556 (3.4 KiB)  TX bytes:1120 (1.0 KiB)
>
>
>
> Should this operation be allowed?
>
>
> Thanks,
>
> Danny
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From shardy at redhat.com  Tue Dec  2 11:17:04 2014
From: shardy at redhat.com (Steven Hardy)
Date: Tue, 2 Dec 2014 11:17:04 +0000
Subject: [openstack-dev] [Keystone] creadential API example BUG?
In-Reply-To: <80A659C414C82B449AF9133A7B26351B39A909E7@G01JPEXMBKW04>
References: <80A659C414C82B449AF9133A7B26351B39A90440@G01JPEXMBKW04>
 <20141202092916.GB28442@t430slt.redhat.com>
 <80A659C414C82B449AF9133A7B26351B39A909E7@G01JPEXMBKW04>
Message-ID: <20141202111703.GA28914@t430slt.redhat.com>

On Tue, Dec 02, 2014 at 10:17:46AM +0000, Mizutani, Michiyuki wrote:
> Hi Steven,
>
> Thank you for your quick response.
>
> The following document is correct as credential API of ec2tokens, isn't it?
> http://developer.openstack.org/api-ref-identity-v3.html

Yes, the example in

http://developer.openstack.org/api-ref-identity-v3.html#credentials-v3

Looks OK, although the example blob is not JSON serialized, despite the
description saying "JSON-serialized dictionary". I think the API will
tolerate either format though.

Steve

From apevec at gmail.com  Tue Dec  2 11:32:19 2014
From: apevec at gmail.com (Alan Pevec)
Date: Tue, 2 Dec 2014 12:32:19 +0100
Subject: [openstack-dev] [stable] Call for testing: 2014.2.1 candidate tarballs
Message-ID: 

Hi all,

We are scheduled to publish 2014.2.1 on Thursday Dec 4th for
Ceilometer, Cinder, Glance, Heat, Horizon, Keystone, Neutron, Nova,
Sahara and Trove.
We'd appreciate anyone who could test the candidate 2014.2.1 tarballs, which include all changes aside from any pending freeze exceptions: http://tarballs.openstack.org/ceilometer/ceilometer-stable-juno.tar.gz http://tarballs.openstack.org/cinder/cinder-stable-juno.tar.gz http://tarballs.openstack.org/glance/glance-stable-juno.tar.gz http://tarballs.openstack.org/heat/heat-stable-juno.tar.gz http://tarballs.openstack.org/horizon/horizon-stable-juno.tar.gz http://tarballs.openstack.org/keystone/keystone-stable-juno.tar.gz http://tarballs.openstack.org/neutron/neutron-stable-juno.tar.gz http://tarballs.openstack.org/nova/nova-stable-juno.tar.gz http://tarballs.openstack.org/sahara/sahara-stable-juno.tar.gz http://tarballs.openstack.org/trove/trove-stable-juno.tar.gz Thanks, Alan From kragniz at gmail.com Tue Dec 2 11:44:42 2014 From: kragniz at gmail.com (Louis Taylor) Date: Tue, 2 Dec 2014 11:44:42 +0000 Subject: [openstack-dev] [glance] Deprecating osprofiler option 'enabled' in favour of 'profiler_enabled' In-Reply-To: References: <20141201143756.GA1814@gmail.com> Message-ID: <20141202114441.GA5041@gmail.com> On Tue, Dec 02, 2014 at 12:16:44PM +0800, Zhi Yan Liu wrote: > Why not change other services instead of glance? I see one reason is > "glance is the only one service use this option name", but to me one > reason to keep it as-it in glance is that original name makes more > sense due to the option already under "profiler" group, adding > "profiler" prefix to it is really redundant, imo, and in other > existing config group there's no one go this naming way. Then in the > code we can just use a clear way: > > CONF.profiler.enabled > > instead of: > > CONF.profiler.profiler_enabled > > thanks, > zhiyan I agree this looks nicer in the code. However, the primary consumer of this option is someone editing it in the configuration files. In this case, I believe having something more verbose and consistent is better than the Glance code being slightly more elegant. 
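A rename like the one Louis proposes does not have to break existing glance-api.conf files: oslo.config can register an option under its new name while still honouring a deprecated spelling (via `deprecated_name`). The stdlib-only helper below is a hypothetical sketch of that fallback behaviour, not Glance's actual code:

```python
import configparser

def profiler_enabled(cfg_text):
    # Prefer the new, consistent name but fall back to the deprecated
    # spelling, so existing config files keep working during the
    # deprecation period. (Illustrative helper only; oslo.config's
    # deprecated_name= parameter provides this behaviour automatically.)
    parser = configparser.ConfigParser()
    parser.read_string(cfg_text)
    if not parser.has_section('profiler'):
        return False
    section = parser['profiler']
    for name in ('profiler_enabled', 'enabled'):
        if name in section:
            return section.getboolean(name)
    return False

old_style = "[profiler]\nenabled = True\n"
new_style = "[profiler]\nprofiler_enabled = True\n"
print(profiler_enabled(old_style), profiler_enabled(new_style))  # True True
```

Note the lookup order: if an operator sets both names, the new one wins, which is the usual convention for deprecated options.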
One name or the other doesn't make all that much difference, but consistency
in how we turn osprofiler on and off across projects would be best.

- Louis
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: Digital signature
URL: 

From kurt.r.taylor at gmail.com  Tue Dec  2 11:44:48 2014
From: kurt.r.taylor at gmail.com (Kurt Taylor)
Date: Tue, 2 Dec 2014 05:44:48 -0600
Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party
In-Reply-To: 
References: <547CCA69.4000909@anteaya.info>
Message-ID: 

Thanks for starting this discussion Anita.

The existing meeting time on Monday has never worked well for me. We could
follow what other working groups have done, having alternating meeting
times to accommodate everyone.

I propose that we have 2 meetings of Third-party CI Ops, alternating weeks:
Wednesday 1400 UTC in #openstack-meeting-3
Wednesday 2200 UTC in #openstack-meeting-3

Kurt Taylor (krtaylor)

On Tue, Dec 2, 2014 at 2:46 AM, Nurit Vilosny wrote:
> Hi,
> Thanks Anita for pushing it. We will be able to be much more involved if
> meetings would be earlier.
> We're located in Israel, so Mondays anytime between 8:00 - 16:00, will be
> ideal for us.
>
> Nurit
>
> -----Original Message-----
> From: trinath.somanchi at freescale.com [mailto:trinath.somanchi at freescale.com]
> Sent: Tuesday, December 02, 2014 10:32 AM
> To: Anita Kuno; openstack Development Mailing List;
> openstack-infra at lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for
> Additional Meeting for third-party
>
> Hi-
>
> It's nice to have CI operators meetings.
>
> I'm from India. It's okay for me for 05:00AM UTC on Tuesdays.
> > -- > Trinath Somanchi - B39208 > trinath.somanchi at freescale.com | extn: 4048 > > -----Original Message----- > From: Anita Kuno [mailto:anteaya at anteaya.info] > Sent: Tuesday, December 02, 2014 1:37 AM > To: openstack Development Mailing List; > openstack-infra at lists.openstack.org > Subject: [OpenStack-Infra] [third-party]Time for Additional Meeting for > third-party > > One of the actions from the Kilo Third-Party CI summit session was to > start up an additional meeting for CI operators to participate from > non-North American time zones. > > Please reply to this email with times/days that would work for you. The > current third party meeting is on Mondays at 1800 utc which works well > since Infra meetings are on Tuesdays. If we could find a time that works > for Europe and APAC that is also on Monday that would be ideal. > > Josh Hesketh has said he will try to be available for these meetings, he > is in Australia. > > Let's get a sense of what days and timeframes work for those interested > and then we can narrow it down and pick a channel. > > Thanks everyone, > Anita. > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m_mizutani at jp.fujitsu.com Tue Dec 2 12:09:55 2014 From: m_mizutani at jp.fujitsu.com (Mizutani, Michiyuki) Date: Tue, 2 Dec 2014 12:09:55 +0000 Subject: [openstack-dev] [Keystone] creadential API example BUG? 
In-Reply-To: <20141202111703.GA28914@t430slt.redhat.com>
References: <80A659C414C82B449AF9133A7B26351B39A90440@G01JPEXMBKW04>
 <20141202092916.GB28442@t430slt.redhat.com>
 <80A659C414C82B449AF9133A7B26351B39A909E7@G01JPEXMBKW04>
 <20141202111703.GA28914@t430slt.redhat.com>
Message-ID: <80A659C414C82B449AF9133A7B26351B39A91BA7@G01JPEXMBKW04>

Hi Steven,

Thank you for your reply.

> Looks OK, although the example blob is not JSON serialized, despite the
> description saying "JSON-serialized dictionary". I think the API will
> tolerate either format though.

I see. The credential API call made from Heat is formatted as follows:

{"credential":{"type":"ec2","project_id":"","user_id":"", "blob":"{\"access\":\"\",\"secret\":\"\"}"}}

So the blob is a serialized JSON string.

Thank you very much.

> -----Original Message-----
> From: Steven Hardy [mailto:shardy at redhat.com]
> Sent: Tuesday, December 02, 2014 8:17 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Keystone] creadential API example BUG?
>
> On Tue, Dec 02, 2014 at 10:17:46AM +0000, Mizutani, Michiyuki wrote:
> > Hi Steven,
> >
> > Thank you for your quick response.
> >
> > The following document is correct as credential API of ec2tokens, isn't it?
> > http://developer.openstack.org/api-ref-identity-v3.html
>
> Yes, the example in
>
> http://developer.openstack.org/api-ref-identity-v3.html#credentials-v3
>
> Looks OK, although the example blob is not JSON serialized, despite the
> description saying "JSON-serialized dictionary". I think the API will
> tolerate either format though.
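The payload above illustrates the point being discussed: `blob` is a JSON *string* (the access/secret dictionary serialized before being embedded), not a nested object. A minimal sketch with placeholder values, not real credentials:

```python
import json

# Placeholder values for illustration only.
blob = json.dumps({"access": "ACCESS_KEY", "secret": "SECRET_KEY"})

payload = {
    "credential": {
        "type": "ec2",
        "project_id": "PROJECT_ID",
        "user_id": "USER_ID",
        # Note: a serialized string, not a nested JSON object.
        "blob": blob,
    }
}
print(json.dumps(payload))
```

Reading it back out is then a double decode: first the request body, then `json.loads(credential["blob"])` for the keys.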
> > Steve > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lzy.dev at gmail.com Tue Dec 2 12:12:05 2014 From: lzy.dev at gmail.com (Zhi Yan Liu) Date: Tue, 2 Dec 2014 20:12:05 +0800 Subject: [openstack-dev] [glance] Deprecating osprofiler option 'enabled' in favour of 'profiler_enabled' In-Reply-To: <20141202114441.GA5041@gmail.com> References: <20141201143756.GA1814@gmail.com> <20141202114441.GA5041@gmail.com> Message-ID: I totally agreed to make it to be consistent cross all projects, so I propose to change other projects. But I think keeping it as-it is clear enough for both developer and operator/configuration, for example: [profiler] enable = True instead of: [profiler] profiler_enable = True Tbh, the "profiler" prefix is redundant to me still from the perspective of operator/configuration. zhiyan On Tue, Dec 2, 2014 at 7:44 PM, Louis Taylor wrote: > On Tue, Dec 02, 2014 at 12:16:44PM +0800, Zhi Yan Liu wrote: >> Why not change other services instead of glance? I see one reason is >> "glance is the only one service use this option name", but to me one >> reason to keep it as-it in glance is that original name makes more >> sense due to the option already under "profiler" group, adding >> "profiler" prefix to it is really redundant, imo, and in other >> existing config group there's no one go this naming way. Then in the >> code we can just use a clear way: >> >> CONF.profiler.enabled >> >> instead of: >> >> CONF.profiler.profiler_enabled >> >> thanks, >> zhiyan > > I agree this looks nicer in the code. However, the primary consumer of this > option is someone editing it in the configuration files. In this case, I > believe having something more verbose and consistent is better than the Glance > code being slightly more elegant. 
> > One name or the other doesn't make all that much difference, but consistency in
> how we turn osprofiler on and off across projects would be best.
>
> - Louis
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From tsufiev at mirantis.com  Tue Dec  2 12:21:11 2014
From: tsufiev at mirantis.com (Timur Sufiev)
Date: Tue, 2 Dec 2014 15:21:11 +0300
Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon
Message-ID: 

Hello, Horizoneers and UX-ers!

The default behavior of modals in Horizon (defined in turn by Bootstrap
defaults) regarding their closing is to simply close the modal once the
user clicks somewhere outside of it (on the backdrop element below and
around the modal). This is not very convenient for modal forms containing
a lot of input: when such a form is closed without a warning, all the data
the user has already provided is lost.

Keeping this in mind, I've made a patch [1] changing the default Bootstrap
'modal_backdrop' parameter to 'static', which means that forms are not
closed once the user clicks on the backdrop, while it's still possible to
close them by pressing 'Esc' or clicking on the 'X' link at the top right
border of the form. The patch [1] also allows customizing this behavior
(between 'true' - the current behavior, 'false' - no backdrop element, and
'static') on a per-form basis.

What I didn't know at the moment I was uploading my patch is that David
Lyle had been working on a similar solution [2] some time ago. It's a bit
more elaborate than mine: if the user has already filled some inputs in
the form, then a confirmation dialog is shown, otherwise the form is
silently dismissed as it happens now.
The whole point of writing about this in the ML is to gather opinions which approach is better: * stick to the current behavior; * change the default behavior to 'static'; * use the David's solution with confirmation dialog (once it'll be rebased to the current codebase). What do you think? [1] https://review.openstack.org/#/c/113206/ [2] https://review.openstack.org/#/c/23037/ P.S. I remember that I promised to write this email a week ago, but better late than never :). -- Timur Sufiev -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfidente at redhat.com Tue Dec 2 13:00:45 2014 From: gfidente at redhat.com (Giulio Fidente) Date: Tue, 02 Dec 2014 14:00:45 +0100 Subject: [openstack-dev] [TripleO] [CI] Cinder/Ceph CI setup In-Reply-To: <547725BA.1050507@redhat.com> References: <5475B535.6080308@redhat.com> <547725BA.1050507@redhat.com> Message-ID: <547DB7FD.5070107@redhat.com> On 11/27/2014 02:23 PM, Derek Higgins wrote: > On 27/11/14 10:21, Duncan Thomas wrote: >> I'd suggest starting by making it an extra job, so that it can be >> monitored for a while for stability without affecting what is there. > > we have to be careful here, adding an extra job for this is probably the > safest option but tripleo CI resources are a constraint, for that reason > I would add it to the HA job (which is currently non voting) and once > its stable we should make it voting. > >> >> I'd be supportive of making it the default HA job in the longer term as >> long as the LVM code is still getting tested somewhere - LVM is still >> the reference implementation in cinder and after discussion there was >> strong resistance to changing that. > > We are and would continue to use lvm for our non ha jobs, If I > understand it correctly the tripleo lvm support isn't HA so continuing > to test it on our HA job doesn't achieve much. > >> >> I've no strong opinions on the node layout, I'll leave that to more >> knowledgable people to discuss. 
>> >> Is the ceph/tripleO code in a working state yet? Is there a guide to >> using it? hi guys, thanks for replying I just wanted to add here a link to the blueprint so you can keep track of development [1] all the code to make it happen (except the actual CI job config changes) is up for review now so feedback and reviews are indeed appreciated :) 1. https://blueprints.launchpad.net/tripleo/+spec/tripleo-kilo-cinder-ha -- Giulio Fidente GPG KEY: 08D733BA From sorlando at nicira.com Tue Dec 2 13:12:37 2014 From: sorlando at nicira.com (Salvatore Orlando) Date: Tue, 2 Dec 2014 14:12:37 +0100 Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: References: <547CCA69.4000909@anteaya.info> Message-ID: 1800UTC should generally work for Europe. The only issue is that it falls right around dinner time. It is however a good timing since most of the night hours fall over the pacific ocean. Therefore I tend to agree with Joshua's proposal since with that time range most of the night hours would fall over the atlantic. Ideally 0600UTC is perfectly symmetric to the other meeting time, but it might be a bit tricky for western and central Europe, especially during winter. Anytime between 0700UTC and 0900UTC would be better for Europe, but might fall towards dinner time for Australia and be a bit uncomfortable for New Zealand during their summer. Anyway, this proposal would make the meeting time prohibitive for eastern/central US & Canada as well as South America. I don't know if that's acceptable considering that, from what I gather, most of the regular attendees come from those time zones. Regards, Salvatore On 2 December 2014 at 12:44, Kurt Taylor wrote: > Thanks for starting this discussion Anita. > > The existing meeting time on Monday has never worked well for me. We could > follow what other working groups have done, having alternating meeting > times to accommodate everyone. 
> > I propose that we have 2 meetings of Third-party CI Ops, alternating weeks: > Wednesday 1400 UTC in #openstack-meeting-3 > Wednesday 2200 UTC in #openstack-meeting-3 > > Kurt Taylor (krtaylor) > > > On Tue, Dec 2, 2014 at 2:46 AM, Nurit Vilosny wrote: > >> HI, >> Thanks Anita for pushing it. We will be able to be much more involved if >> meetings would be earlier. >> We're located in Israel, so Mondays anytime between 8:00 - 16:00, will be >> ideal for us. >> >> Nurit >> >> -----Original Message----- >> From: trinath.somanchi at freescale.com [mailto: >> trinath.somanchi at freescale.com] >> Sent: Tuesday, December 02, 2014 10:32 AM >> To: Anita Kuno; openstack Development Mailing List; >> openstack-infra at lists.openstack.org >> Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for >> Additional Meeting for third-party >> >> Hi- >> >> Its nice to have CI operators meetings. >> >> I'm from India, Its okay for me for 05:00AM UTC on Tuesdays. >> >> -- >> Trinath Somanchi - B39208 >> trinath.somanchi at freescale.com | extn: 4048 >> >> -----Original Message----- >> From: Anita Kuno [mailto:anteaya at anteaya.info] >> Sent: Tuesday, December 02, 2014 1:37 AM >> To: openstack Development Mailing List; >> openstack-infra at lists.openstack.org >> Subject: [OpenStack-Infra] [third-party]Time for Additional Meeting for >> third-party >> >> One of the actions from the Kilo Third-Party CI summit session was to >> start up an additional meeting for CI operators to participate from >> non-North American time zones. >> >> Please reply to this email with times/days that would work for you. The >> current third party meeting is on Mondays at 1800 utc which works well >> since Infra meetings are on Tuesdays. If we could find a time that works >> for Europe and APAC that is also on Monday that would be ideal. >> >> Josh Hesketh has said he will try to be available for these meetings, he >> is in Australia. 
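To make the trade-offs in this scheduling thread concrete, here is a quick sketch of how a UTC slot lands in a few of the regions mentioned. The offsets are fixed standard-time (winter) values with DST deliberately ignored, so treat the results as approximate:

```python
from datetime import datetime, timedelta, timezone

# Monday 2014-12-08, 18:00 UTC -- the existing third-party meeting slot.
meeting = datetime(2014, 12, 8, 18, 0, tzinfo=timezone.utc)

# Rough standard-time offsets for regions in the thread (no DST handling).
offsets = {
    "US Central": -6,
    "Israel": 2,
    "India": 5.5,
    "Eastern Australia": 11,
}
for region, hours in offsets.items():
    local = meeting.astimezone(timezone(timedelta(hours=hours)))
    print(region, local.strftime("%a %H:%M"))
```

Swapping in 0500 or 1400 UTC for the `meeting` value shows immediately which regions each alternative pushes into the middle of the night.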
>> >> Let's get a sense of what days and timeframes work for those interested >> and then we can narrow it down and pick a channel. >> >> Thanks everyone, >> Anita. >> >> _______________________________________________ >> OpenStack-Infra mailing list >> OpenStack-Infra at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Tue Dec 2 13:22:57 2014 From: apevec at gmail.com (Alan Pevec) Date: Tue, 2 Dec 2014 14:22:57 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 Message-ID: Hi all, here are exception proposal I have collected when preparing for the 2014.2.1 release, stable-maint members please have a look! General: cap Oslo and client library versions - sync from openstack/requirements stable/juno, would be good to include in the release. 
https://review.openstack.org/#/q/status:open+branch:stable/juno+topic:openstack/requirements,n,z Ceilometer (all proposed by Ceilo PTL) https://review.openstack.org/138315 https://review.openstack.org/138317 https://review.openstack.org/138320 https://review.openstack.org/138321 https://review.openstack.org/138322 Cinder https://review.openstack.org/137537 - small change and limited to the VMWare driver Glance https://review.openstack.org/137704 - glance_store is backward compatible, but not sure about forcing version bump on stable https://review.openstack.org/137862 - Disable osprofiler by default to prevent upgrade issues, disabled by default in other services Horizon standing-after-freeze translation update, coming on Dec 3 https://review.openstack.org/138018 - visible issue, no translation string changes https://review.openstack.org/138313 - low risk patch for a highly problematic issue Neutron https://review.openstack.org/136294 - default SNAT, see review for details, I cannot distil 1liner :) https://review.openstack.org/136275 - self-contained to the vendor code, extensively tested in several deployments Nova https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/juno+topic:1386236/juno,n,z - soaked more than a week in master, makes numa actually work in Juno Sahara https://review.openstack.org/135549 - fix for auto security groups, there were some concerns, see review for details From rprikhodchenko at mirantis.com Tue Dec 2 13:23:02 2014 From: rprikhodchenko at mirantis.com (Roman Prykhodchenko) Date: Tue, 2 Dec 2014 14:23:02 +0100 Subject: [openstack-dev] [Fuel][Nailgun]Problems with auto-reloading Message-ID: <52844A1A-0D2D-44F1-8E19-95357EB89007@mirantis.com> Hi folks, today we encountered a problem caused by auto-reload feature and our code-organisation. The problem is that web.py tries reloading modules at some point and while it does that it expects that modules could be reloaded in any order without raising any errors. 
Unfortunately for Nailgun that condition is not satisfied in at least one
place: the SQLAlchemy models, which are placed in different modules. If
web.py tries to reload any model's module, say notifications.py, before
reloading the base module, Notifications will try registering itself in the
old Base's MetaData, which already contains info about the appropriate
table, and that causes errors like "Table 'notifications' is already
defined for this MetaData instance. Specify 'extend_existing=True' to
redefine options and columns on an existing Table object." That problem
happens on every request that touches the database.

There are several possible solutions for this problem:

- Disable autoreload even in Debug mode, because tests always run in that
mode and that's what causes these failures to occur
- Someone might need it, so add a command line option or config parameter
for autoreload
- Re-organise code to guarantee correct reloading order
- Enable extension of existing tables in metadata, but I'm not sure what
the other consequences of that would be.

- romcheg
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From vkramskikh at mirantis.com  Tue Dec  2 13:46:28 2014
From: vkramskikh at mirantis.com (Vitaly Kramskikh)
Date: Tue, 2 Dec 2014 17:46:28 +0400
Subject: [openstack-dev] [Fuel][Nailgun]Problems with auto-reloading
In-Reply-To: <52844A1A-0D2D-44F1-8E19-95357EB89007@mirantis.com>
References: <52844A1A-0D2D-44F1-8E19-95357EB89007@mirantis.com>
Message-ID: 

I don't even remember if/when autoreloading worked correctly. +1 for
disabling this feature.

2014-12-02 16:23 GMT+03:00 Roman Prykhodchenko :

> Hi folks,
>
> today we encountered a problem caused by auto-reload feature and our
> code-organisation.
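The failure mode described here can be reproduced without SQLAlchemy at all. The stand-in below is plain Python (not SQLAlchemy's real classes) mimicking a MetaData registry that rejects duplicate table names, which is exactly what happens when a model module is re-executed against a stale, un-reloaded Base:

```python
class FakeMetaData:
    """Minimal stand-in for SQLAlchemy's MetaData table registry."""

    def __init__(self):
        self.tables = {}

    def register_table(self, name):
        # SQLAlchemy raises a similar error when a declarative model
        # defines a table name already present in the MetaData.
        if name in self.tables:
            raise ValueError(
                "Table %r is already defined for this MetaData instance. "
                "Specify 'extend_existing=True' to redefine options and "
                "columns on an existing Table object." % name)
        self.tables[name] = object()

metadata = FakeMetaData()                  # held by the base module
metadata.register_table('notifications')   # first import of notifications.py

try:
    # An auto-reload re-executes notifications.py while the old
    # Base/MetaData is still alive, so the model registers twice:
    metadata.register_table('notifications')
except ValueError as exc:
    print("reload failed:", exc)
```

This also shows why "reload base first" fixes it: a fresh MetaData has an empty `tables` dict, so re-registration succeeds.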
> The problem is that web.py tries reloading modules at some point and while
> it does that it expects that modules could be reloaded in any order without
> raising any errors.
>
> Unfortunately for Nailgun that condition is not satisfied in at least one
> place: the SQLAlchemy models, which are placed in different modules. If
> web.py tries to reload any model's module, say notifications.py, before
> reloading the base module, Notifications will try registering itself in the
> old Base's MetaData, which already contains info about the appropriate
> table, and that causes errors like "Table 'notifications' is already
> defined for this MetaData instance. Specify 'extend_existing=True' to
> redefine options and columns on an existing Table object." That problem
> happens on every request that touches the database.
>
> There are several possible solutions for this problem:
>
> - Disable autoreload even in Debug mode, because tests always run in that
> mode and that's what causes these failures to occur
> - Someone might need it, so add a command line option or config parameter
> for autoreload
> - Re-organise code to guarantee correct reloading order
> - Enable extension of existing tables in metadata, but I'm not sure what
> the other consequences of that would be.
>
>
> - romcheg
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
Vitaly Kramskikh,
Software Engineer,
Mirantis, Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From riwinter at cisco.com  Tue Dec  2 13:51:36 2014
From: riwinter at cisco.com (Richard Winters (riwinter))
Date: Tue, 2 Dec 2014 13:51:36 +0000
Subject: [openstack-dev] [tempest] tearDownClass usage in scenario tests
Message-ID: 

I've noticed that in scenario tests only the OfficialClientTest in
manager.py has a tearDownClass and was wondering if there is a reason for
that?
In my scenario tests I need to ensure a particular connection gets closed
after the test runs. This connection is set up in setUpClass, so it makes
sense to me that it should also be closed in the tearDownClass. This is how
I'm cleaning up now, but didn't know if there is a better way to do it.

    @classmethod
    def tearDownClass(cls):
        super(TestCSROneNet, cls).tearDownClass()
        if cls.nx_onep is not None:
            cls.nx_onep.disconnect()

Thanks
Rich
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jp at jamezpolley.com  Tue Dec  2 14:10:23 2014
From: jp at jamezpolley.com (James Polley)
Date: Tue, 2 Dec 2014 15:10:23 +0100
Subject: [openstack-dev] [TripleO] Alternate meeting time
Message-ID: 

Months ago, I pushed for us to alternate meeting times to something that
was friendlier to me, so we started doing alternate weeks at 0700UTC. That
worked well for me, but wasn't working so well for a few people in Europe,
so we decided to give 0800UTC a try. Then DST changes happened, and wiki
pages got out of sync, and there was confusion about what time the meeting
is at.

The alternate meeting hasn't been very well attended for the last ~3
meetings. Partly I think that's due to summit and travel plans, but it
seems like the 0800UTC time doesn't work very well for quite a few people.

So, instead of trying things at random, I've created
https://etherpad.openstack.org/p/tripleo-alternate-meeting-time as a
starting point for figuring out what meeting time might work well for the
most people. Obviously the world is round, and people have different
schedules, and we're never going to get a meeting time that works well for
everyone - but it'd be nice to try to maximise attendance (and minimise
inconvenience) as much as we can.

If you regularly attend, or would like to attend, the meeting, please take
a moment to look at the etherpad to register your vote for which time works
best for you.
There's even a section for you to cast your vote if the UTC1900 meeting (aka the "main" or "US-Friendly" meeting) works better for you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From skalinowski at mirantis.com Tue Dec 2 14:19:32 2014 From: skalinowski at mirantis.com (Sebastian Kalinowski) Date: Tue, 2 Dec 2014 15:19:32 +0100 Subject: [openstack-dev] [Fuel][Nailgun] Web framework Message-ID: Hi all, Some time ago we had a discussion about moving Nailgun to new web framework [1]. There was comparison [2] of two possible options: Pecan [3] and Flask [4]. We came to conclusion that we need to move Nailgun on some alive web framework instead of web.py [5] (some of the reasons: [6]) but there was no clear agreement on what framework (there were strong voices for Flask). I would like to bring this topic up again so we could discuss with broader audience and make final decision what will be our next web framework. I think that we should also consider to make that framework our "weapon of choice" (or so called standard) when creating new web services in Fuel. Best, Sebastian [1] https://lists.launchpad.net/fuel-dev/msg01397.html [2] https://docs.google.com/a/mirantis.com/document/d/1QR7YphyfN64m-e9b5rKC_U8bMtx4zjfW943BfLTqTao/edit?usp=sharing [3] http://www.pecanpy.org/ [4] http://flask.pocoo.org/ [5] http://webpy.org/ [6] https://lists.launchpad.net/fuel-dev/msg01501.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Dec 2 14:22:07 2014 From: marios at redhat.com (marios) Date: Tue, 02 Dec 2014 16:22:07 +0200 Subject: [openstack-dev] [TripleO] Alternate meeting time In-Reply-To: References: Message-ID: <547DCB0F.3070303@redhat.com> On 02/12/14 16:10, James Polley wrote: > Months ago, I pushed for us to alternate meeting times to something that > was friendlier to me, so we started doing alternate weeks at 0700UTC. 
> That worked well for me, but wasn't working so well for a few people in > Europe, so we decided to give 0800UTC a try. Then DST changes happened, > and wiki pages got out of sync, and there was confusion about what time > the meeting is at.. > > The alternate meeting hasn't been very well attended for the last ~3 > meetings. Partly I think that's due to summit and travel plans, but it > seems like the 0800UTC time doesn't work very well for quite a few people. > > So, instead of trying things at random, I've > created https://etherpad.openstack.org/p/tripleo-alternate-meeting-time > as a starting point for figuring out what meeting time might work well > for the most people. Obviously the world is round, and people have > different schedules, and we're never going to get a meeting time that > works well for everyone - but it'd be nice to try to maximise attendance > (and minimise inconvenience) as much as we can. > > If you regularly attend, or would like to attend, the meeting, please > take a moment to look at the etherpad to register your vote for which > time works best for you. There's even a section for you to cast your > vote if the UTC1900 meeting (aka the "main" or "US-Friendly" meeting) > works better for you! slight clarification - are we discussing which time would suit best for the alternative meeting, or that we are scrapping the alternate, and re-voting for the best main meeting time. 
In either case the proposed times are still mostly not EU friendly (I
guess the 1800 UTC would be ok for UK),

thanks, marios

>
>
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From marc at koderer.com Tue Dec 2 14:22:28 2014
From: marc at koderer.com (Marc Koderer)
Date: Tue, 2 Dec 2014 15:22:28 +0100
Subject: [openstack-dev] [Manila] Manila project use-cases
Message-ID: <1B324A03-7768-4603-9544-01EF53CA7BBC@koderer.com>

Hello Manila Team,

We identified use cases for Manila during an internal workshop with our
operators. I would like to share them with you and update the wiki [1]
since it seems to be outdated. Before that I would like to gather
feedback, and you might help me with identifying things that aren't
implemented yet.

Our list:

1.) Create a share and use it in a tenant
Initial creation of a shared storage volume and assigning it to several VMs.

2.) Assign a preexisting share to a VM with Manila
Import an existing share with data and assign it to several VMs, in case
of migrating existing production services to OpenStack.

3.) External consumption of a share
Accommodate and provide mechanisms for last-mile consumption of shares by
consumers of the service that aren't mediated by Nova.

4.) Cross-tenant sharing
Coordinate shares across tenants.

5.) Detach a share and don't destroy data (deactivate)
The share is flagged as inactive and the data is not destroyed, for later
usage or in case of legal requirements.

6.) Unassign and delete data of a share
Destroy the entire share with all data and free the space for further usage.

7.) Resize share
Resize an existing and assigned share on the fly.

8.)
Copy existing share Copy existing share between different storage technologies Regards Marc Deutsche Telekom [1]: https://wiki.openstack.org/wiki/Manila/usecases From ayoung at redhat.com Tue Dec 2 14:45:30 2014 From: ayoung at redhat.com (Adam Young) Date: Tue, 02 Dec 2014 09:45:30 -0500 Subject: [openstack-dev] [horizon] REST and Django In-Reply-To: References: Message-ID: <547DD08A.7000402@redhat.com> On 12/02/2014 12:39 AM, Richard Jones wrote: > On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran > wrote: > > I agree that keeping the API layer thin would be ideal. I should > add that having discrete API calls would allow dynamic population > of table. However, I will make a case where it */might/* be > necessary to add additional APIs. Consider that you want to delete > 3 items in a given table. > > If you do this on the client side, you would need to perform: n * > (1 API request + 1 AJAX request) > If you have some logic on the server side that batch delete > actions: n * (1 API request) + 1 AJAX request > > Consider the following: > n = 1, client = 2 trips, server = 2 trips > n = 3, client = 6 trips, server = 4 trips > n = 10, client = 20 trips, server = 11 trips > n = 100, client = 200 trips, server 101 trips > > As you can see, this does not scale very well.... something to > consider... > This is not something Horizon can fix. Horizon can make matters worse, but cannot make things better. If you want to delete 3 users, Horizon still needs to make 3 distinct calls to Keystone. To fix this, we need either batch calls or a standard way to do multiples of the same operation. The unified API effort it the right place to drive this. > Yep, though in the above cases the client is still going to be > hanging, waiting for those server-backend calls, with no feedback > until it's all done. 
I would hope that the client-server call overhead
> is minimal, but I guess that's probably wishful thinking when in the
> land of random Internet users hitting some provider's Horizon :)
>
> So yeah, having mulled it over myself I agree that it's useful to have
> batch operations implemented in the POST handler, the most common
> operation being DELETE.
>
> Maybe one day we could transition to a batch call with user feedback
> using a websocket connection.
>
>
> Richard
>
> From: Richard Jones
> To: "Tripp, Travis S", OpenStack List
> Date: 11/27/2014 05:38 PM
> Subject: Re: [openstack-dev] [horizon] REST and Django
>
> ------------------------------------------------------------------------
>
> On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S
> <travis.tripp at hp.com> wrote:
>
> Hi Richard,
>
> You are right, we should put this out on the main ML, so
> copying thread out to there. ML: FYI that this started after
> some impromptu IRC discussions about a specific patch led into
> an impromptu google hangout discussion with all the people on
> the thread below.
>
>
> Thanks Travis!
>
> As I mentioned in the review[1], Thai and I were mainly
> discussing the possible performance implications of network
> hops from client to horizon server and whether or not any
> aggregation should occur server side. In other words, some
> views require several APIs to be queried before any data can
> be displayed, and it would eliminate some extra network requests
> from client to server if some of the data was first collected
> on the server side across service APIs. For example, the
> launch instance wizard will need to collect data from quite a
> few APIs before even the first step is displayed (I've listed
> those out in the blueprint [2]).
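Thai's round-trip arithmetic from earlier in the thread can be written down as a tiny helper; the function name here is invented purely to make the comparison concrete:

```python
def delete_trips(n, batched):
    """Round trips to delete n items, per the comparison above.

    Unbatched: the browser sends one AJAX request per item and Horizon
    relays each as one service API request, so 2 * n trips in total.
    Batched: a single AJAX request carries all n ids, and the Horizon
    server still makes n service API requests, so n + 1 trips.
    """
    return n + 1 if batched else 2 * n

# Reproduces the figures quoted above:
#   n=3   -> 6 vs 4
#   n=10  -> 20 vs 11
#   n=100 -> 200 vs 101
```

Either way the service API calls still scale with n, which is Adam's point above: only batch support in the service APIs themselves (or a unified API) removes that factor.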
>
> The flip side to that (as you also pointed out) is that if we
> keep the API's fine grained then the wizard will be able to
> optimize in one place the calls for data as it is needed. For
> example, the first step may only need half of the API calls.
> It also could lead to perceived performance increases just due
> to the wizard making a call for different data independently
> and displaying it as soon as it can.
>
>
> Indeed, looking at the current launch wizard code it seems like
> you wouldn't need to load all that data for the wizard to be
> displayed, since only some subset of it would be necessary to
> display any given panel of the wizard.
>
> I tend to lean towards your POV and starting with discrete API
> calls and letting the client optimize calls. If there are
> performance problems or other reasons then doing data
> aggregation on the server side could be considered at that point.
>
>
> I'm glad to hear it. I'm a fan of optimising when necessary, and
> not beforehand :)
>
> Of course if anybody is able to do some performance testing
> between the two approaches then that could affect the
> direction taken.
>
>
> I would certainly like to see us take some measurements when
Optimising without solid metrics is bad > idea :) > > > Richard > > > [1] > _https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py_ > [2] > _https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign_ > > -Travis > > *From: *Richard Jones <_r1chardj0n3s at gmail.com_ > >* > Date: *Wednesday, November 26, 2014 at 11:55 PM* > To: *Travis Tripp <_travis.tripp at hp.com_ > >, Thai Q Tran/Silicon Valley/IBM > <_tqtran at us.ibm.com_ >, David Lyle > <_dklyle0 at gmail.com_ >, Maxime > Vidori <_maxime.vidori at enovance.com_ > >, "Wroblewski, Szymon" > <_szymon.wroblewski at intel.com_ > >, "Wood, Matthew David > (HP Cloud - Horizon)" <_matt.wood at hp.com_ > >, "Chen, Shaoquan" > <_sean.chen2 at hp.com_ >, "Farina, > Matt (HP Cloud)" <_matthew.farina at hp.com_ > >, Cindy Lu/Silicon Valley/IBM > <_clu at us.ibm.com_ >, Justin > Pomeroy/Rochester/IBM <_jpomero at us.ibm.com_ > >, Neill Cox > <_neill.cox at ingenious.com.au_ > >* > Subject: *Re: REST and Django > > I'm not sure whether this is the appropriate place to discuss > this, or whether I should be posting to the list under > [Horizon] but I think we need to have a clear idea of what > goes in the REST API and what goes in the client (angular) code. > > In my mind, the thinner the REST API the better. Indeed if we > can get away with proxying requests through without touching > any *client code, that would be great. > > Coding additional logic into the REST API means that a > developer would need to look in two places, instead of one, to > determine what was happening for a particular call. If we keep > it thin then the API presented to the client developer is > very, very similar to the API presented by the services. > Minimum surprise. > > Your thoughts? > > > Richard > > > On Wed Nov 26 2014 at 2:40:52 PM Richard Jones > <_r1chardj0n3s at gmail.com_ > wrote: > > Thanks for the great summary, Travis. 
>
> I've completed the work I pledged this morning, so now the
> REST API change set has:
>
> - no rest framework dependency
> - AJAX scaffolding in openstack_dashboard.api.rest.utils
> - code in openstack_dashboard/api/rest/
> - renamed the API from "identity" to "keystone" to be consistent
> - added a sample of testing, mostly for my own sanity to
> check things were working
>
> https://review.openstack.org/#/c/136676
>
>
> Richard
>
> On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S
> <travis.tripp at hp.com> wrote:
>
> Hello all,
>
> Great discussion on the REST urls today! I think that
> we are on track to come to a common REST API usage
> pattern. To provide quick summary:
>
> We all agreed that going to a straight REST pattern
> rather than through tables was a good idea. We
> discussed using direct get / post in Django views like
> what Max originally used[1][2] and Thai also
> started[3] with the identity table rework or to go
> with djangorestframework [5] like what Richard was
> prototyping with[4].
>
> The main things we would use from Django Rest
> Framework were built in JSON serialization (avoid
> boilerplate), better exception handling, and some
> request wrapping. However, we all weren't sure about
> the need for a full new framework just for that. At
> the end of the conversation, we decided that it was a
> cleaner approach, but Richard would see if he could
> provide some utility code to do that much for us
> without requiring the full framework. David voiced
> that he doesn't want us building out a whole framework
> on our own either.
>
> So, Richard will do some investigation during his day
> today and get back to us. Whatever the case, we'll
> get a patch in horizon for the base dependency
> (framework or Richard's utilities) that both Thai's
> work and the launch instance work is dependent upon.
> We'll build REST style API's using the same pattern.
> We will likely put the rest api's in
> horizon/openstack_dashboard/api/rest/.
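For a rough idea of what such utility code (JSON serialization plus exception handling, without a full framework) might look like, here is a framework-free sketch; the `ajax` decorator and the stub view are hypothetical and are not Horizon's actual `openstack_dashboard.api.rest.utils` API:

```python
import json


def ajax(view):
    """Wrap a view so plain dict/list return values become JSON bodies
    and uncaught exceptions become a 500 payload, avoiding per-view
    serialization boilerplate."""
    def wrapper(*args, **kwargs):
        try:
            return 200, json.dumps(view(*args, **kwargs))
        except Exception as exc:
            # A real implementation would map service exceptions to
            # appropriate HTTP status codes instead of a blanket 500.
            return 500, json.dumps({"error": str(exc)})
    return wrapper


@ajax
def keystone_users():
    # Stand-in for a call through to the keystone client.
    return [{"name": "demo", "enabled": True}]
```

A Django view would return an HttpResponse rather than a (status, body) tuple, but the division of labour is the same: the views stay thin and the utility layer owns serialization and error mapping.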
> > [1] > _https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/keypair.py_ > [2] > _https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/launch.py_ > [3] > _https://review.openstack.org/#/c/133767/8/openstack_dashboard/dashboards/identity/users/views.py_ > [4] > _https://review.openstack.org/#/c/136676/4/openstack_dashboard/rest_api/identity.py_ > [5] _http://www.django-rest-framework.org/_ > > Thanks, > > Travis_______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: From derekh at redhat.com Tue Dec 2 14:45:54 2014 From: derekh at redhat.com (Derek Higgins) Date: Tue, 02 Dec 2014 14:45:54 +0000 Subject: [openstack-dev] [TripleO] Alternate meeting time In-Reply-To: References: Message-ID: <547DD0A2.3030102@redhat.com> On 02/12/14 14:10, James Polley wrote: > Months ago, I pushed for us to alternate meeting times to something that > was friendlier to me, so we started doing alternate weeks at 0700UTC. > That worked well for me, but wasn't working so well for a few people in > Europe, so we decided to give 0800UTC a try. Then DST changes happened, > and wiki pages got out of sync, and there was confusion about what time > the meeting is at.. 
>
> The alternate meeting hasn't been very well attended for the last ~3
> meetings. Partly I think that's due to summit and travel plans, but it
> seems like the 0800UTC time doesn't work very well for quite a few people.
>
> So, instead of trying things at random, I've
> created https://etherpad.openstack.org/p/tripleo-alternate-meeting-time
> as a starting point for figuring out what meeting time might work well
> for the most people. Obviously the world is round, and people have
> different schedules, and we're never going to get a meeting time that
> works well for everyone - but it'd be nice to try to maximise attendance
> (and minimise inconvenience) as much as we can.
>
> If you regularly attend, or would like to attend, the meeting, please
> take a moment to look at the etherpad to register your vote for which
> time works best for you. There's even a section for you to cast your
> vote if the UTC1900 meeting (aka the "main" or "US-Friendly" meeting)
> works better for you!

Can I suggest an alternative data-gathering method? I've put each hour of
a week in a poll; for each slot you have 3 options: Yes, If needs be, and
No. If we all fill in this poll with what suits each of us, we should
easily see the overlaps. I just picked a week, so ignore the dates, and
assume times are UTC.

Any thoughts on this? I think it would allow us to explore all options
available.
http://doodle.com/27ffgkdm5gxzr654

>
>
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From ihrachys at redhat.com Tue Dec 2 14:48:12 2014
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Tue, 02 Dec 2014 15:48:12 +0100
Subject: [openstack-dev] [neutron] force_gateway_on_subnet, please don't deprecate
In-Reply-To: 
References: <95DCE311E0DE421D8C881E5CCD80E5D8@redhat.com> <1384913819.11781239.1417435977062.JavaMail.zimbra@redhat.com>
Message-ID: <547DD12C.7060405@redhat.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On 01/12/14 21:19, Kyle Mestery wrote:
> On Mon, Dec 1, 2014 at 6:12 AM, Assaf Muller wrote:
>>
>>
>> ----- Original Message -----
>>>
>>> My proposal here, is, _let's not deprecate this setting_, as
>>> it's a valid use case of a gateway configuration, and let's
>>> provide it on the reference implementation.
>>
>> I agree. As long as the reference implementation works with the
>> setting off there's no need to deprecate it. I still think the
>> default should be set to True though.
>>
>> Keep in mind that the DHCP agent will need changes as well.
>>
> ++ to both suggestions Assaf. Thanks for bringing this up Miguel!

Miguel, how about sending a patch that removes the deprecation warning
from the help text then?

>
> Kyle
>
>>>
>>> TL;DR
>>>
>>> I've been looking at this yesterday, during a test deployment
>>> on a site where they provide external connectivity with the
>>> gateway outside subnet.
>>>
>>> And I needed to switch it off, to actually be able to have any
>>> external connectivity.
>>>
>>> https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L121
>>>
>>>
>>> This is handled by providing an on-link route to the gateway first,
>>> and then adding the default gateway.
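The two-step trick described above (a host route to the gateway, then a default route whose next hop is marked on-link) maps onto iproute2 roughly as follows; this sketch only builds the command lines, and the interface name and addresses are made up for illustration:

```python
def gateway_outside_subnet_cmds(gateway, dev):
    """Commands for a default gateway lying outside the interface's
    subnet: first a link-scoped host route to the gateway, then a
    default route flagged 'onlink' so the kernel accepts the
    off-subnet next hop."""
    return [
        ["ip", "route", "replace", "%s/32" % gateway, "dev", dev],
        ["ip", "route", "replace", "default", "via", gateway,
         "dev", dev, "onlink"],
    ]


# An agent would execute each list with something like
# subprocess.check_call(cmd); built here only for illustration.
cmds = gateway_outside_subnet_cmds("203.0.113.1", "qg-demo")
```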
>>>
>>> It looks to me very interesting (not only because it's the only
>>> way to work on that specific site [2][3][4]), because you can
>>> dynamically wire RIPE blocks to your server, without needing to
>>> use a specific IP for external routing or broadcast purposes,
>>> and instead use the full block in openstack.
>>>
>>>
>>> I have a tiny patch to support this on the neutron l3-agent [1].
>>> I still need to add the logic to check "gateway outside subnet",
>>> then add the "onlink" route.
>>>
>>>
>>> [1]
>>>
>>> diff --git a/neutron/agent/linux/interface.py b/neutron/agent/linux/interface.py
>>> index 538527b..5a9f186 100644
>>> --- a/neutron/agent/linux/interface.py
>>> +++ b/neutron/agent/linux/interface.py
>>> @@ -116,15 +116,16 @@ class LinuxInterfaceDriver(object):
>>>              namespace=namespace, ip=ip_cidr)
>>>
>>> -        if gateway:
>>> -            device.route.add_gateway(gateway)
>>> -
>>>          new_onlink_routes = set(s['cidr'] for s in extra_subnets)
>>> +        if gateway:
>>> +            new_onlink_routes.update([gateway])
>>>          existing_onlink_routes = set(device.route.list_onlink_routes())
>>>          for route in new_onlink_routes - existing_onlink_routes:
>>>              device.route.add_onlink_route(route)
>>>          for route in existing_onlink_routes - new_onlink_routes:
>>>              device.route.delete_onlink_route(route)
>>> +        if gateway:
>>> +            device.route.add_gateway(gateway)
>>>
>>>      def delete_conntrack_state(self, root_helper, namespace, ip):
>>>          """Delete conntrack state associated with an IP address.
>>> >>> [2] http://www.soyoustart.com/ [3] http://www.ovh.co.uk/ [4] >>> http://www.kimsufi.com/ >>> >>> >>> Miguel ?ngel Ajo >>> >>> >>> >>> >>> _______________________________________________ OpenStack-dev >>> mailing list OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >>> _______________________________________________ >> OpenStack-dev mailing list OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUfdEsAAoJEC5aWaUY1u57a4QIANjx/wOJJKlHJ1kiE5DQ80La WP5DYwWj64MA+pDXPoE18+JZEV+7igHD7zeKb8pua4Ql+X/EDbLG5GK1ry4EV5RC uKnO2tht/bLfrniirqoOcL5TqybW86ZP4TLtTzV1PdAQBNGoOaRU8pox5oAkZOmm FrFVtBqoMtUAM9X8P7OHjkkvMLfoBinhWjlnyYWrzl6ZJtTCCipWJrVesHoWAL+F DcWotMsSMkkCAolnDE1AST4Z6pRvj7Y4lhQyZGaOtDGkYoMPBb7PTaGIltzX3ijB ZzDwz39o+kU9pY0/7Web6tFCEw+zFFr01rVBcQXDi5cJ2wRW7uT0J/9Aw0Rrn1M= =coN8 -----END PGP SIGNATURE----- From sgordon at redhat.com Tue Dec 2 14:51:06 2014 From: sgordon at redhat.com (Steve Gordon) Date: Tue, 2 Dec 2014 09:51:06 -0500 (EST) Subject: [openstack-dev] [qa] Should it be allowed to attach 2 interfaces from the same subnet to a VM? In-Reply-To: References: Message-ID: <1413562004.25342235.1417531866358.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Danny Choi (dannchoi)" > To: openstack-dev at lists.openstack.org > > Hi, > > When I attach 2 interfaces from the same subnet to a VM, there is no error > returned and > both interfaces come up. 
> > lab at tme211:/opt/stack/logs$ nova interface-attach --net-id > e38dba4a-74ed-4312-ba21-2a04b5c5a5b5 cirros-1 > > lab at tme211:/opt/stack/logs$ nova list > > +--------------------------------------+----------+--------+------------+-------------+-------------------+ > > | ID | Name | Status | Task State | > | Power State | Networks | > > +--------------------------------------+----------+--------+------------+-------------+-------------------+ > > | 9d88d0b5-2453-4657-8058-987980ec7744 | cirros-1 | ACTIVE | - | > | Running | private=10.0.0.10 | > > +--------------------------------------+----------+--------+------------+-------------+-------------------+ > > lab at tme211:/opt/stack/logs$ nova interface-attach --net-id > e38dba4a-74ed-4312-ba21-2a04b5c5a5b5 cirros-1 > > lab at tme211:/opt/stack/logs$ nova list > > +--------------------------------------+----------+--------+------------+-------------+------------------------------+ > > | ID | Name | Status | Task State | > | Power State | Networks | > > +--------------------------------------+----------+--------+------------+-------------+------------------------------+ > > | 9d88d0b5-2453-4657-8058-987980ec7744 | cirros-1 | ACTIVE | - | > | Running | private=10.0.0.10, 10.0.0.11 | > > +--------------------------------------+----------+--------+------------+-------------+------------------------------+ > > > $ ifconfig > > eth0 Link encap:Ethernet HWaddr FA:16:3E:92:2D:2B > > inet addr:10.0.0.10 Bcast:10.0.0.255 Mask:255.255.255.0 > > inet6 addr: fe80::f816:3eff:fe92:2d2b/64 Scope:Link > > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 > > RX packets:514 errors:0 dropped:0 overruns:0 frame:0 > > TX packets:307 errors:0 dropped:0 overruns:0 carrier:0 > > collisions:0 txqueuelen:1000 > > RX bytes:48342 (47.2 KiB) TX bytes:41750 (40.7 KiB) > > > eth1 Link encap:Ethernet HWaddr FA:16:3E:EF:55:BC > > inet addr:10.0.0.11 Bcast:10.0.0.255 Mask:255.255.255.0 > > inet6 addr: fe80::f816:3eff:feef:55bc/64 Scope:Link > > 
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>
> RX packets:49 errors:0 dropped:0 overruns:0 frame:0
>
> TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
>
> collisions:0 txqueuelen:1000
>
> RX bytes:3556 (3.4 KiB) TX bytes:1120 (1.0 KiB)
>
>
>
> Should this operation be allowed?

Support for this was explicitly added in Juno:
https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net

Do you have a concrete reason in mind as to why this should *not* be allowed?

Thanks,
Steve

From andrea.frittoli at gmail.com Tue Dec 2 14:54:14 2014
From: andrea.frittoli at gmail.com (Andrea Frittoli)
Date: Tue, 2 Dec 2014 14:54:14 +0000
Subject: [openstack-dev] [tempest] tearDownClass usage in scenario tests
In-Reply-To: 
References: 
Message-ID: 

Hello Rich,

in the latest tempest we made two significant changes compared to the
version you're using.

We dropped the use of official clients from scenario tests (and
OfficialClientTest has been replaced by ScenarioTest). And we introduced
resource_setup and resource_cleanup in the test base class, which should
be used instead of setUpClass and tearDownClass (there's a hacking rule
for that).

While tearDownClass is not always invoked, resource_cleanup is always
invoked, and it has been implemented to avoid resource leaks.

If you are using an older version of tempest you should be able to
override tearDownClass instead.

andrea

On 2 December 2014 at 13:51, Richard Winters (riwinter) wrote:
> I've noticed that in scenario tests only the OfficialClientTest in
> manager.py has a tearDownClass and was wondering if there is a reason for
> that?
>
> In my scenario tests I need to ensure a particular connection gets closed
> after the test runs. This connection is set up in setUpClass so it makes
> sense to me that it should also be closed in the tearDownClass.
>
> This is how I'm cleaning up now, but didn't know if there is a better way to
> @classmethod > def tearDownClass(cls): > super(TestCSROneNet, cls).tearDownClass() > if cls.nx_onep is not None: > cls.nx_onep.disconnect() > > Thanks > Rich > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ikalnitsky at mirantis.com Tue Dec 2 14:55:17 2014 From: ikalnitsky at mirantis.com (Igor Kalnitsky) Date: Tue, 2 Dec 2014 16:55:17 +0200 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: Message-ID: Hi, Sebastian, Thank you for raising this topic again. Yes, indeed, we need to move out from web.py as soon as possible and there are a lot of reasons why we should do it. But this topic is not about "Why", this topic is about "Flask or Pecan". Well, currently Fuel uses both of this frameworks: * OSTF is using Pecan * Fuel Stats is using Flask Personally, I'd like to use Flask instead of Pecan, because first one is more production-ready tool and I like its design. But I believe this should be resolved by voting. Thanks, Igor On Tue, Dec 2, 2014 at 4:19 PM, Sebastian Kalinowski wrote: > Hi all, > > Some time ago we had a discussion about moving Nailgun to new web framework > [1]. > > There was comparison [2] of two possible options: Pecan [3] and Flask [4]. > We came to conclusion that we need to move Nailgun on some alive web > framework > instead of web.py [5] (some of the reasons: [6]) but there was no clear > agreement > on what framework (there were strong voices for Flask). > > I would like to bring this topic up again so we could discuss with broader > audience and > make final decision what will be our next web framework. > > I think that we should also consider to make that framework our "weapon of > choice" (or so > called standard) when creating new web services in Fuel. 
> > Best, > Sebastian > > > [1] https://lists.launchpad.net/fuel-dev/msg01397.html > [2] > https://docs.google.com/a/mirantis.com/document/d/1QR7YphyfN64m-e9b5rKC_U8bMtx4zjfW943BfLTqTao/edit?usp=sharing > [3] http://www.pecanpy.org/ > [4] http://flask.pocoo.org/ > [5] http://webpy.org/ > [6] https://lists.launchpad.net/fuel-dev/msg01501.html > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ihrachys at redhat.com Tue Dec 2 15:04:24 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 02 Dec 2014 16:04:24 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: References: Message-ID: <547DD4F8.6020306@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 02/12/14 14:22, Alan Pevec wrote: > Hi all, > > here are exception proposal I have collected when preparing for > the 2014.2.1 release, stable-maint members please have a look! > > > General: cap Oslo and client library versions - sync from > openstack/requirements stable/juno, would be good to include in > the release. > https://review.openstack.org/#/q/status:open+branch:stable/juno+topic:openstack/requirements,n,z +2, > let's keep all deps in sync. Those updates do not break anything for existing users. > > Ceilometer (all proposed by Ceilo PTL) > https://review.openstack.org/138315 Already pushed to gate (why?) > https://review.openstack.org/138317 > https://review.openstack.org/138320 > https://review.openstack.org/138321 > https://review.openstack.org/138322 > > Cinder https://review.openstack.org/137537 - small change and > limited to the VMWare driver > > Glance https://review.openstack.org/137704 - glance_store is > backward compatible, but not sure about forcing version bump on > stable I think this one should not go in. For stable releases in downstream, it's quite common to backport fixes for bugs. 
We don't fix bugs by bumping versions. > https://review.openstack.org/137862 - Disable osprofiler by default > to prevent upgrade issues, disabled by default in other services Hm, looks more like a version incompatibility. Should we instead set glance and osprofiler versions in line? I'm probably ok with disabling debug features even in stable releases, but this one seems like a wrong fix for a rightful issue. Comments? > > Horizon standing-after-freeze translation update, coming on Dec 3 > https://review.openstack.org/138018 - visible issue, no > translation string changes https://review.openstack.org/138313 - > low risk patch for a highly problematic issue > > Neutron https://review.openstack.org/136294 - default SNAT, see > review for details, I cannot distil 1liner :) As I told in comments, this one seems to me making the code in line with official documentation, so can be considered as a bug fix. Though the change in behaviour is pretty significant to be cautious. > https://review.openstack.org/136275 - self-contained to the vendor > code, extensively tested in several deployments +2. 
> > Nova > https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/juno+topic:1386236/juno,n,z > > - - soaked more than a week in master, makes numa actually work in Juno > > Sahara https://review.openstack.org/135549 - fix for auto security > groups, there were some concerns, see review for details > > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUfdT4AAoJEC5aWaUY1u574hoIAJtE6dIAtCPcJw4H83EsFoEh pNDHiHTBmjMCJVFtEKtBUgKT/pRZw5zyvRXYHQSk99Lqw7StcYn2gyW9sDQJSclv ak8wm5KCDtZMnkzfisDtTILx2AQj8RHw1UWVrsjqkoS0vyjUW6dfOpiyxd7o6s9n zJYgGi5uO1EZO+oLDk5NkKl6pDu4OZNbx1iLk+0EPmpjPD9ZT6AdacvtW5oM3+4c udA4CCsiAkHXvUutM0GNeftuOk4TBj6evnnzOai5mZC4QoT3/vhd1or+AEjLtEqO QhM8MT8u+mSDhhlbfblNqIf/bBHOkgZcEX4DMdPtz9R/LtqvBDhDjtyOjJ8cG6w= =bcTW -----END PGP SIGNATURE----- From thierry at openstack.org Tue Dec 2 15:06:25 2014 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 02 Dec 2014 16:06:25 +0100 Subject: [openstack-dev] oslo.rootwrap 1.4.0 released Message-ID: <547DD571.8010604@openstack.org> The Oslo team is pleased to announce the release of oslo.rootwrap 1.4.0. 
This is primarily a bug fix and test-requirements update release: $ git log --abbrev-commit --pretty=oneline --no-merges 1.3.0..1.4.0 2c43df7 Updated from global requirements 62d7322 Updated from global requirements 8c39d15 Correct filters examples in README.rst 9316ea0 Updated from global requirements 56e9cb5 Fix exit of subprocess in case it was terminated by signal b4feb41 Updated from global requirements d46ecc9 Support building wheels (PEP-427) 3c7ea8a Updated from global requirements $ git diff 1.3.0..1.4.0 | diffstat README.rst | 4 ++-- oslo/rootwrap/cmd.py | 8 ++++++-- setup.cfg | 3 +++ test-requirements-py3.txt | 2 +- test-requirements.txt | 8 ++++---- tests/test_rootwrap.py | 16 ++++++++++++++++ 6 files changed, 32 insertions(+), 9 deletions(-) For more details, please see: https://launchpad.net/oslo.rootwrap/kilo/1.4.0 Please report issues through launchpad: https://launchpad.net/oslo.rootwrap -- Thierry Carrez (ttx) From davanum at gmail.com Tue Dec 2 15:10:31 2014 From: davanum at gmail.com (Davanum Srinivas) Date: Tue, 2 Dec 2014 10:10:31 -0500 Subject: [openstack-dev] oslo.utils 1.1.0 released Message-ID: The Oslo team is pleased to announce the release of oslo.utils 1.1.0. This release includes several bug fixes as well as many other changes. 
For more details, please see the git log history below and https://launchpad.net/oslo.utils/+milestone/1.1.0 Please report issues through launchpad: https://launchpad.net/oslo.utils $ git log --no-color --oneline --no-merges 1.0.0..1.1.0 ed9a695 Add get_my_ip() fb28c02 Updated from global requirements dbc02d0 Add 'auth_password' in _SANITIZE_KEYS for strutils f0cce3c Updated from global requirements b8d5872 Activate pep8 check that _ is imported 45b7166 Add uuidutils to oslo.utils 3b9df71 Add pbr to installation requirements 7b32a91 Updated from global requirements 760dbc7 Add is_int_like() function 5d034e5 Hide auth_token and new_pass a74d9ee Imported Translations from Transifex c7ef2c2 Add history/changelog to docs 563a990 Imported Translations from Transifex c6bdcce Support building wheels (PEP-427) cac930c Imported Translations from Transifex d4e87e8 Improve docstrings for IP verification functions b5ab4d0 Imported Translations from Transifex baacebc Add ip address validation 5d3b3da Fix how it appears we need to use mock_anything to avoid 'self' errors dba9f9a Updated from global requirements 614a849 Move over a reflection module that taskflow uses f02f8df Make safe_encode func case-insensitive e54a359 Enable mask_password to handle byte code strings f79497e Updated from global requirements 08a348c Add the ability to extract the query params from a urlsplit fa77453 Work toward Python 3.4 support and testing 8a858b7 warn against sorting requirements $ git diff --stat --no-color 1.0.0..1.1.0 | egrep -v '(/tests/|^ doc)' .../en_GB/LC_MESSAGES/oslo.utils-log-critical.po | 21 -- .../en_GB/LC_MESSAGES/oslo.utils-log-info.po | 21 -- .../locale/fr/LC_MESSAGES/oslo.utils-log-error.po | 32 +++ .../fr/LC_MESSAGES/oslo.utils-log-warning.po | 33 +++ oslo.utils/locale/fr/LC_MESSAGES/oslo.utils.po | 38 +++ oslo/utils/encodeutils.py | 6 + oslo/utils/netutils.py | 101 ++++++++ oslo/utils/reflection.py | 208 +++++++++++++++ oslo/utils/strutils.py | 24 +- oslo/utils/uuidutils.py 
| 44 ++++ requirements.txt | 9 +- setup.cfg | 3 + test-requirements.txt | 12 +- tests/test_excutils.py | 3 +- tests/test_netutils.py | 80 ++++++ tests/test_reflection.py | 279 +++++++++++++++++++++ tests/test_strutils.py | 38 +++ tests/test_uuidutils.py | 51 ++++ tests/tests_encodeutils.py | 65 ++++- tox.ini | 3 +- $ git diff -U0 --no-color 1.0.0..1.1.0 *requirements*.txt | sed -e 's/^/ /g' diff --git a/requirements.txt b/requirements.txt index 4421ce9..c508f12 100644 --- a/requirements.txt +++ b/requirements.txt @@ -0,0 +1,5 @@ +# The order of packages is significant, because pip processes them in the order +# of appearance. Changing the order has an impact on the overall integration +# process, which may cause wedges in the gate later. + +pbr>=0.6,!=0.7,<1.0 @@ -4 +9,3 @@ iso8601>=0.1.9 -oslo.i18n>=0.2.0 # Apache-2.0 +oslo.i18n>=1.0.0 # Apache-2.0 +netaddr>=0.7.12 +netifaces>=0.10.4 diff --git a/test-requirements.txt b/test-requirements.txt index 043d97f..0fab2b3 100644 --- a/test-requirements.txt +++ b/test-requirements.txt @@ -0,0 +1,4 @@ +# The order of packages is significant, because pip processes them in the order +# of appearance. Changing the order has an impact on the overall integration +# process, which may cause wedges in the gate later. 
+ @@ -8,2 +12,2 @@ testscenarios>=0.4 -testtools>=0.9.34 -oslotest>=1.1.0.0a1 +testtools>=0.9.36,!=1.2.0 +oslotest>=1.2.0 # Apache-2.0 @@ -17,2 +21,2 @@ coverage>=3.6 -sphinx>=1.1.2,!=1.2.0,<1.3 -oslosphinx>=2.2.0.0a2 +sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 +oslosphinx>=2.2.0 # Apache-2.0 -- Davanum Srinivas :: https://twitter.com/dims From gfidente at redhat.com Tue Dec 2 15:12:33 2014 From: gfidente at redhat.com (Giulio Fidente) Date: Tue, 02 Dec 2014 16:12:33 +0100 Subject: [openstack-dev] [TripleO] Alternate meeting time In-Reply-To: <547DD0A2.3030102@redhat.com> References: <547DD0A2.3030102@redhat.com> Message-ID: <547DD6E1.8070103@redhat.com> On 12/02/2014 03:45 PM, Derek Higgins wrote: > On 02/12/14 14:10, James Polley wrote: >> Months ago, I pushed for us to alternate meeting times to something that >> was friendlier to me, so we started doing alternate weeks at 0700UTC. >> That worked well for me, but wasn't working so well for a few people in >> Europe, so we decided to give 0800UTC a try. Then DST changes happened, >> and wiki pages got out of sync, and there was confusion about what time >> the meeting is at.. >> >> The alternate meeting hasn't been very well attended for the last ~3 >> meetings. Partly I think that's due to summit and travel plans, but it >> seems like the 0800UTC time doesn't work very well for quite a few people. >> >> So, instead of trying things at random, I've >> created https://etherpad.openstack.org/p/tripleo-alternate-meeting-time >> as a starting point for figuring out what meeting time might work well >> for the most people. Obviously the world is round, and people have >> different schedules, and we're never going to get a meeting time that >> works well for everyone - but it'd be nice to try to maximise attendance >> (and minimise inconvenience) as much as we can. 
>> >> If you regularly attend, or would like to attend, the meeting, please >> take a moment to look at the etherpad to register your vote for which >> time works best for you. There's even a section for you to cast your >> vote if the UTC1900 meeting (aka the "main" or "US-Friendly" meeting) >> works better for you! > > > Can I suggest an alternative data gathering method, I've put each hour > in a week in a poll, for each slot you have 3 options I think it is great, but would be even better if we could trim it to just a *single* day and once we agreed on the timeframe, we decide on the day as that probably won't count much so long as it is a weekday I suppose -- Giulio Fidente GPG KEY: 08D733BA From jp at jamezpolley.com Tue Dec 2 15:14:02 2014 From: jp at jamezpolley.com (James Polley) Date: Tue, 2 Dec 2014 16:14:02 +0100 Subject: [openstack-dev] [TripleO] Alternate meeting time In-Reply-To: <547DCB0F.3070303@redhat.com> References: <547DCB0F.3070303@redhat.com> Message-ID: On Tue, Dec 2, 2014 at 3:22 PM, marios wrote: > On 02/12/14 16:10, James Polley wrote: > > Months ago, I pushed for us to alternate meeting times to something that > > was friendlier to me, so we started doing alternate weeks at 0700UTC. > > That worked well for me, but wasn't working so well for a few people in > > Europe, so we decided to give 0800UTC a try. Then DST changes happened, > > and wiki pages got out of sync, and there was confusion about what time > > the meeting is at.. > > > > The alternate meeting hasn't been very well attended for the last ~3 > > meetings. Partly I think that's due to summit and travel plans, but it > > seems like the 0800UTC time doesn't work very well for quite a few > people. > > > > So, instead of trying things at random, I've > > created https://etherpad.openstack.org/p/tripleo-alternate-meeting-time > > as a starting point for figuring out what meeting time might work well > > for the most people. 
Obviously the world is round, and people have > > different schedules, and we're never going to get a meeting time that > > works well for everyone - but it'd be nice to try to maximise attendance > > (and minimise inconvenience) as much as we can. > > > > If you regularly attend, or would like to attend, the meeting, please > > take a moment to look at the etherpad to register your vote for which > > time works best for you. There's even a section for you to cast your > > vote if the UTC1900 meeting (aka the "main" or "US-Friendly" meeting) > > works better for you! > > slight clarification - are we discussing which time would suit best for > the alternative meeting, or that we are scrapping the alternate, and > re-voting for the best main meeting time. In either case the proposed > times are still mostly not EU friendly (I guess the 1800 UTC would be ok > for UK), > > thanks, marios > I was only intending to talk about the best time for the alternate meeting. The main/US/original meeting time still seems to have been fairly well attended lately, aside from the interruption due to summit; I haven't been aware of any meetings that got abandoned or shortened due to lack of participants in that timezone (except for the week of summit), and I haven't heard anyone in that meeting talking about how hard it is for them to make the meeting. But perhaps I'm not hearing the people who'd like to be at that meeting - maybe because they can't *be* at the meeting to be heard? Derek has suggested a Doodle poll to help us look at which hours across the week people would find it possible to make the meeting - I think the data it gathers would help us figure out if it's worth considering moving the US meeting time as well. If there are other people who'd like to move the time of the US meeting so they can make it, now is probably a good time to speak up! 
From akurilin at mirantis.com Tue Dec 2 15:21:38 2014 From: akurilin at mirantis.com (Andrey Kurilin) Date: Tue, 2 Dec 2014 17:21:38 +0200 Subject: [openstack-dev] [python-novaclient] Status of novaclient V3 Message-ID: Hi! While working on fixing a wrong import in the novaclient v3 shell, I found that a lot of the commands listed in the V3 shell (novaclient.v3.shell) are broken, because the corresponding managers are missing from the V3 client (novaclient.v3.client.Client). The errors follow the template "ERROR (AttributeError): 'Client' object has no attribute '<manager>'", where <manager> can be "floating_ip_pools", "floating_ip", "security_groups", "dns_entries", etc. I know that novaclient V3 is not finished yet, and I guess it never will be. So the main question is: what should we do with the implemented novaclient V3 code? Should it be ported to novaclient V2.1, or can it be removed? -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From derekh at redhat.com Tue Dec 2 15:25:41 2014 From: derekh at redhat.com (Derek Higgins) Date: Tue, 02 Dec 2014 15:25:41 +0000 Subject: [openstack-dev] [TripleO] Alternate meeting time In-Reply-To: <547DD6E1.8070103@redhat.com> References: <547DD0A2.3030102@redhat.com> <547DD6E1.8070103@redhat.com> Message-ID: <547DD9F5.6010108@redhat.com> On 02/12/14 15:12, Giulio Fidente wrote: > On 12/02/2014 03:45 PM, Derek Higgins wrote: >> On 02/12/14 14:10, James Polley wrote: >>> Months ago, I pushed for us to alternate meeting times to something that >>> was friendlier to me, so we started doing alternate weeks at 0700UTC. >>> That worked well for me, but wasn't working so well for a few people in >>> Europe, so we decided to give 0800UTC a try. Then DST changes happened, >>> and wiki pages got out of sync, and there was confusion about what time >>> the meeting is at.. >>> >>> The alternate meeting hasn't been very well attended for the last ~3 >>> meetings. Partly I think that's due to summit and travel plans, but it >>> seems like the 0800UTC time doesn't work very well for quite a few >>> people. >>> >>> So, instead of trying things at random, I've >>> created https://etherpad.openstack.org/p/tripleo-alternate-meeting-time >>> as a starting point for figuring out what meeting time might work well >>> for the most people. Obviously the world is round, and people have >>> different schedules, and we're never going to get a meeting time that >>> works well for everyone - but it'd be nice to try to maximise attendance >>> (and minimise inconvenience) as much as we can. >>> >>> If you regularly attend, or would like to attend, the meeting, please >>> take a moment to look at the etherpad to register your vote for which >>> time works best for you. There's even a section for you to cast your >>> vote if the UTC1900 meeting (aka the "main" or "US-Friendly" meeting) >>> works better for you! 
>> >> >> Can I suggest an alternative data gathering method, I've put each hour >> in a week in a poll, for each slot you have 3 options > > I think it is great, but would be even better if we could trim it to > just a *single* day and once we agreed on the timeframe, we decide on > the day as that probably won't count much so long as it is a weekday I > suppose I think leaving the whole week in there is better, some people may have different schedules on different weekdays, me for example ;-) From jp at jamezpolley.com Tue Dec 2 15:26:07 2014 From: jp at jamezpolley.com (James Polley) Date: Tue, 2 Dec 2014 16:26:07 +0100 Subject: [openstack-dev] [TripleO] Alternate meeting time In-Reply-To: <547DD6E1.8070103@redhat.com> References: <547DD0A2.3030102@redhat.com> <547DD6E1.8070103@redhat.com> Message-ID: On Tue, Dec 2, 2014 at 4:12 PM, Giulio Fidente wrote: > On 12/02/2014 03:45 PM, Derek Higgins wrote: > >> On 02/12/14 14:10, James Polley wrote: >> >>> Months ago, I pushed for us to alternate meeting times to something that >>> was friendlier to me, so we started doing alternate weeks at 0700UTC. >>> That worked well for me, but wasn't working so well for a few people in >>> Europe, so we decided to give 0800UTC a try. Then DST changes happened, >>> and wiki pages got out of sync, and there was confusion about what time >>> the meeting is at.. >>> >>> The alternate meeting hasn't been very well attended for the last ~3 >>> meetings. Partly I think that's due to summit and travel plans, but it >>> seems like the 0800UTC time doesn't work very well for quite a few >>> people. >>> >>> So, instead of trying things at random, I've >>> created https://etherpad.openstack.org/p/tripleo-alternate-meeting-time >>> as a starting point for figuring out what meeting time might work well >>> for the most people. 
Obviously the world is round, and people have >>> different schedules, and we're never going to get a meeting time that >>> works well for everyone - but it'd be nice to try to maximise attendance >>> (and minimise inconvenience) as much as we can. >>> >>> If you regularly attend, or would like to attend, the meeting, please >>> take a moment to look at the etherpad to register your vote for which >>> time works best for you. There's even a section for you to cast your >>> vote if the UTC1900 meeting (aka the "main" or "US-Friendly" meeting) >>> works better for you! >>> >> >> >> Can I suggest an alternative data gathering method, I've put each hour >> in a week in a poll, for each slot you have 3 options >> > > I think it is great, but would be even better if we could trim it to just > a *single* day and once we agreed on the timeframe, we decide on the day as > that probably won't count much so long as it is a weekday I suppose *hand-waggle* I thought that at first too, but then I remembered that I had regular commitments on thursday nights all of last year, but this year it's shifted to wednesday night. I'd prefer to keep those nights more free than other nights, if I can. > -- > Giulio Fidente > GPG KEY: 08D733BA > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ian.cordasco at RACKSPACE.COM Tue Dec 2 15:32:57 2014 From: ian.cordasco at RACKSPACE.COM (Ian Cordasco) Date: Tue, 2 Dec 2014 15:32:57 +0000 Subject: [openstack-dev] [glance] Deprecating osprofiler option 'enabled' in favour of 'profiler_enabled' In-Reply-To: Message-ID: Except for the fact that the person who implemented this was told to change the option name in other projects because it conflicted with a different option. 
We can keep this if we're worried about being too obvious (to the point of becoming the Department of Redundancy Department) with our naming. I don't think other projects will be very happy having to change their naming, especially if the original name was already a problem. On 12/2/14, 06:12, "Zhi Yan Liu" wrote: >I totally agree we should make it consistent across all projects, so I >propose to change the other projects. > >But I think keeping it as-is is clear enough for both developer and >operator/configuration, for example: > >[profiler] >enable = True > >instead of: > >[profiler] >profiler_enable = True > >Tbh, the "profiler" prefix is redundant to me still from the >perspective of operator/configuration. > >zhiyan > > >On Tue, Dec 2, 2014 at 7:44 PM, Louis Taylor wrote: >> On Tue, Dec 02, 2014 at 12:16:44PM +0800, Zhi Yan Liu wrote: >>> Why not change other services instead of glance? I see one reason is >>> "glance is the only service to use this option name", but to me one >>> reason to keep it as-is in glance is that the original name makes more >>> sense because the option is already under the "profiler" group; adding >>> a "profiler" prefix to it is really redundant, imo, and no other >>> existing config group is named this way. Then in the >>> code we can just use a clear way: >>> >>> CONF.profiler.enabled >>> >>> instead of: >>> >>> CONF.profiler.profiler_enabled >>> >>> thanks, >>> zhiyan >> >> I agree this looks nicer in the code. However, the primary consumer of >>this >> option is someone editing it in the configuration files. In this case, I >> believe having something more verbose and consistent is better than the >>Glance >> code being slightly more elegant. >> >> One name or the other doesn't make all that much difference, but >>consistency in >> how we turn osprofiler on and off across projects would be best.
>> >> - Louis >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From davanum at gmail.com Tue Dec 2 15:35:26 2014 From: davanum at gmail.com (Davanum Srinivas) Date: Tue, 2 Dec 2014 10:35:26 -0500 Subject: [openstack-dev] oslo.i18n 1.1.0 released Message-ID: The Oslo team is pleased to announce the release of oslo.i18n 1.1.0. For more details, please see the git log history below and https://launchpad.net/oslo.i18n/+milestone/1.1.0 Please report issues through launchpad: https://launchpad.net/oslo.i18n $ git log --no-color --oneline --no-merges 1.0.0..1.1.0 5a163eb Imported Translations from Transifex 1fc63ac Add note for integration modules in libraries 2fe3f73 Activate pep8 check that _ is imported d67767b Add pbr to installation requirements 3583c89 Updated from global requirements 497f8d3 Updated from global requirements 624c52c Remove extraneous vim editor configuration comments 9b6a9c2 Make clear in docs to use _LE() when using LOG.exception() 999a112 Support building wheels (PEP-427) 47c5d73 Imported Translations from Transifex 3041689 Fix coverage testing 04752ee Imported Translations from Transifex 26edee1 Use same indentation in doc/source/usage 12f14da Imported Translations from Transifex c9f2b63 Imported Translations from Transifex af4fc2c Updated from global requirements efbe658 Remove unused/mutable default args f721da7 Fixes a small syntax error in the doc examples 0624f8d Work toward Python 3.4 support and testing $ git diff --stat --no-color 1.0.0..1.1.0 | egrep -v '(/tests/|^ doc)' .../en_GB/LC_MESSAGES/oslo.i18n-log-critical.po | 21 ------- .../en_GB/LC_MESSAGES/oslo.i18n-log-error.po | 21 ------- 
.../locale/en_GB/LC_MESSAGES/oslo.i18n-log-info.po | 21 ------- .../en_GB/LC_MESSAGES/oslo.i18n-log-warning.po | 21 ------- oslo.i18n/locale/fr/LC_MESSAGES/oslo.i18n.po | 35 +++++++++++ .../it/LC_MESSAGES/oslo.i18n-log-critical.po | 21 ------- .../locale/it/LC_MESSAGES/oslo.i18n-log-error.po | 21 ------- .../locale/it/LC_MESSAGES/oslo.i18n-log-info.po | 21 ------- .../locale/it/LC_MESSAGES/oslo.i18n-log-warning.po | 21 ------- oslo.i18n/locale/ko_KR/LC_MESSAGES/oslo.i18n.po | 33 +++++++++++ oslo.i18n/locale/zh_CN/LC_MESSAGES/oslo.i18n.po | 31 ++++++++++ oslo/__init__.py | 2 - requirements.txt | 5 ++ setup.cfg | 3 + test-requirements.txt | 10 +++- tests/test_gettextutils.py | 3 +- tox.ini | 3 +- 19 files changed, 184 insertions(+), 206 deletions(-) $ git diff -U0 --no-color 1.0.0..1.1.0 *requirements*.txt | sed -e 's/^/ /g' diff --git a/requirements.txt b/requirements.txt index b25096e..5bef251 100644 --- a/requirements.txt +++ b/requirements.txt @@ -0,0 +1,5 @@ +# The order of packages is significant, because pip processes them in the order +# of appearance. Changing the order has an impact on the overall integration +# process, which may cause wedges in the gate later. + +pbr>=0.6,!=0.7,<1.0 diff --git a/test-requirements.txt b/test-requirements.txt index a4acbb6..1fe2384 100644 --- a/test-requirements.txt +++ b/test-requirements.txt @@ -0,0 +1,3 @@ +# The order of packages is significant, because pip processes them in the order +# of appearance. Changing the order has an impact on the overall integration +# process, which may cause wedges in the gate later. 
@@ -3,2 +6,2 @@ hacking>=0.9.2,<0.10 -sphinx>=1.1.2,!=1.2.0,<1.3 -oslosphinx>=2.2.0.0a2 +sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 +oslosphinx>=2.2.0 # Apache-2.0 @@ -6 +9,2 @@ oslosphinx>=2.2.0.0a2 -oslotest>=1.1.0.0a1 +oslotest>=1.2.0 # Apache-2.0 +coverage>=3.6 -- Davanum Srinivas :: https://twitter.com/dims From ryan.petrello at dreamhost.com Tue Dec 2 15:42:04 2014 From: ryan.petrello at dreamhost.com (Ryan Petrello) Date: Tue, 2 Dec 2014 10:42:04 -0500 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: Message-ID: <20141202154204.GA26111@Ryans-MBP> For what it's worth in terms of "production-ready", pecan is mostly just webob under the hood, and has been used pretty extensively at DreamHost in high-traffic environments (we use pecan to handle *all* customer signups for our entire product line at DreamHost). Not to mention the other major OpenStack projects that are already making use of it or are in the process of switching (Ceilometer, Ironic, Neutron)... On 12/02/14 04:55 PM, Igor Kalnitsky wrote: > Hi, Sebastian, > > Thank you for raising this topic again. > > Yes, indeed, we need to move out from web.py as soon as possible and > there are a lot of reasons why we should do it. But this topic is not > about "Why", this topic is about "Flask or Pecan". > > Well, currently Fuel uses both of this frameworks: > > * OSTF is using Pecan > * Fuel Stats is using Flask > > Personally, I'd like to use Flask instead of Pecan, because first one > is more production-ready tool and I like its design. But I believe > this should be resolved by voting. > > Thanks, > Igor > > On Tue, Dec 2, 2014 at 4:19 PM, Sebastian Kalinowski > wrote: > > Hi all, > > > > Some time ago we had a discussion about moving Nailgun to new web framework > > [1]. > > > > There was comparison [2] of two possible options: Pecan [3] and Flask [4]. 
> > We came to conclusion that we need to move Nailgun on some alive web > > framework > > instead of web.py [5] (some of the reasons: [6]) but there was no clear > > agreement > > on what framework (there were strong voices for Flask). > > > > I would like to bring this topic up again so we could discuss with broader > > audience and > > make final decision what will be our next web framework. > > > > I think that we should also consider to make that framework our "weapon of > > choice" (or so > > called standard) when creating new web services in Fuel. > > > > Best, > > Sebastian > > > > > > [1] https://lists.launchpad.net/fuel-dev/msg01397.html > > [2] > > https://docs.google.com/a/mirantis.com/document/d/1QR7YphyfN64m-e9b5rKC_U8bMtx4zjfW943BfLTqTao/edit?usp=sharing > > [3] http://www.pecanpy.org/ > > [4] http://flask.pocoo.org/ > > [5] http://webpy.org/ > > [6] https://lists.launchpad.net/fuel-dev/msg01501.html > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Ryan Petrello Senior Developer, DreamHost ryan.petrello at dreamhost.com From vponomaryov at mirantis.com Tue Dec 2 15:44:12 2014 From: vponomaryov at mirantis.com (Valeriy Ponomaryov) Date: Tue, 2 Dec 2014 17:44:12 +0200 Subject: [openstack-dev] [Manila] Manila project use-cases In-Reply-To: <1B324A03-7768-4603-9544-01EF53CA7BBC@koderer.com> References: <1B324A03-7768-4603-9544-01EF53CA7BBC@koderer.com> Message-ID: Hello Marc, Here, I tried to cover mentioned use cases with "implemented or not" notes: 1) Implemented, but details of implementation are different for different share drivers. 2) Not clear for me. 
If you mean the possibility of mounting one share on any number of VMs, then yes. 3) Nova is used in only one case - the Generic Driver, which uses Cinder volumes. So it can be said that the Manila interface does allow a "flat" network; a share driver just needs to implement it. I will assume you mean use of the generic driver and the possibility of mounting shares on machines other than Nova VMs. In that case the network architecture must allow the connection in general; if it does, there should be no problem mounting on any machine - only access-allow operations need to be performed. 4) Access can be shared, but it is not as flexible as one might want. The owner of a share can grant access to all, and if there is network connectivity between the user and the share host, the user will be able to mount once access is granted. 5) Manila cannot remove a particular "mount" of a share; it can only revoke the access that makes mounting possible in the first place. So this looks like it is not implemented. 6) Implemented. 7) Not implemented yet. 8) No "cloning", but we have a snapshot approach, as for volumes in Cinder. Regards, Valeriy Ponomaryov Mirantis On Tue, Dec 2, 2014 at 4:22 PM, Marc Koderer wrote: > Hello Manila Team, > > We identified use cases for Manila during an internal workshop > with our operators. I would like to share them with you and > update the wiki [1] since it seems to be outdated. > > Before that I would like to gather feedback, and you might help me > with identifying things that aren't implemented yet. > > Our list: > > 1.) Create a share and use it in a tenant > Initial creation of a shared storage volume and assignment of it to several > VMs > > 2.) Assign a preexisting share to a VM with Manila > Import an existing share with data and assign it to several > VMs, in case of migrating an existing production service to OpenStack. > > 3.)
External consumption of a share > Accommodate and provide mechanisms for last-mile consumption of > shares by > consumers of the service that aren't mediated by Nova. > > 4.) Cross Tenant sharing > Coordinate shares across tenants > > 5.) Detach a share and don't destroy data (deactivate) > Share is flagged as inactive and data are not destroyed for later > usage or in case of legal requirements. > > 6.) Unassign and delete data of a share > Destroy entire share with all data and free space for further usage. > > 7.) Resize Share > Resize existing and assigned share on the fly. > > 8.) Copy existing share > Copy existing share between different storage technologies > > Regards > Marc > Deutsche Telekom > > [1]: https://wiki.openstack.org/wiki/Manila/usecases -- Kind Regards Valeriy Ponomaryov www.mirantis.com vponomaryov at mirantis.com From davanum at gmail.com Tue Dec 2 15:46:20 2014 From: davanum at gmail.com (Davanum Srinivas) Date: Tue, 2 Dec 2014 10:46:20 -0500 Subject: [openstack-dev] oslo.vmware 0.8.0 released Message-ID: The Oslo team is pleased to announce the release of oslo.vmware 0.8.0.
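The headline change in the log that follows — "Switch to use requests/urllib3 and enable cacert validation" — is, at bottom, the difference between a TLS setup that verifies the peer and one that does not. A stdlib-only sketch of that distinction (illustrative; the actual oslo.vmware code drives this through requests/urllib3):

```python
import ssl

# What cacert validation buys: verify the server certificate against a
# trusted CA bundle and check that the hostname matches it.
verifying = ssl.create_default_context()  # uses the system CA store
assert verifying.verify_mode == ssl.CERT_REQUIRED
assert verifying.check_hostname is True

# What the old behaviour amounted to: accept any certificate at all.
# (_create_unverified_context is a private helper, used here only to
# make the contrast concrete.)
insecure = ssl._create_unverified_context()
assert insecure.verify_mode == ssl.CERT_NONE
```

In requests terms, this is the difference between `verify=<ca-bundle>` and `verify=False` when talking to the vSphere endpoint.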
For more details, please see the git log history below and https://launchpad.net/oslo.vmware/+milestone/0.8.0 Please report issues through launchpad: https://launchpad.net/oslo.vmware $ git log --no-color --oneline --no-merges 0.7.0..0.8.0 969bfba Switch to use requests/urllib3 and enable cacert validation 5b9408f Updated from global requirements 9d9bf2f Updated from global requirements 1ebbc4d Enable support for python 3.x 4dc0ded Updated from global requirements 589ba43 Activate pep8 check that _ is imported $ git diff --stat --no-color 0.7.0..0.8.0 | egrep -v '(/tests/|^ doc)' oslo/vmware/api.py | 2 +- oslo/vmware/exceptions.py | 30 ++++-- oslo/vmware/image_transfer.py | 3 +- oslo/vmware/objects/datastore.py | 2 +- oslo/vmware/pbm.py | 4 +- oslo/vmware/rw_handles.py | 215 ++++++++++++++++++++------------------- oslo/vmware/service.py | 19 ++-- oslo/vmware/vim_util.py | 4 +- requirements-py3.txt | 24 +++++ requirements.txt | 1 + test-requirements.txt | 4 +- tests/objects/test_datastore.py | 2 +- tests/test_api.py | 7 +- tests/test_image_transfer.py | 3 +- tests/test_pbm.py | 8 +- tests/test_rw_handles.py | 35 +++---- tests/test_service.py | 33 ++++-- tox.ini | 12 ++- 18 files changed, 240 insertions(+), 168 deletions(-) $ git diff -U0 --no-color 0.7.0..0.8.0 *requirements*.txt | sed -e 's/^/ /g' diff --git a/requirements-py3.txt b/requirements-py3.txt new file mode 100644 index 0000000..b14f525 --- /dev/null +++ b/requirements-py3.txt @@ -0,0 +1,24 @@ +# The order of packages is significant, because pip processes them in the order +# of appearance. Changing the order has an impact on the overall integration +# process, which may cause wedges in the gate later. 
+ +stevedore>=1.1.0 # Apache-2.0 +netaddr>=0.7.12 + +# for timeutils +iso8601>=0.1.9 + +# for jsonutils +six>=1.7.0 + +oslo.i18n>=1.0.0 # Apache-2.0 +oslo.utils>=1.0.0 # Apache-2.0 +Babel>=1.3 + +# for the routing notifier +PyYAML>=3.1.0 + +suds-jurko>=0.6 +eventlet>=0.15.2 +requests>=2.2.0,!=2.4.0 +urllib3>=1.7.1 diff --git a/requirements.txt b/requirements.txt index c019874..6939fd3 100644 --- a/requirements.txt +++ b/requirements.txt @@ -23,0 +24 @@ requests>=2.2.0,!=2.4.0 +urllib3>=1.7.1 diff --git a/test-requirements.txt b/test-requirements.txt index 61fd4eb..cfaa103 100644 --- a/test-requirements.txt +++ b/test-requirements.txt @@ -7 +7 @@ hacking>=0.9.2,<0.10 -pylint==0.25.2 +pylint>=1.3.0 # GNU GPL v2 @@ -16 +16 @@ testscenarios>=0.4 -testtools>=0.9.36 +testtools>=0.9.36,!=1.2.0 -- Davanum Srinivas :: https://twitter.com/dims From mestery at mestery.com Tue Dec 2 15:59:13 2014 From: mestery at mestery.com (Kyle Mestery) Date: Tue, 2 Dec 2014 09:59:13 -0600 Subject: [openstack-dev] [neutron] Changes to the core team Message-ID: Now that we're in the thick of working hard on Kilo deliverables, I'd like to make some changes to the neutron core team. Reviews are the most important part of being a core reviewer, so we need to ensure cores are doing reviews. The stats for the 180 day period [1] indicate some changes are needed for cores who are no longer reviewing. First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from neutron-core. Bob and Nachi have been core members for a while now. They have contributed to Neutron over the years in reviews, code and leading sub-teams. I'd like to thank them for all that they have done over the years. I'd also like to propose that should they start reviewing more going forward the core team looks to fast track them back into neutron-core. But for now, their review stats place them below the rest of the team for 180 days. 
As part of the changes, I'd also like to propose two new members to neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have been very active in reviews, meetings, and code for a while now. Henry led the DB team which fixed Neutron DB migrations during Juno. Kevin has been actively working across all of Neutron; he's done some great work on security fixes and stability fixes in particular. Their comments in reviews are insightful and they have helped to onboard new reviewers and taken the time to work with people on their patches. Existing neutron cores, please vote +1/-1 for the addition of Henry and Kevin to the core team. Thanks! Kyle [1] http://stackalytics.com/report/contribution/neutron-group/180 From jsbryant at electronicjungle.net Tue Dec 2 16:11:23 2014 From: jsbryant at electronicjungle.net (Jay S. Bryant) Date: Tue, 02 Dec 2014 10:11:23 -0600 Subject: [openstack-dev] [stable] New config options, no default change In-Reply-To: References: <20141111103053.GD7366@redhat.com> <5461FC66.6060103@openstack.org> <5468F85B.10901@electronicjungle.net> <54748DC0.50400@redhat.com> <547CF9A1.4070009@electronicjungle.net> Message-ID: <547DE4AB.60109@electronicjungle.net> On 12/02/2014 02:31 AM, Alan Pevec wrote: >>>> What is the text that should be included in the commit messages to make sure that it is picked up for release notes? >>> I'm not sure anyone tracks commit messages to create release notes. > Let's use the existing DocImpact tag; I'll add a check for this in the > release scripts. > But I prefer if you could directly include the proposed text in the > draft release notes (link below) I like the idea of using the DocImpact tag to be consistent with what is done in the master branch. Thanks for the link to the ReleaseNotes wiki page. >>> better way to handle this is to create a draft, post it in review >>> comments, and copy to release notes draft right before/after pushing the >>> patch into gate.
>> Forgive me, I think my question is more basic, then. Where are the release >> notes for a stable branch located to make such changes? > https://wiki.openstack.org/wiki/ReleaseNotes/2014.2.1#Known_Issues_and_Limitations > > > Cheers, > Alan From rybrown at redhat.com Tue Dec 2 16:13:07 2014 From: rybrown at redhat.com (Ryan Brown) Date: Tue, 02 Dec 2014 11:13:07 -0500 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: Message-ID: <547DE513.1080203@redhat.com> On 12/02/2014 09:55 AM, Igor Kalnitsky wrote: > Hi, Sebastian, > > Thank you for raising this topic again. > > [snip] > > Personally, I'd like to use Flask instead of Pecan, because the first one > is a more production-ready tool and I like its design. But I believe > this should be resolved by voting. > > Thanks, > Igor > > On Tue, Dec 2, 2014 at 4:19 PM, Sebastian Kalinowski > wrote: >> Hi all, >> >> [snip explanation+history] >> >> Best, >> Sebastian Given that Pecan is used for other OpenStack projects and has plenty of built-in functionality (REST support, sessions, etc.) I'd prefer it for a number of reasons: 1) Wouldn't have to pull in plugins for standard (in Pecan) things 2) Pecan is built for high traffic, where Flask is aimed at much smaller projects 3) Already used by other OpenStack projects, so common patterns can be reused as oslo libs Of course, the Flask community seems larger (though the average Flask project seems pretty small). I'm not sure what determines "production readiness", but it seems to me like Fuel developers fall more in Pecan's target audience than in Flask's. My $0.02, Ryan -- Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
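Both frameworks under discussion above are WSGI frameworks, so the choice is largely about what each layers on top of the same callable interface. A minimal hand-rolled WSGI handler, standard library only (the '/nodes' endpoint is invented purely for illustration), shows the plumbing either one wraps:

```python
# Minimal WSGI application using only the standard library; both Pecan
# and Flask ultimately hand the server a callable with this signature.
import json
from wsgiref.util import setup_testing_defaults


def application(environ, start_response):
    # Route on the request path; '/nodes' is an invented example endpoint.
    if environ.get('PATH_INFO') == '/nodes':
        status, body = '200 OK', json.dumps({'nodes': []}).encode('utf-8')
    else:
        status, body = '404 Not Found', b'{"error": "not found"}'
    start_response(status, [('Content-Type', 'application/json'),
                            ('Content-Length', str(len(body)))])
    return [body]


# Exercise the app in-process, the way a framework's test client would.
environ = {'PATH_INFO': '/nodes'}
setup_testing_defaults(environ)
captured = {}


def start_response(status, headers):
    captured['status'] = status
    captured['headers'] = dict(headers)


response_body = b''.join(application(environ, start_response))
```

Pecan's controllers and Flask's route decorators both reduce to a callable like this; the disagreement above is really about how much of the routing/REST layer comes built in versus pulled in as extensions.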
From mark at mcclain.xyz Tue Dec 2 16:14:54 2014 From: mark at mcclain.xyz (Mark McClain) Date: Tue, 2 Dec 2014 11:14:54 -0500 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: <8EAC7F6F-0C94-4614-A786-12235B929310@mcclain.xyz> > On Dec 2, 2014, at 10:59 AM, Kyle Mestery wrote: > > First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from > neutron-core. Bob and Nachi have been core members for a while now. > They have contributed to Neutron over the years in reviews, code and > leading sub-teams. I'd like to thank them for all that they have done > over the years. I'd also like to propose that should they start > reviewing more going forward the core team looks to fast track them > back into neutron-core. But for now, their review stats place them > below the rest of the team for 180 days. Thanks to Bob and Nachi for all of their hard work during the early cycles of Quantum/Neutron. > > As part of the changes, I'd also like to propose two new members to > neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have > been very active in reviews, meetings, and code for a while now. Henry > led the DB team which fixed Neutron DB migrations during Juno. Kevin > has been actively working across all of Neutron, he's done some great > work on security fixes and stability fixes in particular. Their > comments in reviews are insightful and they have helped to onboard new > reviewers and taken the time to work with people on their patches. > > Existing neutron cores, please vote +1/-1 for the addition of Henry > and Kevin to the core team. > +1 to Henry and Kevin. They've both been great contributors the last few cycles and would make great cores. mark From jsbryant at electronicjungle.net Tue Dec 2 16:15:15 2014 From: jsbryant at electronicjungle.net (Jay S.
Bryant) Date: Tue, 02 Dec 2014 10:15:15 -0600 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: References: Message-ID: <547DE593.9090709@electronicjungle.net> On 12/02/2014 07:22 AM, Alan Pevec wrote: > Hi all, > > here are the exception proposals I have collected when preparing for the > 2014.2.1 release; > stable-maint members please have a look! > > > General: cap Oslo and client library versions - sync from > openstack/requirements stable/juno, would be good to include in the > release. > https://review.openstack.org/#/q/status:open+branch:stable/juno+topic:openstack/requirements,n,z > > Ceilometer (all proposed by Ceilo PTL) > https://review.openstack.org/138315 > https://review.openstack.org/138317 > https://review.openstack.org/138320 > https://review.openstack.org/138321 > https://review.openstack.org/138322 > > Cinder > https://review.openstack.org/137537 - small change and limited to the > VMWare driver +1 I think this is fine to make an exception for.
> > Glance > https://review.openstack.org/137704 - glance_store is backward > compatible, but not sure about forcing version bump on stable > https://review.openstack.org/137862 - Disable osprofiler by default to > prevent upgrade issues, disabled by default in other services > > Horizon > standing-after-freeze translation update, coming on Dec 3 > https://review.openstack.org/138018 - visible issue, no translation > string changes > https://review.openstack.org/138313 - low risk patch for a highly > problematic issue > > Neutron > https://review.openstack.org/136294 - default SNAT, see review for > details, I cannot distil 1liner :) > https://review.openstack.org/136275 - self-contained to the vendor > code, extensively tested in several deployments > > Nova > https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/juno+topic:1386236/juno,n,z > - soaked more than a week in master, makes numa actually work in Juno > > Sahara > https://review.openstack.org/135549 - fix for auto security groups, > there were some concerns, see review for details > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Tue Dec 2 16:29:43 2014 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 02 Dec 2014 10:29:43 -0600 Subject: [openstack-dev] oslo.concurrency 0.3.0 released Message-ID: <547DE8F7.7090206@nemebean.com> The Oslo team is pleased to announce the release of oslo.concurrency 0.3.0. This release includes a number of fixes for problems found during the initial adoptions of the library, as well as some functionality improvements. 
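One of the changes listed below is an external lock fixture. The interprocess file-lock pattern behind lockutils' external locks can be sketched with the standard library alone; this is a POSIX-only illustration of the general technique, not the library's actual code:

```python
# Sketch of an external (cross-process) lock built on an advisory
# fcntl.flock over a well-known file path; oslo.concurrency layers
# configurable lock directories and test fixtures on the same idea.
import fcntl
import os
import tempfile


class ExternalLock(object):
    """Context manager holding an exclusive advisory lock on a file."""

    def __init__(self, name, lock_dir=None):
        lock_dir = lock_dir or tempfile.gettempdir()
        self.path = os.path.join(lock_dir, '%s.lock' % name)
        self._fd = None

    def __enter__(self):
        self._fd = open(self.path, 'w')
        fcntl.flock(self._fd, fcntl.LOCK_EX)  # blocks until acquired
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        fcntl.flock(self._fd, fcntl.LOCK_UN)
        self._fd.close()


with ExternalLock('example-resource') as lock:
    lock_path = lock.path  # critical section: the lock is held here
```

Because the lock is advisory and keyed on a shared path, any cooperating process that opens the same file serializes against it, which is what makes the pattern suitable for guarding resources across service processes.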
For more details, please see the git log history below and https://launchpad.net/oslo.concurrency/+milestone/0.3.0 Please report issues through launchpad: https://launchpad.net/oslo.concurrency openstack/oslo.concurrency 0.2.0..HEAD 54c84da Add external lock fixture 19f07c6 Add a TODO for retrying pull request #20 46c836e Allow the lock delay to be provided 3bda65c Allow for providing a customized semaphore container 656f908 Move locale files to proper place faa30f8 Flesh out the README bca4a0d Move out of the oslo namespace package 58de317 Improve testing in py3 environment fa52a63 Only modify autoindex.rst if it exists 63e618b Imported Translations from Transifex d5ea62c lockutils-wrapper cleanup 78ba143 Don't use variables that aren't initialized diffstat (except docs and test files): .gitignore | 1 + .testr.conf | 2 +- README.rst | 4 +- .../locale/en_GB/LC_MESSAGES/oslo.concurrency.po | 16 +- oslo.concurrency/locale/oslo.concurrency.pot | 16 +- oslo/concurrency/__init__.py | 29 ++ oslo/concurrency/_i18n.py | 32 -- oslo/concurrency/fixture/__init__.py | 13 + oslo/concurrency/fixture/lockutils.py | 51 -- oslo/concurrency/lockutils.py | 376 -------------- oslo/concurrency/openstack/__init__.py | 0 oslo/concurrency/openstack/common/__init__.py | 0 oslo/concurrency/openstack/common/fileutils.py | 146 ------ oslo/concurrency/opts.py | 45 -- oslo/concurrency/processutils.py | 340 ------------ oslo_concurrency/__init__.py | 0 oslo_concurrency/_i18n.py | 32 ++ oslo_concurrency/fixture/__init__.py | 0 oslo_concurrency/fixture/lockutils.py | 76 +++ oslo_concurrency/lockutils.py | 502 ++++++++++++++++++ oslo_concurrency/openstack/__init__.py | 0 oslo_concurrency/openstack/common/__init__.py | 0 oslo_concurrency/openstack/common/fileutils.py | 146 ++++++ oslo_concurrency/opts.py | 45 ++ oslo_concurrency/processutils.py | 340 ++++++++++++ requirements-py3.txt | 1 + requirements.txt | 1 + setup.cfg | 9 +- tests/test_lockutils.py | 575 ++++++++++++++++++++ 
tests/test_processutils.py | 519 +++++++++++++++++++ tests/test_warning.py | 29 ++ tests/unit/__init__.py | 0 tests/unit/test_lockutils.py | 543 ------------------- tests/unit/test_lockutils_eventlet.py | 59 --- tests/unit/test_processutils.py | 518 ------------------ tox.ini | 8 +- 42 files changed, 3515 insertions(+), 2135 deletions(-) Requirements updates: diff --git a/requirements-py3.txt b/requirements-py3.txt index b1a8722..a27b434 100644 --- a/requirements-py3.txt +++ b/requirements-py3.txt @@ -13,0 +14 @@ six>=1.7.0 +retrying>=1.2.2,!=1.3.0 # Apache-2.0 diff --git a/requirements.txt b/requirements.txt index b1a8722..a27b434 100644 --- a/requirements.txt +++ b/requirements.txt @@ -13,0 +14 @@ six>=1.7.0 +retrying>=1.2.2,!=1.3.0 # Apache-2.0 From tqtran at us.ibm.com Tue Dec 2 16:30:58 2014 From: tqtran at us.ibm.com (Thai Q Tran) Date: Tue, 2 Dec 2014 08:30:58 -0800 Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon In-Reply-To: References: Message-ID: I like David's approach, but having two modals (one for the form, one for the confirmation) on top of each other is a bit weird and would require constant checks for input. We already have three ways to close the dialog today: via the cancel button, X button, and ESC key. It's more important to me that I don't lose work by accidentally clicking outside. So from this perspective, I think that having a static behavior is the way to go. Regardless of what approach we pick, it's an improvement over what we have today. From: Timur Sufiev To: "OpenStack Development Mailing List (not for usage questions)" Date: 12/02/2014 04:22 AM Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon Hello, Horizoneers and UX-ers! The default behavior of modals in Horizon (defined in turn by Bootstrap defaults) regarding their closing is to simply close the modal once the user clicks somewhere outside of it (on the backdrop element below and around the modal).
This is not very convenient for the modal forms containing a lot of input - when it is closed without a warning all the data the user has already provided is lost. Keeping this in mind, I've made a patch [1] changing the default Bootstrap 'modal_backdrop' parameter to 'static', which means that forms are not closed once the user clicks on a backdrop, while it's still possible to close them by pressing 'Esc' or clicking on the 'X' link at the top right border of the form. The patch [1] also allows customizing this behavior (between 'true'-current one/'false' - no backdrop element/'static') on a per-form basis. What I didn't know at the moment I was uploading my patch is that David Lyle had been working on a similar solution [2] some time ago. It's a bit more elaborate than mine: if the user has already filled some inputs in the form, then a confirmation dialog is shown, otherwise the form is silently dismissed as it happens now. The whole point of writing about this in the ML is to gather opinions on which approach is better: * stick to the current behavior; * change the default behavior to 'static'; * use David's solution with the confirmation dialog (once it's rebased to the current codebase). What do you think? [1] https://review.openstack.org/#/c/113206/ [2] https://review.openstack.org/#/c/23037/ P.S. I remember that I promised to write this email a week ago, but better late than never :). -- Timur Sufiev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From doug at doughellmann.com Tue Dec 2 16:33:27 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 2 Dec 2014 11:33:27 -0500 Subject: [openstack-dev] cliff 1.9.0 released Message-ID: <0E9D888C-4BB8-4924-9B85-45DB52C4CD34@doughellmann.com> The Oslo team is pleased to announce release 1.9.0 of cliff. This is primarily a bug-fix release, but does include a requirements update. For more details, please see the git log history below and https://launchpad.net/python-cliff/+milestone/1.9.0 Please report issues through launchpad: https://bugs.launchpad.net/python-cliff ---------------------------------------- Changes in openstack/cliff 1.8.0..1.9.0 f6e9bbd print the real error cmd argument a5fd24d Updated from global requirements diffstat (except docs and test files): cliff/commandmanager.py | 3 ++- requirements.txt | 2 +- 3 files changed, 5 insertions(+), 2 deletions(-) Requirements updates: diff --git a/requirements.txt b/requirements.txt index 4d3ccc9..bf06e82 100644 --- a/requirements.txt +++ b/requirements.txt @@ -10 +10 @@ six>=1.7.0 -stevedore>=0.14 +stevedore>=1.1.0 # Apache-2.0 From sorlando at nicira.com Tue Dec 2 16:34:04 2014 From: sorlando at nicira.com (Salvatore Orlando) Date: Tue, 2 Dec 2014 17:34:04 +0100 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: <8EAC7F6F-0C94-4614-A786-12235B929310@mcclain.xyz> References: <8EAC7F6F-0C94-4614-A786-12235B929310@mcclain.xyz> Message-ID: I am happy with the proposed change in the core team. Many thanks to the outgoing members and a warm welcome to hell (err core team) to Henry and Kevin! Salvatore On 02/12/2014 at 17:20, "Mark McClain" wrote: > > > On Dec 2, 2014, at 10:59 AM, Kyle Mestery wrote: > > > > First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from > > neutron-core. Bob and Nachi have been core members for a while now.
> > They have contributed to Neutron over the years in reviews, code and > > leading sub-teams. I'd like to thank them for all that they have done > > over the years. I'd also like to propose that should they start > > reviewing more going forward the core team looks to fast track them > > back into neutron-core. But for now, their review stats place them > > below the rest of the team for 180 days. > > Thanks to Bob and Nachi for all of their hard work during the early cycles > of Quantum/Neutron. > > > > > As part of the changes, I'd also like to propose two new members to > > neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have > > been very active in reviews, meetings, and code for a while now. Henry > > led the DB team which fixed Neutron DB migrations during Juno. Kevin > > has been actively working across all of Neutron, he's done some great > > work on security fixes and stability fixes in particular. Their > > comments in reviews are insightful and they have helped to onboard new > > reviewers and taken the time to work with people on their patches. > > > > Existing neutron cores, please vote +1/-1 for the addition of Henry > > and Kevin to the core team. > > > > +1 to Henry and Kevin. They've both been great contributors the last few > cycles and would make great cores. > > mark > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dannchoi at cisco.com Tue Dec 2 16:34:07 2014 From: dannchoi at cisco.com (Danny Choi (dannchoi)) Date: Tue, 2 Dec 2014 16:34:07 +0000 Subject: [openstack-dev] [qa] Should it be allowed to attach 2 interfaces from the same subnet to a VM? Message-ID: Hi Andrea, Though both interfaces come up, only one will respond to the ping from the neutron router.
When I disable it, then the second one will respond to ping. So it looks like only one interface is useful at a time. My question is: is there any useful case for this, i.e., why would you do this? Thanks, Danny Date: Tue, 2 Dec 2014 10:44:57 +0000 From: Andrea Frittoli > To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] [qa] Should it be allowed to attach 2 interfaces from the same subnet to a VM? Message-ID: > Content-Type: text/plain; charset=UTF-8 Hello Danny, I think so. Any special concern with a VM using more than one port on a subnet? andrea On 2 December 2014 at 02:04, Danny Choi (dannchoi) > wrote: Hi, When I attach 2 interfaces from the same subnet to a VM, there is no error returned and both interfaces come up. lab at tme211:/opt/stack/logs$ nova interface-attach --net-id e38dba4a-74ed-4312-ba21-2a04b5c5a5b5 cirros-1 lab at tme211:/opt/stack/logs$ nova list +--------------------------------------+----------+--------+------------+-------------+-------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+----------+--------+------------+-------------+-------------------+ | 9d88d0b5-2453-4657-8058-987980ec7744 | cirros-1 | ACTIVE | - | Running | private=10.0.0.10 | +--------------------------------------+----------+--------+------------+-------------+-------------------+ lab at tme211:/opt/stack/logs$ nova interface-attach --net-id e38dba4a-74ed-4312-ba21-2a04b5c5a5b5 cirros-1 lab at tme211:/opt/stack/logs$ nova list +--------------------------------------+----------+--------+------------+-------------+------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+----------+--------+------------+-------------+------------------------------+ | 9d88d0b5-2453-4657-8058-987980ec7744 | cirros-1 | ACTIVE | - | Running | private=10.0.0.10, 10.0.0.11 |
+--------------------------------------+----------+--------+------------+-------------+------------------------------+ $ ifconfig eth0 Link encap:Ethernet HWaddr FA:16:3E:92:2D:2B inet addr:10.0.0.10 Bcast:10.0.0.255 Mask:255.255.255.0 inet6 addr: fe80::f816:3eff:fe92:2d2b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:514 errors:0 dropped:0 overruns:0 frame:0 TX packets:307 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:48342 (47.2 KiB) TX bytes:41750 (40.7 KiB) eth1 Link encap:Ethernet HWaddr FA:16:3E:EF:55:BC inet addr:10.0.0.11 Bcast:10.0.0.255 Mask:255.255.255.0 inet6 addr: fe80::f816:3eff:feef:55bc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:49 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3556 (3.4 KiB) TX bytes:1120 (1.0 KiB) Should this operation be allowed? Thanks, Danny _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From armamig at gmail.com Tue Dec 2 16:35:17 2014 From: armamig at gmail.com (Armando M.) Date: Tue, 2 Dec 2014 08:35:17 -0800 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: <8EAC7F6F-0C94-4614-A786-12235B929310@mcclain.xyz> References: <8EAC7F6F-0C94-4614-A786-12235B929310@mcclain.xyz> Message-ID: Congrats to Henry and Kevin, +1! -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Dec 2 16:37:19 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 2 Dec 2014 11:37:19 -0500 Subject: [openstack-dev] oslo.config 1.5.0 released Message-ID: <17D67BAB-68FC-48DF-8C90-8C6B3ACB2984@doughellmann.com> The Oslo team is pleased to announce release 1.5.0 of oslo.config. 
This is primarily a bug-fix release, but does include requirements changes. For more details, please see the git log history below and http://launchpad.net/oslo/+milestone/1.5.0 Please report issues through launchpad: http://bugs.launchpad.net/oslo ---------------------------------------- Changes in openstack/oslo.config 1.4.0..1.5.0 7ab3326 Updated from global requirements c81dc30 Updated from global requirements 4a15ea3 Fix class constant indentation 5d5faeb Updated from global requirements d6b0ee6 Activate pep8 check that _ is imported 73635ef Updated from global requirements cf94a51 Updated from global requirements e140a1d Add pbr to installation requirements e906e74 Updated from global requirements 0a7abd0 Add some guidance for group names e0ad7fa delay formatting debug log message f7c54d9 Check config default value is correct type 41770ad Report permission denied when parsing config 5ada833 Fix docs example using generator config files e82f6bb Updated from global requirements fa458ee do not use colons in section titles 2af57e5 Stop using intersphinx a736da3 Fixed typo in docstring for _get_config_dirs ba6486a Update contributing instructions 70fc459 Add missing newline to stderr output when argument value is wrong diffstat (except docs and test files): CONTRIBUTING.rst | 2 +- oslo/config/cfg.py | 64 ++++++++++++++++++++++++++++++++++++++++----- oslo/config/generator.py | 2 +- oslo/config/types.py | 45 +++++++++++++++++++++++++------ requirements.txt | 3 ++- test-requirements.txt | 8 +++--- tests/test_cfg.py | 58 ++++++++++++++++++++++++---------------- tests/test_generator.py | 3 ++- tox.ini | 1 - 14 files changed, 162 insertions(+), 52 deletions(-) Requirements updates: diff --git a/requirements.txt b/requirements.txt index bde4919..fb3695c 100644 --- a/requirements.txt +++ b/requirements.txt @@ -4,0 +5 @@ +pbr>=0.6,!=0.7,<1.0 @@ -8 +9 @@ six>=1.7.0 -stevedore>=0.14 +stevedore>=1.1.0 # Apache-2.0 diff --git a/test-requirements.txt b/test-requirements.txt 
index 2a9b720..e23d9ef 100644 --- a/test-requirements.txt +++ b/test-requirements.txt @@ -12,2 +12,2 @@ testscenarios>=0.4 -testtools>=0.9.34 -oslotest>=1.1.0.0a1 +testtools>=0.9.36,!=1.2.0 +oslotest>=1.2.0 # Apache-2.0 @@ -21,2 +21,2 @@ coverage>=3.6 -sphinx>=1.1.2,!=1.2.0,<1.3 -oslosphinx>=2.2.0.0a2 +sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 +oslosphinx>=2.2.0 # Apache-2.0 From doug at doughellmann.com Tue Dec 2 16:42:09 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 2 Dec 2014 11:42:09 -0500 Subject: [openstack-dev] oslosphinx 2.3.0 released Message-ID: The Oslo team is pleased to announce release 2.3.0 of oslosphinx. This release includes bug fixes, new features, and updated requirements. For more details, please see the git log history below and http://launchpad.net/oslosphinx/+milestone/2.3.0 Please report issues through launchpad: http://bugs.launchpad.net/oslosphinx ---------------------------------------- Changes in openstack/oslosphinx 2.2.0..2.3.0 c7e307e provide visual separation in sidebar 6ce23c5 Updated from global requirements 64fe08f Add pbr to installation requirements cf85e06 Report documentation build warnings as errors 446a8dc Add initial cut for documentation 8464870 Remove empty file f6fee8b warn against sorting requirements diffstat (except docs and test files): CONTRIBUTING.rst | 17 +++++++ oslosphinx/theme/openstack/static/tweaks.css | 13 ++++- requirements.txt | 5 ++ setup.cfg | 3 ++ test-requirements.txt | 7 +++ tox.ini | 7 +-- 12 files changed, 166 insertions(+), 4 deletions(-) Requirements updates: diff --git a/requirements.txt b/requirements.txt index e69de29..c81bd8e 100644 --- a/requirements.txt +++ b/requirements.txt @@ -0,0 +1,5 @@ +# The order of packages is significant, because pip processes them in the order +# of appearance. Changing the order has an impact on the overall integration +# process, which may cause wedges in the gate later. 
+ +pbr>=0.6,!=0.7,<1.0 diff --git a/test-requirements.txt b/test-requirements.txt index c0de34d..b859e83 100644 --- a/test-requirements.txt +++ b/test-requirements.txt @@ -0,0 +1,4 @@ +# The order of packages is significant, because pip processes them in the order +# of appearance. Changing the order has an impact on the overall integration +# process, which may cause wedges in the gate later. + @@ -1,0 +6,3 @@ hacking>=0.9.2,<0.10 + +# this is required for the docs build jobs +sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 From kukura at noironetworks.com Tue Dec 2 16:43:02 2014 From: kukura at noironetworks.com (Robert Kukura) Date: Tue, 02 Dec 2014 11:43:02 -0500 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: <547DEC16.9080700@noironetworks.com> Assuming I still have a vote, I vote +1 for adding Henry and Kevin, both of whom I am confident will do a great job as core reviewer. I'd ask people to consider voting against dropping me from core (and I vote -1 on that if I get a vote). During Juno, my plan was to balance my time between neutron core work and implementing GBP as part of neutron. Unfortunately, that did not go as planned, and the Juno GBP work has continued outside neutron, requiring most of my attention, and leaving little time for neutron work. I've felt that it would be irresponsible to do drive-by reviews in neutron while this is the case, so had pretty much stopped doing neutron reviews until the point when I could devote enough attention to follow through on the patches that I review. But I've continued to participate in other ways, including co-leading the ML2 sub team. The Juno GBP work is just about complete, and I have agreement from my management to make neutron work my top priority for the remainder of Kilo. So, as core or not, I expect to be ramping my neutron reviewing back up very quickly, and plan to be in SLC next week for the mid cycle meetup. 
If you agree that my contributions as a core reviewer have been worthwhile over the long term, and trust that I will do as I say and make core reviews my top priority for the remainder of Kilo, I ask that you vote -1 on dropping me. If I am not dropped and my stats don't improve significantly in the next 30 days, I'll happily resign from core. Regarding dropping Nachi, I will pass, as I've not been paying enough attention to the reviews to judge his recent level of contribution. -Bob On 12/2/14 10:59 AM, Kyle Mestery wrote: > Now that we're in the thick of working hard on Kilo deliverables, I'd > like to make some changes to the neutron core team. Reviews are the > most important part of being a core reviewer, so we need to ensure > cores are doing reviews. The stats for the 180 day period [1] indicate > some changes are needed for cores who are no longer reviewing. > > First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from > neutron-core. Bob and Nachi have been core members for a while now. > They have contributed to Neutron over the years in reviews, code and > leading sub-teams. I'd like to thank them for all that they have done > over the years. I'd also like to propose that should they start > reviewing more going forward the core team looks to fast track them > back into neutron-core. But for now, their review stats place them > below the rest of the team for 180 days. > > As part of the changes, I'd also like to propose two new members to > neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have > been very active in reviews, meetings, and code for a while now. Henry > led the DB team which fixed Neutron DB migrations during Juno. Kevin > has been actively working across all of Neutron, he's done some great > work on security fixes and stability fixes in particular. Their > comments in reviews are insightful and they have helped to onboard new > reviewers and taken the time to work with people on their patches. > > Existing neutron cores, please vote +1/-1 for the addition of Henry > and Kevin to the core team. > > Thanks! > Kyle > > [1] http://stackalytics.com/report/contribution/neutron-group/180 > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
From riwinter at cisco.com Tue Dec 2 16:44:02 2014 From: riwinter at cisco.com (Richard Winters (riwinter)) Date: Tue, 2 Dec 2014 16:44:02 +0000 Subject: [openstack-dev] [tempest] tearDownClass usage in scenario tests In-Reply-To: References: Message-ID: I see - looks like the repo I'm based off of hasn't kept up with the latest. I'll have to look at the ScenarioTest. Thanks Rich On 12/2/14, 9:54 AM, "Andrea Frittoli" wrote: >Hello Rich, > >in the latest tempest we made two significant changes compared to the >version you're using. > >We dropped the use of official clients from scenario tests (and >OfficialClientTest has been replaced by ScenarioTest). >And we introduced resource_setup and resource_cleanup in the test base >class, which should be used instead of setUpClass and tearDownClass >(there's a hacking rule for that). > >While tearDownClass is not always invoked, resource_cleanup is always >invoked, and it has been implemented to avoid resource leaks. > >If you are using an older version of tempest you should be able to >override tearDownClass instead. > >andrea > > >On 2 December 2014 at 13:51, Richard Winters (riwinter) > wrote: >> I've noticed that in scenario tests only the OfficialClientTest in >> manager.py has a tearDownClass and was wondering if there is a reason >>for >> that? >> >> In my scenario tests I need to ensure a particular connection gets >>closed >> after the test runs. This connection is set up in setUpClass so it makes >> sense to me that it should also be closed in the tearDownClass. >> >> This is how I'm cleaning up now - but didn't know if there is a better >>way to >> do it. >> @classmethod >> def tearDownClass(cls): >> super(TestCSROneNet, cls).tearDownClass() >> if cls.nx_onep is not None: >> cls.nx_onep.disconnect() >> >> Thanks >> Rich >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
From doug at doughellmann.com Tue Dec 2 16:45:05 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 2 Dec 2014 11:45:05 -0500 Subject: [openstack-dev] oslotest 1.3.0 released Message-ID: <11D5F26E-06A3-4F12-8B08-E0F486059C5A@doughellmann.com> The Oslo team is pleased to announce release 1.3.0 of oslotest. This release is primarily to update the dependencies for the library. For more details, please see the git log history below and http://launchpad.net/oslotest/+milestone/1.3.0 Please report issues through launchpad: http://bugs.launchpad.net/oslotest ---------------------------------------- Changes in openstack/oslotest 1.2.0..1.3.0 2568413 Updated from global requirements 30925bd Updated from global requirements 47afc62 Updated from global requirements 230f8e3 Add pbr to installation requirements fa1f130 Clean up the docs for oslo_debug_helper diffstat (except docs and test files): requirements.txt | 3 ++- 2 files changed, 19 insertions(+), 6 deletions(-) Requirements updates: diff --git a/requirements.txt b/requirements.txt index 01d1947..125b54e 100644 --- a/requirements.txt +++ b/requirements.txt @@ -4,0 +5 @@ +pbr>=0.6,!=0.7,<1.0 @@ -11 +12 @@ testscenarios>=0.4 -testtools>=0.9.34 +testtools>=0.9.36,!=1.2.0 From openstack at nemebean.com Tue Dec 2 16:49:24 2014 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 02 Dec 2014 10:49:24 -0600 Subject:
[openstack-dev] [TripleO] [CI] Cinder/Ceph CI setup In-Reply-To: <547725BA.1050507@redhat.com> References: <5475B535.6080308@redhat.com> <547725BA.1050507@redhat.com> Message-ID: <547DED94.5000701@nemebean.com> On 11/27/2014 07:23 AM, Derek Higgins wrote: > On 27/11/14 10:21, Duncan Thomas wrote: >> I'd suggest starting by making it an extra job, so that it can be >> monitored for a while for stability without affecting what is there. > > We have to be careful here: adding an extra job for this is probably the > safest option, but TripleO CI resources are a constraint. For that reason > I would add it to the HA job (which is currently non-voting), and once > it's stable we should make it voting. The only problem is that the HA job has been non-voting for so long that I don't think anyone pays attention to it. That said, I don't have a better suggestion because it makes no sense to run a Cinder HA job in a non-HA CI run, so I guess until HA CI is fixed we're kind of stuck. So +1 to making this the default in HA jobs. > >> >> I'd be supportive of making it the default HA job in the longer term as >> long as the LVM code is still getting tested somewhere - LVM is still >> the reference implementation in cinder and after discussion there was >> strong resistance to changing that. > We are and would continue to use LVM for our non-HA jobs. If I > understand it correctly, the TripleO LVM support isn't HA, so continuing > to test it on our HA job doesn't achieve much. > >> >> I've no strong opinions on the node layout, I'll leave that to more >> knowledgeable people to discuss. >> >> Is the Ceph/TripleO code in a working state yet? Is there a guide to >> using it?
>> >> >> On 26 November 2014 at 13:10, Giulio Fidente > > wrote: >> >> hi there, >> >> while working on the TripleO cinder-ha spec meant to provide HA for >> Cinder via Ceph [1], we wondered how to (if at all) test this in CI, >> so we're looking for some feedback >> >> first of all, shall we make Cinder/Ceph the default for our >> (currently non-voting) HA job? >> (check-tripleo-ironic-overcloud-precise-ha) >> >> current implementation (under review) should permit the >> deployment of both the Ceph monitors and Ceph OSDs on either >> controllers, dedicated nodes, or to split them up so that only OSDs >> are on dedicated nodes >> >> what would be the best scenario for CI? >> >> * a single additional node hosting a Ceph OSD with the Ceph monitors >> deployed on all controllers (my preference is for this one) > > I would be happy with this so long as it didn't drastically increase the > time to run the HA job. > >> >> * a single additional node hosting a Ceph OSD and a Ceph monitor >> >> * no additional nodes, with controllers also serving as Ceph monitor >> and Ceph OSD >> >> more scenarios? comments? Thanks for helping >> >> 1.
>> https://blueprints.launchpad.net/tripleo/+spec/tripleo-kilo-cinder-ha >> >> -- >> Giulio Fidente >> GPG KEY: 08D733BA >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> -- >> Duncan Thomas >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sgordon at redhat.com Tue Dec 2 16:52:40 2014 From: sgordon at redhat.com (Steve Gordon) Date: Tue, 2 Dec 2014 11:52:40 -0500 (EST) Subject: [openstack-dev] [stable] New config options, no default change In-Reply-To: References: <20141111103053.GD7366@redhat.com> <5461FC66.6060103@openstack.org> <5468F85B.10901@electronicjungle.net> <54748DC0.50400@redhat.com> <547CF9A1.4070009@electronicjungle.net> Message-ID: <350487463.25449839.1417539160631.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Alan Pevec" > To: "Jay Bryant" , "OpenStack Development Mailing List (not for usage questions)" > > >>> What is the text that should be included in the commit messages to make > >>> sure that it is picked up for release notes? > >> I'm not sure anyone tracks commit messages to create release notes. > > Let's use the existing DocImpact tag, I'll add a check for this in the > release scripts.
> But I'd prefer it if you could directly include the proposed text in the > draft release notes (link below). Will the bugs created by this end up in the openstack-manuals project (which I don't think is the right place for them in this case) or has it been set up to create them somewhere else (or not at all) when the commits are against the stable branches? -Steve From Kevin.Fox at pnnl.gov Tue Dec 2 16:59:17 2014 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 2 Dec 2014 16:59:17 +0000 Subject: [openstack-dev] [Manila] Manila project use-cases In-Reply-To: References: <1B324A03-7768-4603-9544-01EF53CA7BBC@koderer.com>, Message-ID: <1A3C52DFCD06494D8528644858247BF017812477@EX10MBOX03.pnnl.gov> Does 2 mean something like: Have the new Manila share unionfs under the hood, with the existing storage mounted too, so you can switch all clients over to Manila, then migrate the data, then remove the old storage/unionfs live? Thanks, Kevin ________________________________ From: Valeriy Ponomaryov [vponomaryov at mirantis.com] Sent: Tuesday, December 02, 2014 7:44 AM To: OpenStack Development Mailing List (not for usage questions) Cc: Haub, Stefan; Lichtenstein, Thomas Subject: Re: [openstack-dev] [Manila] Manila project use-cases Hello Marc, Here, I tried to cover mentioned use cases with "implemented or not" notes: 1) Implemented, but details of implementation are different for different share drivers. 2) Not clear for me. If you mean the possibility to mount one share to any number of VMs, then yes. 3) Nova is used only in one case - the Generic Driver that uses Cinder volumes. So, it can be said that the Manila interface does allow the use of a "flat" network and a share driver just needs to have an implementation for it. I will assume you mean usage of the generic driver and the possibility to mount shares to machines other than Nova VMs. - In that case the network architecture should allow the connection in general.
If it is allowed, then there should not be any problems with mounting to any machine. Just access-allow operations need to be performed. 4) Access can be shared, but it is not as flexible as could be wanted. The owner of a share can grant access to all, and if there is network connectivity between the user and the share host, then the user will be able to mount once access is provided. 5) Manila cannot remove some "mount" of some share; it can remove access for the possibility to mount in general. So, it looks like this is not implemented. 6) Implemented. 7) Not implemented yet. 8) No "cloning", but we have a snapshot approach as for volumes in Cinder. Regards, Valeriy Ponomaryov Mirantis On Tue, Dec 2, 2014 at 4:22 PM, Marc Koderer > wrote: Hello Manila Team, We identified use cases for Manila during an internal workshop with our operators. I would like to share them with you and update the wiki [1] since it seems to be outdated. Before that I would like to gather feedback, and you might help me with identifying things that aren't implemented yet. Our list: 1.) Create a share and use it in a tenant Initial creation of a shared storage volume and assigning it to several VMs 2.) Assign a preexisting share to a VM with Manila Import an existing share with data and assign it to several VMs in case of migrating existing production services to OpenStack. 3.) External consumption of a share Accommodate and provide mechanisms for last-mile consumption of shares by consumers of the service that aren't mediated by Nova. 4.) Cross Tenant sharing Coordinate shares across tenants 5.) Detach a share and don't destroy data (deactivate) Share is flagged as inactive and data are not destroyed for later usage or in case of legal requirements. 6.) Unassign and delete data of a share Destroy entire share with all data and free space for further usage. 7.) Resize Share Resize existing and assigned share on the fly. 8.)
Copy existing share Copy existing share between different storage technologies Regards Marc Deutsche Telekom [1]: https://wiki.openstack.org/wiki/Manila/usecases _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Kind Regards Valeriy Ponomaryov www.mirantis.com vponomaryov at mirantis.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgordon at redhat.com Tue Dec 2 16:59:21 2014 From: sgordon at redhat.com (Steve Gordon) Date: Tue, 2 Dec 2014 11:59:21 -0500 (EST) Subject: [openstack-dev] [qa] Should it be allowed to attach 2 interfaces from the same subnet to a VM? In-Reply-To: References: Message-ID: <821487884.25457531.1417539561825.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Danny Choi (dannchoi)" > To: openstack-dev at lists.openstack.org > > Hi Andrea, > > Though both interfaces come up, only one will respond to the ping from the > neutron router. > When I disable it, then the second one will respond to ping. > So it looks like only one interface is useful at a time. > > My question is: is there any useful case for this, i.e., why would you do this? > > Thanks, > Danny The rationale is given in the spec (http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/nfv-multiple-if-1-net.html) as: """ NFV functions occasionally require multiple interfaces to be attached to a single network from the same VM, for reasons described below in the "use cases" section. When this is required, the VNF generally cannot be used under OpenStack. VNFs are often large, complex pieces of code, and may be supplied by third parties. For various reasons, it is not uncommon that it is necessary to feed traffic out of an interface and into another interface (when the VNF implements multiple functions and the functions cannot be chained internally) or to feed traffic from e.g.
the internet into multiple interfaces to run them through separate processing functions internally. The limitation can be seen as one of the VNF. Clearly, the VNF could be changed to put multiple addresses or functions on a single port (to fix the incoming traffic issue) or to connect functions internally (to fix the passthrough problem). The problem with this solution is that the timescale for getting such a fix is often prohibitive. VNFs are large, complex pieces of code, and often the supplier of the VNF is not the same organisation as that trying to use the VNF within OpenStack, necessitating a feature change request which may well not be possible within reasonable timescales. We propose changing the code within Nova to remove this limitation. """ -Steve From carl at ecbaldwin.net Tue Dec 2 17:05:52 2014 From: carl at ecbaldwin.net (Carl Baldwin) Date: Tue, 2 Dec 2014 10:05:52 -0700 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: +1 from me for all the changes. I appreciate the work from all four of these excellent contributors. I'm happy to welcome Henry and Kevin as new core reviewers. I also look forward to continuing to work with Nachi and Bob as important members of the community. Carl On Tue, Dec 2, 2014 at 8:59 AM, Kyle Mestery wrote: > Now that we're in the thick of working hard on Kilo deliverables, I'd > like to make some changes to the neutron core team. Reviews are the > most important part of being a core reviewer, so we need to ensure > cores are doing reviews. The stats for the 180 day period [1] indicate > some changes are needed for cores who are no longer reviewing. > > First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from > neutron-core. Bob and Nachi have been core members for a while now. > They have contributed to Neutron over the years in reviews, code and > leading sub-teams. I'd like to thank them for all that they have done > over the years.
I'd also like to propose that should they start > reviewing more going forward the core team looks to fast track them > back into neutron-core. But for now, their review stats place them > below the rest of the team for 180 days. > > As part of the changes, I'd also like to propose two new members to > neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have > been very active in reviews, meetings, and code for a while now. Henry > led the DB team which fixed Neutron DB migrations during Juno. Kevin > has been actively working across all of Neutron; he's done some great > work on security fixes and stability fixes in particular. Their > comments in reviews are insightful and they have helped to onboard new > reviewers and taken the time to work with people on their patches. > > Existing neutron cores, please vote +1/-1 for the addition of Henry > and Kevin to the core team. > > Thanks! > Kyle > > [1] http://stackalytics.com/report/contribution/neutron-group/180 > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From vponomaryov at mirantis.com Tue Dec 2 17:12:00 2014 From: vponomaryov at mirantis.com (Valeriy Ponomaryov) Date: Tue, 2 Dec 2014 19:12:00 +0200 Subject: [openstack-dev] [Manila] Manila project use-cases In-Reply-To: <1A3C52DFCD06494D8528644858247BF017812477@EX10MBOX03.pnnl.gov> References: <1B324A03-7768-4603-9544-01EF53CA7BBC@koderer.com> <1A3C52DFCD06494D8528644858247BF017812477@EX10MBOX03.pnnl.gov> Message-ID: Kevin, Just for clarification - there is no support of UnionFS in Manila for the moment. "Migration" of shares from one storage to another is not supported in Manila for the moment, even at the level of interfaces. All possible "movements" of data happen outside of Manila.
On Tue, Dec 2, 2014 at 6:59 PM, Fox, Kevin M wrote: > Does 2 mean something like: > Have the new Manila share unionfs under the hood, with the existing storage > mounted too, so you can switch all clients over to Manila, then migrate > the data, then remove the old storage/unionfs live? > > Thanks, > Kevin > ------------------------------ > *From:* Valeriy Ponomaryov [vponomaryov at mirantis.com] > *Sent:* Tuesday, December 02, 2014 7:44 AM > *To:* OpenStack Development Mailing List (not for usage questions) > *Cc:* Haub, Stefan; Lichtenstein, Thomas > *Subject:* Re: [openstack-dev] [Manila] Manila project use-cases > > Hello Marc, > > Here, I tried to cover mentioned use cases with "implemented or not" > notes: > > 1) Implemented, but details of implementation are different for > different share drivers. > 2) Not clear for me. If you mean the possibility to mount one share to any > number of VMs, then yes. > 3) Nova is used only in one case - the Generic Driver that uses Cinder > volumes. So, it can be said that the Manila interface does allow the use of a "flat" > network and a share driver just needs to have an implementation for it. I will > assume you mean usage of the generic driver and the possibility to mount shares to > machines other than Nova VMs. - In that case the network architecture > should allow the connection in general. If it is allowed, then there should > not be any problems with mounting to any machine. Just access-allow operations > need to be performed. > 4) Access can be shared, but it is not as flexible as could be wanted. > The owner of a share can grant access to all, and if there is network > connectivity between the user and the share host, then the user will be able to mount > once access is provided. > 5) Manila cannot remove some "mount" of some share; it can remove access > for the possibility to mount in general. So, it looks like this is not implemented. > 6) Implemented. > 7) Not implemented yet. > 8) No "cloning", but we have a snapshot approach as for volumes in Cinder.
> > Regards, > Valeriy Ponomaryov > Mirantis > > On Tue, Dec 2, 2014 at 4:22 PM, Marc Koderer wrote: > >> Hello Manila Team, >> >> We identified use cases for Manila during an internal workshop >> with our operators. I would like to share them with you and >> update the wiki [1] since it seems to be outdated. >> >> Before that I would like to gather feedback, and you might help me >> with identifying things that aren't implemented yet. >> >> Our list: >> >> 1.) Create a share and use it in a tenant >> Initial creation of a shared storage volume and assigning it to several >> VMs >> >> 2.) Assign a preexisting share to a VM with Manila >> Import an existing share with data and assign it to several VMs in case of >> migrating existing production services to OpenStack. >> >> 3.) External consumption of a share >> Accommodate and provide mechanisms for last-mile consumption of >> shares by >> consumers of the service that aren't mediated by Nova. >> >> 4.) Cross Tenant sharing >> Coordinate shares across tenants >> >> 5.) Detach a share and don't destroy data (deactivate) >> Share is flagged as inactive and data are not destroyed for later >> usage or in case of legal requirements. >> >> 6.) Unassign and delete data of a share >> Destroy entire share with all data and free space for further usage. >> >> 7.) Resize Share >> Resize existing and assigned share on the fly. >> >> 8.)
Copy existing share >> Copy existing share between different storage technologies >> >> Regards >> Marc >> Deutsche Telekom >> >> [1]: https://wiki.openstack.org/wiki/Manila/usecases >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Kind Regards > Valeriy Ponomaryov > www.mirantis.com > vponomaryov at mirantis.com > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Kind Regards Valeriy Ponomaryov www.mirantis.com vponomaryov at mirantis.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gord at live.ca Tue Dec 2 17:21:27 2014 From: gord at live.ca (gordon chung) Date: Tue, 2 Dec 2014 12:21:27 -0500 Subject: [openstack-dev] oslo.middleware 0.2.0 released Message-ID: The Oslo team is pleased to announce release 0.2.0 of oslo.middleware. This is primarily a bug-fix release, but does include requirements changes. 
For more details, please see the git log history below and https://launchpad.net/oslo.middleware/kilo/0.2.0 Please report issues through launchpad: https://bugs.launchpad.net/oslo.middleware ---------------------------------------- Changes in openstack/oslo.middleware 0.1.0..0.2.0 7baf57a Updated from global requirements 6f88759 Updated from global requirements f9d0b94 Flesh out the README edfa12c Imported Translations from Transifex 5fd894b Updated from global requirements 28b8ad2 Add pbr to installation requirements b49d38c Updated from global requirements 2f53838 Updated from global requirements afb541d Remove extraneous vim editor configuration comments c32959f Imported Translations from Transifex 9ccefd8 Support building wheels (PEP-427) 72836d0 Fix coverage testing 7ee3b0f Expose sizelimit option to config generator 7846039 Imported Translations from Transifex e18de4a Imported Translations from Transifex 7874cf9 Updated from global requirements d7bdf52 Imported Translations from Transifex 3679023 Remove oslo-incubator fixture diffstat (except docs and test files): README.rst | 5 +- openstack-common.conf | 1 - .../de/LC_MESSAGES/oslo.middleware-log-error.po | 27 +++++++ .../locale/de/LC_MESSAGES/oslo.middleware.po | 27 +++++++ .../en_GB/LC_MESSAGES/oslo.middleware-log-error.po | 27 +++++++ .../locale/en_GB/LC_MESSAGES/oslo.middleware.po | 27 +++++++ .../fr/LC_MESSAGES/oslo.middleware-log-error.po | 27 +++++++ .../locale/fr/LC_MESSAGES/oslo.middleware.po | 27 +++++++ .../locale/oslo.middleware-log-critical.pot | 20 +++++ .../locale/oslo.middleware-log-error.pot | 25 +++++++ .../locale/oslo.middleware-log-info.pot | 20 +++++ .../locale/oslo.middleware-log-warning.pot | 20 +++++ oslo/__init__.py | 2 - .../openstack/common/fixture/__init__.py | 0 oslo/middleware/openstack/common/fixture/config.py | 85 ---------------------- oslo/middleware/opts.py | 45 ++++++++++++ oslo/middleware/sizelimit.py | 30 +++++--- requirements.txt | 5 +- setup.cfg | 9 ++- 
test-requirements.txt | 9 ++- tests/test_sizelimit.py | 5 +- 21 files changed, 334 insertions(+), 109 deletions(-) Requirements updates: diff --git a/requirements.txt b/requirements.txt index 414bdf6..275fa4f 100644 --- a/requirements.txt +++ b/requirements.txt @@ -4,0 +5 @@ +pbr>=0.6,!=0.7,<1.0 @@ -6,2 +7,2 @@ Babel>=1.3 -oslo.config>=1.4.0.0a3 -oslo.i18n>=0.3.0 # Apache-2.0 +oslo.config>=1.4.0 # Apache-2.0 +oslo.i18n>=1.0.0 # Apache-2.0 diff --git a/test-requirements.txt b/test-requirements.txt index 506a33d..c5c0328 100644 --- a/test-requirements.txt +++ b/test-requirements.txt @@ -8,4 +8,5 @@ mock>=1.0 -oslosphinx>=2.2.0.0a2 -oslotest>=1.1.0.0a2 -sphinx>=1.1.2,!=1.2.0,<1.3 -testtools>=0.9.34 +oslosphinx>=2.2.0 # Apache-2.0 +oslotest>=1.2.0 # Apache-2.0 +sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 +testtools>=0.9.36,!=1.2.0 +coverage>=3.6 cheers gord -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian.otto at rackspace.com Tue Dec 2 17:24:42 2014 From: adrian.otto at rackspace.com (Adrian Otto) Date: Tue, 2 Dec 2014 17:24:42 +0000 Subject: [openstack-dev] [Solum] Solum Team Meeting Time Change Message-ID: Solum Team, As agreed in today's team meeting, we are shifting our meetings to 2100 UTC on Tuesdays in #openstack-meeting-alt. We are ending our alternating meeting schedule for now. For scheduling specifics, please see: https://wiki.openstack.org/wiki/Meetings/Solum Regards, Adrian Otto From sileht at sileht.net Tue Dec 2 17:26:59 2014 From: sileht at sileht.net (Mehdi Abaakouk) Date: Tue, 02 Dec 2014 18:26:59 +0100 Subject: [openstack-dev] oslo.messaging 1.5.0 released Message-ID: The Oslo team is pleased to announce the release of oslo.messaging 1.5.0. This release includes a number of fixes for rabbit driver timeouts that were not always respected, and starts using the kombu API instead of custom code where possible. It also introduces the first ZMQ unit tests. And the ZMQ and AMQP 1.0 drivers got some bug fixes and improvements.
For more details, please see the git log history below and https://launchpad.net/oslo.messaging/+milestone/1.5.0 Please report issues through launchpad: https://launchpad.net/oslo.messaging ---------------------------------------- Changes in openstack/oslo.messaging 1.4.1..1.5.0 cb78f2e Rabbit: Fixes debug message format 2dd7de9 Rabbit: iterconsume must honor timeout bcb3b23 Don't use oslo.cfg to set kombu in-memory driver f3370da Don't share connection pool between driver object c7d99bf Show what the threshold is being increased to 30a5b12 Wait for expected messages in listener pool test c8e02e9 Dispath messages in all listeners in a pool cb96666 Reduces the unit tests run times b369826 Set correctly the messaging driver to use in tests 7bce31a Always use a poll timeout in the executor f1c7e78 Have the timeout decrement inside the wait() method e15cd36 Renamed PublishErrorsHandler 80e62ae Create a new connection when a process fork has been detected 42f55a1 Remove the use of PROTOCOL_SSLv3 a8d3da2 Add qpid and amqp 1.0 tox targets a5ffc62 Updated from global requirements ee6a729 Imported Translations from Transifex 973301a rabbit: uses kombu instead of builtin stuffs d9d04fb Allows to overriding oslotest environ var 0d49793 Create ZeroMQ Context per socket 7306680 Remove unuseful param of the ConnectionContext 442d8b9 Updated from global requirements 5aadc56 Add basic tests for 0mq matchmakers 30e0aea Notification listener pools 7ea4147 Updated from global requirements 37e5e2a Fix tiny typo in server.py 10eb120 Switch to oslo.middleware a3ca0e5 Updated from global requirements 6f76039 Activate pep8 check that _ is imported f43fe66 Enable user authentication in the AMQP 1.0 driver f74014a Documentation anomaly in TransportURL parse classmethod f61f7c5 Don't put the message payload into warning log 70910e0 Updated from global requirements 6857db1 Add pbr to installation requirements 0088ac9 Updated from global requirements f1afac4 Add driver independent functional 
tests a476b2e Imported Translations from Transifex db2709e zmq: Remove dead code a87aa3e Updated from global requirements 3dd6a23 Finish transition to oslo.i18n 969847d Imported Translations from Transifex 63a5f1c Imported Translations from Transifex 1640cc1 qpid: Always auto-delete queue of DirectConsumer 6b405b9 Updated from global requirements d4e64d8 Imported Translations from Transifex 487bbf5 Enable oslo.i18n for oslo.messaging 8d242bd Switch to oslo.serialization f378009 Cleanup listener after stopping rpc server 5fd9845 Updated from global requirements ed88623 Track the attempted method when raising UnsupportedVersion 93283f2 fix memory leak for function _safe_log 2478675 Stop using importutils from oslo-incubator 3fa6b8f Add missing deprecated group amqp1 f44b612 Updated from global requirements f57a4ab Stop using intersphinx bc0033a Add documentation explaining how to use the AMQP 1.0 driver d2b34c0 Imported Translations from Transifex 4b57eee Construct ZmqListener with correct arguments 3e6c0b3 Message was send to wrong node with use zmq as rpc_backend e0adc7d Work toward Python 3.4 support and testing d753b03 Ensure the amqp options are present in config file 214fa5e Add contributing page to docs f8ea1a0 Import notifier middleware from oslo-incubator 41fbe41 Let oslotest manage the six.move setting for mox ff6c5e9 Add square brackets for ipv6 based hosts b9a917c warn against sorting requirements 7c2853a Improve help strings diffstat (except docs and test files): .testr.conf | 2 +- openstack-common.conf | 4 - .../locale/de/LC_MESSAGES/oslo.messaging.po | 34 +- .../LC_MESSAGES/oslo.messaging-log-critical.po | 21 - .../en_GB/LC_MESSAGES/oslo.messaging-log-error.po | 14 +- .../en_GB/LC_MESSAGES/oslo.messaging-log-info.po | 21 - .../LC_MESSAGES/oslo.messaging-log-warning.po | 21 - .../locale/en_GB/LC_MESSAGES/oslo.messaging.po | 33 +- .../fr/LC_MESSAGES/oslo.messaging-log-error.po | 27 ++ .../locale/fr/LC_MESSAGES/oslo.messaging.po | 38 ++ 
oslo.messaging/locale/oslo.messaging-log-error.pot | 9 +- oslo.messaging/locale/oslo.messaging.pot | 31 +- oslo/messaging/_drivers/amqp.py | 34 +- oslo/messaging/_drivers/amqpdriver.py | 62 ++- oslo/messaging/_drivers/base.py | 9 + oslo/messaging/_drivers/common.py | 32 +- oslo/messaging/_drivers/impl_fake.py | 37 +- oslo/messaging/_drivers/impl_qpid.py | 10 +- oslo/messaging/_drivers/impl_rabbit.py | 359 +++++++-------- oslo/messaging/_drivers/impl_zmq.py | 88 +--- oslo/messaging/_drivers/matchmaker.py | 2 +- oslo/messaging/_drivers/matchmaker_ring.py | 2 +- oslo/messaging/_drivers/protocols/amqp/__init__.py | 14 - .../_drivers/protocols/amqp/controller.py | 140 +++--- oslo/messaging/_drivers/protocols/amqp/driver.py | 93 ++-- .../messaging/_drivers/protocols/amqp/eventloop.py | 59 ++- oslo/messaging/_drivers/protocols/amqp/opts.py | 73 +++ oslo/messaging/_executors/base.py | 4 + oslo/messaging/_executors/impl_blocking.py | 14 +- oslo/messaging/_executors/impl_eventlet.py | 14 +- oslo/messaging/_i18n.py | 35 ++ oslo/messaging/conffixture.py | 15 - oslo/messaging/notify/__init__.py | 3 +- oslo/messaging/notify/_impl_log.py | 2 +- oslo/messaging/notify/_impl_routing.py | 2 +- oslo/messaging/notify/dispatcher.py | 7 +- oslo/messaging/notify/listener.py | 14 +- oslo/messaging/notify/log_handler.py | 5 +- oslo/messaging/notify/middleware.py | 128 ++++++ oslo/messaging/openstack/common/__init__.py | 17 - oslo/messaging/openstack/common/gettextutils.py | 498 --------------------- oslo/messaging/openstack/common/importutils.py | 73 --- oslo/messaging/openstack/common/jsonutils.py | 196 -------- .../openstack/common/middleware/__init__.py | 0 oslo/messaging/openstack/common/middleware/base.py | 56 --- oslo/messaging/openstack/common/strutils.py | 239 ---------- oslo/messaging/openstack/common/timeutils.py | 210 --------- oslo/messaging/opts.py | 4 +- oslo/messaging/rpc/dispatcher.py | 9 +- oslo/messaging/server.py | 5 +- oslo/messaging/transport.py | 24 +- 
requirements-py3.txt | 19 +- requirements.txt | 24 +- setup.cfg | 2 +- test-requirements-py3.txt | 12 +- test-requirements.txt | 15 +- tests/drivers/test_impl_rabbit.py | 198 +++----- tests/drivers/test_matchmaker.py | 69 +++ tests/drivers/test_matchmaker_redis.py | 78 ++++ tests/drivers/test_matchmaker_ring.py | 73 +++ tests/executors/test_executor.py | 2 +- tests/functional/__init__.py | 0 tests/functional/test_functional.py | 279 ++++++++++++ tests/functional/utils.py | 343 ++++++++++++++ tests/notify/test_dispatcher.py | 4 +- tests/notify/test_listener.py | 199 ++++++-- tests/notify/test_log_handler.py | 18 +- tests/notify/test_middleware.py | 190 ++++++++ tests/notify/test_notifier.py | 2 +- tests/rpc/test_dispatcher.py | 2 + tests/rpc/test_server.py | 20 + tests/test_amqp_driver.py | 91 +++- tests/test_exception_serialization.py | 2 +- tests/test_opts.py | 3 +- tests/test_transport.py | 2 +- tests/utils.py | 1 + tox.ini | 24 +- 82 files changed, 2464 insertions(+), 2260 deletions(-) Requirements updates: diff --git a/requirements-py3.txt b/requirements-py3.txt index 1ddc482..e074095 100644 --- a/requirements-py3.txt +++ b/requirements-py3.txt @@ -1,3 +1,9 @@ -oslo.config>=1.2.1 -oslo.utils>=0.2.0 -stevedore>=0.14 +# The order of packages is significant, because pip processes them in the order +# of appearance. Changing the order has an impact on the overall integration +# process, which may cause wedges in the gate later. 
+ +oslo.config>=1.4.0 # Apache-2.0 +oslo.serialization>=1.0.0 # Apache-2.0 +oslo.utils>=1.0.0 # Apache-2.0 +oslo.i18n>=1.0.0 # Apache-2.0 +stevedore>=1.1.0 # Apache-2.0 @@ -8,3 +13,0 @@ six>=1.7.0 -# used by openstack/common/gettextutils.py -Babel>=1.3 - @@ -15 +18 @@ PyYAML>=3.1.0 -kombu>=2.4.8 +kombu>=2.5.0 @@ -18 +21 @@ kombu>=2.4.8 -WebOb>=1.2.3 +oslo.middleware>=0.1.0 # Apache-2.0 diff --git a/requirements.txt b/requirements.txt index efb513e..3f80258 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,3 +1,11 @@ -oslo.config>=1.4.0.0a3 -oslo.utils>=0.2.0 -stevedore>=0.14 +# The order of packages is significant, because pip processes them in the order +# of appearance. Changing the order has an impact on the overall integration +# process, which may cause wedges in the gate later. + +pbr>=0.6,!=0.7,<1.0 + +oslo.config>=1.4.0 # Apache-2.0 +oslo.utils>=1.0.0 # Apache-2.0 +oslo.serialization>=1.0.0 # Apache-2.0 +oslo.i18n>=1.0.0 # Apache-2.0 +stevedore>=1.1.0 # Apache-2.0 @@ -11,4 +19 @@ six>=1.7.0 -eventlet>=0.13.0 - -# used by openstack/common/gettextutils.py -Babel>=1.3 +eventlet>=0.15.2 @@ -20 +25,4 @@ PyYAML>=3.1.0 -kombu>=2.4.8 +kombu>=2.5.0 + +# middleware +oslo.middleware>=0.1.0 # Apache-2.0 diff --git a/test-requirements-py3.txt b/test-requirements-py3.txt index 1c922e7..49c9cba 100644 --- a/test-requirements-py3.txt +++ b/test-requirements-py3.txt @@ -0,0 +1,4 @@ +# The order of packages is significant, because pip processes them in the order +# of appearance. Changing the order has an impact on the overall integration +# process, which may cause wedges in the gate later. 
+ @@ -11,2 +15,2 @@ testscenarios>=0.4 -testtools>=0.9.34 -oslotest +testtools>=0.9.36,!=1.2.0 +oslotest>=1.2.0 # Apache-2.0 @@ -20,2 +24,2 @@ coverage>=3.6 -sphinx>=1.1.2,!=1.2.0,<1.3 -oslosphinx +sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 +oslosphinx>=2.2.0 # Apache-2.0 diff --git a/test-requirements.txt b/test-requirements.txt index 610a052..3105e4c 100644 --- a/test-requirements.txt +++ b/test-requirements.txt @@ -0,0 +1,4 @@ +# The order of packages is significant, because pip processes them in the order +# of appearance. Changing the order has an impact on the overall integration +# process, which may cause wedges in the gate later. + @@ -11,2 +15,2 @@ testscenarios>=0.4 -testtools>=0.9.34 -oslotest +testtools>=0.9.36,!=1.2.0 +oslotest>=1.2.0 # Apache-2.0 @@ -16,0 +21,3 @@ qpid-python +# for test_matchmaker_redis +redis>=2.10.0 + @@ -23,2 +30,2 @@ coverage>=3.6 -sphinx>=1.1.2,!=1.2.0,<1.3 -oslosphinx +sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 +oslosphinx>=2.2.0 # Apache-2.0 From brandon.logan at RACKSPACE.COM Tue Dec 2 17:27:33 2014 From: brandon.logan at RACKSPACE.COM (Brandon Logan) Date: Tue, 2 Dec 2014 17:27:33 +0000 Subject: [openstack-dev] [neutron][lbaas] Kilo Midcycle Meetup In-Reply-To: References: <1416867738.3960.19.camel@localhost> Message-ID: <1417541257.4057.1.camel@localhost> Per the meeting, put together an etherpad here: https://etherpad.openstack.org/p/lbaas-kilo-meetup I would like to get the location and dates finalized ASAP (preferably the next couple of days). We'll also try to do the same as the neutron and Octavia meetups for remote attendees. From clint at fewbar.com Tue Dec 2 17:51:21 2014 From: clint at fewbar.com (Clint Byrum) Date: Tue, 02 Dec 2014 09:51:21 -0800 Subject: [openstack-dev] [qa] Should it be allowed to attach 2 interfaces from the same subnet to a VM?
In-Reply-To: References: Message-ID: <1417542135-sup-3131@fewbar.com> Excerpts from Danny Choi (dannchoi)'s message of 2014-12-02 08:34:07 -0800: > Hi Andrea, > > Though both interfaces come up, only one will respond to the ping from the neutron router. > When I disable it, then the second one will respond to ping. > So it looks like only one interface is useful at a time. > I believe both interfaces can be used independently by setting arp_announce to 1 or 2. As in: sysctl -w net.ipv4.conf.all.arp_announce=2 Might want to try both settings. The documentation is here: https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt From m4d.coder at gmail.com Tue Dec 2 17:53:39 2014 From: m4d.coder at gmail.com (W Chan) Date: Tue, 2 Dec 2014 09:53:39 -0800 Subject: [openstack-dev] [Mistral] Event Subscription Message-ID: Renat, I agree with the two methods you proposed. On processing the events, I was thinking of a separate entity. But you gave me an idea: how about a system action for publishing the events that the current executors can run? Alternatively, instead of making HTTP calls, what do you think about Mistral just posting the events to the exchange(s) that the subscribers provided and letting the subscribers decide how to consume the events (e.g., post to a webhook) from these exchanges? This will simplify implementation somewhat. The engine can just take care of publishing the events to the exchanges and call it done. Winson -------------- next part -------------- An HTML attachment was scrubbed... URL: From zaro0508 at gmail.com Tue Dec 2 18:00:49 2014 From: zaro0508 at gmail.com (Zaro) Date: Tue, 2 Dec 2014 10:00:49 -0800 Subject: [openstack-dev] [CI]Setup CI system behind proxy In-Reply-To: References: Message-ID: Could you please clarify? Do you mean you want to set up everything behind a proxy? Or do you mean you want to set up just Gerrit behind a proxy?
On Mon, Dec 1, 2014 at 6:26 PM, thanh le giang wrote: > Dear all > > I have set up a CI system successfully with direct access to the internet. > Now I have another requirement which requires setting up the CI system behind > a proxy, but I can't find any way to configure Zuul to connect to Gerrit > through a proxy. > > Any advice is appreciated. > > Thanks and Regards > > -- > L.G.Thanh > > Email: legiangthan at gmail.com > lgthanh at fit.hcmus.edu.vn > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anant.techie at gmail.com Tue Dec 2 18:08:04 2014 From: anant.techie at gmail.com (Anant Patil) Date: Tue, 2 Dec 2014 23:38:04 +0530 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <1417464757-sup-2646@fewbar.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <1417464757-sup-2646@fewbar.com> Message-ID: On Tue, Dec 2, 2014 at 1:49 AM, Clint Byrum wrote: > Excerpts from Anant Patil's message of 2014-11-30 23:02:29 -0800: > > On 27-Nov-14 18:03, Murugan, Visnusaran wrote: > > > Hi Zane, > > > > > > > > > > > > At this stage our implementation (as mentioned in wiki > > > ) achieves > your > > > design goals. > > > > > > > > > > > > 1. In case of a parallel update, our implementation adjusts graph > > > according to new template and waits for dispatched resource tasks to > > > complete. > > > > > > 2. Reason for basing our PoC on Heat code: > > > > > > a. To solve contention processing parent resource by all > dependent > > > resources in parallel. > > > > > > b. To avoid porting issue from PoC to HeatBase. (just to be aware > > > of potential issues asap) > > > > > > 3.
Resource timeout would be helpful, but I guess it's resource > > > specific and has to come from template and default values from plugins. > > > > > > 4. We see resource notification aggregation and processing next > > > level of resources without contention and with minimal DB usage as the > > > problem area. We are working on the following approaches in *parallel.* > > > > > > a. Use a Queue per stack to serialize notification. > > > > > > b. Get parent ProcessLog (ResourceID, EngineID) and initiate > > > convergence upon first child notification. Subsequent children who fail > > > to get parent resource lock will directly send message to waiting > parent > > > task (topic=stack_id.parent_resource_id) > > > > > > Based on performance/feedback we can select either or a mashed version. > > > > > > > > > > > > Advantages: > > > > > > 1. Failed Resource tasks can be re-initiated after ProcessLog > > > table lookup. > > > > > > 2. One worker == one resource. > > > > > > 3. Supports concurrent updates > > > > > > 4. Delete == update with empty stack > > > > > > 5. Rollback == update to previous known good/completed stack. > > > > > > > > > > > > Disadvantages: > > > > > > 1. Still holds stackLock (WIP to remove with ProcessLog) > > > > > > > > > > > > Completely understand your concern on reviewing our code, since commits > > > are numerous and the design changed course in places. Our start commit > > > is [c1b3eb22f7ab6ea60b095f88982247dd249139bf] though this might not > help :-) > > > > > > > > > > > > Your Thoughts. > > > > > > > > > > > > Happy Thanksgiving. > > > > > > Vishnu.
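Vishnu's option (a) above — a queue per stack to serialize notifications — could be sketched roughly as follows. This is an illustrative stand-in only: the class name, the in-process `queue.Queue`, and the `process` callback are invented here; in the real design the queue would be an AMQP topic consumed by a single engine.

```python
import queue
import threading

class StackNotificationSerializer:
    """Sketch: funnel all resource notifications for a stack through one
    queue drained by a single worker, so graph updates for that stack are
    serialized without holding a stack lock."""

    def __init__(self, process):
        self._process = process   # callback: process(stack_id, note)
        self._queues = {}
        self._threads = {}
        self._lock = threading.Lock()

    def _ensure_worker(self, stack_id):
        # One queue and one drain thread per stack, created lazily.
        with self._lock:
            if stack_id not in self._queues:
                q = queue.Queue()
                t = threading.Thread(target=self._drain,
                                     args=(stack_id, q), daemon=True)
                self._queues[stack_id] = q
                self._threads[stack_id] = t
                t.start()
            return self._queues[stack_id]

    def _drain(self, stack_id, q):
        while True:
            note = q.get()
            if note is None:          # sentinel: shut this worker down
                return
            self._process(stack_id, note)

    def notify(self, stack_id, note):
        # Workers from anywhere can call this; ordering per stack is kept.
        self._ensure_worker(stack_id).put(note)

    def close(self, stack_id):
        self._queues[stack_id].put(None)
        self._threads[stack_id].join()
```

Because a single consumer drains each stack's queue, notifications are applied in arrival order and contention on the traversal graph disappears by construction.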
> > > > > > > > > > > > *From:*Angus Salkeld [mailto:asalkeld at mirantis.com] > > > *Sent:* Thursday, November 27, 2014 9:46 AM > > > *To:* OpenStack Development Mailing List (not for usage questions) > > > *Subject:* Re: [openstack-dev] [Heat] Convergence proof-of-concept > showdown > > > > > > > > > > > > On Thu, Nov 27, 2014 at 12:20 PM, Zane Bitter > > > wrote: > > > > > > A bunch of us have spent the last few weeks working independently > on > > > proof of concept designs for the convergence architecture. I think > > > those efforts have now reached a sufficient level of maturity that > > > we should start working together on synthesising them into a plan > > > that everyone can forge ahead with. As a starting point I'm going > to > > > summarise my take on the three efforts; hopefully the authors of > the > > > other two will weigh in to give us their perspective. > > > > > > > > > Zane's Proposal > > > =============== > > > > > > > https://github.com/zaneb/heat-convergence-prototype/tree/distributed-graph > > > > > > I implemented this as a simulator of the algorithm rather than > using > > > the Heat codebase itself in order to be able to iterate rapidly on > > > the design, and indeed I have changed my mind many, many times in > > > the process of implementing it. Its notable departure from a > > > realistic simulation is that it runs only one operation at a time - > > > essentially giving up the ability to detect race conditions in > > > exchange for a completely deterministic test framework. You just > > > have to imagine where the locks need to be. 
Incidentally, the test > > > framework is designed so that it can easily be ported to the actual > > > Heat code base as functional tests so that the same scenarios could > > > be used without modification, allowing us to have confidence that > > > the eventual implementation is a faithful replication of the > > > simulation (which can be rapidly experimented on, adjusted and > > > tested when we inevitably run into implementation issues). > > > > > > This is a complete implementation of Phase 1 (i.e. using existing > > > resource plugins), including update-during-update, resource > > > clean-up, replace on update and rollback; with tests. > > > > > > Some of the design goals which were successfully incorporated: > > > - Minimise changes to Heat (it's essentially a distributed version > > > of the existing algorithm), and in particular to the database > > > - Work with the existing plugin API > > > - Limit total DB access for Resource/Stack to O(n) in the number of > > > resources > > > - Limit overall DB access to O(m) in the number of edges > > > - Limit lock contention to only those operations actually > contending > > > (i.e. no global locks) > > > - Each worker task deals with only one resource > > > - Only read resource attributes once > > > > > > > > > Open questions: > > > - What do we do when we encounter a resource that is in progress > > > from a previous update while doing a subsequent update? Obviously > we > > > don't want to interrupt it, as it will likely be left in an unknown > > > state. Making a replacement is one obvious answer, but in many > cases > > > there could be serious down-sides to that. How long should we wait > > > before trying it? What if it's still in progress because the engine > > > processing the resource already died? > > > > > > > > > > > > Also, how do we implement resource level timeouts in general? 
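On the open question of resource-level timeouts, one minimal sketch is a deadline wrapped around the existing polling loop. This is illustrative only — `check_complete` and the poll interval are invented names, not Heat's actual task model; as suggested above, the timeout could come from the template with a plugin-provided default.

```python
import time

class ResourceTimeout(Exception):
    """Raised when a resource does not reach its target state in time."""

def wait_for_resource(check_complete, timeout_s, poll_interval_s=1.0):
    """Poll check_complete() until it returns True or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while not check_complete():
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise ResourceTimeout(
                "resource did not complete in %ss" % timeout_s)
        # Sleep at most poll_interval_s, but never past the deadline.
        time.sleep(min(poll_interval_s, remaining))
    return True
```

A worker that dies mid-poll is a separate problem (the "engine already died" case above); a deadline stored in the database, rather than in a local variable, would be needed to cover that.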
> > > > > > > > > > > > Michał's Proposal > > > ================= > > > > > > https://github.com/inc0/heat-convergence-prototype/tree/iterative > > > > > > Note that a version modified by me to use the same test scenario > > > format (but not the same scenarios) is here: > > > > > > > https://github.com/zaneb/heat-convergence-prototype/tree/iterative-adapted > > > > > > This is based on my simulation framework after a fashion, but with > > > everything implemented synchronously and a lot of handwaving about > > > how the actual implementation could be distributed. The central > > > premise is that at each step of the algorithm, the entire graph is > > > examined for tasks that can be performed next, and those are then > > > started. Once all are complete (it's synchronous, remember), the > > > next step is run. Keen observers will be asking how we know when it > > > is time to run the next step in a distributed version of this > > > algorithm, where it will be run and what to do about resources that > > > are in an intermediate state at that time. All of these questions > > > remain unanswered. > > > > > > > > > > > > Yes, I was struggling to figure out how it could manage an IN_PROGRESS > > > state as it's stateless. So you end up treading on the other action's > toes. > > > > > > Assuming we use the resource's state (IN_PROGRESS) you could get around > > > that. Then you kick off a converge whenever an action completes (if > > > there is nothing new to be > > > > > > done then do nothing).
> > > > > > > > > A non-exhaustive list of concerns I have: > > > - Replace on update is not implemented yet > > > - AFAIK rollback is not implemented yet > > > - The simulation doesn't actually implement the proposed > architecture > > > - This approach is punishingly heavy on the database - O(n^2) or > worse > > > > > > > > > > > > Yes, re-reading the state of all resources whenever we run a new converge > > > is worrying, but I think Michał had some ideas to minimize this. > > > > > > > > > > > > - A lot of phase 2 is mixed in with phase 1 here, making it > > > difficult to evaluate which changes need to be made first and > > > whether this approach works with existing plugins > > > - The code is not really based on how Heat works at the moment, so > > > there would be either a major redesign required or lots of radical > > > changes in Heat or both > > > > > > I think there's a fair chance that given another 3-4 weeks to work > > > on this, all of these issues and others could probably be resolved. > > > The question for me at this point is not so much "if" but "why". > > > > > > Michał believes that this approach will make Phase 2 easier to > > > implement, which is a valid reason to consider it. However, I'm not > > > aware of any particular issues that my approach would cause in > > > implementing phase 2 (note that I have barely looked into it at all > > > though). In fact, I very much want Phase 2 to be entirely > > > encapsulated by the Resource class, so that the plugin type (legacy > > > vs. convergence-enabled) is transparent to the rest of the system. > > > Only in this way can we be sure that we'll be able to maintain > > > support for legacy plugins. So a phase 1 that mixes in aspects of > > > phase 2 is actually a bad thing in my view.
> > > > > > I really appreciate the effort that has gone into this already, but > > > in the absence of specific problems with building phase 2 on top of > > > another approach that are solved by this one, I'm ready to call > this > > > a distraction. > > > > > > > > > In its defence, I like the simplicity of it. The concepts and code are > > > easy to understand - though part of this is that it doesn't implement all the > > > stuff on your list yet. > > > > > > > > > > > > > > > Anant & Friends' Proposal > > > ========================= > > > > > > First off, I have found this very difficult to review properly > since > > > the code is not separate from the huge mass of Heat code and nor is > > > the commit history in the form that patch submissions would take > > > (but rather includes backtracking and iteration on the design). As > a > > > result, most of the information here has been gleaned from > > > discussions about the code rather than direct review. I have > > > repeatedly suggested that this proof of concept work should be done > > > using the simulator framework instead, unfortunately so far to no > avail. > > > > > > The last we heard on the mailing list about this, resource clean-up > > > had not yet been implemented. That was a major concern because that > > > is the more difficult half of the algorithm. Since then there have > > > been a lot more commits, but it's not yet clear whether resource > > > clean-up, update-during-update, replace-on-update and rollback have > > > been implemented, though it is clear that at least some progress > has > > > been made on most or all of them. Perhaps someone can give us an > update. > > > > > > > > > https://github.com/anantpatil/heat-convergence-poc > > > > > > > > > > > > AIUI this code also mixes phase 2 with phase 1, which is a concern. > > > For me the highest priority for phase 1 is to be sure that it works > > > with existing plugins.
Not only because we need to continue to > > > support them, but because converting all of our existing > > > 'integration-y' unit tests to functional tests that operate in a > > > distributed system is virtually impossible in the time frame we > have > > > available. So the existing test code needs to stick around, and the > > > existing stack create/update/delete mechanisms need to remain in > > > place until such time as we have equivalent functional test > coverage > > > to begin eliminating existing unit tests. (We'll also, of course, > > > need to have unit tests for the individual elements of the new > > > distributed workflow, functional tests to confirm that the > > > distributed workflow works in principle as a whole - the scenarios > > > from the simulator can help with _part_ of this - and, not least, > an > > > algorithm that is as similar as possible to the current one so that > > > our existing tests remain at least somewhat representative and > don't > > > require too many major changes themselves.) > > > > > > Speaking of tests, I gathered that this branch included tests, but > I > > > don't know to what extent there are automated end-to-end functional > > > tests of the algorithm? > > > > > > From what I can gather, the approach seems broadly similar to the > > > one I eventually settled on also. The major difference appears to > be > > > in how we merge two or more streams of execution (i.e. when one > > > resource depends on two or more others). In my approach, the > > > dependencies are stored in the resources and each joining of > streams > > > creates a database row to track it, which is easily locked with > > > contention on the lock extending only to those resources which are > > > direct dependencies of the one waiting. 
In this approach, both the > > > dependencies and the progress through the graph are stored in a > > > database table, necessitating (a) reading of the entire table (as > it > > > relates to the current stack) on every resource operation, and (b) > > > locking of the entire table (which is hard) when marking a resource > > > operation complete. > > > > > > I chatted to Anant about this today and he mentioned that they had > > > solved the locking problem by dispatching updates to a queue that > is > > > read by a single engine per stack. > > > > > > My approach also has the neat side-effects of pushing the data > > > required to resolve get_resource and get_att (without having to > > > reload the resources again and query them) as well as to update > > > dependencies (e.g. because of a replacement or deletion) along with > > > the flow of triggers. I don't know if anything similar is at work > here. > > > > > > It's entirely possible that the best design might combine elements > > > of both approaches. > > > > > > The same open questions I detailed under my proposal also apply to > > > this one, if I understand correctly. > > > > > > > > > I'm certain that I won't have represented everyone's work fairly > > > here, so I encourage folks to dive in and correct any errors about > > > theirs and ask any questions you might have about mine. (In case > you > > > have been living under a rock, note that I'll be out of the office > > > for the rest of the week due to Thanksgiving so don't expect > > > immediate replies.) > > > > > > I also think this would be a great time for the wider Heat > community > > > to dive in and start asking questions and suggesting ideas. We need > > > to, ahem, converge on a shared understanding of the design so we > can > > > all get to work delivering it for Kilo. > > > > > > > > > > > > Agree, we need to get moving on this. > > > > > > -Angus > > > > > > > > > > > > cheers, > > > Zane. 
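The "joining of streams" Zane describes above — one database row per merge point, locked only by the direct dependencies of the waiting resource — might be sketched with an in-memory stand-in like this. The names are invented and the `threading.Lock` merely plays the role of a row-level DB lock or transaction; it is not code from either prototype.

```python
import threading

class SyncPoint:
    """Merge point for a resource with several dependencies: each
    dependency signals on completion, and exactly one signaller (the
    last to arrive) is told to go process the dependent resource."""

    def __init__(self, required_deps):
        self._required = frozenset(required_deps)
        self._done = set()
        self._lock = threading.Lock()  # stands in for a row-level DB lock

    def signal(self, dep):
        """Record completion of `dep`; return True iff the caller should
        now trigger the dependent resource."""
        with self._lock:
            if dep in self._done:
                return False           # duplicate notification: ignore
            self._done.add(dep)
            return self._done == self._required
```

Contention here is limited to siblings signalling the same merge point, which matches the "no global locks" design goal listed earlier in the thread.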
> > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > Thanks, Zane, for your e-mail and for summarizing everyone's work. > > > > The design goals mentioned above look more like performance goals and > > constraints to me. I understand that it is unacceptable to have a poorly > > performing engine and Resource plug-ins broken. The convergence spec clearly > > mentions that the existing Resource plugins should not be changed. > > > > IMHO, and my team's HO, the design goals of convergence would be: > > 1. Stability: No transient failures, either in OpenStack/external > > services or resources themselves, should fail the stack. Therefore, we > > need to have Observers to check for divergence and converge a resource > > if needed, to bring it back to a stable state. > > 2. Resiliency: Heat engines should be able to take up tasks in case of > > failures/restarts. > > 3. Backward compatibility: "We don't break the user space." No existing > > stacks should break. > > > > We started the PoC with these goals in mind; any performance > > optimization would be a plus point for us. Note that I am neglecting the > > performance goal only in that it should be next in the pipeline. The > > questions we should ask ourselves are: are we storing enough data > > (state of the stack) in the DB to enable resiliency? Are we distributing the > > load evenly to all Heat engines? Does our notification mechanism > > provide us some form of guarantee or acknowledgement? > > > > In retrospect, we had to struggle a lot to understand the existing > > Heat engine.
We couldn't have done justice by just creating another > > project in GitHub without any concrete understanding of the existing > > state of affairs. We are not on the same page with the Heat core members; we > > are novices and the cores are experts. > > > > I am glad that we experimented with the Heat engine directly. The > > current Heat engine is not resilient and the messaging also lacks > > reliability. We (my team and I guess cores also) understand that async > > message passing would be the way to go as synchronous RPC calls simply > > wouldn't scale. But with async message passing there has to be some > > mechanism of ACKing back, which I think is lacking in the current infrastructure. > > > > How could we provide stable user-defined stacks if the underlying Heat > > core lacks stability? Convergence is all about stable stacks. To make the > > current Heat core stable we need to have, at the least: > > 1. Some mechanism to ACK back messages over AMQP. Or some other solid > > mechanism of message passing. > > 2. Some mechanism for fault tolerance in the Heat engine using external > > tools/infrastructure like Celery/ZooKeeper. Without external > > infrastructure/tools we will end up bloating the Heat engine with a lot of > > boilerplate code to achieve this. We had recommended Celery in our > > previous e-mail (from Vishnu.) > > > > It was due to our experiments with the Heat engine for this PoC that we could > > come up with the above recommendations. > > > > State of our PoC > > --------------- > > > > On GitHub: https://github.com/anantpatil/heat-convergence-poc > > > > Our current implementation of the PoC locks the stack after each > > notification to mark the graph as traversed and produce the next level of > > resources for convergence. We are facing challenges in > > removing/minimizing these locks. We also have two different schools of > > thought for solving this lock issue, as mentioned above in Vishnu's > > e-mail. I will describe these in detail in the Wiki.
There would be different > > branches in our GitHub for these two approaches. > > > > It would be helpful if you explained why you need to _lock_ the stack. > MVCC in the database should be enough here. Basically you need to:
>
> begin transaction
> update traversal information
> select resolvable nodes
> {in code not sql -- send converge commands into async queue}
> commit
>
> Any failure inside this transaction should roll back the transaction and > retry this. It is o-k to have duplicate converge commands for a resource. > > This should be the single point of synchronization between workers that > are resolving resources. Or perhaps this is the lock you meant? Either > way, this isn't avoidable if you want to make sure everything is attempted > at least once without having to continuously poll and re-poll the stack > to look for unresolved resources. That is an option, but not one that I > think is going to be as simple as the transactional method. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Dec 2 18:33:05 2014 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 02 Dec 2014 12:33:05 -0600 Subject: [openstack-dev] oslo.concurrency 0.3.0 released In-Reply-To: <547DE8F7.7090206@nemebean.com> References: <547DE8F7.7090206@nemebean.com> Message-ID: <547E05E1.6010008@nemebean.com> We've discovered a couple of problems as a result of this release. pep8 in most/all of the projects using oslo.concurrency is failing due to the move out of the oslo namespace package and the fact that hacking doesn't know how to handle it, and nova unit tests are failing due to a problem with the way some mocking was done. Fixes for both of these problems are in progress and should hopefully be available soon.
-Ben

On 12/02/2014 10:29 AM, Ben Nemec wrote:
> The Oslo team is pleased to announce the release of oslo.concurrency 0.3.0.
>
> This release includes a number of fixes for problems found during the initial adoptions of the library, as well as some functionality improvements.
>
> For more details, please see the git log history below and https://launchpad.net/oslo.concurrency/+milestone/0.3.0
>
> Please report issues through launchpad: https://launchpad.net/oslo.concurrency
>
> openstack/oslo.concurrency 0.2.0..HEAD
>
> 54c84da Add external lock fixture
> 19f07c6 Add a TODO for retrying pull request #20
> 46c836e Allow the lock delay to be provided
> 3bda65c Allow for providing a customized semaphore container
> 656f908 Move locale files to proper place
> faa30f8 Flesh out the README
> bca4a0d Move out of the oslo namespace package
> 58de317 Improve testing in py3 environment
> fa52a63 Only modify autoindex.rst if it exists
> 63e618b Imported Translations from Transifex
> d5ea62c lockutils-wrapper cleanup
> 78ba143 Don't use variables that aren't initialized
>
> diffstat (except docs and test files):
>
> .gitignore | 1 +
> .testr.conf | 2 +-
> README.rst | 4 +-
> .../locale/en_GB/LC_MESSAGES/oslo.concurrency.po | 16 +-
> oslo.concurrency/locale/oslo.concurrency.pot | 16 +-
> oslo/concurrency/__init__.py | 29 ++
> oslo/concurrency/_i18n.py | 32 --
> oslo/concurrency/fixture/__init__.py | 13 +
> oslo/concurrency/fixture/lockutils.py | 51 --
> oslo/concurrency/lockutils.py | 376 --------------
> oslo/concurrency/openstack/__init__.py | 0
> oslo/concurrency/openstack/common/__init__.py | 0
> oslo/concurrency/openstack/common/fileutils.py | 146 ------
> oslo/concurrency/opts.py | 45 --
> oslo/concurrency/processutils.py | 340 ------------
> oslo_concurrency/__init__.py | 0
> oslo_concurrency/_i18n.py | 32 ++
> oslo_concurrency/fixture/__init__.py | 0
> oslo_concurrency/fixture/lockutils.py | 76 +++
> oslo_concurrency/lockutils.py | 502 ++++++++++++++++++
>
oslo_concurrency/openstack/__init__.py | 0
> oslo_concurrency/openstack/common/__init__.py | 0
> oslo_concurrency/openstack/common/fileutils.py | 146 ++++++
> oslo_concurrency/opts.py | 45 ++
> oslo_concurrency/processutils.py | 340 ++++++++++++
> requirements-py3.txt | 1 +
> requirements.txt | 1 +
> setup.cfg | 9 +-
> tests/test_lockutils.py | 575 ++++++++++++++++++++
> tests/test_processutils.py | 519 +++++++++++++++++++
> tests/test_warning.py | 29 ++
> tests/unit/__init__.py | 0
> tests/unit/test_lockutils.py | 543 -------------------
> tests/unit/test_lockutils_eventlet.py | 59 ---
> tests/unit/test_processutils.py | 518 ------------------
> tox.ini | 8 +-
> 42 files changed, 3515 insertions(+), 2135 deletions(-)
>
> Requirements updates:
>
> diff --git a/requirements-py3.txt b/requirements-py3.txt
> index b1a8722..a27b434 100644
> --- a/requirements-py3.txt
> +++ b/requirements-py3.txt
> @@ -13,0 +14 @@ six>=1.7.0
> +retrying>=1.2.2,!=1.3.0 # Apache-2.0
> diff --git a/requirements.txt b/requirements.txt
> index b1a8722..a27b434 100644
> --- a/requirements.txt
> +++ b/requirements.txt
> @@ -13,0 +14 @@ six>=1.7.0
> +retrying>=1.2.2,!=1.3.0 # Apache-2.0

From anant.techie at gmail.com Tue Dec 2 18:37:31 2014 From: anant.techie at gmail.com (Anant Patil) Date: Wed, 3 Dec 2014 00:07:31 +0530 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <1417464757-sup-2646@fewbar.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <1417464757-sup-2646@fewbar.com> Message-ID: Yes, that's the synchronization block for which we use the stack lock. Currently, a thread spin-waits to acquire the lock to enter this critical section. I don't really know how to do an application-level transaction. Is there an external library for that?
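For concreteness, the transactional step Clint outlines ("begin transaction ... commit") might look like the following. This is an illustrative sketch against plain sqlite3 with an invented two-table schema (`resources`, `edges`); Heat itself uses SQLAlchemy, and the real traversal data differs.

```python
import sqlite3

def complete_and_find_ready(conn, stack, finished):
    """One atomic step: mark `finished` done, then select resources whose
    dependencies are all done.  If anything fails, the `with` block rolls
    the whole step back so it can simply be retried; duplicate converge
    commands for a resource are harmless, as noted in the thread."""
    with conn:  # sqlite3 connection as context manager: BEGIN..COMMIT/ROLLBACK
        conn.execute(
            "UPDATE resources SET done = 1 WHERE stack = ? AND name = ?",
            (stack, finished))
        rows = conn.execute(
            """SELECT r.name
                 FROM resources r
                WHERE r.stack = ? AND r.done = 0
                  AND NOT EXISTS (
                        SELECT 1
                          FROM edges e
                          JOIN resources d
                            ON d.stack = e.stack AND d.name = e.requires
                         WHERE e.stack = r.stack AND e.node = r.name
                           AND d.done = 0)
                ORDER BY r.name""",
            (stack,)).fetchall()
    return [name for (name,) in rows]

# Invented sample stack: port depends on net and subnet; subnet on net.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE resources (stack TEXT, name TEXT, done INTEGER);
    CREATE TABLE edges (stack TEXT, node TEXT, requires TEXT);
    INSERT INTO resources VALUES ('s1','net',0),('s1','subnet',0),('s1','port',0);
    INSERT INTO edges VALUES ('s1','subnet','net'),('s1','port','net'),('s1','port','subnet');
""")
```

The selected names would then be pushed onto the async converge queue outside SQL, exactly as the `{in code not sql ...}` line in the outline says.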
AFAIK, we cannot switch from a DB transaction to the application level to do some computation to resolve the next set of resources, submit to an async queue, and again continue with the DB transaction. The transactional method looks attractive and clean to me but I am limited by my knowledge. Or do you mean to have the DB transaction object made available via DB APIs and use it in the application? Please share your thoughts. To avoid locking the stack, we were thinking of designating a single engine with the responsibility of processing all notifications for a stack. All the workers will notify on the stack topic, which only one engine listens to, and then the notifications end up in a queue (local to the engine, per stack), from where they are taken up one-by-one to continue the stack operation. The convergence jobs are produced by this engine for the stack it is responsible for, and they might end up in any of the engines. But the notifications for a stack are directed to one engine to avoid contention for the lock. The convergence load is leveled and distributed, and the stack lock is not needed. - Anant On Tue, Dec 2, 2014 at 1:49 AM, Clint Byrum wrote: > Excerpts from Anant Patil's message of 2014-11-30 23:02:29 -0800: > > On 27-Nov-14 18:03, Murugan, Visnusaran wrote: > > > Hi Zane, > > > > > > > > > > > > At this stage our implementation (as mentioned in wiki > > > ) achieves > your > > > design goals. > > > > > > > > > > > > 1. In case of a parallel update, our implementation adjusts graph > > > according to new template and waits for dispatched resource tasks to > > > complete. > > > > > > 2. Reason for basing our PoC on Heat code: > > > > > > a. To solve contention processing parent resource by all > dependent > > > resources in parallel. > > > > > > b. To avoid porting issue from PoC to HeatBase. (just to be aware > > > of potential issues asap) > > > > > > 3.
Resource timeout would be helpful, but I guess it's resource > > > specific and has to come from template and default values from plugins. > > > > > > 4. We see resource notification aggregation and processing next > > > level of resources without contention and with minimal DB usage as the > > > problem area. We are working on the following approaches in *parallel.* > > > > > > a. Use a Queue per stack to serialize notification. > > > > > > b. Get parent ProcessLog (ResourceID, EngineID) and initiate > > > convergence upon first child notification. Subsequent children who fail > > > to get parent resource lock will directly send message to waiting > parent > > > task (topic=stack_id.parent_resource_id) > > > > > > Based on performance/feedback we can select either or a mashed version. > > > > > > > > > > > > Advantages: > > > > > > 1. Failed Resource tasks can be re-initiated after ProcessLog > > > table lookup. > > > > > > 2. One worker == one resource. > > > > > > 3. Supports concurrent updates > > > > > > 4. Delete == update with empty stack > > > > > > 5. Rollback == update to previous known good/completed stack. > > > > > > > > > > > > Disadvantages: > > > > > > 1. Still holds stackLock (WIP to remove with ProcessLog) > > > > > > > > > > > > Completely understand your concern on reviewing our code, since commits > > > are numerous and the design changed course in places. Our start commit > > > is [c1b3eb22f7ab6ea60b095f88982247dd249139bf] though this might not > help :-) > > > > > > > > > > > > Your Thoughts. > > > > > > > > > > > > Happy Thanksgiving. > > > > > > Vishnu.
> > > > > > > > > > > > *From:*Angus Salkeld [mailto:asalkeld at mirantis.com] > > > *Sent:* Thursday, November 27, 2014 9:46 AM > > > *To:* OpenStack Development Mailing List (not for usage questions) > > > *Subject:* Re: [openstack-dev] [Heat] Convergence proof-of-concept > showdown > > > > > > > > > > > > On Thu, Nov 27, 2014 at 12:20 PM, Zane Bitter > > > wrote: > > > > > > A bunch of us have spent the last few weeks working independently > on > > > proof of concept designs for the convergence architecture. I think > > > those efforts have now reached a sufficient level of maturity that > > > we should start working together on synthesising them into a plan > > > that everyone can forge ahead with. As a starting point I'm going > to > > > summarise my take on the three efforts; hopefully the authors of > the > > > other two will weigh in to give us their perspective. > > > > > > > > > Zane's Proposal > > > =============== > > > > > > > https://github.com/zaneb/heat-convergence-prototype/tree/distributed-graph > > > > > > I implemented this as a simulator of the algorithm rather than > using > > > the Heat codebase itself in order to be able to iterate rapidly on > > > the design, and indeed I have changed my mind many, many times in > > > the process of implementing it. Its notable departure from a > > > realistic simulation is that it runs only one operation at a time - > > > essentially giving up the ability to detect race conditions in > > > exchange for a completely deterministic test framework. You just > > > have to imagine where the locks need to be. 
Incidentally, the test > > > framework is designed so that it can easily be ported to the actual > > > Heat code base as functional tests so that the same scenarios could > > > be used without modification, allowing us to have confidence that > > > the eventual implementation is a faithful replication of the > > > simulation (which can be rapidly experimented on, adjusted and > > > tested when we inevitably run into implementation issues). > > > > > > This is a complete implementation of Phase 1 (i.e. using existing > > > resource plugins), including update-during-update, resource > > > clean-up, replace on update and rollback; with tests. > > > > > > Some of the design goals which were successfully incorporated: > > > - Minimise changes to Heat (it's essentially a distributed version > > > of the existing algorithm), and in particular to the database > > > - Work with the existing plugin API > > > - Limit total DB access for Resource/Stack to O(n) in the number of > > > resources > > > - Limit overall DB access to O(m) in the number of edges > > > - Limit lock contention to only those operations actually > contending > > > (i.e. no global locks) > > > - Each worker task deals with only one resource > > > - Only read resource attributes once > > > > > > > > > Open questions: > > > - What do we do when we encounter a resource that is in progress > > > from a previous update while doing a subsequent update? Obviously > we > > > don't want to interrupt it, as it will likely be left in an unknown > > > state. Making a replacement is one obvious answer, but in many > cases > > > there could be serious down-sides to that. How long should we wait > > > before trying it? What if it's still in progress because the engine > > > processing the resource already died? > > > > > > > > > > > > Also, how do we implement resource level timeouts in general? 
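To make the design goals above concrete — one worker per resource, contention limited to direct dependencies — here is a toy, single-process sketch of trigger-driven traversal, where each resource waits on a countdown of its unfinished dependencies (standing in for a per-join database row). It only illustrates the shape of the algorithm, not the prototype's code:

```python
from collections import deque

# edges: resource -> set of resources it depends on (illustrative graph)
deps = {
    "net": set(),
    "subnet": {"net"},
    "port": {"net", "subnet"},
    "server": {"port"},
}

# One "sync point" per resource: a countdown of unfinished dependencies.
# In the real design this would be a DB row locked only by direct deps.
pending = {r: len(d) for r, d in deps.items()}
dependents = {r: set() for r in deps}
for r, d in deps.items():
    for parent in d:
        dependents[parent].add(r)

order = []
ready = deque(r for r, n in pending.items() if n == 0)
while ready:
    r = ready.popleft()
    order.append(r)          # "converge" the resource
    for child in dependents[r]:
        pending[child] -= 1  # notify: decrement the child's sync point
        if pending[child] == 0:
            ready.append(child)

print(order)
```

Note that total work here is proportional to the number of edges, matching the "O(m) in the number of edges" goal: nothing ever rescans the whole graph.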
> > > > > > > > > > > > > > > Michał's Proposal > > > ================= > > > > > > https://github.com/inc0/heat-convergence-prototype/tree/iterative > > > > > > Note that a version modified by me to use the same test scenario > > > format (but not the same scenarios) is here: > > > > > > > https://github.com/zaneb/heat-convergence-prototype/tree/iterative-adapted > > > > > > This is based on my simulation framework after a fashion, but with > > > everything implemented synchronously and a lot of handwaving about > > > how the actual implementation could be distributed. The central > > > premise is that at each step of the algorithm, the entire graph is > > > examined for tasks that can be performed next, and those are then > > > started. Once all are complete (it's synchronous, remember), the > > > next step is run. Keen observers will be asking how we know when it > > > is time to run the next step in a distributed version of this > > > algorithm, where it will be run and what to do about resources that > > > are in an intermediate state at that time. All of these questions > > > remain unanswered. > > > > > > > > > > > > Yes, I was struggling to figure out how it could manage an IN_PROGRESS > > > state as it's stateless. So you end up treading on the other action's > toes. > > > > > > Assuming we use the resource's state (IN_PROGRESS) you could get around > > > that. Then you kick off a converge whenever an action completes (if > > > there is nothing new to be > > > > > > done then do nothing).
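For contrast with the trigger-driven style, the "examine the entire graph at each step" premise described above can be sketched like this — again a toy, synchronous version; the full rescan of every resource's state on every step is where the heavy database load of a real implementation would come from:

```python
# same illustrative graph: resource -> set of resources it depends on
deps = {
    "net": set(),
    "subnet": {"net"},
    "port": {"net", "subnet"},
    "server": {"port"},
}

done = set()
steps = []
while len(done) < len(deps):
    # each step: scan ALL resources for ones whose deps are satisfied
    runnable = sorted(r for r in deps if r not in done and deps[r] <= done)
    steps.append(runnable)
    done.update(runnable)

print(steps)
```

Each pass re-reads every resource, so for n resources this is O(n) work per step and up to O(n^2) overall — the scaling concern raised below.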
> > > > > > > > > > > > > > > A non-exhaustive list of concerns I have: > > > - Replace on update is not implemented yet > > > - AFAIK rollback is not implemented yet > > > - The simulation doesn't actually implement the proposed > architecture > > > - This approach is punishingly heavy on the database - O(n^2) or > worse > > > > > > > > > > > > Yes, re-reading the state of all resources whenever a new converge is run > > > is worrying, but I think Michał had some ideas to minimize this. > > > > > > > > > > > > - A lot of phase 2 is mixed in with phase 1 here, making it > > > difficult to evaluate which changes need to be made first and > > > whether this approach works with existing plugins > > > - The code is not really based on how Heat works at the moment, so > > > there would be either a major redesign required or lots of radical > > > changes in Heat or both > > > > > > I think there's a fair chance that given another 3-4 weeks to work > > > on this, all of these issues and others could probably be resolved. > > > The question for me at this point is not so much "if" but "why". > > > > > > Michał believes that this approach will make Phase 2 easier to > > > implement, which is a valid reason to consider it. However, I'm not > > > aware of any particular issues that my approach would cause in > > > implementing phase 2 (note that I have barely looked into it at all > > > though). In fact, I very much want Phase 2 to be entirely > > > encapsulated by the Resource class, so that the plugin type (legacy > > > vs. convergence-enabled) is transparent to the rest of the system. > > > Only in this way can we be sure that we'll be able to maintain > > > support for legacy plugins. So a phase 1 that mixes in aspects of > > > phase 2 is actually a bad thing in my view.
> > > > > > I really appreciate the effort that has gone into this already, but > > > in the absence of specific problems with building phase 2 on top of > > > another approach that are solved by this one, I'm ready to call > this > > > a distraction. > > > > > > > > > > > > In its defence, I like the simplicity of it. The concepts and code are > > > easy to understand - tho' part of this is that it doesn't implement all the > > > stuff on your list yet. > > > > > > > > > > > > > > > > > > Anant & Friends' Proposal > > > ========================= > > > > > > First off, I have found this very difficult to review properly > since > > > the code is not separate from the huge mass of Heat code and nor is > > > the commit history in the form that patch submissions would take > > > (but rather includes backtracking and iteration on the design). As > a > > > result, most of the information here has been gleaned from > > > discussions about the code rather than direct review. I have > > > repeatedly suggested that this proof of concept work should be done > > > using the simulator framework instead, unfortunately so far to no > avail. > > > > > > The last we heard on the mailing list about this, resource clean-up > > > had not yet been implemented. That was a major concern because that > > > is the more difficult half of the algorithm. Since then there have > > > been a lot more commits, but it's not yet clear whether resource > > > clean-up, update-during-update, replace-on-update and rollback have > > > been implemented, though it is clear that at least some progress > has > > > been made on most or all of them. Perhaps someone can give us an > update. > > > > > > > > > https://github.com/anantpatil/heat-convergence-poc > > > > > > > > > > > > AIUI this code also mixes phase 2 with phase 1, which is a concern. > > > For me the highest priority for phase 1 is to be sure that it works > > > with existing plugins.
Not only because we need to continue to > > > support them, but because converting all of our existing > > > 'integration-y' unit tests to functional tests that operate in a > > > distributed system is virtually impossible in the time frame we > have > > > available. So the existing test code needs to stick around, and the > > > existing stack create/update/delete mechanisms need to remain in > > > place until such time as we have equivalent functional test > coverage > > > to begin eliminating existing unit tests. (We'll also, of course, > > > need to have unit tests for the individual elements of the new > > > distributed workflow, functional tests to confirm that the > > > distributed workflow works in principle as a whole - the scenarios > > > from the simulator can help with _part_ of this - and, not least, > an > > > algorithm that is as similar as possible to the current one so that > > > our existing tests remain at least somewhat representative and > don't > > > require too many major changes themselves.) > > > > > > Speaking of tests, I gathered that this branch included tests, but > I > > > don't know to what extent there are automated end-to-end functional > > > tests of the algorithm? > > > > > > From what I can gather, the approach seems broadly similar to the > > > one I eventually settled on also. The major difference appears to > be > > > in how we merge two or more streams of execution (i.e. when one > > > resource depends on two or more others). In my approach, the > > > dependencies are stored in the resources and each joining of > streams > > > creates a database row to track it, which is easily locked with > > > contention on the lock extending only to those resources which are > > > direct dependencies of the one waiting. 
In this approach, both the > > > dependencies and the progress through the graph are stored in a > > > database table, necessitating (a) reading of the entire table (as > it > > > relates to the current stack) on every resource operation, and (b) > > > locking of the entire table (which is hard) when marking a resource > > > operation complete. > > > > > > I chatted to Anant about this today and he mentioned that they had > > > solved the locking problem by dispatching updates to a queue that > is > > > read by a single engine per stack. > > > > > > My approach also has the neat side-effects of pushing the data > > > required to resolve get_resource and get_att (without having to > > > reload the resources again and query them) as well as to update > > > dependencies (e.g. because of a replacement or deletion) along with > > > the flow of triggers. I don't know if anything similar is at work > here. > > > > > > It's entirely possible that the best design might combine elements > > > of both approaches. > > > > > > The same open questions I detailed under my proposal also apply to > > > this one, if I understand correctly. > > > > > > > > > I'm certain that I won't have represented everyone's work fairly > > > here, so I encourage folks to dive in and correct any errors about > > > theirs and ask any questions you might have about mine. (In case > you > > > have been living under a rock, note that I'll be out of the office > > > for the rest of the week due to Thanksgiving so don't expect > > > immediate replies.) > > > > > > I also think this would be a great time for the wider Heat > community > > > to dive in and start asking questions and suggesting ideas. We need > > > to, ahem, converge on a shared understanding of the design so we > can > > > all get to work delivering it for Kilo. > > > > > > > > > > > > Agree, we need to get moving on this. > > > > > > -Angus > > > > > > > > > > > > cheers, > > > Zane. 
> > > > > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > > > > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > Thanks Zane for your e-mail and for summarizing everyone's work. > > > > The design goals mentioned above look more like performance goals and > > constraints to me. I understand that it is unacceptable to have a poorly > > performing engine and Resource plug-ins broken. The convergence spec clearly > > mentions that the existing Resource plugins should not be changed. > > > > IMHO, and my team's HO, the design goals of convergence would be: > > 1. Stability: No transient failures, either in Openstack/external > > services or resources themselves, should fail the stack. Therefore, we > > need to have Observers to check for divergence and converge a resource > > if needed, to bring it back to a stable state. > > 2. Resiliency: Heat engines should be able to take up tasks in case of > > failures/restarts. > > 3. Backward compatibility: "We don't break the user space." No existing > > stacks should break. > > > > We started the PoC with these goals in mind; any performance > > optimization would be a plus point for us. Note that I am not neglecting the > > performance goal, just that it should be next in the pipeline. The > > questions we should ask ourselves are: are we storing enough data > > (the state of the stack) in the DB to enable resiliency? Are we distributing the > > load evenly to all Heat engines? Does our notification mechanism > > provide us some form of guarantee or acknowledgement? > > > > In retrospect, we had to struggle a lot to understand the existing > > Heat engine.
We couldn't have done justice by just creating another > > project in GitHub and without any concrete understanding of the existing > > state of affairs. We are not on the same page with Heat core members; we > > are novices and the cores are experts. > > > > I am glad that we experimented with the Heat engine directly. The > > current Heat engine is not resilient and the messaging also lacks > > reliability. We (my team and I guess cores also) understand that async > > message passing would be the way to go as synchronous RPC calls simply > > wouldn't scale. But with async message passing there has to be some > > mechanism of ACKing back, which I think is lacking in the current infrastructure. > > > > How could we provide a stable user-defined stack if the underlying Heat > > core lacks it? Convergence is all about stable stacks. To make the > > current Heat core stable we need to have, at the least: > > 1. Some mechanism to ACK back messages over AMQP. Or some other solid > > mechanism of message passing. > > 2. Some mechanism for fault tolerance in the Heat engine using external > > tools/infrastructure like Celery/Zookeeper. Without external > > infrastructure/tools we will end up bloating the Heat engine with a lot of > > boiler-plate code to achieve this. We had recommended Celery in our > > previous e-mail (from Vishnu). > > > > It was due to our experiments with the Heat engine for this PoC that we > > could come up with the above recommendations. > > > > State of our PoC > > ---------------- > > > > On GitHub: https://github.com/anantpatil/heat-convergence-poc > > > > Our current implementation of the PoC locks the stack after each > > notification to mark the graph as traversed and produce the next level of > > resources for convergence. We are facing challenges in > > removing/minimizing these locks. We also have two different schools of > > thought for solving this lock issue, as mentioned above in Vishnu's > > e-mail. I will describe these in detail in the Wiki.
There would be different > > branches in our GitHub for these two approaches. > > > > It would be helpful if you explained why you need to _lock_ the stack. > MVCC in the database should be enough here. Basically you need to: > > begin transaction > update traversal information > select resolvable nodes > {in code not sql -- send converge commands into async queue} > commit > > Any failure inside this transaction should roll back the transaction and > retry it. It is OK to have duplicate converge commands for a resource. > > This should be the single point of synchronization between workers that > are resolving resources. Or perhaps this is the lock you meant? Either > way, this isn't avoidable if you want to make sure everything is attempted > at least once without having to continuously poll and re-poll the stack > to look for unresolved resources. That is an option, but not one that I > think is going to be as simple as the transactional method. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Tue Dec 2 18:38:32 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 2 Dec 2014 13:38:32 -0500 Subject: [openstack-dev] oslo.concurrency 0.3.0 released In-Reply-To: <547DE8F7.7090206@nemebean.com> References: <547DE8F7.7090206@nemebean.com> Message-ID: One of the changes in this release was a move from using the 'oslo' namespace to using a non-namespaced package 'oslo_concurrency'. We included some shims to allow imports to work correctly, but the hacking rule to verify whether an import is actually a module was not recognizing these shims correctly and so a whole lot of test jobs failed. We are sorry for the inconvenience. A new version of hacking, 0.9.4, has been released with a fix for this problem.
If your pep8 tests failed, you should recheck them when the current infrastructure mirror issue is resolved. If you have a local tox environment, you can rebuild it with 'tox -e pep8 -r' to install the new version of hacking and verify that it works before rechecking on the shared CI systems. We will be providing more guidance on the namespace package change when more of the Oslo libraries are updated. If you're curious, you can check out http://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html now. Thanks, Doug On Dec 2, 2014, at 11:29 AM, Ben Nemec wrote: > The Oslo team is pleased to announce the release of oslo.concurrency 0.3.0. > > This release includes a number of fixes for problems found during the > initial adoption of the library, as well as some functionality > improvements. > > For more details, please see the git log history below and > https://launchpad.net/oslo.concurrency/+milestone/0.3.0 > > Please report issues through launchpad: > https://launchpad.net/oslo.concurrency > > openstack/oslo.concurrency 0.2.0..HEAD > > 54c84da Add external lock fixture > 19f07c6 Add a TODO for retrying pull request #20 > 46c836e Allow the lock delay to be provided > 3bda65c Allow for providing a customized semaphore container > 656f908 Move locale files to proper place > faa30f8 Flesh out the README > bca4a0d Move out of the oslo namespace package > 58de317 Improve testing in py3 environment > fa52a63 Only modify autoindex.rst if it exists > 63e618b Imported Translations from Transifex > d5ea62c lockutils-wrapper cleanup > 78ba143 Don't use variables that aren't initialized > > diffstat (except docs and test files): > > .gitignore | 1 + > .testr.conf | 2 +- > README.rst | 4 +- > .../locale/en_GB/LC_MESSAGES/oslo.concurrency.po | 16 +- > oslo.concurrency/locale/oslo.concurrency.pot | 16 +- > oslo/concurrency/__init__.py | 29 ++ > oslo/concurrency/_i18n.py | 32 -- > oslo/concurrency/fixture/__init__.py | 13 +
oslo/concurrency/fixture/lockutils.py | 51 -- > oslo/concurrency/lockutils.py | 376 -------------- > oslo/concurrency/openstack/__init__.py | 0 > oslo/concurrency/openstack/common/__init__.py | 0 > oslo/concurrency/openstack/common/fileutils.py | 146 ------ > oslo/concurrency/opts.py | 45 -- > oslo/concurrency/processutils.py | 340 ------------ > oslo_concurrency/__init__.py | 0 > oslo_concurrency/_i18n.py | 32 ++ > oslo_concurrency/fixture/__init__.py | 0 > oslo_concurrency/fixture/lockutils.py | 76 +++ > oslo_concurrency/lockutils.py | 502 ++++++++++++++++++ > oslo_concurrency/openstack/__init__.py | 0 > oslo_concurrency/openstack/common/__init__.py | 0 > oslo_concurrency/openstack/common/fileutils.py | 146 ++++++ > oslo_concurrency/opts.py | 45 ++ > oslo_concurrency/processutils.py | 340 ++++++++++++ > requirements-py3.txt | 1 + > requirements.txt | 1 + > setup.cfg | 9 +- > tests/test_lockutils.py | 575 > ++++++++++++++++++++ > tests/test_processutils.py | 519 > +++++++++++++++++++ > tests/test_warning.py | 29 ++ > tests/unit/__init__.py | 0 > tests/unit/test_lockutils.py | 543 > ------------------- > tests/unit/test_lockutils_eventlet.py | 59 --- > tests/unit/test_processutils.py | 518 ------------------ > tox.ini | 8 +- > 42 files changed, 3515 insertions(+), 2135 deletions(-) > > Requirements updates: > > diff --git a/requirements-py3.txt b/requirements-py3.txt > index b1a8722..a27b434 100644 > --- a/requirements-py3.txt > +++ b/requirements-py3.txt > @@ -13,0 +14 @@ six>=1.7.0 > +retrying>=1.2.2,!=1.3.0 # Apache-2.0 > diff --git a/requirements.txt b/requirements.txt > index b1a8722..a27b434 100644 > --- a/requirements.txt > +++ b/requirements.txt > @@ -13,0 +14 @@ six>=1.7.0 > +retrying>=1.2.2,!=1.3.0 # Apache-2.0 > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From nati.ueno at gmail.com Tue Dec 2 18:40:57 2014 From: 
nati.ueno at gmail.com (Nati Ueno) Date: Wed, 3 Dec 2014 03:40:57 +0900 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: Hi folks Congrats! Henry and Kevin. I'll keep contributing to the community, but thank you for working with me as part of the core team :) Best Nachi 2014-12-03 2:05 GMT+09:00 Carl Baldwin : > +1 from me for all the changes. I appreciate the work from all four > of these excellent contributors. I'm happy to welcome Henry and Kevin > as new core reviewers. I also look forward to continuing to work with > Nachi and Bob as important members of the community. > > Carl > > On Tue, Dec 2, 2014 at 8:59 AM, Kyle Mestery wrote: >> Now that we're in the thick of working hard on Kilo deliverables, I'd >> like to make some changes to the neutron core team. Reviews are the >> most important part of being a core reviewer, so we need to ensure >> cores are doing reviews. The stats for the 180 day period [1] indicate >> some changes are needed for cores who are no longer reviewing. >> >> First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from >> neutron-core. Bob and Nachi have been core members for a while now. >> They have contributed to Neutron over the years in reviews, code and >> leading sub-teams. I'd like to thank them for all that they have done >> over the years. I'd also like to propose that should they start >> reviewing more going forward the core team looks to fast track them >> back into neutron-core. But for now, their review stats place them >> below the rest of the team for 180 days. >> >> As part of the changes, I'd also like to propose two new members to >> neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have >> been very active in reviews, meetings, and code for a while now. Henry >> led the DB team which fixed Neutron DB migrations during Juno.
Kevin >> has been actively working across all of Neutron; he's done some great >> work on security fixes and stability fixes in particular. Their >> comments in reviews are insightful and they have helped to onboard new >> reviewers and taken the time to work with people on their patches. >> >> Existing neutron cores, please vote +1/-1 for the addition of Henry >> and Kevin to the core team. >> >> Thanks! >> Kyle >> >> [1] http://stackalytics.com/report/contribution/neutron-group/180 >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Nachi Ueno email:nati.ueno at gmail.com twitter:http://twitter.com/nati From legiangthanh at gmail.com Tue Dec 2 18:51:09 2014 From: legiangthanh at gmail.com (thanh le giang) Date: Wed, 3 Dec 2014 01:51:09 +0700 Subject: [openstack-dev] [CI]Setup CI system behind proxy In-Reply-To: References: Message-ID: Hi, I want to set up everything behind a proxy, but I am stuck at the step of configuring zuul to connect to gerrit. I can't find any option to set a proxy in the zuul.conf file. Thanks 2014-12-03 1:00 GMT+07:00 Zaro : > Could you please clarify? Do you mean you want to set up everything behind > a proxy? Or do you mean you want to set up just gerrit behind a proxy? > > On Mon, Dec 1, 2014 at 6:26 PM, thanh le giang > wrote: > >> Dear all >> >> I have set up a CI system successfully with direct access to the internet. >> Now I have another requirement which requires setting up the CI system behind >> a proxy, but I can't find any way to configure zuul to connect to gerrit >> through a proxy. >> >> Any advice is appreciated.
>> >> Thanks and Regards >> >> -- >> L.G.Thanh >> >> Email: legiangthan at gmail.com >> lgthanh at fit.hcmus.edu.vn >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- L.G.Thanh Email: legiangthan at gmail.com lgthanh at fit.hcmus.edu.vn From nmarkov at mirantis.com Tue Dec 2 18:56:58 2014 From: nmarkov at mirantis.com (Nikolay Markov) Date: Tue, 2 Dec 2014 22:56:58 +0400 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: <547DE513.1080203@redhat.com> References: <547DE513.1080203@redhat.com> Message-ID: Hello all, I actually tried to use Pecan and even created a couple of PoCs, but, due to historical reasons in how our API is organized, it would take much more time to implement all the workarounds we need for issues Pecan doesn't solve out of the box, like working with non-RESTful URLs, reverse URL lookup, returning a custom body in a 404 response, wrapping errors to JSON automatically, etc. As far as I see, each OpenStack project implements its own workarounds for these issues, but still it requires far fewer man-hours for us to move to Flask-Restful instead of Pecan, because all these problems are already solved there. BTW, I know a lot of pretty big projects using Flask (it's the second most popular Web framework after Django in the Python Web community); they even have their own "hall of fame": http://flask.pocoo.org/community/poweredby/ . On Tue, Dec 2, 2014 at 7:13 PM, Ryan Brown wrote: > On 12/02/2014 09:55 AM, Igor Kalnitsky wrote: >> Hi, Sebastian, >> >> Thank you for raising this topic again.
>> >> [snip] >> >> Personally, I'd like to use Flask instead of Pecan, because the first one >> is a more production-ready tool and I like its design. But I believe >> this should be resolved by voting. >> >> Thanks, >> Igor >> >> On Tue, Dec 2, 2014 at 4:19 PM, Sebastian Kalinowski >> wrote: >>> Hi all, >>> >>> [snip explanation+history] >>> >>> Best, >>> Sebastian > > Given that Pecan is used for other OpenStack projects and has plenty of > builtin functionality (REST support, sessions, etc) I'd prefer it for a > number of reasons. > > 1) Wouldn't have to pull in plugins for standard (in Pecan) things > 2) Pecan is built for high traffic, where Flask is aimed at much smaller > projects > 3) Already used by other OpenStack projects, so common patterns can be > reused as oslo libs > > Of course, the Flask community seems larger (though the average flask > project seems pretty small). > > I'm not sure what determines "production readiness", but it seems to me > like Fuel developers fall more in Pecan's target audience than in Flask's. > > My $0.02, > Ryan > > -- > Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
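The "wrapping errors to JSON automatically" requirement mentioned earlier in this thread is, at bottom, a thin layer over WSGI whichever framework is chosen. A minimal framework-agnostic sketch — every name below is invented for illustration; Flask-RESTful ships equivalent behaviour out of the box:

```python
import json

def json_errors(app):
    """WSGI middleware: turn any unhandled exception into a JSON body."""
    def wrapped(environ, start_response):
        try:
            return app(environ, start_response)
        except Exception as exc:
            body = json.dumps({"error": str(exc)}).encode()
            start_response("500 Internal Server Error",
                           [("Content-Type", "application/json")])
            return [body]
    return wrapped

def failing_app(environ, start_response):
    # stand-in for a request handler that blows up
    raise ValueError("boom")

app = json_errors(failing_app)

# drive the WSGI callable by hand instead of running a server
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

body = b"".join(app({}, fake_start_response))
print(captured["status"], body.decode())
```

Custom 404 bodies can be handled the same way, by intercepting the status passed to `start_response`; the point is only that neither framework choice is blocked on this feature.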
> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Nick Markov From ayoung at redhat.com Tue Dec 2 19:02:14 2014 From: ayoung at redhat.com (Adam Young) Date: Tue, 02 Dec 2014 14:02:14 -0500 Subject: [openstack-dev] [keystone][oslo] Handling contexts and policy enforcement in services In-Reply-To: <8E09079C-B047-4319-9660-B1118FD9DC00@doughellmann.com> References: <1417398680.3087.1.camel@redhat.com> <8E09079C-B047-4319-9660-B1118FD9DC00@doughellmann.com> Message-ID: <547E0CB6.3080507@redhat.com> On 12/01/2014 09:09 AM, Doug Hellmann wrote: > On Nov 30, 2014, at 8:51 PM, Jamie Lennox wrote: > >> TL;DR: I think we can handle most of oslo.context with some additions to >> auth_token middleware and simplify policy enforcement (from a service >> perspective) at the same time. >> >> There is currently a push to release oslo.context as a >> library, for reference: >> https://github.com/openstack/oslo.context/blob/master/oslo_context/context.py >> >> Whilst I love the intent to standardize this >> functionality I think that many of the requirements in there >> are incorrect and don't apply to all services. It is my >> understanding for example that read_only, show_deleted are >> essentially nova requirements, and the use of is_admin needs >> to be killed off, not standardized. >> >> Currently each service builds a context based on headers >> made available from auth_token middleware and some >> additional interpretations based on that user >> authentication. Each service does this slightly differently >> based on its needs/when it copied it from nova. >> >> I propose that auth_token middleware essentially handle the >> creation and management of an authentication object that >> will be passed and used by all services. 
This will >> standardize so much of the oslo.context library that I'm not >> sure it will still be needed. I bring this up now as I am >> wanting to push this way and don't want to change things >> after everyone has adopted oslo.context. > We put the context class in its own library because both oslo.messaging and oslo.log [1] need to have input into its API. If the middleware wants to get involved in adding to the API that's fine, but context is not only used for authentication so the middleware can't own it. I think oslo context is mixing in the auth/access info with the rest of the request. In policy, the access info is on the left, and the target is on the right. The target should be exposed by the end service, but the access info comes from the token. I took a swag at what this should really look like from a keystone perspective. I started with a core domain model in pure python objects: Spec is here. https://review.openstack.org/#/c/135774/ Code is here. https://review.openstack.org/#/c/137231/ However, I am going to move that code to the keystoneclient repository. It will be populated from the token JSON and available for consumption by keystonemiddleware.auth_token, policy (wherever that ends up living) as well as any other code that wants it. Policy enforcement should not make use of the Oslo Context code. It will be a complete, self-contained library. Instead, each of the services will start with the access info from KC and then provide their own data for the target side of policy. We have a chain of specs. I provided an overview of where policy enforcement is going here: https://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/ > > [1] https://review.openstack.org/132551 > > >> The current release of auth_token middleware creates and >> passes to services (via env['keystone.token_auth']) an auth >> plugin that can be passed to clients to use the current user >> authentication.
My intention here is to expand that object >> to expose all of the authentication information required for >> the services to operate. >> >> There are two components to context that I can see: >> >> - The current authentication information that is retrieved >> from auth_token middleware. >> - service-specific context added based on that user >> information, e.g. read_only, show_deleted, is_admin, >> resource_id >> >> Regarding the first point of current authentication there >> are three places I can see this used: >> >> - communicating with other services as that user >> - associating resources with a user/project >> - policy enforcement > One of the specific logging requests we've had from operators is to have the logs show the authentication context clearly and consistently (i.e., the same format whether domains are used in the deployment or not). That's an aspect of the spec linked above. > >> Addressing each of the 'current authentication' needs: >> >> - As mentioned for service to service communication >> auth_token middleware already provides an auth_plugin >> that can be used with (at this point most of) the >> clients. This greatly simplifies reusing an existing >> token and correctly using the service catalog as each >> client would do this differently. In future this plugin >> will be extended to provide support for concepts such as >> filling in the X-Service-Token [1] on behalf of the >> service, managing the request id, and generally >> standardizing service->service communication without >> requiring explicit support from every project and client. >> >> - Given that this authentication plugin is built within >> auth_token middleware it is a fairly trivial step to >> provide public properties on this object to give access >> to the current user_id, project_id and other relevant >> authentication data that the services can access.
This is >> fairly well handled today, but it means it is done without >> the service having to fetch all these objects from >> headers. > That sounds like a good source of data to populate the context object. > >> - With upcoming changes to policy to handle features such >> as the X-Service-Token the existing context will need to >> gain a bunch of new entries. With the keystone team >> looking to wrap policy enforcement into its own >> standalone library it makes more sense to provide this >> authentication object directly to policy enforcement. >> This will allow the keystone team to manipulate policy >> data from both auth_token and the enforcement side, >> letting us introduce new features to policy transparently >> to the services. It will also standardize the naming of >> variables within these policy files. >> >> What is left for a context object after this is managing >> serialization and deserialization of this auth object and >> any additional fields (read_only, etc.) that are generally >> calculated at context creation time. This would be a very >> small library. > That's not all it will do, but it will be small. As I mentioned above, we isolated it in its own library to control dependencies because several aspects of the system want to add to the API. > >> There are still a number of steps to getting there: >> >> - Adding enough data to the existing authentication plugin >> to allow policy enforcement and general usage. >> - Making the authentication object serializable for >> transmitting between services. >> - Extracting policy enforcement into a library. >> >> However I think that this approach brings enough benefits to >> hold off on releasing and standardizing the use of the >> current context objects. >> >> I'd love to hear everyone's thoughts on this, and where it >> would fall down.
I see there could be some issues with how >> the context would fit into nova's versioned objects for >> example - but I think this would be the same issues that an >> oslo.context library would face anyway. >> >> Jamie >> >> >> [1] This is where service->service communication includes >> the service token as well as the user token to allow smarter >> policy and resource access. For example, a user can't access >> certain neutron functions directly however it should be >> allowed when nova calls neutron on behalf of a user, or an >> object that a service made on behalf of a user can only be >> deleted when the service makes the request on behalf of that >> user. >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From iwienand at redhat.com Tue Dec 2 19:22:31 2014 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 03 Dec 2014 06:22:31 +1100 Subject: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023) In-Reply-To: <1417495283-sup-444@fewbar.com> References: <54755DE7.2070107@redhat.com> <547607C3.3090604@nemebean.com> <1417495283-sup-444@fewbar.com> Message-ID: <547E1177.4030705@redhat.com> On 12/02/2014 03:46 PM, Clint Byrum wrote: > 1) Conform all o-r-c scripts to the logging standards we have in > OpenStack, or write new standards for diskimage-builder and conform > them to those standards. Abolish non-conditional xtrace in any script > conforming to the standards. Honestly in the list of things that need doing in openstack, this must be near the bottom. The whole reason I wrote this is because "disk-image-create -x ..." doesn't do what any reasonable person expects it to; i.e. trace all the scripts it starts. 
Having a way to trace execution of all d-i-b scripts is all that's needed and gives sufficient detail to debug issues. -i From sean.k.mooney at intel.com Tue Dec 2 19:44:23 2014 From: sean.k.mooney at intel.com (Mooney, Sean K) Date: Tue, 2 Dec 2014 19:44:23 +0000 Subject: [openstack-dev] [nova] [libvirt] enabling per node filtering of mempage sizes Message-ID: <4B1BB321037C0849AAE171801564DFA630E602DE@IRSMSX107.ger.corp.intel.com> Hi all, I have submitted a small blueprint to allow filtering of the available memory pages reported by libvirt. https://blueprints.launchpad.net/nova/+spec/libvirt-allowed-mempage-sizes I believe that this change is small enough to not require a spec as per http://docs.openstack.org/developer/nova/devref/kilo.blueprints.html. If a core (and others are welcome too :)) has time to review my blueprint and confirm that a spec is not required, I would be grateful, as the spec proposal deadline is rapidly approaching. I have WIP code developed which I hope to make available for review once I add unit tests. All relevant details (copied below) are included in the whiteboard for the blueprint. Regards, Sean Problem description =================== In the Kilo cycle, the virt drivers' large pages feature [1] was introduced to allow guests to request the type of memory backing that they desire via a flavor or image metadata. In certain configurations, it may be desired or required to filter the memory pages available to VMs booted on a node. At present no mechanism exists to allow filtering of reported memory pages. Use Cases ---------- On a host that only supports vhost-user or ivshmem, all VMs are required to use large page memory. If a VM is booted with standard pages with these interfaces, network connectivity will not be available. In this case it is desirable to filter out small/4k pages when reporting available memory to the scheduler.
Proposed change =============== This blueprint proposes adding a new config variable (allowed_memory_pagesize) to the libvirt section of nova.conf:

    cfg.ListOpt('allowed_memory_pagesize',
                default=['any'],
                help='List of allowed memory page sizes. '
                     'Syntax is SizeA,SizeB, e.g. small,large. '
                     'Valid sizes are: small,large,any,4,2048,1048576')

The _get_host_capabilities function in nova/nova/virt/libvirt/driver.py will be modified to filter the mempages reported for each cell based on the value of CONF.libvirt.allowed_memory_pagesize. If small is set, then only 4k pages will be reported. If large is set, 2MB and 1GB pages will be reported. If any is set, no filtering will be applied. The default value of "any" was chosen to ensure that this change has no effect on existing deployments. References ========== [1] - https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages From doug at doughellmann.com Tue Dec 2 19:48:35 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 2 Dec 2014 14:48:35 -0500 Subject: [openstack-dev] [oslo][qa] Moving the hacking project under the QA program Message-ID: <970ED1FB-A411-4A35-A0F1-47073A269F9F@doughellmann.com> The hacking project is currently managed by the Oslo team. Normally Joe Gordon handles all of the release work, and so I haven't bothered to look at how the Launchpad project is set up or how the branches are managed before today. However, today's issue with oslo.concurrency resulted in a need to hurry a release, and in the process of doing that I realized that it's not set up at all like the other Oslo projects. When I started thinking about how to get that done, I also realized that maybe it's a better fit for the QA program anyway, since it has to do with code quality and isn't really a "library". I talked to Matt and Joe and they agreed that the QA program would be willing to take over managing hacking.
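The filtering Sean's blueprint above proposes could look roughly like this sketch. filter_mempages is a hypothetical helper, not the actual code in nova/virt/libvirt/driver.py; sizes are in KiB as libvirt reports them:

```python
# 'small' maps to 4k pages; 'large' maps to 2MB and 1GB hugepages.
SMALL_KB = {4}
LARGE_KB = {2048, 1048576}


def filter_mempages(reported_sizes, allowed):
    """Filter libvirt-reported page sizes per allowed_memory_pagesize."""
    if 'any' in allowed:
        return list(reported_sizes)
    permitted = set()
    for entry in allowed:
        if entry == 'small':
            permitted |= SMALL_KB
        elif entry == 'large':
            permitted |= LARGE_KB
        else:
            permitted.add(int(entry))  # an explicit size in KiB
    return [size for size in reported_sizes if size in permitted]


# A host reporting 4k, 2MB and 1GB pages, restricted to large pages only:
print(filter_mempages([4, 2048, 1048576], ['large']))  # [2048, 1048576]
```

With the default of ['any'] the reported list passes through untouched, matching the stated goal of not affecting existing deployments.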
Matt posted a governance change, and this email thread is mostly so we'll have the thought process behind the move on the record [1]. I don't really expect any significant changes to the way hacking is managed. From my perspective, the change is more about standardizing the Oslo library management further and less about hacking. Joe is happy to have the core review team largely stay the same, although we should ask members of oslo-core if they want to still be on hacking-core rather than just assuming (talk to jogo on IRC to make sure you're on the list if you want to be). Doug [1] https://review.openstack.org/138499 From clint at fewbar.com Tue Dec 2 19:49:34 2014 From: clint at fewbar.com (Clint Byrum) Date: Tue, 02 Dec 2014 11:49:34 -0800 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <1417464757-sup-2646@fewbar.com> Message-ID: <1417548734-sup-3382@fewbar.com> Excerpts from Anant Patil's message of 2014-12-02 10:37:31 -0800: > Yes, that's the synchronization block for which we use the stack lock. > Currently, a thread spin waits to acquire the lock to enter this critical > section. > > I don't really know how to do application-level transactions. Is there an > external library for that? AFAIK, we cannot switch from DB transaction to > application level to do some computation to resolve the next set of resources, > submit to an async queue, and again continue with the DB transaction. The > transactional method looks attractive and clean to me but I am limited by > my knowledge. Or do you mean to have the DB transaction object made available > via DB APIs and use them in the application? Please share your thoughts. > Every DB request has a context attached to the currently running greenthread which has the DB session object.
So yes, you do begin the transaction in one db API call, and try to commit it in another after having attempted any application logic. The whole thing should always be in a try/except retry loop to handle deadlocks and conflicts introduced by multi-master sync replication like Galera. For non-transactional backends, they would have to use a spin lock like you're using now. > To avoid locking the stack, we were thinking of designating a single engine > with responsibility of processing all notifications for a stack. All the > workers will notify on the stack topic, which only one engine listens to > and then the notifications end up in a queue (local to the engine, per > stack), from where they are taken up one-by-one to continue the stack > operation. The convergence jobs are produced by this engine for the stack > it is responsible for, and they might end-up in any of the engines. But the > notifications for a stack are directed to one engine to avoid contention > for lock. The convergence load is leveled and distributed, and stack lock > is not needed. > No don't do that. You already have that engine, it is the database engine and it is intended to be used for synchronization and will achieve a higher degree of concurrency in bigger stacks because it will only block two workers when they try to inspect or change the same rows. From krotscheck at gmail.com Tue Dec 2 19:49:42 2014 From: krotscheck at gmail.com (Michael Krotscheck) Date: Tue, 02 Dec 2014 19:49:42 +0000 Subject: [openstack-dev] [Fuel][Nailgun] Web framework References: <547DE513.1080203@redhat.com> Message-ID: This sounds more like you need to pay off technical debt and clean up your API. 
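The transactional pattern Clint describes above (begin a transaction, run the application logic, commit, and retry on deadlocks or write conflicts from multi-master replication) can be sketched as below. The session class is a toy stand-in so the example is self-contained; it is not the real oslo.db or SQLAlchemy API:

```python
class DeadlockDetected(Exception):
    """Stand-in for the deadlock/conflict error a Galera-backed DB raises."""


class FakeSession(object):
    """Toy session that deadlocks a fixed number of times, then succeeds."""

    def __init__(self, deadlocks):
        self.deadlocks = deadlocks
        self.attempts = 0

    def begin(self):
        self.attempts += 1

    def commit(self):
        if self.deadlocks > 0:
            self.deadlocks -= 1
            raise DeadlockDetected()

    def rollback(self):
        pass


def run_in_transaction(session, work, max_retries=5):
    """Begin, apply application logic, commit; retry when a conflicting
    writer won the rows and our commit was rejected."""
    for _ in range(max_retries):
        try:
            session.begin()
            result = work(session)  # application logic inside the txn
            session.commit()
            return result
        except DeadlockDetected:
            session.rollback()  # discard our attempt and try again
    raise RuntimeError('gave up after %d retries' % max_retries)


session = FakeSession(deadlocks=2)
print(run_in_transaction(session, lambda s: 'updated'))  # updated
```

The retry loop is what makes the database, rather than an external stack lock, the synchronization point: contending workers simply lose the commit and try again.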
Michael On Tue Dec 02 2014 at 10:58:43 AM Nikolay Markov wrote: > Hello all, > > I actually tried to use Pecan and even created a couple of PoCs, but > there due to historical reasons of how our API is organized it will > take much more time to implement all workarounds we need to issues > Pecan doesn't solve out of the box, like working with non-RESTful > URLs, reverse URL lookup, returning custom body in 404 response, > wrapping errors to JSON automatically, etc. > > As far as I see, each OpenStack project implements its own workarounds > for these issues, but still it requires much less men and hours for us > to move to Flask-Restful instead of Pecan, because all these problems > are already solved there. > > BTW, I know a lot of pretty big projects using Flask (it's the second > most popular Web framework after Django in Python Web community), they > even have their own "hall of fame": > http://flask.pocoo.org/community/poweredby/ . > > On Tue, Dec 2, 2014 at 7:13 PM, Ryan Brown wrote: > > On 12/02/2014 09:55 AM, Igor Kalnitsky wrote: > >> Hi, Sebastian, > >> > >> Thank you for raising this topic again. > >> > >> [snip] > >> > >> Personally, I'd like to use Flask instead of Pecan, because first one > >> is more production-ready tool and I like its design. But I believe > >> this should be resolved by voting. > >> > >> Thanks, > >> Igor > >> > >> On Tue, Dec 2, 2014 at 4:19 PM, Sebastian Kalinowski > >> wrote: > >>> Hi all, > >>> > >>> [snip explanation+history] > >>> > >>> Best, > >>> Sebastian > > > > Given that Pecan is used for other OpenStack projects and has plenty of > > builtin functionality (REST support, sessions, etc) I'd prefer it for a > > number of reasons. 
> > > > 1) Wouldn't have to pull in plugins for standard (in Pecan) things > > 2) Pecan is built for high traffic, where Flask is aimed at much smaller > > projects > > 3) Already used by other OpenStack projects, so common patterns can be > > reused as oslo libs > > > > Of course, the Flask community seems larger (though the average flask > > project seems pretty small). > > > > I'm not sure what determines "production readiness", but it seems to me > > like Fuel developers fall more in Pecan's target audience than in > Flask's. > > > > My $0.02, > > Ryan > > > > -- > > Ryan Brown / Software Engineer, Openstack / Red Hat, Inc. > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Best regards, > Nick Markov > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe.gordon0 at gmail.com Tue Dec 2 19:59:25 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Tue, 2 Dec 2014 21:59:25 +0200 Subject: [openstack-dev] [oslo][qa] Moving the hacking project under the QA program In-Reply-To: <970ED1FB-A411-4A35-A0F1-47073A269F9F@doughellmann.com> References: <970ED1FB-A411-4A35-A0F1-47073A269F9F@doughellmann.com> Message-ID: On Tue, Dec 2, 2014 at 9:48 PM, Doug Hellmann wrote: > The hacking project is currently managed by the Oslo team. Normally Joe > Gordon handles all of the release work, and so I haven?t bothered to look > at how the Launchpad project is set up or how the branches are managed > before today. However, today?s issue with oslo.concurrency resulted in a > need to hurry a release, and in the process of doing that I realized that > it?s not set up at all like the other Oslo projects. 
When I started > thinking about how to get that done, I also realized that maybe it?s a > better fit for the QA program anyway, since it has to do with code quality > and isn?t really a ?library?. > One of the outcomes if this whole incident is we just grew the hacking-release team to include qa-release as well. So in case another issue like this arises we just need one of three people to be available to tag a release. > > I talked to Matt and Joe and they agreed that the QA program would be > willing to take over managing hacking. Matt posted a governance change, and > this email thread is mostly so we?ll have the thought process behind the > move on the record [1]. > > I don?t really expect any significant changes to the way hacking is > managed. From my perspective, the change is more about standardizing the > Oslo library management further and less about hacking. Joe is happy to > have the core review team largely stay the same, although we should ask > members of oslo-core if they want to still be on hacking-core rather than > just assuming (talk to jogo on IRC to make sure you?re on the list if you > want to be). > Yup, if you are currently oslo-core and would like to continue being hacking-core just find me on IRC. > > Doug > > [1] https://review.openstack.org/138499 > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nmarkov at mirantis.com Tue Dec 2 20:00:56 2014 From: nmarkov at mirantis.com (Nikolay Markov) Date: Wed, 3 Dec 2014 00:00:56 +0400 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> Message-ID: Michael, we already solved all issues I described, and I just don't want to solve them once again after moving to another framework. 
Also, I think, nothing of these wishes contradicts with good API design. On Tue, Dec 2, 2014 at 10:49 PM, Michael Krotscheck wrote: > This sounds more like you need to pay off technical debt and clean up your > API. > > Michael > > On Tue Dec 02 2014 at 10:58:43 AM Nikolay Markov > wrote: >> >> Hello all, >> >> I actually tried to use Pecan and even created a couple of PoCs, but >> there due to historical reasons of how our API is organized it will >> take much more time to implement all workarounds we need to issues >> Pecan doesn't solve out of the box, like working with non-RESTful >> URLs, reverse URL lookup, returning custom body in 404 response, >> wrapping errors to JSON automatically, etc. >> >> As far as I see, each OpenStack project implements its own workarounds >> for these issues, but still it requires much less men and hours for us >> to move to Flask-Restful instead of Pecan, because all these problems >> are already solved there. >> >> BTW, I know a lot of pretty big projects using Flask (it's the second >> most popular Web framework after Django in Python Web community), they >> even have their own "hall of fame": >> http://flask.pocoo.org/community/poweredby/ . >> >> On Tue, Dec 2, 2014 at 7:13 PM, Ryan Brown wrote: >> > On 12/02/2014 09:55 AM, Igor Kalnitsky wrote: >> >> Hi, Sebastian, >> >> >> >> Thank you for raising this topic again. >> >> >> >> [snip] >> >> >> >> Personally, I'd like to use Flask instead of Pecan, because first one >> >> is more production-ready tool and I like its design. But I believe >> >> this should be resolved by voting. >> >> >> >> Thanks, >> >> Igor >> >> >> >> On Tue, Dec 2, 2014 at 4:19 PM, Sebastian Kalinowski >> >> wrote: >> >>> Hi all, >> >>> >> >>> [snip explanation+history] >> >>> >> >>> Best, >> >>> Sebastian >> > >> > Given that Pecan is used for other OpenStack projects and has plenty of >> > builtin functionality (REST support, sessions, etc) I'd prefer it for a >> > number of reasons. 
>> > >> > 1) Wouldn't have to pull in plugins for standard (in Pecan) things >> > 2) Pecan is built for high traffic, where Flask is aimed at much smaller >> > projects >> > 3) Already used by other OpenStack projects, so common patterns can be >> > reused as oslo libs >> > >> > Of course, the Flask community seems larger (though the average flask >> > project seems pretty small). >> > >> > I'm not sure what determines "production readiness", but it seems to me >> > like Fuel developers fall more in Pecan's target audience than in >> > Flask's. >> > >> > My $0.02, >> > Ryan >> > >> > -- >> > Ryan Brown / Software Engineer, Openstack / Red Hat, Inc. >> > >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> Best regards, >> Nick Markov >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Nick Markov From apevec at gmail.com Tue Dec 2 20:59:32 2014 From: apevec at gmail.com (Alan Pevec) Date: Tue, 2 Dec 2014 21:59:32 +0100 Subject: [openstack-dev] [stable] New config options, no default change In-Reply-To: <350487463.25449839.1417539160631.JavaMail.zimbra@redhat.com> References: <20141111103053.GD7366@redhat.com> <5461FC66.6060103@openstack.org> <5468F85B.10901@electronicjungle.net> <54748DC0.50400@redhat.com> <547CF9A1.4070009@electronicjungle.net> <350487463.25449839.1417539160631.JavaMail.zimbra@redhat.com> Message-ID: > Will the bugs created by this end up in the openstack-manuals project (which I don't think is the right place for them in this case) or has it 
been set up to create them somewhere else (or not at all) when the commits are against the stable branches? Docs team, how do you handle DocImpact on stable branches, and do you mind (mis)using your tag like this? I thought DocImpact only triggers a notification for the docs team; if there's more machinery behind it, like automatic bug filing, then we should use a different tag as a stable release-notes marker. Cheers, Alan From slukjanov at mirantis.com Tue Dec 2 21:00:29 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Wed, 3 Dec 2014 01:00:29 +0400 Subject: [openstack-dev] [infra] Gerrit downtime on Dec 6 for project renames Message-ID: Hi, on Saturday, Dec 6 at 16:00 UTC Gerrit will be unavailable for about 30 minutes while we rename some projects. Existing reviews, project watches, etc., should all be carried over. So far, we have the following list of projects to rename: * stackforge/heat-translator -> openstack/heat-translator - https://review.openstack.org/#/c/131558/ * stackforge/tooz -> openstack/tooz - https://review.openstack.org/#/c/135215/ * openstack-infra/gerrit-powered-agenda -> openstack-infra/yaml2ical - https://review.openstack.org/138432/ Some other projects could be added to the list. Thanks. -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From anne at openstack.org Tue Dec 2 21:28:21 2014 From: anne at openstack.org (Anne Gentle) Date: Tue, 2 Dec 2014 15:28:21 -0600 Subject: [openstack-dev] [stable] New config options, no default change In-Reply-To: References: <20141111103053.GD7366@redhat.com> <5461FC66.6060103@openstack.org> <5468F85B.10901@electronicjungle.net> <54748DC0.50400@redhat.com> <547CF9A1.4070009@electronicjungle.net> <350487463.25449839.1417539160631.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Dec 2, 2014 at 2:59 PM, Alan Pevec wrote: > > Will the bugs created by this end up in the openstack-manuals project > (which I don't think is the right place for them in this case) or has it > been set up to create them somewhere else (or not at all) when the commits > are against the stable branches? > > Docs team, how do you handle DocImpact on stable/branches, do you mind > (mis)using your tag like this? > I thought DocImpat only triggers a notification for the docs team, if > there's more machinery behind it like automatic bug filing then we > should use a different tag as stable relnotes marker. > We have recently enabled DocImpact tracking for more projects. DocImpact currently creates a bug in the project indicated by the project's build config in openstack-infra/project-config/gerrit/projects.yaml setting docimpact-group:, such as: docimpact-group: oslo If you want to track your DocImpact feel free to use a Launchpad project for docimpact-group that makes sense to you. Anne > Cheers, > Alan > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sgordon at redhat.com Tue Dec 2 21:44:17 2014 From: sgordon at redhat.com (Steve Gordon) Date: Tue, 2 Dec 2014 16:44:17 -0500 (EST) Subject: [openstack-dev] [stable] New config options, no default change In-Reply-To: References: <5468F85B.10901@electronicjungle.net> <54748DC0.50400@redhat.com> <547CF9A1.4070009@electronicjungle.net> <350487463.25449839.1417539160631.JavaMail.zimbra@redhat.com> Message-ID: <593930077.25671392.1417556657112.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Anne Gentle" > To: "OpenStack Development Mailing List (not for usage questions)" > > On Tue, Dec 2, 2014 at 2:59 PM, Alan Pevec wrote: > > > > Will the bugs created by this end up in the openstack-manuals project > > (which I don't think is the right place for them in this case) or has it > > been set up to create them somewhere else (or not at all) when the commits > > are against the stable branches? > > > > Docs team, how do you handle DocImpact on stable/branches, do you mind > > (mis)using your tag like this? > > I thought DocImpat only triggers a notification for the docs team, if > > there's more machinery behind it like automatic bug filing then we > > should use a different tag as stable relnotes marker. > > > > We have recently enabled DocImpact tracking for more projects. DocImpact > currently creates a bug in the project indicated by the project's build > config in openstack-infra/project-config/gerrit/projects.yaml setting > docimpact-group:, such as: > docimpact-group: oslo > > If you want to track your DocImpact feel free to use a Launchpad project > for docimpact-group that makes sense to you. > Anne In this case we're referring to how we handle DocImpact for specific branches of a project (say stable/juno of Nova, for example). I don't think we'd want to change the DocImpact handling for the entire project to go somewhere other than openstack-manuals. 
As far as I know the current setup doesn't support us changing the handling per branch though, only per project. -Steve From anne at openstack.org Tue Dec 2 21:59:37 2014 From: anne at openstack.org (Anne Gentle) Date: Tue, 2 Dec 2014 15:59:37 -0600 Subject: [openstack-dev] [OpenStack-docs] [stable] New config options, no default change In-Reply-To: <593930077.25671392.1417556657112.JavaMail.zimbra@redhat.com> References: <5468F85B.10901@electronicjungle.net> <54748DC0.50400@redhat.com> <547CF9A1.4070009@electronicjungle.net> <350487463.25449839.1417539160631.JavaMail.zimbra@redhat.com> <593930077.25671392.1417556657112.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Dec 2, 2014 at 3:44 PM, Steve Gordon wrote: > ----- Original Message ----- > > From: "Anne Gentle" > > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > > > > On Tue, Dec 2, 2014 at 2:59 PM, Alan Pevec wrote: > > > > > > Will the bugs created by this end up in the openstack-manuals project > > > (which I don't think is the right place for them in this case) or has > it > > > been set up to create them somewhere else (or not at all) when the > commits > > > are against the stable branches? > > > > > > Docs team, how do you handle DocImpact on stable/branches, do you mind > > > (mis)using your tag like this? > > > I thought DocImpat only triggers a notification for the docs team, if > > > there's more machinery behind it like automatic bug filing then we > > > should use a different tag as stable relnotes marker. > > > > > > > We have recently enabled DocImpact tracking for more projects. DocImpact > > currently creates a bug in the project indicated by the project's build > > config in openstack-infra/project-config/gerrit/projects.yaml setting > > docimpact-group:, such as: > > docimpact-group: oslo > > > > If you want to track your DocImpact feel free to use a Launchpad project > > for docimpact-group that makes sense to you. 
> > Anne > > In this case we're referring to how we handle DocImpact for specific > branches of a project (say stable/juno of Nova, for example). I don't think > we'd want to change the DocImpact handling for the entire project to go > somewhere other than openstack-manuals. As far as I know the current setup > doesn't support us changing the handling per branch though, only per > project. > Yep, I do agree. And honestly we don't have the resources to cover stable branches docs in addition to the ongoing. Is there another way to tag your work so that you remember to put it in release notes? Anne > > -Steve > > _______________________________________________ > OpenStack-docs mailing list > OpenStack-docs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgordon at redhat.com Tue Dec 2 22:13:22 2014 From: sgordon at redhat.com (Steve Gordon) Date: Tue, 2 Dec 2014 17:13:22 -0500 (EST) Subject: [openstack-dev] [Telco][NFV] Framing use case contributions In-Reply-To: <118363379.25687877.1417558219823.JavaMail.zimbra@redhat.com> Message-ID: <1749440907.25689824.1417558402631.JavaMail.zimbra@redhat.com> Hi all, Based on last week's conversation [1] I have tried to update the use cases page to remove the emphasis on requirements/solutions and focus on the "what" and "why" and to provide some guidance as to initial scope to help those wishing to submit use cases in framing them. 
Please review and see if I have missed anything: https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases#Contributing_Use_Cases Thanks, Steve [1] http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-11-26-22.00.log.html From clint at fewbar.com Tue Dec 2 22:30:47 2014 From: clint at fewbar.com (Clint Byrum) Date: Tue, 02 Dec 2014 14:30:47 -0800 Subject: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023) In-Reply-To: <547E1177.4030705@redhat.com> References: <54755DE7.2070107@redhat.com> <547607C3.3090604@nemebean.com> <1417495283-sup-444@fewbar.com> <547E1177.4030705@redhat.com> Message-ID: <1417559171-sup-8883@fewbar.com> Excerpts from Ian Wienand's message of 2014-12-02 11:22:31 -0800: > On 12/02/2014 03:46 PM, Clint Byrum wrote: > > 1) Conform all o-r-c scripts to the logging standards we have in > > OpenStack, or write new standards for diskimage-builder and conform > > them to those standards. Abolish non-conditional xtrace in any script > > conforming to the standards. > > Honestly in the list of things that need doing in openstack, this must > be near the bottom. > > The whole reason I wrote this is because "disk-image-create -x ..." > doesn't do what any reasonable person expects it to; i.e. trace all > the scripts it starts. > > Having a way to trace execution of all d-i-b scripts is all that's > needed and gives sufficient detail to debug issues. Several developers have expressed their concern for an all-or-nothing approach to this. The concern is that when you turn off the "trace all" you lose the author-intended "info" level messages. I for one find the idea of printing every cp, cat, echo and ls command out rather frustratingly verbose when scanning logs from a normal run. Do I want it sometimes? YES, but it will actually hinder normal image building iteration if we only have a toggle of "all trace" or "no trace". 
From apevec at gmail.com Tue Dec 2 22:41:47 2014 From: apevec at gmail.com (Alan Pevec) Date: Tue, 2 Dec 2014 23:41:47 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: <547DD4F8.6020306@redhat.com> References: <547DD4F8.6020306@redhat.com> Message-ID: >> General: cap Oslo and client library versions - sync from >> openstack/requirements stable/juno, would be good to include in >> the release. >> https://review.openstack.org/#/q/status:open+branch:stable/juno+topic:openstack/requirements,n,z > > +2, >> > let's keep all deps in sync. Those updates do not break anything > for existing users. Just spotted it, there is now proposal to revert caps in Juno: https://review.openstack.org/138546 Doug, shall we stop merging caps to projects in Juno? Cheers, Alan From doug at doughellmann.com Tue Dec 2 23:07:48 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 2 Dec 2014 18:07:48 -0500 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: References: <547DD4F8.6020306@redhat.com> Message-ID: <98D81949-644E-4DCE-8F29-18C64F06DAE2@doughellmann.com> On Dec 2, 2014, at 5:41 PM, Alan Pevec wrote: >>> General: cap Oslo and client library versions - sync from >>> openstack/requirements stable/juno, would be good to include in >>> the release. >>> https://review.openstack.org/#/q/status:open+branch:stable/juno+topic:openstack/requirements,n,z >> >> +2, >>> >> let's keep all deps in sync. Those updates do not break anything >> for existing users. > > Just spotted it, there is now proposal to revert caps in Juno: > > https://review.openstack.org/138546 > > Doug, shall we stop merging caps to projects in Juno? Today we found that when we have caps in place that do not overlap with the versions used in master, we can?t upgrade services one at a time on a host running multiple services. 
We didn't have this problem between icehouse and juno because I used the same cap values for both releases, so we didn't trigger any problems with grenade. One solution is to undo the caps and then add caps when we discover issues in new versions of libraries and stable branches. Another is to require applications to work with 'old' versions of libraries and degrade their feature set, so that we can keep the lower bounds overlapping. In retrospect, this issue with caps was obvious, but I don't remember it being raised in the planning. As Sean pointed out on IRC today, we should have someone write a spec for changing the way we deal with requirements so we can think about it before deciding what to do. After the releases today, the 'no more alpha versions for Oslo' ship has sailed. Removing the caps will at least let us move ahead while we figure out what to do for stable branches. Doug From apevec at gmail.com Tue Dec 2 23:10:21 2014 From: apevec at gmail.com (Alan Pevec) Date: Wed, 3 Dec 2014 00:10:21 +0100 Subject: [openstack-dev] [OpenStack-docs] [stable] New config options, no default change In-Reply-To: References: <5468F85B.10901@electronicjungle.net> <54748DC0.50400@redhat.com> <547CF9A1.4070009@electronicjungle.net> <350487463.25449839.1417539160631.JavaMail.zimbra@redhat.com> <593930077.25671392.1417556657112.JavaMail.zimbra@redhat.com> Message-ID: >> In this case we're referring to how we handle DocImpact for specific >> branches of a project (say stable/juno of Nova, for example). I don't think >> we'd want to change the DocImpact handling for the entire project to go >> somewhere other than openstack-manuals. As far as I know the current setup >> doesn't support us changing the handling per branch though, only per >> project. > > > Yep, I do agree. And honestly we don't have the resources to cover stable > branches docs in addition to the ongoing. > > Is there another way to tag your work so that you remember to put it in > release notes? 
We just started discussing this and it is not used yet and we'll pick some other tag. While at it, is there an OpenStack-wide registry for tags in the commit messages? For now I've been simply collecting stable releasenote-worthy changes manually in etherpad https://etherpad.openstack.org/p/StableJuno Cheers, Alan From anne at openstack.org Tue Dec 2 23:26:42 2014 From: anne at openstack.org (Anne Gentle) Date: Tue, 2 Dec 2014 17:26:42 -0600 Subject: [openstack-dev] [OpenStack-docs] [stable] New config options, no default change In-Reply-To: References: <5468F85B.10901@electronicjungle.net> <54748DC0.50400@redhat.com> <547CF9A1.4070009@electronicjungle.net> <350487463.25449839.1417539160631.JavaMail.zimbra@redhat.com> <593930077.25671392.1417556657112.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Dec 2, 2014 at 5:10 PM, Alan Pevec wrote: > >> In this case we're referring to how we handle DocImpact for specific > >> branches of a project (say stable/juno of Nova, for example). I don't > think > >> we'd want to change the DocImpact handling for the entire project to go > >> somewhere other than openstack-manuals. As far as I know the current > setup > >> doesn't support us changing the handling per branch though, only per > >> project. > > > > > > Yep, I do agree. And honestly we don't have the resources to cover stable > > branches docs in addition to the ongoing. > > > > Is there another way to tag your work so that you remember to put it in > > release notes? > > We just started discussing this and it is not used yet and we'll pick > some other tag. > While at it, is there an OpenStack-wide registry for tags in the > commit messages? > Yep, there sure is. 
https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references > > For now I've been simply collecting stable releasenote-worthy changes > manually in etherpad https://etherpad.openstack.org/p/StableJuno > > > Cheers, > Alan > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue Dec 2 23:41:07 2014 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 02 Dec 2014 18:41:07 -0500 Subject: [openstack-dev] [Heat] Rework auto-scaling support in Heat In-Reply-To: References: <20141128073257.GA7165@localhost> <547CE873.1020205@redhat.com> Message-ID: <547E4E13.2040208@redhat.com> On 01/12/14 18:34, Angus Salkeld wrote: > I'd suggest a combination between A and B. We may not be as far apart as I thought. > 1) Separate some of the autoscaling logic into libraries in Heat > 2) Get the separated REST service in place and working (using the above > heat library) > 3) Add an environment option to be able to switch Heat resources to use > the new REST service > 4) after a cycle remove the internal support within Heat That sounds quite sensible to me. The part I care about doing incrementally is splitting the logic (and the DB state) out into a library, i.e. (1). The rest of your plan actually sounds like a better way of switching over to the ReST service once that is done. +1 cheers, Zane. 
From yongli.he at intel.com Wed Dec 3 00:17:01 2014 From: yongli.he at intel.com (He, Yongli) Date: Wed, 3 Dec 2014 00:17:01 +0000 Subject: [openstack-dev] [third-party]Time for Additional Meeting for third-party In-Reply-To: <547CCA69.4000909@anteaya.info> References: <547CCA69.4000909@anteaya.info> Message-ID: <837B116B6E5B934DA06D9AD0FD79C6A3018E9C60@SHSMSX104.ccr.corp.intel.com> anteaya, UTC 7:00 to UTC 9:00, or UTC 11:30 to UTC 13:00, is an ideal time for China. If there is no time slot there, just pick any time between UTC 7:00 and UTC 13:00. (UTC 9:00 to UTC 11:30 is spent on the road home and at dinner.) Yongli He -----Original Message----- From: Anita Kuno [mailto:anteaya at anteaya.info] Sent: Tuesday, December 02, 2014 4:07 AM To: openstack Development Mailing List; openstack-infra at lists.openstack.org Subject: [openstack-dev] [third-party]Time for Additional Meeting for third-party One of the actions from the Kilo Third-Party CI summit session was to start up an additional meeting for CI operators to participate from non-North American time zones. Please reply to this email with times/days that would work for you. The current third party meeting is on Mondays at 1800 UTC, which works well since Infra meetings are on Tuesdays. If we could find a time that works for Europe and APAC that is also on Monday, that would be ideal. Josh Hesketh has said he will try to be available for these meetings; he is in Australia. Let's get a sense of what days and timeframes work for those interested and then we can narrow it down and pick a channel. Thanks everyone, Anita. 
_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jim at jimrollenhagen.com Wed Dec 3 00:19:49 2014 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 2 Dec 2014 16:19:49 -0800 Subject: [openstack-dev] [Ironic] Weekly subteam status report Message-ID: <20141203001949.GA5494@jimrollenhagen.com> Hi all, Following is the subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. Testing (adam_g) - Nova stable merges are being bit by a race in the sideways grenade job: https://bugs.launchpad.net/devstack/+bug/1398128 / https://review.openstack.org/#/c/13815 - pxe_ssh race http://status.openstack.org/elastic-recheck/#1393099 seems to have mysteriously subsided over the last week. Bugs (dtantsur) - (As of Mon Dec 1 12:15 UTC) Open: 102 (+1). 4 new (+1), 25 (+1) in progress, 0 critical, 10 high and 3 incomplete Drivers IPA (jroll/JayF/JoshNang) nothing new // jroll DRAC (lucas) nothing new // lucasagomes iLO (wanyen) nothing new iRMC (naohirot) - Updated iRMC 3 Specs, Power, Virtual Media Deploy, and Management - https://review.openstack.org/#/c/134487/ - https://review.openstack.org/#/c/134865/ - https://review.openstack.org/#/c/136020/ Oslo (GheRivero) - Sync pending from oslo.incubator - Config file generation https://review.openstack.org/#/c/128005/4 * Another approach: https://review.openstack.org/#/c/137447/ - Moves all config options into ironic/opts.py and one extra opts file per driver - Refactoring Policy https://review.openstack.org/#/c/126265/ - Full sync - Waiting for the two reviews above https://review.openstack.org/#/c/128051/ - Updated oslo.* Releases this week: - oslo.utils will be the one with more impact on ironic. 
New utils included: * get_my_ip * uuidutils * is_int_like * is_valid_ip_* - oslo.config * New IP type - oslo.concurrency, oslo.utils, oslo.config, oslo.messaging, oslo.db, oslo.i18n, oslo.rootwrap, oslo.serialization, oslosphinx // jim [0] https://etherpad.openstack.org/p/IronicWhiteBoard From mriedem at linux.vnet.ibm.com Wed Dec 3 00:48:36 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Tue, 02 Dec 2014 18:48:36 -0600 Subject: [openstack-dev] oslo.concurrency 0.3.0 released In-Reply-To: <547E05E1.6010008@nemebean.com> References: <547DE8F7.7090206@nemebean.com> <547E05E1.6010008@nemebean.com> Message-ID: <547E5DE4.6090800@linux.vnet.ibm.com> On 12/2/2014 12:33 PM, Ben Nemec wrote: > We've discovered a couple of problems as a result of this release. pep8 > in most/all of the projects using oslo.concurrency is failing due to the > move out of the oslo namespace package and the fact that hacking doesn't > know how to handle it, and nova unit tests are failing due to a problem > with the way some mocking was done. > > Fixes for both of these problems are in progress and should hopefully be > available soon. > > -Ben > > On 12/02/2014 10:29 AM, Ben Nemec wrote: >> The Oslo team is pleased to announce the release of oslo.concurrency 0.3.0. >> >> This release includes a number of fixes for problems found during the >> initial adoptions of the library, as well as some functionality >> improvements. 
>> >> For more details, please see the git log history below and >> https://launchpad.net/oslo.concurrency/+milestone/0.3.0 >> >> Please report issues through launchpad: >> https://launchpad.net/oslo.concurrency >> >> openstack/oslo.concurrency 0.2.0..HEAD >> >> 54c84da Add external lock fixture >> 19f07c6 Add a TODO for retrying pull request #20 >> 46c836e Allow the lock delay to be provided >> 3bda65c Allow for providing a customized semaphore container >> 656f908 Move locale files to proper place >> faa30f8 Flesh out the README >> bca4a0d Move out of the oslo namespace package >> 58de317 Improve testing in py3 environment >> fa52a63 Only modify autoindex.rst if it exists >> 63e618b Imported Translations from Transifex >> d5ea62c lockutils-wrapper cleanup >> 78ba143 Don't use variables that aren't initialized >> >> diffstat (except docs and test files): >> >> .gitignore | 1 + >> .testr.conf | 2 +- >> README.rst | 4 +- >> .../locale/en_GB/LC_MESSAGES/oslo.concurrency.po | 16 +- >> oslo.concurrency/locale/oslo.concurrency.pot | 16 +- >> oslo/concurrency/__init__.py | 29 ++ >> oslo/concurrency/_i18n.py | 32 -- >> oslo/concurrency/fixture/__init__.py | 13 + >> oslo/concurrency/fixture/lockutils.py | 51 -- >> oslo/concurrency/lockutils.py | 376 -------------- >> oslo/concurrency/openstack/__init__.py | 0 >> oslo/concurrency/openstack/common/__init__.py | 0 >> oslo/concurrency/openstack/common/fileutils.py | 146 ------ >> oslo/concurrency/opts.py | 45 -- >> oslo/concurrency/processutils.py | 340 ------------ >> oslo_concurrency/__init__.py | 0 >> oslo_concurrency/_i18n.py | 32 ++ >> oslo_concurrency/fixture/__init__.py | 0 >> oslo_concurrency/fixture/lockutils.py | 76 +++ >> oslo_concurrency/lockutils.py | 502 ++++++++++++++++++ >> oslo_concurrency/openstack/__init__.py | 0 >> oslo_concurrency/openstack/common/__init__.py | 0 >> oslo_concurrency/openstack/common/fileutils.py | 146 ++++++ >> oslo_concurrency/opts.py | 45 ++ >> oslo_concurrency/processutils.py | 340 
++++++++++++ >> requirements-py3.txt | 1 + >> requirements.txt | 1 + >> setup.cfg | 9 +- >> tests/test_lockutils.py | 575 >> ++++++++++++++++++++ >> tests/test_processutils.py | 519 >> +++++++++++++++++++ >> tests/test_warning.py | 29 ++ >> tests/unit/__init__.py | 0 >> tests/unit/test_lockutils.py | 543 >> ------------------- >> tests/unit/test_lockutils_eventlet.py | 59 --- >> tests/unit/test_processutils.py | 518 ------------------ >> tox.ini | 8 +- >> 42 files changed, 3515 insertions(+), 2135 deletions(-) >> >> Requirements updates: >> >> diff --git a/requirements-py3.txt b/requirements-py3.txt >> index b1a8722..a27b434 100644 >> --- a/requirements-py3.txt >> +++ b/requirements-py3.txt >> @@ -13,0 +14 @@ six>=1.7.0 >> +retrying>=1.2.2,!=1.3.0 # Apache-2.0 >> diff --git a/requirements.txt b/requirements.txt >> index b1a8722..a27b434 100644 >> --- a/requirements.txt >> +++ b/requirements.txt >> @@ -13,0 +14 @@ six>=1.7.0 >> +retrying>=1.2.2,!=1.3.0 # Apache-2.0 >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Was a bug reported for the nova unit tests that were mocking out external_lock in lockutils? I didn't see one so I opened a bug [1] and wrote the elastic-recheck query against that. I'm working on fixing the tests in the meantime but I'll gladly stop if someone else has a fix up for review. [1] https://bugs.launchpad.net/nova/+bug/1398624 -- Thanks, Matt Riedemann From ramy.asselin at hp.com Wed Dec 3 00:46:59 2014 From: ramy.asselin at hp.com (Asselin, Ramy) Date: Wed, 3 Dec 2014 00:46:59 +0000 Subject: [openstack-dev] [CI]Setup CI system behind proxy In-Reply-To: References: Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A408AEF@G9W0752.americas.hpqcorp.net> Hi, I tried this a while back and it does not work [1]. Last I looked, there was no config option, and nothing in the code to allow it. 
I'm not aware of any patches that change that. I got mine working by getting port 29418 opened for the review.openstack.org IP address. Ramy [1] http://lists.openstack.org/pipermail/openstack-dev/2014-June/037030.html From: thanh le giang [mailto:legiangthanh at gmail.com] Sent: Tuesday, December 02, 2014 10:51 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [CI]Setup CI system behind proxy Hi, I want to set up everything behind a proxy, but I am stuck at the step of configuring zuul to connect to gerrit. I can't find any option to set a proxy in the zuul.conf file. Thanks 2014-12-03 1:00 GMT+07:00 Zaro >: Could you please clarify? Do you mean you want to set up everything behind a proxy? Or do you mean you want to set up just gerrit behind a proxy? On Mon, Dec 1, 2014 at 6:26 PM, thanh le giang > wrote: Dear all I have set up a CI system successfully with direct access to the internet. Now I have another requirement which involves setting up the CI system behind a proxy, but I can't find any way to configure zuul to connect to gerrit through a proxy. Any advice is appreciated. Thanks and Regards -- L.G.Thanh Email: legiangthan at gmail.com lgthanh at fit.hcmus.edu.vn _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- L.G.Thanh Email: legiangthan at gmail.com lgthanh at fit.hcmus.edu.vn -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rkamaldinov at mirantis.com Wed Dec 3 00:54:37 2014 From: rkamaldinov at mirantis.com (Ruslan Kamaldinov) Date: Wed, 3 Dec 2014 04:54:37 +0400 Subject: [openstack-dev] [Murano] - Install Guide In-Reply-To: <90a10d18dc28419ab7ba4dfae97fd709@BY2PR42MB101.048d.mgd.msft.net> References: <90a10d18dc28419ab7ba4dfae97fd709@BY2PR42MB101.048d.mgd.msft.net> Message-ID: Murano installation options are described at: http://murano.readthedocs.org/en/latest/ Thanks, Ruslan On Tue, Dec 2, 2014 at 1:41 PM, wrote: > Hi All, > > > > I am new to Murano and trying to integrate with Openstack Juno. > > Any build guides or help would be appreciated. > > > > Warm Regards, > > Raghavendra Lad > IDC?IC-P-Capability > > Infrastructure Consulting, > > Infrastructure Services ? Accenture Operations > > Mobile - +91 9880040919 > > > > > ________________________________ > > This message is for the designated recipient only and may contain > privileged, proprietary, or otherwise confidential information. If you have > received it in error, please notify the sender immediately and delete the > original. Any other use of the e-mail by you is prohibited. Where allowed by > local law, electronic communications with Accenture and its affiliates, > including e-mail and instant messaging (including content), may be scanned > by our systems for the purposes of information security and assessment of > internal compliance with Accenture policy. 
> ______________________________________________________________________________________ > > www.accenture.com > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From michael.hinnant at hp.com Wed Dec 3 00:57:13 2014 From: michael.hinnant at hp.com (Hinnant, Michael) Date: Wed, 3 Dec 2014 00:57:13 +0000 Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon In-Reply-To: References: Message-ID: <977A19CD-54D0-4475-8829-3F9758CFB311@hp.com> I would agree in principle - keeping the user from losing data with accidental side clicks is important, and having a modal on a modal is not a great experience. We could make the closing of the modal upon an outside click contextual on whether or not the user has entered data / modified anything, but that will likely just seem inconsistent to the user. I'd vote for the static behavior and rely on the explicit close options to allow the user to exit out (esc, cancel, and [x]). Michael On Dec 2, 2014, at 8:30 AM, Thai Q Tran > wrote: I like David's approach, but having two modals (one for the form, one for the confirmation) on top of each other is a bit weird and would require constant checks for input. We already have three ways to close the dialog today: via the cancel button, X button, and ESC key. It's more important to me that I don't lose work by accidentally clicking outside. So from this perspective, I think that having a static behavior is the way to go. Regardless of what approach we pick, it's an improvement over what we have today. Timur Sufiev ---12/02/2014 04:22:00 AM---Hello, Horizoneers and UX-ers! 
The default behavior of modals in Horizon (defined in turn by Bootstr From: Timur Sufiev > To: "OpenStack Development Mailing List (not for usage questions)" > Date: 12/02/2014 04:22 AM Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon ________________________________ Hello, Horizoneers and UX-ers! The default behavior of modals in Horizon (defined in turn by Bootstrap defaults) regarding their closing is to simply close the modal once the user clicks somewhere outside of it (on the backdrop element below and around the modal). This is not very convenient for modal forms containing a lot of input - when such a form is closed without a warning, all the data the user has already provided is lost. Keeping this in mind, I've made a patch [1] changing the default Bootstrap 'modal_backdrop' parameter to 'static', which means that forms are not closed once the user clicks on the backdrop, while it's still possible to close them by pressing 'Esc' or clicking on the 'X' link at the top right border of the form. The patch [1] also allows customizing this behavior (between 'true' - the current one / 'false' - no backdrop element / 'static') on a per-form basis. What I didn't know at the moment I was uploading my patch is that David Lyle had been working on a similar solution [2] some time ago. It's a bit more elaborate than mine: if the user has already filled some inputs in the form, then a confirmation dialog is shown; otherwise the form is silently dismissed as it happens now. The whole point of writing about this on the ML is to gather opinions on which approach is better: * stick to the current behavior; * change the default behavior to 'static'; * use David's solution with the confirmation dialog (once it's rebased to the current codebase). What do you think? [1] https://review.openstack.org/#/c/113206/ [2] https://review.openstack.org/#/c/23037/ P.S. I remember that I promised to write this email a week ago, but better late than never :). 
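For concreteness, the 'static' behavior described above maps onto a stock Bootstrap 3 option; a minimal markup sketch (the element id and contents are placeholders, not taken from the patch):

```
<!-- data-backdrop="static": a click on the backdrop no longer dismisses the
     modal; Esc (data-keyboard="true") and the explicit X/cancel controls
     still close it. Id and content are placeholders. -->
<div class="modal fade" id="example-form-modal" tabindex="-1"
     data-backdrop="static" data-keyboard="true">
  <div class="modal-dialog">
    <div class="modal-content">
      <!-- form fields go here -->
    </div>
  </div>
</div>
```

The same option can be passed programmatically as `backdrop: 'static'` when the modal is initialized from JavaScript, which is the knob the per-form customization builds on.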
-- Timur Sufiev_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mordred at inaugust.com Wed Dec 3 01:04:18 2014 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 03 Dec 2014 02:04:18 +0100 Subject: [openstack-dev] [docs] Older Developer documentation Icehouse? Havana? In-Reply-To: <87a9363a73.fsf@sparky.home> References: <87a9363a73.fsf@sparky.home> Message-ID: <547E6192.5000108@inaugust.com> On 12/02/2014 06:14 AM, Russell Sim wrote: > Hi, > > From what I can see it seems like the developer documentation available > on the OpenStack website is generated from the git repositories. > > http://docs.openstack.org/developer/openstack-projects.html > > Are older versions of this documentation currently generated and hosted > somewhere? Or is it possible to generate versions of this developer > documentation for each release and host it on the same website? > Yup - we do this just for you ... 
http://docs.openstack.org/developer/nova/havana/ Monty From joehuang at huawei.com Wed Dec 3 01:51:57 2014 From: joehuang at huawei.com (joehuang) Date: Wed, 3 Dec 2014 01:51:57 +0000 Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward In-Reply-To: <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com>, <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> Dear all & TC & PTL, In the 40-minute cross-project summit session "Approaches for scaling out"[1], almost 100 people attended the meeting, and the conclusion was that cells cannot cover the use cases and requirements which the OpenStack cascading solution[2] aims to address; the background is included in this mail. After the summit, we ported the PoC[3] source code from an IceHouse base to a Juno base. Now, let's move forward: The major task is to introduce new drivers/agents to the existing core projects, for the core idea of cascading is to add Nova as the hypervisor backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, and Ceilometer as the store of Ceilometer. a). Need a cross-program decision to run cascading as an incubated project or to register BPs separately in each involved project. CI for cascading is quite different from the traditional test environment: at least 3 OpenStack instances are required for cross-OpenStack networking test cases. b). A volunteer as the cross-project coordinator. c). Volunteers for implementation and CI. Background of OpenStack cascading vs cells: 1. Use cases a). 
Vodafone use case[4] (OpenStack summit speech video from 9'02" to 12'30"), establishing globally addressable tenants which result in efficient service deployment. b). Telefonica use case[5], creating a virtual DC (data center) across multiple physical DCs with a seamless experience. c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, #8. For an NFV cloud, the cloud is by nature distributed across, yet inter-connected among, many data centers. 2. Requirements a). The operator has a multi-site cloud; each site can use one or multiple vendors' OpenStack distributions. b). Each site has its own requirements and upgrade schedule while maintaining a standard OpenStack API. c). The multi-site cloud must provide unified resource management with a global open API exposed, for example to create a virtual DC across multiple physical DCs with a seamless experience. Although a proprietary orchestration layer could be developed for the multi-site cloud, it would expose a proprietary API at the north-bound interface. The cloud operators want an ecosystem-friendly global open API for the multi-site cloud for global access. 3. What problems does cascading solve that cells don't cover: The OpenStack cascading solution is "OpenStack orchestrating OpenStacks". The core architecture idea of OpenStack cascading is to add Nova as the hypervisor backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, and Ceilometer as the store of Ceilometer. Thus OpenStack is able to orchestrate OpenStacks (from different vendors' distributions, or different versions) which may be located in different sites (or data centers) through the OpenStack API, while the cloud still exposes the OpenStack API as the north-bound API at the cloud level. 4. Why cells can't do that: Cells provide scale-out capability to Nova, but from the point of view of OpenStack as a whole, it still works like one OpenStack instance. a). 
If Cells is deployed with shared Cinder, Neutron, Glance, and Ceilometer, this approach provides the multi-site cloud with one unified API endpoint and unified resource management, but consolidation of multi-vendor/multi-version OpenStack instances across one or more data centers cannot be fulfilled. b). If each site installs one child cell accompanied by a standalone Cinder, Neutron (or Nova-network), Glance, and Ceilometer, then multi-vendor/multi-version OpenStack distribution co-existence across multiple sites seems feasible, but the requirement for a unified API endpoint and unified resource management cannot be fulfilled. Cross-Neutron networking automation is also missing, which would otherwise have to be done manually or through a proprietary orchestration layer. For more information about cascading and cells, please refer to the discussion thread before the Paris Summit [7]. [1] Approaches for scaling out: https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack [2] OpenStack cascading solution: https://wiki.openstack.org/wiki/OpenStack_cascading_solution [3] Cascading PoC: https://github.com/stackforge/tricircle [4] Vodafone use case (9'02" to 12'30"): https://www.youtube.com/watch?v=-KOJYvhmxQI [5] Telefonica use case: http://www.telefonica.com/en/descargas/mwc/present_20140224.pdf [6] ETSI NFV use cases: http://www.etsi.org/deliver/etsi_gs/nfv/001_099/001/01.01.01_60/gs_nfv001v010101p.pdf [7] Cascading thread before the design summit: http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-td54115.html Best Regards Chaoyi Huang (joehuang) From aaronorosen at gmail.com Wed Dec 3 02:13:37 2014 From: aaronorosen at gmail.com (Aaron Rosen) Date: Tue, 2 Dec 2014 18:13:37 -0800 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: +1 for Kevin and Henry! On Tue, Dec 2, 2014 at 10:40 AM, Nati Ueno wrote: > Hi folks > > Congrats! Henry and Kevin. 
> I'll keep contributing the community, but Thank you for working with > me as core team :) > > Best > Nachi > > 2014-12-03 2:05 GMT+09:00 Carl Baldwin : > > +1 from me for all the changes. I appreciate the work from all four > > of these excellent contributors. I'm happy to welcome Henry and Kevin > > as new core reviewers. I also look forward to continuing to work with > > Nachi and Bob as important members of the community. > > > > Carl > > > > On Tue, Dec 2, 2014 at 8:59 AM, Kyle Mestery > wrote: > >> Now that we're in the thick of working hard on Kilo deliverables, I'd > >> like to make some changes to the neutron core team. Reviews are the > >> most important part of being a core reviewer, so we need to ensure > >> cores are doing reviews. The stats for the 180 day period [1] indicate > >> some changes are needed for cores who are no longer reviewing. > >> > >> First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from > >> neutron-core. Bob and Nachi have been core members for a while now. > >> They have contributed to Neutron over the years in reviews, code and > >> leading sub-teams. I'd like to thank them for all that they have done > >> over the years. I'd also like to propose that should they start > >> reviewing more going forward the core team looks to fast track them > >> back into neutron-core. But for now, their review stats place them > >> below the rest of the team for 180 days. > >> > >> As part of the changes, I'd also like to propose two new members to > >> neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have > >> been very active in reviews, meetings, and code for a while now. Henry > >> lead the DB team which fixed Neutron DB migrations during Juno. Kevin > >> has been actively working across all of Neutron, he's done some great > >> work on security fixes and stability fixes in particular. 
Their > >> comments in reviews are insightful and they have helped to onboard new > >> reviewers and taken the time to work with people on their patches. > >> > >> Existing neutron cores, please vote +1/-1 for the addition of Henry > >> and Kevin to the core team. > >> > >> Thanks! > >> Kyle > >> > >> [1] http://stackalytics.com/report/contribution/neutron-group/180 > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Nachi Ueno > email:nati.ueno at gmail.com > twitter:http://twitter.com/nati > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaronorosen at gmail.com Wed Dec 3 02:21:22 2014 From: aaronorosen at gmail.com (Aaron Rosen) Date: Tue, 2 Dec 2014 18:21:22 -0800 Subject: [openstack-dev] [congress] low-hanging-fruit Message-ID: Hi, At this morning's irc meeting there were several newcomers that were looking to start contributing to congress. As promised we've come up with several low hanging bugs to start getting your feet wet: https://bugs.launchpad.net/congress/+bugs?field.tag=low-hanging-fruit Best, Aaron -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mbirru at gmail.com Wed Dec 3 03:19:11 2014 From: mbirru at gmail.com (Murali B) Date: Wed, 3 Dec 2014 08:49:11 +0530 Subject: [openstack-dev] SRIOV failures error- Message-ID: Hi Itzik, Thank you for your reply. Please find below the output of "#echo 'use nova;select hypervisor_hostname,pci_stats from compute_nodes;' | mysql -u root":

MariaDB [nova]> select hypervisor_hostname,pci_stats from compute_nodes;
+---------------------+-------------------------------------------------------------------------------------------+
| hypervisor_hostname | pci_stats                                                                                 |
+---------------------+-------------------------------------------------------------------------------------------+
| compute2            | []                                                                                        |
| xilinx-r720         | [{"count": 1, "vendor_id": "8086", "physical_network": "physnet2", "product_id": "10ed"}] |
| compute1            | []                                                                                        |
| compute4            | []                                                                                        |
+---------------------+-------------------------------------------------------------------------------------------+

We have enabled the SRIOV agent on compute node 4 as well as on xilinx-r720. Thanks -Murali -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Wed Dec 3 04:48:53 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Wed, 3 Dec 2014 10:48:53 +0600 Subject: [openstack-dev] [Mistral] Event Subscription In-Reply-To: References: Message-ID: <20352182-4C59-4C60-833E-3DA3DCDD3E90@mirantis.com> > On 02 Dec 2014, at 23:53, W Chan wrote: > > On processing the events, I was thinking a separate entity. But you gave me an idea, how about a system action for publishing the events that the current executors can run? Yes, sounds great! I really like the idea. > Alternately, instead of making HTTP calls, what do you think if mistral just posts the events to the exchange(s) that the subscribers provided and lets the subscribers decide how to consume the events (i.e. post to webhook, etc.) from these exchanges? This will simplify implementation somewhat. 
The engine can just take care of publishing the events to the exchanges and call it done. Yep. I understand the general idea but am still a little confused. Can you please share the details of what you mean by "exchange" and who is going to consume events? If, like previously said, it will be just sending actions to executors then it's ok, I got it. And you can just use executor client from rpc.py to do the job. Or here you mean something different? Would be nice if we could present a communication diagram (engine -> event occurred -> ...). Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Wed Dec 3 05:00:32 2014 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 03 Dec 2014 16:00:32 +1100 Subject: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023) In-Reply-To: <1417559171-sup-8883@fewbar.com> References: <54755DE7.2070107@redhat.com> <547607C3.3090604@nemebean.com> <1417495283-sup-444@fewbar.com> <547E1177.4030705@redhat.com> <1417559171-sup-8883@fewbar.com> Message-ID: <547E98F0.8010406@redhat.com> On 12/03/2014 09:30 AM, Clint Byrum wrote: > I for one find the idea of printing every cp, cat, echo and ls command out > rather frustratingly verbose when scanning logs from a normal run. I for one find this ongoing discussion over a flag whose own help says "-x -- turn on tracing" not doing the blindly obvious thing of turning on tracing and the seeming inability to reach a conclusion on a posted review over 3 months a troubling narrative for potential consumers of diskimage-builder. -i From irenab at mellanox.com Wed Dec 3 06:40:26 2014 From: irenab at mellanox.com (Irena Berezovsky) Date: Wed, 3 Dec 2014 06:40:26 +0000 Subject: [openstack-dev] SRIOV failures error- In-Reply-To: References: Message-ID: Hi Murali, Seems there is a mismatch between pci_whitelist configuration and requested network.
In the table below: "physical_network": "physnet2" In the error you sent, there is: [InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=58584ee1-8a41-4979-9905-4d18a3df3425,spec=[{physical_network='physnet1'}])], Please check the neutron and nova configuration for physical_network. Cheers, Irena From: Murali B [mailto:mbirru at gmail.com] Sent: Wednesday, December 03, 2014 5:19 AM To: openstack-dev at lists.openstack.org; itzikb at redhat.com Subject: [openstack-dev] SRIOV failures error- Hi Itzik, Thank you for your reply Please find the below output for "#echo 'use nova;select hypervisor_hostname,pci_stats from compute_nodes;' | mysql -u root" MariaDB [nova]> select hypervisor_hostname,pci_stats from compute_nodes; +---------------------+-------------------------------------------------------------------------------------------+ | hypervisor_hostname | pci_stats | +---------------------+-------------------------------------------------------------------------------------------+ | compute2 | [] | | xilinx-r720 | [{"count": 1, "vendor_id": "8086", "physical_network": "physnet2", "product_id": "10ed"}] | | compute1 | [] | | compute4 | [] | +---------------------+-------------------------------------------------------------------------------------------+ we have enabled SRIOV agent on compute node 4 as well as xilinx-r720 Thanks -Murali -------------- next part -------------- An HTML attachment was scrubbed... 
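Irena's diagnosis above — the compute node advertises physnet2 in pci_stats while the instance's PCI request asks for physnet1 — comes down to keeping the nova whitelist and the neutron provider network on the same label. A minimal sketch of that alignment, using the values from this thread (the option names follow the Juno-era SR-IOV setup docs; the VLAN range and the net-create flags are assumptions for illustration only):

```ini
# nova.conf on the compute node: which VFs nova may hand out,
# and which physical network they are wired to
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ed", "physical_network": "physnet2"}

# ml2_conf.ini on the neutron server: physnet2 must be a known provider network
[ml2_type_vlan]
network_vlan_ranges = physnet2:100:200

# The network the SR-IOV port is created on must carry the same label, e.g.:
#   neutron net-create sriov-net --provider:network_type vlan \
#       --provider:physical_network physnet2 --provider:segmentation_id 100
```

If the port's network says physnet1 while every whitelist entry says physnet2, the PCI request in the error can never be satisfied, which matches the failure Murali reports.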
URL: From joshua.hesketh at rackspace.com Wed Dec 3 07:03:37 2014 From: joshua.hesketh at rackspace.com (Joshua Hesketh) Date: Wed, 3 Dec 2014 18:03:37 +1100 Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: <837B116B6E5B934DA06D9AD0FD79C6A3018E9C60@SHSMSX104.ccr.corp.intel.com> References: <547CCA69.4000909@anteaya.info> <837B116B6E5B934DA06D9AD0FD79C6A3018E9C60@SHSMSX104.ccr.corp.intel.com> Message-ID: <547EB5C9.8020008@rackspace.com> Hey, 0700 -> 1000 UTC would work for me most weeks fwiw. Cheers, Josh Rackspace Australia On 12/3/14 11:17 AM, He, Yongli wrote: > anteaya, > > UTC 7:00 AM to UTC9:00, or UTC11:30 to UTC13:00 is ideal time for china. > > if there is no time slot there, just pick up any time between UTC 7:00 AM to UCT 13:00. ( UTC9:00 to UTC 11:30 is on road to home and dinner.) > > Yongi He > -----Original Message----- > From: Anita Kuno [mailto:anteaya at anteaya.info] > Sent: Tuesday, December 02, 2014 4:07 AM > To: openstack Development Mailing List; openstack-infra at lists.openstack.org > Subject: [openstack-dev] [third-party]Time for Additional Meeting for third-party > > One of the actions from the Kilo Third-Party CI summit session was to start up an additional meeting for CI operators to participate from non-North American time zones. > > Please reply to this email with times/days that would work for you. The current third party meeting is on Mondays at 1800 utc which works well since Infra meetings are on Tuesdays. If we could find a time that works for Europe and APAC that is also on Monday that would be ideal. > > Josh Hesketh has said he will try to be available for these meetings, he is in Australia. > > Let's get a sense of what days and timeframes work for those interested and then we can narrow it down and pick a channel. > > Thanks everyone, > Anita. 
> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra From marc at koderer.com Wed Dec 3 07:03:52 2014 From: marc at koderer.com (Marc Koderer) Date: Wed, 3 Dec 2014 08:03:52 +0100 Subject: [openstack-dev] [Manila] Manila project use-cases In-Reply-To: References: <1B324A03-7768-4603-9544-01EF53CA7BBC@koderer.com> Message-ID: <99BBEF13-EE7C-4B0C-9EA5-D9A941D40AC4@koderer.com> Hi Valeriy, thanks for feedback. My answers see below. Am 02.12.2014 um 16:44 schrieb Valeriy Ponomaryov : > Hello Marc, > > Here, I tried to cover mentioned use cases with "implemented or not" notes: > > 1) Implemented, but details of implementation are different for different share drivers. > 2) Not clear for me. If you mean possibility to mount one share to any amount of VMs, then yes. That means you have an existing shared volume in your storage system and import it to manila (like cinder manage). I guess this is not implemented yet. > 3) Nova is used only in one case - Generic Driver that uses Cinder volumes. So, it can be said, that Manila interface does allow to use "flat" network and a share driver just should have implementation for it. I will assume you mean usage of generic driver and possibility to mount shares to different machines except Nova VMs. - In that case network architecture should allow to make connection in general. If it is allowed, then should not be any problems with mount to any machine. Just access-allow operations should be performed. This point was actually a copy from the wiki [1]. I just removed the Bare-metal point since for me it doesn?t matter whether the infrastructure service is a Bare-metal machine or not. 
> 4) Access can be shared, but it is not as flexible as could be wanted. Owner of a share can grant access to all, and if there is network connectivity between user and share host, then user will be able to mount having provided access. Also a copy from the wiki. > 5) Manila can not remove some "mount" of some share, it can remove access for possibility to mount in general. So, looks like not implemented. > 6) Implemented. > 7) Not implemented yet. > 8) No "cloning", but we have snapshot-approach as for volumes in cinder. Regards Marc > > Regards, > Valeriy Ponomaryov > Mirantis > > On Tue, Dec 2, 2014 at 4:22 PM, Marc Koderer wrote: > Hello Manila Team, > > We identified use cases for Manila during an internal workshop > with our operators. I would like to share them with you and > update the wiki [1] since it seems to be outdated. > > Before that I would like to gather feedback and you might help me > with identifying things that aren't implemented yet. > > Our list: > > 1.) Create a share and use it in a tenant > Initial creation of a shared storage volume and assign it to several VMs > > 2.)
Copy existing share > Copy existing share between different storage technologies > > Regards > Marc > Deutsche Telekom > > [1]: https://wiki.openstack.org/wiki/Manila/usecases > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Kind Regards > Valeriy Ponomaryov > www.mirantis.com > vponomaryov at mirantis.com > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From vponomaryov at mirantis.com Wed Dec 3 07:39:59 2014 From: vponomaryov at mirantis.com (Valeriy Ponomaryov) Date: Wed, 3 Dec 2014 10:39:59 +0300 Subject: [openstack-dev] [Manila] Manila project use-cases In-Reply-To: <99BBEF13-EE7C-4B0C-9EA5-D9A941D40AC4@koderer.com> References: <1B324A03-7768-4603-9544-01EF53CA7BBC@koderer.com> <99BBEF13-EE7C-4B0C-9EA5-D9A941D40AC4@koderer.com> Message-ID: According to (2) - yes, analog of Cinder's "manage/unmanage" is not implemented in Manila yet. On Wed, Dec 3, 2014 at 9:03 AM, Marc Koderer wrote: > Hi Valeriy, > > thanks for feedback. My answers see below. > > Am 02.12.2014 um 16:44 schrieb Valeriy Ponomaryov < > vponomaryov at mirantis.com>: > > > Hello Marc, > > > > Here, I tried to cover mentioned use cases with "implemented or not" > notes: > > > > 1) Implemented, but details of implementation are different for > different share drivers. > > 2) Not clear for me. If you mean possibility to mount one share to any > amount of VMs, then yes. > > That means you have an existing shared volume in your storage system and > import > it to manila (like cinder manage). I guess this is not implemented yet. > > > 3) Nova is used only in one case - Generic Driver that uses Cinder > volumes. 
So, it can be said, that Manila interface does allow to use "flat" > network and a share driver just should have implementation for it. I will > assume you mean usage of generic driver and possibility to mount shares to > different machines except Nova VMs. - In that case network architecture > should allow to make connection in general. If it is allowed, then should > not be any problems with mount to any machine. Just access-allow operations > should be performed. > > This point was actually a copy from the wiki [1]. I just removed the > Bare-metal point > since for me it doesn?t matter whether the infrastructure service is a > Bare-metal machine or not. > > > 4) Access can be shared, but it is not as flexible as could be wanted. > Owner of a share can grant access to all, and if there is network > connectivity between user and share host, then user will be able to mount > having provided access. > > Also a copy from the wiki. > > > 5) Manila can not remove some "mount" of some share, it can remove > access for possibility to mount in general. So, looks like not implemented. > > 6) Implemented. > > 7) Not implemented yet. > > 8) No "cloning", but we have snapshot-approach as for volumes in cinder. > > Regards > Marc > > > > > Regards, > > Valeriy Ponomaryov > > Mirantis > > > > On Tue, Dec 2, 2014 at 4:22 PM, Marc Koderer wrote: > > Hello Manila Team, > > > > We identified use cases for Manila during an internal workshop > > with our operators. I would like to share them with you and > > update the wiki [1] since it seems to be outdated. > > > > Before that I would like to gather feedback and you might help me > > with identifying things that aren?t implemented yet. > > > > Our list: > > > > 1.) Create a share and use it in a tenant > > Initial creation of a shared storage volume and assign it to > several VM?s > > > > 2.) 
Assign an preexisting share to a VM with Manila > > Import an existing Share with data and it to several VM?s in case of > > migrating an existing production - services to Openstack. > > > > 3.) External consumption of a share > > Accommodate and provide mechanisms for last-mile consumption of > shares by > > consumers of the service that aren't mediated by Nova. > > > > 4.) Cross Tenant sharing > > Coordinate shares across tenants > > > > 5.) Detach a share and don?t destroy data (deactivate) > > Share is flagged as inactive and data are not destroyed for later > > usage or in case of legal requirements. > > > > 6.) Unassign and delete data of a share > > Destroy entire share with all data and free space for further usage. > > > > 7.) Resize Share > > Resize existing and assigned share on the fly. > > > > 8.) Copy existing share > > Copy existing share between different storage technologies > > > > Regards > > Marc > > Deutsche Telekom > > > > [1]: https://wiki.openstack.org/wiki/Manila/usecases > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > -- > > Kind Regards > > Valeriy Ponomaryov > > www.mirantis.com > > vponomaryov at mirantis.com > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Kind Regards Valeriy Ponomaryov www.mirantis.com vponomaryov at mirantis.com -------------- next part -------------- An HTML attachment was scrubbed... 
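For use case 8 ("copy existing share"), the snapshot approach Valeriy mentions looks roughly like the Cinder-style flow below. This is a sketch against the manila CLI of this era; the IDs and names are made up, and — matching Valeriy's caveat — it only produces a copy within one backend, not "between different storage technologies":

```console
# Snapshot an existing share, then create a new share from the snapshot
manila snapshot-create <share-id> --name usecase8-snap
manila create NFS 1 --snapshot-id <snapshot-id> --name usecase8-copy
```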
URL: From gkotton at vmware.com Wed Dec 3 07:40:02 2014 From: gkotton at vmware.com (Gary Kotton) Date: Wed, 3 Dec 2014 07:40:02 +0000 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: +1 for both. Long overdue! From: Aaron Rosen > Reply-To: OpenStack List > Date: Wednesday, December 3, 2014 at 4:13 AM To: OpenStack List > Subject: Re: [openstack-dev] [neutron] Changes to the core team +1 for Kevin and Henry! On Tue, Dec 2, 2014 at 10:40 AM, Nati Ueno > wrote: Hi folks Congrats! Henry and Kevin. I'll keep contributing the community, but Thank you for working with me as core team :) Best Nachi 2014-12-03 2:05 GMT+09:00 Carl Baldwin >: > +1 from me for all the changes. I appreciate the work from all four > of these excellent contributors. I'm happy to welcome Henry and Kevin > as new core reviewers. I also look forward to continuing to work with > Nachi and Bob as important members of the community. > > Carl > > On Tue, Dec 2, 2014 at 8:59 AM, Kyle Mestery > wrote: >> Now that we're in the thick of working hard on Kilo deliverables, I'd >> like to make some changes to the neutron core team. Reviews are the >> most important part of being a core reviewer, so we need to ensure >> cores are doing reviews. The stats for the 180 day period [1] indicate >> some changes are needed for cores who are no longer reviewing. >> >> First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from >> neutron-core. Bob and Nachi have been core members for a while now. >> They have contributed to Neutron over the years in reviews, code and >> leading sub-teams. I'd like to thank them for all that they have done >> over the years. I'd also like to propose that should they start >> reviewing more going forward the core team looks to fast track them >> back into neutron-core. But for now, their review stats place them >> below the rest of the team for 180 days. 
>> >> As part of the changes, I'd also like to propose two new members to >> neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have >> been very active in reviews, meetings, and code for a while now. Henry >> lead the DB team which fixed Neutron DB migrations during Juno. Kevin >> has been actively working across all of Neutron, he's done some great >> work on security fixes and stability fixes in particular. Their >> comments in reviews are insightful and they have helped to onboard new >> reviewers and taken the time to work with people on their patches. >> >> Existing neutron cores, please vote +1/-1 for the addition of Henry >> and Kevin to the core team. >> >> Thanks! >> Kyle >> >> [1] http://stackalytics.com/report/contribution/neutron-group/180 >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Nachi Ueno email:nati.ueno at gmail.com twitter:http://twitter.com/nati _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aj at suse.com Wed Dec 3 07:40:10 2014 From: aj at suse.com (Andreas Jaeger) Date: Wed, 03 Dec 2014 08:40:10 +0100 Subject: [openstack-dev] [OpenStack-docs] [stable] New config options, no default change In-Reply-To: References: <5468F85B.10901@electronicjungle.net> <54748DC0.50400@redhat.com> <547CF9A1.4070009@electronicjungle.net> <350487463.25449839.1417539160631.JavaMail.zimbra@redhat.com> <593930077.25671392.1417556657112.JavaMail.zimbra@redhat.com> Message-ID: <547EBE5A.2080900@suse.com> On 12/03/2014 12:10 AM, Alan Pevec wrote: >>> In this case we're referring to how we handle DocImpact for specific >>> branches of a project (say stable/juno of Nova, for example). I don't think >>> we'd want to change the DocImpact handling for the entire project to go >>> somewhere other than openstack-manuals. As far as I know the current setup >>> doesn't support us changing the handling per branch though, only per >>> project. >> >> >> Yep, I do agree. And honestly we don't have the resources to cover stable >> branches docs in addition to the ongoing. >> >> Is there another way to tag your work so that you remember to put it in >> release notes? > > We just started discussing this and it is not used yet and we'll pick > some other tag. > While at it, is there an OpenStack-wide registry for tags in the > commit messages? Not sure - but if you do define one, please send a patch for the infra-manual: http://docs.openstack.org/infra/manual/developers.html#peer-review Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 N?rnberg, Germany GF:Jeff Hawn,Jennifer Guild,Felix Imend?rffer,HRB 21284 (AG N?rnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From roman.dobosz at intel.com Wed Dec 3 07:44:57 2014 From: roman.dobosz at intel.com (Roman Dobosz) Date: Wed, 3 Dec 2014 08:44:57 +0100 Subject: [openstack-dev] [nova] Host health monitoring Message-ID: <20141203084457.b2fbb17d004166e43560f91a@intel.com> Hi, I've just started to work on the topic of detection if host is alive or not: https://blueprints.launchpad.net/nova/+spec/host-health-monitoring I'll appreciate any comments :) -- Kind regards Roman From slukjanov at mirantis.com Wed Dec 3 08:13:42 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Wed, 3 Dec 2014 12:13:42 +0400 Subject: [openstack-dev] [sahara] Asia friendly IRC meeting time In-Reply-To: References: <54749FBF.5030503@redhat.com> Message-ID: There were no objections, so, we'll make a meeting tomorrow at 1400 UTC On Thu, Nov 27, 2014 at 1:43 PM, Sergey Lukjanov wrote: > If there will be no objections till the next week, we'll try the new time > (14:00UTC) next week. > > On Thu, Nov 27, 2014 at 4:06 AM, Zhidong Yu wrote: > >> Thank you all! >> >> On Thu, Nov 27, 2014 at 1:15 AM, Sergey Lukjanov >> wrote: >> >>> I think that 6 am for US west works much better than 3 am for Saratov. >>> >>> So, I'm ok with keeping current time and add 1400 UTC. >>> >>> 18:00UTC: Moscow (9pm) China(2am) US West(10am)/US East (1pm) >>> 14:00UTC: Moscow (5pm) China(10pm) US (W 6am / E 9am) >>> >>> I think it's the best option to make all of us able to join. >>> >>> >>> On Wed, Nov 26, 2014 at 8:33 AM, Zhidong Yu wrote: >>> >>>> If 6am works for people in US west, then I'm fine with Matt's >>>> suggestion (UTC14:00). 
>>>> >>>> Thanks, Zhidong >>>> >>>> On Tue, Nov 25, 2014 at 11:26 PM, Matthew Farrellee >>>> wrote: >>>> >>>>> On 11/25/2014 02:37 AM, Zhidong Yu wrote: >>>>> >>>>> Current meeting time: >>>>>> 18:00UTC: Moscow (9pm) China(2am) US West(10am) >>>>>> >>>>>> My proposal: >>>>>> 18:00UTC: Moscow (9pm) China(2am) US West(10am) >>>>>> 00:00UTC: Moscow (3am) China(8am) US West(4pm) >>>>>> >>>>> >>>>> fyi, a number of us are UW East (US West + 3 hours), so... >>>>> >>>>> current meeting time: >>>>> 18:00UTC: Moscow (9pm) China(2am) US West(10am)/US East (1pm) >>>>> >>>>> and during daylight savings it's US West(11am)/US East(2pm) >>>>> >>>>> so the proposal is: >>>>> 18:00UTC: Moscow (9pm) China(2am) US (W 10am / E 1pm) >>>>> 00:00UTC: Moscow (3am) China(8am) US (W 4pm / E 7pm) >>>>> >>>>> given it's literally impossible to schedule a meeting during business >>>>> hours across saratov, china and the us, that's a pretty reasonable >>>>> proposal. my concern is that 00:00UTC may be thin on saratov & US >>>>> participants. >>>>> >>>>> also consider alternating the existing schedule w/ something that's ~4 >>>>> hours earlier... >>>>> 14:00UTC: Moscow (5pm) China(10pm) US (W 6am / E 9am) >>>>> >>>>> best, >>>>> >>>>> >>>>> matt >>>>> >>>>> >>>>> _______________________________________________ >>>>> OpenStack-dev mailing list >>>>> OpenStack-dev at lists.openstack.org >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> >>> -- >>> Sincerely yours, >>> Sergey Lukjanov >>> Sahara Technical Lead >>> (OpenStack Data Processing) >>> Principal Software Engineer >>> Mirantis Inc. 
>>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Sincerely yours, > Sergey Lukjanov > Sahara Technical Lead > (OpenStack Data Processing) > Principal Software Engineer > Mirantis Inc. > -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijh.hust at gmail.com Wed Dec 3 08:14:02 2014 From: lijh.hust at gmail.com (Li Junhong) Date: Wed, 3 Dec 2014 16:14:02 +0800 Subject: [openstack-dev] [nova] The VM migration among compute nodes Message-ID: Dear All, I'm working on migrating the VMs among compute nodes, I read the python-novaclient and nova codes and did some investigation in nova to see the options to migrate VM between hosts, shown as below, could you guys help to have a look, add your comments and correct me if I miss anything? It seems that the following existing commands all have their pro and cons. Is there a command that satisfies the following requirement: - It can migrate the running or non-running VMs among hosts - The migration target host can be identified - The new VM on the migration target host looks exactly the same as the original one, the changes on the VM operating system can be retained The following commands are pairs, the first one is for a single server, while the second one is for host (the commands will iterate the VMs running on the host and run the first command against each of them) ? -- Best Regards! 
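The timezone arithmetic the sahara thread keeps redoing by hand can be checked mechanically. A small sketch (zoneinfo is stdlib only since Python 3.9, so this is a present-day check of the thread's numbers, not something the 2014 toolchain ran):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# The meeting Sergey announces: 14:00 UTC on Thursday 2014-12-04
meeting_utc = datetime(2014, 12, 4, 14, 0, tzinfo=ZoneInfo("UTC"))

for label, tz in [("Moscow", "Europe/Moscow"),
                  ("China", "Asia/Shanghai"),
                  ("US West", "America/Los_Angeles"),
                  ("US East", "America/New_York")]:
    print(f"{label}: {meeting_utc.astimezone(ZoneInfo(tz)):%H:%M}")
# Matches the thread: Moscow 17:00 (5pm), China 22:00 (10pm),
# US West 06:00 (6am), US East 09:00 (9am)
```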
------------------------------------------------------------------------ Junhong, Li -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: VM migration.jpg Type: image/jpeg Size: 186391 bytes Desc: not available URL: From omrim at mellanox.com Wed Dec 3 08:15:07 2014 From: omrim at mellanox.com (Omri Marcovitch) Date: Wed, 3 Dec 2014 08:15:07 +0000 Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: <547EB5C9.8020008@rackspace.com> References: <547CCA69.4000909@anteaya.info> <837B116B6E5B934DA06D9AD0FD79C6A3018E9C60@SHSMSX104.ccr.corp.intel.com> <547EB5C9.8020008@rackspace.com> Message-ID: Hello Anteaya, A meeting between 8:00 - 16:00 UTC time will be great (Israel). Thanks Omri -----Original Message----- From: Joshua Hesketh [mailto:joshua.hesketh at rackspace.com] Sent: Wednesday, December 03, 2014 9:04 AM To: He, Yongli; OpenStack Development Mailing List (not for usage questions); openstack-infra at lists.openstack.org Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party Hey, 0700 -> 1000 UTC would work for me most weeks fwiw. Cheers, Josh Rackspace Australia On 12/3/14 11:17 AM, He, Yongli wrote: > anteaya, > > UTC 7:00 AM to UTC9:00, or UTC11:30 to UTC13:00 is ideal time for china. > > if there is no time slot there, just pick up any time between UTC 7:00 AM to UCT 13:00. ( UTC9:00 to UTC 11:30 is on road to home and dinner.) 
> > Yongi He > -----Original Message----- > From: Anita Kuno [mailto:anteaya at anteaya.info] > Sent: Tuesday, December 02, 2014 4:07 AM > To: openstack Development Mailing List; openstack-infra at lists.openstack.org > Subject: [openstack-dev] [third-party]Time for Additional Meeting for third-party > > One of the actions from the Kilo Third-Party CI summit session was to start up an additional meeting for CI operators to participate from non-North American time zones. > > Please reply to this email with times/days that would work for you. The current third party meeting is on Mondays at 1800 utc which works well since Infra meetings are on Tuesdays. If we could find a time that works for Europe and APAC that is also on Monday that would be ideal. > > Josh Hesketh has said he will try to be available for these meetings, he is in Australia. > > Let's get a sense of what days and timeframes work for those interested and then we can narrow it down and pick a channel. > > Thanks everyone, > Anita. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From azemlyanov at mirantis.com Wed Dec 3 08:17:57 2014 From: azemlyanov at mirantis.com (Anton Zemlyanov) Date: Wed, 3 Dec 2014 12:17:57 +0400 Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon In-Reply-To: References: Message-ID: The confirmation modal on the top of the other modal is weird and cumbersome. I think disabling clicking outside space (staic backdrop) is the right way to go. 
Anton On Tue, Dec 2, 2014 at 8:30 PM, Thai Q Tran wrote: > I like David's approach, but having two modals (one for the form, one for > the confirmation) on top of each other is a bit weird and would require > constant checks for input. > We already have three ways to close the dialog today: via the cancel > button, X button, and ESC key. It's more important to me that I don't lose > work by accidentally clicking outside. So from this perspective, I think > that having a static behavior is the way to go. Regardless of what approach > we pick, its an improvement over what we have today. > > > [image: Inactive hide details for Timur Sufiev ---12/02/2014 04:22:00 > AM---Hello, Horizoneers and UX-ers! The default behavior of modal]Timur > Sufiev ---12/02/2014 04:22:00 AM---Hello, Horizoneers and UX-ers! The > default behavior of modals in Horizon (defined in turn by Bootstr > > From: Timur Sufiev > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > Date: 12/02/2014 04:22 AM > Subject: [openstack-dev] [horizon] [ux] Changing how the modals are > closed in Horizon > ------------------------------ > > > > Hello, Horizoneers and UX-ers! > > The default behavior of modals in Horizon (defined in turn by Bootstrap > defaults) regarding their closing is to simply close the modal once user > clicks somewhere outside of it (on the backdrop element below and around > the modal). This is not very convenient for the modal forms containing a > lot of input - when it is closed without a warning all the data the user > has already provided is lost. Keeping this in mind, I've made a patch [1] > changing default Bootstrap 'modal_backdrop' parameter to 'static', which > means that forms are not closed once the user clicks on a backdrop, while > it's still possible to close them by pressing 'Esc' or clicking on the 'X' > link at the top right border of the form. 
Also the patch [1] allows to > customize this behavior (between 'true'-current one/'false' - no backdrop > element/'static') on a per-form basis. > > What I didn't know at the moment I was uploading my patch is that David > Lyle had been working on a similar solution [2] some time ago. It's a bit > more elaborate than mine: if the user has already filled some some inputs > in the form, then a confirmation dialog is shown, otherwise the form is > silently dismissed as it happens now. > > The whole point of writing about this in the ML is to gather opinions > which approach is better: > * stick to the current behavior; > * change the default behavior to 'static'; > * use the David's solution with confirmation dialog (once it'll be rebased > to the current codebase). > > What do you think? > > [1] *https://review.openstack.org/#/c/113206/* > > [2] *https://review.openstack.org/#/c/23037/* > > > P.S. I remember that I promised to write this email a week ago, but better > late than never :). > > -- > Timur Sufiev_______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From obondarev at mirantis.com Wed Dec 3 08:44:19 2014 From: obondarev at mirantis.com (Oleg Bondarev) Date: Wed, 3 Dec 2014 11:44:19 +0300 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: +1! Congrats, Henry and Kevin! 
On Tue, Dec 2, 2014 at 6:59 PM, Kyle Mestery wrote: > Now that we're in the thick of working hard on Kilo deliverables, I'd > like to make some changes to the neutron core team. Reviews are the > most important part of being a core reviewer, so we need to ensure > cores are doing reviews. The stats for the 180 day period [1] indicate > some changes are needed for cores who are no longer reviewing. > > First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from > neutron-core. Bob and Nachi have been core members for a while now. > They have contributed to Neutron over the years in reviews, code and > leading sub-teams. I'd like to thank them for all that they have done > over the years. I'd also like to propose that should they start > reviewing more going forward the core team looks to fast track them > back into neutron-core. But for now, their review stats place them > below the rest of the team for 180 days. > > As part of the changes, I'd also like to propose two new members to > neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have > been very active in reviews, meetings, and code for a while now. Henry > lead the DB team which fixed Neutron DB migrations during Juno. Kevin > has been actively working across all of Neutron, he's done some great > work on security fixes and stability fixes in particular. Their > comments in reviews are insightful and they have helped to onboard new > reviewers and taken the time to work with people on their patches. > > Existing neutron cores, please vote +1/-1 for the addition of Henry > and Kevin to the core team. > > Thanks! > Kyle > > [1] http://stackalytics.com/report/contribution/neutron-group/180 > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mengxiandong at gmail.com Wed Dec 3 08:44:48 2014 From: mengxiandong at gmail.com (Xiandong Meng) Date: Wed, 3 Dec 2014 16:44:48 +0800 Subject: [openstack-dev] [nova] The VM migration among compute nodes In-Reply-To: References: Message-ID: Junhong, It seems you have set a pre-condition here: a Kilo controller should be able to manage a Juno compute node without any problem. I am not so confident about this pre-requisite. We also need confirmation from the community about it. Regards, Xiandong Meng mengxiandong at gmail.com On Wed, Dec 3, 2014 at 4:14 PM, Li Junhong wrote: > Dear All, > > I'm working on migrating the VMs among compute nodes. I read the > python-novaclient and nova code and did some investigation in nova to see > the options to migrate a VM between hosts, shown as below; could you guys > help to have a look, add your comments and correct me if I miss anything? > It seems that the following existing commands all have their pros and cons. > Is there a command that satisfies the following requirements: > > - It can migrate the running or non-running VMs among hosts > - The migration target host can be identified > - The new VM on the migration target host looks exactly the same as > the original one, the changes on the VM operating system can be retained > > The following commands come in pairs; the first one is for a single server, > while the second one is for a host (the commands will iterate over the VMs running > on the host and run the first command against each of them) > > -- > Best Regards! > ------------------------------------------------------------------------ > Junhong, Li > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: VM migration.jpg Type: image/jpeg Size: 186391 bytes Desc: not available URL: From thierry at openstack.org Wed Dec 3 08:59:16 2014 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 03 Dec 2014 09:59:16 +0100 Subject: [openstack-dev] [stable] Proposal to add Dave Walker back to stable-maint-core In-Reply-To: <5476ECE4.5040307@redhat.com> References: <5476E5EB.6040707@openstack.org> <5476ECE4.5040307@redhat.com> Message-ID: <547ED0E4.4040709@openstack.org> Ihar Hrachyshka wrote: > +2 OK, lots of +2s and no objection, Daviey is back in ! Thanks, -- Thierry Carrez (ttx) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From tony at bakeyournoodle.com Wed Dec 3 09:00:22 2014 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 3 Dec 2014 20:00:22 +1100 Subject: [openstack-dev] [nova] Adding temporary code to nova to work around bugs in system utilities Message-ID: <20141203090022.GS84915@thor.bakeyournoodle.com> Hi All, I'd like to accomplish 2 things with this message: 1) Unblock (one way or another) https://review.openstack.org/#/c/123957 2) Create some form of consensus on when it's okay to add temporary code to nova to work around bugs in external utilities. So some background on this specific issue. The issue was first reported in July 2014 at [1] and then clarified at [2]. The synopsis of the bug is that calling qemu-img convert -O raw /may/ generate a corrupt output file if the source image isn't fully flushed to disk. The coreutils folk discovered something similar in 2011 *sigh* The clear and correct solution is to ensure that qemu-img uses FIEMAP_FLAG_SYNC. This in turn produces a measurable slowdown in that code path, so additionally it's best if qemu-img uses an alternate method to determine data status in a disk image. This has been done and will be included in qemu 2.2.0 when it's released. 
These fixes prompted a more substantial rework of that code in qemu, which is awesome but not *required* to fix the bug in qemu. While we wait for $distros to get the fixed qemu, nova is still vulnerable to the bug. To that end I proposed a workaround in nova that forces images retrieved from glance to disk with an fsync() prior to calling qemu-img on them. I admit that this is ugly and has a performance impact. In order to reduce the impact of the fsync() I considered: 1) Testing the qemu version and only fsync()ing on affected versions. - Vendors will backport the fix to their version of qemu. The fixed version will still claim to be 2.1.0 (for example) and therefore trigger the fsync() when not required. Given how unreliable this will be I dismissed it as an option. 2) API Change - In the case of this specific bug we only need to fsync() in certain scenarios. It would be easy to add a flag to IMAGE_API.download() to determine if this fsync() is required. This has the nice property of only having a performance impact in the suspect case (personally I'll take slow-and-correct over fast-and-buggy any day). My hesitation is that after we've modified the API it's very hard to remove that change when we decide the workaround is redundant. 3) Config file option - For many of the same reasons as the API change this seemed like a bad idea. Does anyone have any other ideas? One thing that I haven't done is measure the impact of the fsync() on any reasonable workload. This is mainly because I don't really know how. Sure I could do some statistics in devstack but I don't really think they'd be meaningful. Also the size of the image in glance is fairly important. An fsync() of a 100Gb image is many times more painful than a 1Gb image. While in Paris I was asked to look at other code paths in nova where we use qemu-img convert. I'm doing this analysis. To date I have some suspicions that snapshot (and migration) are affected, but no data that confirms or refutes that.
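For illustration, the shape of the workaround described above - flushing image data all the way to disk before handing the file to qemu-img - can be sketched roughly as follows (a generic sketch with hypothetical names, not the actual Nova patch under review):

```python
import os

def write_image_and_sync(path, chunks):
    """Write image data and force it to stable storage, so that a
    subsequent qemu-img convert reads fully flushed data rather than
    racing the page cache (the FIEMAP bug described above)."""
    with open(path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)
        f.flush()              # drain Python's userspace buffer
        os.fsync(f.fileno())   # ask the kernel to commit the file to disk
```

The cost is one fsync() per downloaded image, which is exactly the performance concern raised above for large images.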
I continue to look at the appropriate code in nova, libvirt and qemu. I understand that there is more work to be done in this area, and I'm happy to do it. Having said that from where I sit that work is not directly related to the bug that started this. As the idea is to remove this code as soon as all the distros we care about have a fixed qemu I started an albeit brief discussion here[3] on which distros are in that list. Armed with that list I have opened (or am in the process of opening) bugs for each version of each distribution to make them aware of the issue and the fix. I have a status page at [4]. okay I think I'm done raving. So moving forward: 1) So what should I do with the open review? 2) What can we learn from this in terms of how we work around key utilities that are not in our direct power to change. - Is taking ugly code for "some time" okay? I understand that this is a complex issue as we're relying on $developer to be around (or leave enough information for those that follow) to determine when it's okay to remove the ugliness. If you made it this far bravo! Yours Tony. [1] https://bugs.launchpad.net/nova/+bug/1350766 [2] https://bugs.launchpad.net/qemu/+bug/1368815 [3] http://lists.openstack.org/pipermail/openstack-dev/2014-November/050526.html [4] https://wiki.openstack.org/wiki/Bug1368815 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From sahid.ferdjaoui at redhat.com Wed Dec 3 09:03:06 2014 From: sahid.ferdjaoui at redhat.com (Sahid Orentino Ferdjaoui) Date: Wed, 3 Dec 2014 10:03:06 +0100 Subject: [openstack-dev] [nova] [libvirt] enabling per node filtering of mempage sizes In-Reply-To: <4B1BB321037C0849AAE171801564DFA630E602DE@IRSMSX107.ger.corp.intel.com> References: <4B1BB321037C0849AAE171801564DFA630E602DE@IRSMSX107.ger.corp.intel.com> Message-ID: <20141203090306.GA8186@redhat.redhat.com> On Tue, Dec 02, 2014 at 07:44:23PM +0000, Mooney, Sean K wrote: > Hi all > > I have submitted a small blueprint to allow filtering of available memory pages > reported by libvirt. Can you address this with an aggregate? That would also avoid doing something libvirt-specific in the driver, which would otherwise have to be extended to other drivers eventually. > https://blueprints.launchpad.net/nova/+spec/libvirt-allowed-mempage-sizes > > I believe that this change is small enough to not require a spec as per > http://docs.openstack.org/developer/nova/devref/kilo.blueprints.html > > if a core (and others are welcome too :)) has time to review my blueprint and confirm > that a spec is not required I would be grateful as the spd is rapidly approaching > > I have WIP code developed which I hope to make available for review once > I add unit tests. > > All relevant details (copied below) are included in the whiteboard for the blueprint. > > Regards > Sean > > Problem description > =================== > > In the Kilo cycle, the virt drivers large pages feature[1] was introduced > to allow guests to request the type of memory backing that they desire > via a flavor or image metadata. > > In certain configurations, it may be desired or required to filter the > memory pages available to VMs booted on a node. At present no mechanism > exists to allow filtering of reported memory pages.
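As an illustration of the kind of page-size filtering being proposed in this blueprint (a hypothetical helper, not the actual WIP patch; sizes are in KiB, as libvirt reports them):

```python
# Hypothetical helper sketching the proposed filtering; not Nova code.
SMALL_KIB = {4}                # standard 4k pages
LARGE_KIB = {2048, 1048576}    # 2MB and 1GB huge pages

def filter_mempages(reported_kib, allowed):
    """Filter libvirt-reported page sizes by the configured policy.

    allowed is a list such as ['any'], ['small'], ['large'], or
    explicit sizes like ['4', '2048'].
    """
    if "any" in allowed:
        return list(reported_kib)
    permitted = set()
    for entry in allowed:
        if entry == "small":
            permitted |= SMALL_KIB
        elif entry == "large":
            permitted |= LARGE_KIB
        else:
            permitted.add(int(entry))
    return [size for size in reported_kib if size in permitted]
```

With the default of ['any'] the reported pages pass through unchanged, matching the stated goal of having no effect on existing deployments.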
> > Use Cases > ---------- > > On a host that only supports vhost-user or ivshmem, > all VMs are required to use large page memory. > If a VM is booted with standard pages with these interfaces, > network connectivity will not be available. > > In this case it is desirable to filter out small/4k pages when reporting > available memory to the scheduler. > > Proposed change > =============== > > This blueprint proposes adding a new config variable (allowed_memory_pagesize) > to the libvirt section of nova.conf. > > cfg.ListOpt('allowed_memory_pagesize', > default=['any'], > help='List of allowed memory page sizes. ' > 'Syntax is SizeA,SizeB e.g. small,large. ' > 'Valid sizes are: small,large,any,4,2048,1048576') > > The _get_host_capabilities function in nova/nova/virt/libvirt/driver.py > will be modified to filter the mempages reported for each cell based on the > value of CONF.libvirt.allowed_memory_pagesize > > If small is set then only 4k pages will be reported. > If large is set 2MB and 1GB will be reported. > If any is set no filtering will be applied. > > The default value of "any" was chosen to ensure that this change has no effect on > existing deployments. > > References > ========== > [1] - https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lijh.hust at gmail.com Wed Dec 3 09:09:00 2014 From: lijh.hust at gmail.com (Li Junhong) Date: Wed, 3 Dec 2014 17:09:00 +0800 Subject: [openstack-dev] [nova] Can the Kilo nova controller conduct the Juno compute nodes Message-ID: Hi All, Is it possible for Kilo nova controller to control the Juno compute nodes? Is this scenario supported naturally by the nova mechanism in the design and codes level? -- Best Regards!
------------------------------------------------------------------------ Junhong, Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From pasquale.porreca at dektech.com.au Wed Dec 3 09:16:48 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Wed, 03 Dec 2014 10:16:48 +0100 Subject: [openstack-dev] [NFV][Telco] pxe-boot In-Reply-To: <693765041.19659732.1417044729676.JavaMail.zimbra@redhat.com> References: <20141125223355.Horde.8UWcgskXSBVvU-RJrCgbtw7@mail.dektech.com.au> <693765041.19659732.1417044729676.JavaMail.zimbra@redhat.com> Message-ID: <547ED500.1070008@dektech.com.au> The use case we were thinking about is a Network Function (e.g. IMS Nodes) implementation in which high availability is based on OpenSAF. In this scenario there is an Active/Standby cluster of 2 System Controllers (SC) plus several Payloads (PL) that boot from the network, controlled by the SC. The logic of which service to deploy on each payload is inside the SC. In OpenStack both SCs and PLs will be instances running in the cloud, but the PLs should still boot from the network under the control of the SC. To use Glance to store the image for the PLs while keeping control of the PLs in the SC, the SC would have to trigger the boot of the PLs with requests to Nova/Glance, but an application running inside an instance should not directly interact with a cloud infrastructure service like Glance or Nova. We know that it is already possible to achieve network booting in OpenStack using an image stored in Glance that acts like a PXE client, but this workaround has some drawbacks, mainly that it is not possible to choose the specific virtual NIC on which the network boot will happen, causing DHCP requests to flow on networks where they don't belong and possible delays in the boot of the instances.
On 11/27/14 00:32, Steve Gordon wrote: > ----- Original Message ----- >> From: "Angelo Matarazzo" >> To: "OpenStack Development Mailing" , openstack-operators at lists.openstack.org >> >> >> Hi all, >> my team and I are working on pxe boot feature very similar to the >> "Discless VM" one in Active blueprint list[1] >> The blueprint [2] is no longer active and we created a new spec [3][4]. >> >> Nova core reviewers commented our spec and the first and the most >> important objection is that there is not a compelling reason to >> provide this kind of feature : booting from network. >> >> Aside from the specific implementation, I think that some members of >> TelcoWorkingGroup could be interested in and provide a use case. >> I would also like to add this item to the agenda of next meeting >> >> Any thought? > We did discuss this today, and granted it is listed as a blueprint someone in the group had expressed interest in at a point in time - though I don't believe any further work was done. The general feeling was that there isn't anything really NFV or Telco specific about this over and above the more generic use case of legacy applications. Are you able to further elaborate on the reason it's NFV or Telco specific other than because of who is requesting it in this instance? > > Thanks! 
> > -Steve > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr From samsong8610 at gmail.com Wed Dec 3 09:34:51 2014 From: samsong8610 at gmail.com (sam song) Date: Wed, 03 Dec 2014 17:34:51 +0800 Subject: [openstack-dev] [Ceilometer]Unit test failed on branch stable/icehouse Message-ID: <547ED93B.8070407@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi, folks: I clone the stable/icehouse branch of ceilometer project from github, use "tox -r -e py26" to run unit tests on CentOS 6.5 system. Unfortunately there are many cases failed. You can find the output in test.log, and pip.log is the output of "pip list" in py26 virtualenv. All python packages are download from pypi.douban.com in China, which is fresh according to http://www.pypi-mirrors.org/. Any ideas to fix it are appreciated. Thanks in advance. Sam -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (GNU/Linux) iQEcBAEBAgAGBQJUftk7AAoJEPDWvv0r0XeReC4IAKh+cN2JHGsOEd3/VUsdBnMi Bxlvjp0uzMB8vs2zOljLntf9XZGjPBoOWZcwArYHL3aLe1A+cHxXqm6Y+i0zLmJ2 jf0oV4War+MliptaJNE6BH9URBZ4p2WqxL2k6+V9SP+9dkCHjzTIOZ4zO31jbZy3 P7lfTzQELYAys4CFl3kEV/hm2JIPZXvpVUYGdCEYBc3CVcopCq+wDBChnQx3tIhh ao4KXzx/fOHQK2HLSiv+p2y4XNWDIMA8wpGyjn97JL/eF+lckrVIBd3zFlpT+VtE nidK+rjzWpSDdObEKdEXh+C0jy0S5uBrSU+5hba0GNRVUbnnXP5jHrsP7HZ6900= =mwtP -----END PGP SIGNATURE----- -------------- next part -------------- A non-text attachment was scrubbed... 
Name: log.tar.gz Type: application/x-gzip Size: 15791 bytes Desc: not available URL: From joe.gordon0 at gmail.com Wed Dec 3 09:38:44 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Wed, 3 Dec 2014 11:38:44 +0200 Subject: [openstack-dev] [nova] Can the Kilo nova controller conduct the Juno compute nodes In-Reply-To: References: Message-ID: On Wed, Dec 3, 2014 at 11:09 AM, Li Junhong wrote: > Hi All, > > Is it possible for Kilo nova controller to control the Juno compute nodes? > Is this scenario supported naturally by the nova mechanism in the design > and codes level? > Yes, We gate on making sure we can run Kilo nova with Juno compute nodes. > > > -- > Best Regards! > ------------------------------------------------------------------------ > Junhong, Li > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dbelova at mirantis.com Wed Dec 3 09:47:15 2014 From: dbelova at mirantis.com (Dina Belova) Date: Wed, 3 Dec 2014 12:47:15 +0300 Subject: [openstack-dev] [Ceilometer]Unit test failed on branch stable/icehouse In-Reply-To: <547ED93B.8070407@gmail.com> References: <547ED93B.8070407@gmail.com> Message-ID: Sam, It really looks like you're having old *.pyc files locally in this repo (probably left from other branch testing)... As I understood, you just cloned clean repo and first tests you've ran were these ones? If no, removing all *.pyc files will help you, otherwise I have no clear idea why you might have unit tests failing in the new just cloned repository... Need to investigate. 
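For reference, clearing stale bytecode files as suggested here takes only a few lines (a generic sketch; running `find . -name '*.pyc' -delete` from the shell achieves the same thing):

```python
import os

def remove_stale_pyc(root):
    """Delete leftover *.pyc files under root (e.g. compiled against
    another branch) and return the paths that were removed."""
    removed = []
    for dirpath, _subdirs, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".pyc"):
                path = os.path.join(dirpath, name)
                os.remove(path)
                removed.append(path)
    return removed
```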
Cheers, Dina On Wed, Dec 3, 2014 at 12:34 PM, sam song wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Hi, folks: > > I clone the stable/icehouse branch of ceilometer project from github, > use "tox -r -e py26" to run unit tests on CentOS 6.5 system. > Unfortunately there are many cases failed. You can find the output in > test.log, and pip.log is the output of "pip list" in py26 virtualenv. > All python packages are download from pypi.douban.com in China, which is > fresh according to http://www.pypi-mirrors.org/. > > Any ideas to fix it are appreciated. > Thanks in advance. > > Sam > > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2.0.14 (GNU/Linux) > > iQEcBAEBAgAGBQJUftk7AAoJEPDWvv0r0XeReC4IAKh+cN2JHGsOEd3/VUsdBnMi > Bxlvjp0uzMB8vs2zOljLntf9XZGjPBoOWZcwArYHL3aLe1A+cHxXqm6Y+i0zLmJ2 > jf0oV4War+MliptaJNE6BH9URBZ4p2WqxL2k6+V9SP+9dkCHjzTIOZ4zO31jbZy3 > P7lfTzQELYAys4CFl3kEV/hm2JIPZXvpVUYGdCEYBc3CVcopCq+wDBChnQx3tIhh > ao4KXzx/fOHQK2HLSiv+p2y4XNWDIMA8wpGyjn97JL/eF+lckrVIBd3zFlpT+VtE > nidK+rjzWpSDdObEKdEXh+C0jy0S5uBrSU+5hba0GNRVUbnnXP5jHrsP7HZ6900= > =mwtP > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Best regards, Dina Belova Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijh.hust at gmail.com Wed Dec 3 09:49:35 2014 From: lijh.hust at gmail.com (Li Junhong) Date: Wed, 3 Dec 2014 17:49:35 +0800 Subject: [openstack-dev] [nova] Can the Kilo nova controller conduct the Juno compute nodes In-Reply-To: References: Message-ID: Hi Joe, Thank you for your confirmative answer and the wonderful gate testing pipeline. 
On Wed, Dec 3, 2014 at 5:38 PM, Joe Gordon wrote: > > > On Wed, Dec 3, 2014 at 11:09 AM, Li Junhong wrote: > >> Hi All, >> >> Is it possible for Kilo nova controller to control the Juno compute >> nodes? Is this scenario supported naturally by the nova mechanism in the >> design and codes level? >> > > Yes, > > We gate on making sure we can run Kilo nova with Juno compute nodes. > > >> >> >> -- >> Best Regards! >> ------------------------------------------------------------------------ >> Junhong, Li >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Best Regards! ------------------------------------------------------------------------ Junhong, Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Dec 3 09:52:10 2014 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 03 Dec 2014 10:52:10 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: References: Message-ID: <547EDD4A.8040600@openstack.org> Alan Pevec wrote: > General: cap Oslo and client library versions - sync from > openstack/requirements stable/juno, would be good to include in the > release. > https://review.openstack.org/#/q/status:open+branch:stable/juno+topic:openstack/requirements,n,z So it looks like we should hold on that, and revert the recent capping before release (see further down on this thread). 
> Ceilometer (all proposed by Ceilo PTL) > https://review.openstack.org/138315 Already in > https://review.openstack.org/138317 +1 > https://review.openstack.org/138320 +1 > https://review.openstack.org/138321 Impact a bit unclear, but then the fix is basic > https://review.openstack.org/138322 +1 > Cinder > https://review.openstack.org/137537 - small change and limited to the > VMWare driver +1 > Glance > https://review.openstack.org/137704 - glance_store is backward > compatible, but not sure about forcing version bump on stable -1 for 2014.2.1 exception, also needs to be revisited in light of recent capping revert proposal > https://review.openstack.org/137862 - Disable osprofiler by default to > prevent upgrade issues, disabled by default in other services Sounds like something we'd rather have in the point release than after. I don't think it's a version incompatibility issue, but more of a potential upgrade pain. > Horizon > standing-after-freeze translation update, coming on Dec 3 > https://review.openstack.org/138018 - visible issue, no translation > string changes +1 > https://review.openstack.org/138313 - low risk patch for a highly > problematic issue Already in > Neutron > https://review.openstack.org/136294 - default SNAT, see review for > details, I cannot distil 1liner :) -1: I would rather fix the doc to match behavior, than change behavior to match the doc and lose people that were relying on it. > https://review.openstack.org/136275 - self-contained to the vendor > code, extensively tested in several deployments +0: Feels a bit large for a last-minute exception.
> Nova > https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/juno+topic:1386236/juno,n,z > - soaked more than a week in master, makes numa actually work in Juno +0: Also feels a bit large for a last-minute exception > Sahara > https://review.openstack.org/135549 - fix for auto security groups, > there were some concerns, see review for details Already in -- Thierry Carrez (ttx) From lijh.hust at gmail.com Wed Dec 3 09:53:42 2014 From: lijh.hust at gmail.com (Li Junhong) Date: Wed, 3 Dec 2014 17:53:42 +0800 Subject: [openstack-dev] [nova] Can the Kilo nova controller conduct the Juno compute nodes In-Reply-To: References: Message-ID: Hi Joe, Just want to confirm one more question, in the gate testing, is the neutron/cinder/glance Kilo or Juno. Or in another word, is the controller in gate testing an all-in-one controller? On Wed, Dec 3, 2014 at 5:49 PM, Li Junhong wrote: > Hi Joe, > > Thank you for your confirmative answer and the wonderful gate testing > pipeline. > > On Wed, Dec 3, 2014 at 5:38 PM, Joe Gordon wrote: > >> >> >> On Wed, Dec 3, 2014 at 11:09 AM, Li Junhong wrote: >> >>> Hi All, >>> >>> Is it possible for Kilo nova controller to control the Juno compute >>> nodes? Is this scenario supported naturally by the nova mechanism in the >>> design and codes level? >>> >> >> Yes, >> >> We gate on making sure we can run Kilo nova with Juno compute nodes. >> >> >>> >>> >>> -- >>> Best Regards! >>> ------------------------------------------------------------------------ >>> Junhong, Li >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Best Regards! 
> ------------------------------------------------------------------------ > Junhong, Li > -- Best Regards! ------------------------------------------------------------------------ Junhong, Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From akislitsky at mirantis.com Wed Dec 3 09:57:03 2014 From: akislitsky at mirantis.com (Alexander Kislitsky) Date: Wed, 3 Dec 2014 13:57:03 +0400 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> Message-ID: We used Flask in fuel-stats. It was easy and pleasant, and all project requirements were satisfied. And I saw difficulties and workarounds with Pecan when Nick integrated it into Nailgun. So +1 for Flask. On Tue, Dec 2, 2014 at 11:00 PM, Nikolay Markov wrote: > Michael, we already solved all issues I described, and I just don't > want to solve them once again after moving to another framework. Also, > I think, nothing of these wishes contradicts with good API design. > > On Tue, Dec 2, 2014 at 10:49 PM, Michael Krotscheck > wrote: > > This sounds more like you need to pay off technical debt and clean up > your > > API. > > > > Michael > > > > On Tue Dec 02 2014 at 10:58:43 AM Nikolay Markov > > wrote: > >> > >> Hello all, > >> > >> I actually tried to use Pecan and even created a couple of PoCs, but > >> due to historical reasons in how our API is organized it will > >> take much more time to implement all the workarounds we need for issues > >> Pecan doesn't solve out of the box, like working with non-RESTful > >> URLs, reverse URL lookup, returning a custom body in a 404 response, > >> wrapping errors to JSON automatically, etc. > >> > >> As far as I see, each OpenStack project implements its own workarounds > >> for these issues, but it still requires far fewer man-hours for us > >> to move to Flask-Restful instead of Pecan, because all these problems > >> are already solved there.
> >> > >> BTW, I know a lot of pretty big projects using Flask (it's the second > >> most popular Web framework after Django in Python Web community), they > >> even have their own "hall of fame": > >> http://flask.pocoo.org/community/poweredby/ . > >> > >> On Tue, Dec 2, 2014 at 7:13 PM, Ryan Brown wrote: > >> > On 12/02/2014 09:55 AM, Igor Kalnitsky wrote: > >> >> Hi, Sebastian, > >> >> > >> >> Thank you for raising this topic again. > >> >> > >> >> [snip] > >> >> > >> >> Personally, I'd like to use Flask instead of Pecan, because first one > >> >> is more production-ready tool and I like its design. But I believe > >> >> this should be resolved by voting. > >> >> > >> >> Thanks, > >> >> Igor > >> >> > >> >> On Tue, Dec 2, 2014 at 4:19 PM, Sebastian Kalinowski > >> >> wrote: > >> >>> Hi all, > >> >>> > >> >>> [snip explanation+history] > >> >>> > >> >>> Best, > >> >>> Sebastian > >> > > >> > Given that Pecan is used for other OpenStack projects and has plenty > of > >> > builtin functionality (REST support, sessions, etc) I'd prefer it for > a > >> > number of reasons. > >> > > >> > 1) Wouldn't have to pull in plugins for standard (in Pecan) things > >> > 2) Pecan is built for high traffic, where Flask is aimed at much > smaller > >> > projects > >> > 3) Already used by other OpenStack projects, so common patterns can be > >> > reused as oslo libs > >> > > >> > Of course, the Flask community seems larger (though the average flask > >> > project seems pretty small). > >> > > >> > I'm not sure what determines "production readiness", but it seems to > me > >> > like Fuel developers fall more in Pecan's target audience than in > >> > Flask's. > >> > > >> > My $0.02, > >> > Ryan > >> > > >> > -- > >> > Ryan Brown / Software Engineer, Openstack / Red Hat, Inc. 
> >> > > >> > _______________________________________________ > >> > OpenStack-dev mailing list > >> > OpenStack-dev at lists.openstack.org > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > >> > >> -- > >> Best regards, > >> Nick Markov > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Best regards, > Nick Markov > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at etc.gen.nz Wed Dec 3 09:59:09 2014 From: andrew at etc.gen.nz (Andrew Ruthven) Date: Wed, 03 Dec 2014 22:59:09 +1300 Subject: [openstack-dev] [neutron] Private external network In-Reply-To: <891761EAFA335D44AD1FFDB9B4A8C063C7C080@G4W3216.americas.hpqcorp.net> References: <891761EAFA335D44AD1FFDB9B4A8C063C7C080@G4W3216.americas.hpqcorp.net> Message-ID: <1417600749.20200.5.camel@etc.gen.nz> Hi, I'm picking up this thread from a few months ago, as I have a requirement for a private external network and after doing some testing today came to the conclusion that this isn't currently possible. Rats. On Tue, 2014-10-14 at 17:42 +0000, A, Keshava wrote: > Hi, > > Across these private External network/tenant :: floating IP can be > shared ? I would think not. The external network and associated floating IPs should only be accessible by the tenant (or tenants?) which are granted to use it. At a guess that tenants thought above is where the RBAC considerations come in. 
Does anyone have any pointers to the previous discussions on this? The BP [0] which Édouard linked to states it was untargetted due to community discussions but has no links or other references to those discussions. Cheers, Andrew [0] https://blueprints.launchpad.net/neutron/+spec/sharing-model-for-external-networks -- Andrew Ruthven, Wellington, New Zealand andrew at etc.gen.nz | linux.conf.au 2015 New Zealand's only Cloud: | BeAwesome in Auckland, NZ https://catalyst.net.nz/cloud | http://lca2015.linux.org.au From thierry at openstack.org Wed Dec 3 10:02:30 2014 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 03 Dec 2014 11:02:30 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: <98D81949-644E-4DCE-8F29-18C64F06DAE2@doughellmann.com> References: <547DD4F8.6020306@redhat.com> <98D81949-644E-4DCE-8F29-18C64F06DAE2@doughellmann.com> Message-ID: <547EDFB6.8010802@openstack.org> Doug Hellmann wrote: > > On Dec 2, 2014, at 5:41 PM, Alan Pevec wrote: > >>>> General: cap Oslo and client library versions - sync from >>>> openstack/requirements stable/juno, would be good to include in >>>> the release. >>>> https://review.openstack.org/#/q/status:open+branch:stable/juno+topic:openstack/requirements,n,z >>> >>> +2, >>>> >>> let's keep all deps in sync. Those updates do not break anything >>> for existing users. >> >> Just spotted it, there is now proposal to revert caps in Juno: >> >> https://review.openstack.org/138546 >> >> Doug, shall we stop merging caps to projects in Juno? > > Today we found that when we have caps in place that do not overlap with the versions used in master, we can't upgrade services one at a time on a host running multiple services. We didn't have this problem between icehouse and juno because I used the same cap values for both releases, so we didn't trigger any problems with grenade.
> > One solution is to undo the caps and then add caps when we discover issues in new versions of libraries and stable branches. Another is to require applications to work with "old" versions of libraries and degrade their feature set, so that we can keep the lower bounds overlapping. > > In retrospect, this issue with caps was obvious, but I don't remember it being raised in the planning. As Sean pointed out on IRC today, we should have someone write a spec for changing the way we deal with requirements so we can think about it before deciding what to do. > > After the releases today, the "no more alpha versions for Oslo" ship has sailed. Removing the caps will at least let us move ahead while we figure out what to do for stable branches. Yes, looks like we need to think a bit deeper about it, and at the very least not ship them in the point release. +2 on the requirements revert. -- Thierry Carrez (ttx) From blak111 at gmail.com Wed Dec 3 10:08:45 2014 From: blak111 at gmail.com (Kevin Benton) Date: Wed, 3 Dec 2014 02:08:45 -0800 Subject: [openstack-dev] [neutron] Private external network In-Reply-To: <1417600749.20200.5.camel@etc.gen.nz> References: <891761EAFA335D44AD1FFDB9B4A8C063C7C080@G4W3216.americas.hpqcorp.net> <1417600749.20200.5.camel@etc.gen.nz> Message-ID: There is a current blueprint under discussion[1] which would have covered the external network access control as well, however it looks like the scope is going to have to be reduced for this cycle so it will be limited to shared networks if it's accepted at all. 1. https://review.openstack.org/#/c/132661/ On Wed, Dec 3, 2014 at 1:59 AM, Andrew Ruthven wrote: > Hi, > > I'm picking up this thread from a few months ago, as I have a > requirement for a private external network and after doing some testing > today came to the conclusion that this isn't currently possible. Rats.
> > On Tue, 2014-10-14 at 17:42 +0000, A, Keshava wrote: > > Hi, > > > > Across these private External network/tenant :: floating IP can be > > shared ? > > I would think not. The external network and associated floating IPs > should only be accessible by the tenant (or tenants?) which are granted > to use it. > > At a guess that tenants thought above is where the RBAC considerations > come in. > > Does anyone have any pointers to the previous discussions on this? The > BP [0] which Édouard linked to states it was untargetted due to > community discussions but has no links or other references to those > discussions. > > Cheers, > Andrew > > [0] > > https://blueprints.launchpad.net/neutron/+spec/sharing-model-for-external-networks > > > -- > Andrew Ruthven, Wellington, New Zealand > andrew at etc.gen.nz | linux.conf.au 2015 > New Zealand's only Cloud: | BeAwesome in Auckland, NZ > https://catalyst.net.nz/cloud | http://lca2015.linux.org.au > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Kevin Benton -------------- next part -------------- An HTML attachment was scrubbed... URL: From shardy at redhat.com Wed Dec 3 10:11:00 2014 From: shardy at redhat.com (Steven Hardy) Date: Wed, 3 Dec 2014 10:11:00 +0000 Subject: [openstack-dev] [tripleo] Managing no-mergepy template duplication Message-ID: <20141203101059.GA23017@t430slt.redhat.com> Hi all, Lately I've been spending more time looking at tripleo and doing some reviews. I'm particularly interested in helping the no-mergepy and subsequent puppet-software-config implementations mature (as well as improving overcloud updates via heat).
Since Tomas's patch landed[1] to enable --no-mergepy in tripleo-heat-templates, it's become apparent that frequently patches are submitted which only update overcloud-source.yaml, so I've been trying to catch these and ask for a corresponding change to e.g controller.yaml. This raises the following questions: 1. Is it reasonable to -1 a patch and ask folks to update in both places? 2. How are we going to handle this duplication and divergence? 3. What's the status of getting gating CI on the --no-mergepy templates? 4. What barriers exist (now that I've implemented[2] the eliding functionality requested[3] for ResourceGroup) to moving to the --no-mergepy implementation by default? Thanks for any clarification you can provide! :) Steve [1] https://review.openstack.org/#/c/123100/ [2] https://review.openstack.org/#/c/128365/ [3] https://review.openstack.org/#/c/123713/ From berrange at redhat.com Wed Dec 3 10:12:33 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Wed, 3 Dec 2014 10:12:33 +0000 Subject: [openstack-dev] [nova] [libvirt] enabling per node filtering of mempage sizes In-Reply-To: <20141203090306.GA8186@redhat.redhat.com> References: <4B1BB321037C0849AAE171801564DFA630E602DE@IRSMSX107.ger.corp.intel.com> <20141203090306.GA8186@redhat.redhat.com> Message-ID: <20141203101233.GG10160@redhat.com> On Wed, Dec 03, 2014 at 10:03:06AM +0100, Sahid Orentino Ferdjaoui wrote: > On Tue, Dec 02, 2014 at 07:44:23PM +0000, Mooney, Sean K wrote: > > Hi all > > > > I have submitted a small blueprint to allow filtering of available memory pages > > Reported by libvirt. > > Can you address this with aggregate? this will also avoid to do > something specific in the driver libvirt. Which will have to be > extended to other drivers at the end. Agreed, I think you can address this by setting up host aggregates and then using setting the desired page size on the flavour. 
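A concrete sketch of the aggregate-plus-flavour approach Daniel suggests might look roughly like the following. The aggregate, host, and flavor names are invented for illustration; `hw:mem_page_size` is the flavor extra spec associated with the large-pages work, and the exact CLI syntax here is untested, so treat this as an outline rather than a recipe:

```shell
# Group the compute hosts that have large pages preallocated into a
# host aggregate (names below are hypothetical).
nova aggregate-create largepage-hosts
nova aggregate-add-host largepage-hosts compute-01
nova aggregate-add-host largepage-hosts compute-02

# Request the desired page size on a flavor; instances booted from this
# flavor would then be scheduled onto hosts able to back them with large
# pages, without per-node filtering logic in the libvirt driver itself.
nova flavor-key m1.largepages set hw:mem_page_size=large
```

The point of the suggestion is that placement policy lives in scheduling metadata (aggregates and flavor extra specs) rather than in driver-specific configuration, so other virt drivers get it for free.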
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From majopela at redhat.com Wed Dec 3 10:28:57 2014 From: majopela at redhat.com (=?utf-8?Q?Miguel_=C3=81ngel_Ajo?=) Date: Wed, 3 Dec 2014 11:28:57 +0100 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: <5B3A918D5E0948DA89A72345FEC7EB35@redhat.com> Congratulations to Henry and Kevin, very well deserved!, keep up the good work! :) Miguel Ángel Ajo On Wednesday, 3 de December de 2014 at 09:44, Oleg Bondarev wrote: > +1! Congrats, Henry and Kevin! > > On Tue, Dec 2, 2014 at 6:59 PM, Kyle Mestery wrote: > > Now that we're in the thick of working hard on Kilo deliverables, I'd > > like to make some changes to the neutron core team. Reviews are the > > most important part of being a core reviewer, so we need to ensure > > cores are doing reviews. The stats for the 180 day period [1] indicate > > some changes are needed for cores who are no longer reviewing. > > > > First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from > > neutron-core. Bob and Nachi have been core members for a while now. > > They have contributed to Neutron over the years in reviews, code and > > leading sub-teams. I'd like to thank them for all that they have done > > over the years. I'd also like to propose that should they start > > reviewing more going forward the core team looks to fast track them > > back into neutron-core. But for now, their review stats place them > > below the rest of the team for 180 days. > > > > As part of the changes, I'd also like to propose two new members to > > neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have > > been very active in reviews, meetings, and code for a while now.
Henry > > lead the DB team which fixed Neutron DB migrations during Juno. Kevin > > has been actively working across all of Neutron, he's done some great > > work on security fixes and stability fixes in particular. Their > > comments in reviews are insightful and they have helped to onboard new > > reviewers and taken the time to work with people on their patches. > > > > Existing neutron cores, please vote +1/-1 for the addition of Henry > > and Kevin to the core team. > > > > Thanks! > > Kyle > > > > [1] http://stackalytics.com/report/contribution/neutron-group/180 > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe.gordon0 at gmail.com Wed Dec 3 10:34:40 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Wed, 3 Dec 2014 12:34:40 +0200 Subject: [openstack-dev] [nova] Can the Kilo nova controller conduct the Juno compute nodes In-Reply-To: References: Message-ID: On Wed, Dec 3, 2014 at 11:53 AM, Li Junhong wrote: > Hi Joe, > > Just want to confirm one more question, in the gate testing, is the > neutron/cinder/glance Kilo or Juno. Or in another word, is the controller > in gate testing an all-in-one controller? > Great question. In our current test neutron/cinder/glance are Kilo. But we do want to support the case where neutron/cinder/glance are Juno, as you should be able to upgrade each service independently. While we don't test it, we design around that goal, so with some testing and bug fixing it should work. 
> > On Wed, Dec 3, 2014 at 5:49 PM, Li Junhong wrote: > >> Hi Joe, >> >> Thank you for your confirmative answer and the wonderful gate testing >> pipeline. >> >> On Wed, Dec 3, 2014 at 5:38 PM, Joe Gordon wrote: >> >>> >>> >>> On Wed, Dec 3, 2014 at 11:09 AM, Li Junhong wrote: >>> >>>> Hi All, >>>> >>>> Is it possible for Kilo nova controller to control the Juno compute >>>> nodes? Is this scenario supported naturally by the nova mechanism in the >>>> design and codes level? >>>> >>> >>> Yes, >>> >>> We gate on making sure we can run Kilo nova with Juno compute nodes. >>> >>> >>>> >>>> >>>> -- >>>> Best Regards! >>>> ------------------------------------------------------------------------ >>>> Junhong, Li >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> -- >> Best Regards! >> ------------------------------------------------------------------------ >> Junhong, Li >> > > > > -- > Best Regards! > ------------------------------------------------------------------------ > Junhong, Li > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ikliuk at mirantis.com Wed Dec 3 10:36:51 2014 From: ikliuk at mirantis.com (Ivan Kliuk) Date: Wed, 03 Dec 2014 12:36:51 +0200 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: Message-ID: <547EE7C3.6000904@mirantis.com> Well, I think if the general direction is to make Fuel using OpenStack tools and libraries as much as it's possible, that makes sense to use Pecan. Otherwise I'd prefer to swap web.py with Flask. Sincerely yours, Ivan Kliuk On 12/2/14 16:55, Igor Kalnitsky wrote: > Hi, Sebastian, > > Thank you for raising this topic again. > > Yes, indeed, we need to move out from web.py as soon as possible and > there are a lot of reasons why we should do it. But this topic is not > about "Why", this topic is about "Flask or Pecan". > > Well, currently Fuel uses both of this frameworks: > > * OSTF is using Pecan > * Fuel Stats is using Flask > > Personally, I'd like to use Flask instead of Pecan, because first one > is more production-ready tool and I like its design. But I believe > this should be resolved by voting. > > Thanks, > Igor > > On Tue, Dec 2, 2014 at 4:19 PM, Sebastian Kalinowski > wrote: >> Hi all, >> >> Some time ago we had a discussion about moving Nailgun to new web framework >> [1]. >> >> There was comparison [2] of two possible options: Pecan [3] and Flask [4]. >> We came to conclusion that we need to move Nailgun on some alive web >> framework >> instead of web.py [5] (some of the reasons: [6]) but there was no clear >> agreement >> on what framework (there were strong voices for Flask). >> >> I would like to bring this topic up again so we could discuss with broader >> audience and >> make final decision what will be our next web framework. >> >> I think that we should also consider to make that framework our "weapon of >> choice" (or so >> called standard) when creating new web services in Fuel. 
>> >> Best, >> Sebastian >> >> >> [1] https://lists.launchpad.net/fuel-dev/msg01397.html >> [2] >> https://docs.google.com/a/mirantis.com/document/d/1QR7YphyfN64m-e9b5rKC_U8bMtx4zjfW943BfLTqTao/edit?usp=sharing >> [3] http://www.pecanpy.org/ >> [4] http://flask.pocoo.org/ >> [5] http://webpy.org/ >> [6] https://lists.launchpad.net/fuel-dev/msg01501.html >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mathieu.rohon at gmail.com Wed Dec 3 10:41:07 2014 From: mathieu.rohon at gmail.com (Mathieu Rohon) Date: Wed, 3 Dec 2014 11:41:07 +0100 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: Hi all, It seems that a process with a survey for neutron core election/removal was about to take place [1]. Has it been applied for this proposal? This proposal has been hardly discussed during neutron meetings [2][3]. Many cores agree that the number of reviews shouldn't be the only metrics. And this statement is reflected in the Survey Questions. So I'm surprised to see such a proposal based on stackalitics figures. [1]https://etherpad.openstack.org/p/neutron-peer-review [2]http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-10-13-21.02.log.html [3]http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-10-21-14.00.log.html On Wed, Dec 3, 2014 at 9:44 AM, Oleg Bondarev wrote: > +1! Congrats, Henry and Kevin! > > On Tue, Dec 2, 2014 at 6:59 PM, Kyle Mestery wrote: >> >> Now that we're in the thick of working hard on Kilo deliverables, I'd >> like to make some changes to the neutron core team. 
Reviews are the >> most important part of being a core reviewer, so we need to ensure >> cores are doing reviews. The stats for the 180 day period [1] indicate >> some changes are needed for cores who are no longer reviewing. >> >> First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from >> neutron-core. Bob and Nachi have been core members for a while now. >> They have contributed to Neutron over the years in reviews, code and >> leading sub-teams. I'd like to thank them for all that they have done >> over the years. I'd also like to propose that should they start >> reviewing more going forward the core team looks to fast track them >> back into neutron-core. But for now, their review stats place them >> below the rest of the team for 180 days. >> >> As part of the changes, I'd also like to propose two new members to >> neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have >> been very active in reviews, meetings, and code for a while now. Henry >> lead the DB team which fixed Neutron DB migrations during Juno. Kevin >> has been actively working across all of Neutron, he's done some great >> work on security fixes and stability fixes in particular. Their >> comments in reviews are insightful and they have helped to onboard new >> reviewers and taken the time to work with people on their patches. >> >> Existing neutron cores, please vote +1/-1 for the addition of Henry >> and Kevin to the core team. >> >> Thanks! 
>> Kyle >> >> [1] http://stackalytics.com/report/contribution/neutron-group/180 >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From cmsj at tenshu.net Wed Dec 3 10:47:30 2014 From: cmsj at tenshu.net (Chris Jones) Date: Wed, 3 Dec 2014 10:47:30 +0000 Subject: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023) In-Reply-To: <547E98F0.8010406@redhat.com> References: <54755DE7.2070107@redhat.com> <547607C3.3090604@nemebean.com> <1417495283-sup-444@fewbar.com> <547E1177.4030705@redhat.com> <1417559171-sup-8883@fewbar.com> <547E98F0.8010406@redhat.com> Message-ID: <3C6FDAE6-9BEB-4E9B-91A8-BD5AE724C6D7@tenshu.net> Hi I am very sympathetic to this view. We have a patch in hand that improves the situation. We also have disagreement about the ideal situation. I +2'd Ian's patch because it makes things work better than they do now. If we can arrive at an ideal solution later, great, but the more I think about logging from a multitude of bash scripts, and tricks like XTRACE_FD, the more I think it's crazy and we should just incrementally improve the non-trace logging as a separate exercise, leaving working tracing for true debugging situations. Cheers, -- Chris Jones > On 3 Dec 2014, at 05:00, Ian Wienand wrote: > >> On 12/03/2014 09:30 AM, Clint Byrum wrote: >> I for one find the idea of printing every cp, cat, echo and ls command out >> rather frustratingly verbose when scanning logs from a normal run. 
> > I for one find this ongoing discussion over a flag whose own help says > "-x -- turn on tracing" not doing the blindly obvious thing of turning > on tracing and the seeming inability to reach to a conclusion on a > posted review over 3 months a troubling narrative for potential > consumers of diskimage-builder. > > -i > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From majopela at redhat.com Wed Dec 3 10:49:37 2014 From: majopela at redhat.com (=?utf-8?Q?Miguel_=C3=81ngel_Ajo?=) Date: Wed, 3 Dec 2014 11:49:37 +0100 Subject: [openstack-dev] [neutron] force_gateway_on_subnet, please don't deprecate In-Reply-To: <547DD12C.7060405@redhat.com> References: <95DCE311E0DE421D8C881E5CCD80E5D8@redhat.com> <1384913819.11781239.1417435977062.JavaMail.zimbra@redhat.com> <547DD12C.7060405@redhat.com> Message-ID: I will spend some time on it during tonight/weekend to make sure it's removed and that we have reference implementation working at once. I propose following the bug fix way, as it's a tiny change. https://bugs.launchpad.net/neutron/+bug/1398768 @amuller, I'm not sure I understand why does it need to be covered in the dhcp-agent side. Pushing extra routes to guest-vms?, I think we don't cover the case of instances connected to an external network where we provide dhcp, but we may do that if we are or if we start covering that case anytime. Miguel Ángel Ajo On Tuesday, 2 de December de 2014 at 15:48, Ihar Hrachyshka wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > On 01/12/14 21:19, Kyle Mestery wrote: > > On Mon, Dec 1, 2014 at 6:12 AM, Assaf Muller > > wrote: > > > > > > > > > ----- Original Message ----- > > > > > > > > My proposal here, is, _let's not deprecate this setting_, as > > > > it's a valid use case of a gateway configuration, and let's > > > > provide it on the reference implementation.
> > > > > > > > > > I agree. As long as the reference implementation works with the > > > setting off there's no need to deprecate it. I still think the > > > default should be set to True though. > > > > > > Keep in mind that the DHCP agent will need changes as well. > > ++ to both suggestions Assaf. Thanks for bringing this up Miguel! > > > > > Miguel, how about sending a patch that removes deprecation warning > from the help text then? > > > > > Kyle > > > > > > > > > > TL;DR > > > > > > > > I've been looking at this yesterday, during a test deployment > > > > on a site where they provide external connectivity with the > > > > gateway outside subnet. > > > > > > > > And I needed to switch it off, to actually be able to have any > > > > external connectivity. > > > > > > > > https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L121 This is handled by providing an on-link route to the gateway first, > > > > and then adding the default gateway. > > > > > > > > It looks to me very interesting (not only because it's the only > > > > way to work on that specific site [2][3][4]), because you can > > > > dynamically wire RIPE blocks to your server, without needing to > > > > use a specific IP for external routing or broadcast purposes, > > > > and instead use the full block in openstack. > > > > > > > > > > > > I have a tiny patch to support this on the neutron l3-agent [1] > > > > I yet need to add the logic to check "gateway outside subnet", > > > > then add the "onlink" route.
> > > >
> > > > [1]
> > > >
> > > > diff --git a/neutron/agent/linux/interface.py b/neutron/agent/linux/interface.py
> > > > index 538527b..5a9f186 100644
> > > > --- a/neutron/agent/linux/interface.py
> > > > +++ b/neutron/agent/linux/interface.py
> > > > @@ -116,15 +116,16 @@ class LinuxInterfaceDriver(object):
> > > >                  namespace=namespace, ip=ip_cidr)
> > > >
> > > > -        if gateway:
> > > > -            device.route.add_gateway(gateway)
> > > > -
> > > >          new_onlink_routes = set(s['cidr'] for s in extra_subnets)
> > > > +        if gateway:
> > > > +            new_onlink_routes.update([gateway])
> > > >          existing_onlink_routes = set(device.route.list_onlink_routes())
> > > >          for route in new_onlink_routes - existing_onlink_routes:
> > > >              device.route.add_onlink_route(route)
> > > >          for route in existing_onlink_routes - new_onlink_routes:
> > > >              device.route.delete_onlink_route(route)
> > > > +        if gateway:
> > > > +            device.route.add_gateway(gateway)
> > > >
> > > >      def delete_conntrack_state(self, root_helper, namespace, ip):
> > > >          """Delete conntrack state associated with an IP address.
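For readers trying to follow the quoted patch, the underlying routing trick can be reproduced by hand with iproute2. The addresses below are documentation examples and the commands need root on a scratch interface, so this is purely an illustration of the on-link idea the patch automates, not something the l3-agent literally runs:

```shell
# eth0 carries an address from a routed block, e.g. 198.51.100.10/32,
# while the provider's gateway 203.0.113.1 sits on an unrelated subnet.

# 1. Install an on-link host route: the kernel will resolve the gateway
#    directly on eth0 even though no configured prefix covers it.
ip route add 203.0.113.1/32 dev eth0

# 2. Only now will the kernel accept a default route via that off-subnet
#    gateway.
ip route add default via 203.0.113.1 dev eth0
```

Without step 1 the second command fails with "Nexthop has invalid gateway", which is exactly why the patch inserts the on-link route before calling add_gateway().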
> > > > [2] http://www.soyoustart.com/ [3] http://www.ovh.co.uk/ [4] > > > > http://www.kimsufi.com/ > > > > > > > > > > > > Miguel Ángel Ajo > > > > > > > > > > > > > > > > > > > > _______________________________________________ OpenStack-dev > > > > mailing list OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > _______________________________________________ > > > OpenStack-dev mailing list OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > _______________________________________________ OpenStack-dev > > mailing list OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > > iQEcBAEBCgAGBQJUfdEsAAoJEC5aWaUY1u57a4QIANjx/wOJJKlHJ1kiE5DQ80La > WP5DYwWj64MA+pDXPoE18+JZEV+7igHD7zeKb8pua4Ql+X/EDbLG5GK1ry4EV5RC > uKnO2tht/bLfrniirqoOcL5TqybW86ZP4TLtTzV1PdAQBNGoOaRU8pox5oAkZOmm > FrFVtBqoMtUAM9X8P7OHjkkvMLfoBinhWjlnyYWrzl6ZJtTCCipWJrVesHoWAL+F > DcWotMsSMkkCAolnDE1AST4Z6pRvj7Y4lhQyZGaOtDGkYoMPBb7PTaGIltzX3ijB > ZzDwz39o+kU9pY0/7Web6tFCEw+zFFr01rVBcQXDi5cJ2wRW7uT0J/9Aw0Rrn1M= > =coN8 > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From flavio at redhat.com Wed Dec 3 11:05:25 2014 From: flavio at redhat.com (Flavio Percoco) Date: Wed, 3 Dec 2014 12:05:25 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: References: Message-ID: <20141203110525.GJ8984@redhat.com> On 02/12/14 14:22 +0100, Alan Pevec wrote: >Hi all, > >here are exception proposal I have collected when preparing for the >2014.2.1 release, >stable-maint members please have a look! > > >General: cap Oslo and client library versions - sync from >openstack/requirements stable/juno, would be good to include in the >release. >https://review.openstack.org/#/q/status:open+branch:stable/juno+topic:openstack/requirements,n,z +2 [snip] > >Cinder >https://review.openstack.org/137537 - small change and limited to the >VMWare driver +2 > >Glance >https://review.openstack.org/137704 - glance_store is backward >compatible, but not sure about forcing version bump on stable >https://review.openstack.org/137862 - Disable osprofiler by default to >prevent upgrade issues, disabled by default in other services +2 to both [snip] Cheers, Flavio -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From flavio at redhat.com Wed Dec 3 11:24:33 2014 From: flavio at redhat.com (Flavio Percoco) Date: Wed, 3 Dec 2014 12:24:33 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: <547EDFB6.8010802@openstack.org> References: <547DD4F8.6020306@redhat.com> <98D81949-644E-4DCE-8F29-18C64F06DAE2@doughellmann.com> <547EDFB6.8010802@openstack.org> Message-ID: <20141203112433.GK8984@redhat.com> On 03/12/14 11:02 +0100, Thierry Carrez wrote: >Doug Hellmann wrote: >> >> On Dec 2, 2014, at 5:41 PM, Alan Pevec wrote: >> >>>>> General: cap Oslo and client library versions - sync from >>>>> openstack/requirements stable/juno, would be good to include in >>>>> the release. >>>>> https://review.openstack.org/#/q/status:open+branch:stable/juno+topic:openstack/requirements,n,z >>>> >>>> +2, >>>>> >>>> let's keep all deps in sync. Those updates do not break anything >>>> for existing users. >>> >>> Just spotted it, there is now proposal to revert caps in Juno: >>> >>> https://review.openstack.org/138546 >>> >>> Doug, shall we stop merging caps to projects in Juno? >> >> Today we found that when we have caps in place that do not overlap with the versions used in master, we can't upgrade services one at a time on a host running multiple services. We didn't have this problem between icehouse and juno because I used the same cap values for both releases, so we didn't trigger any problems with grenade. >> >> One solution is to undo the caps and then add caps when we discover issues in new versions of libraries and stable branches. Another is to require applications to work with "old" versions of libraries and degrade their feature set, so that we can keep the lower bounds overlapping. >> >> In retrospect, this issue with caps was obvious, but I don't remember it being raised in the planning.
As Sean pointed out on IRC today, we should have someone write a spec for changing the way we deal with requirements so we can think about it before deciding what to do. >> >> After the releases today, the "no more alpha versions for Oslo" ship has sailed. Removing the caps will at least let us move ahead while we figure out what to do for stable branches. > >Yes, looks like we need to think a bit deeper about it, and at the very >least not ship them in the point release. > >+2 on the requirements revert. I just read this and I agree. The same thing applies for the cap planned for glance_store, which I had already -2'd. Cheers, Fla -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From joe.gordon0 at gmail.com Wed Dec 3 11:30:15 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Wed, 3 Dec 2014 13:30:15 +0200 Subject: [openstack-dev] [python-novaclient] Status of novaclient V3 In-Reply-To: References: Message-ID: On Tue, Dec 2, 2014 at 5:21 PM, Andrey Kurilin wrote: > Hi! > > While working on fixing wrong import in novaclient v3 shell, I have found > that a lot of commands, which are listed in V3 shell(novaclient.v3.shell), > are broken, because appropriate managers are missed from V3 > client(novaclient.V3.client.Client). > > Template of error is "ERROR (AttributeError): 'Client' object has no > attribute ''", where can be "floating_ip_pools", > "floating_ip", "security_groups", "dns_entries" and etc. > > I know that novaclient V3 is not finished yet, and I guess it will be not > finished. So the main question is: > What we should do with implemented code of novaclient V3 ? Should it be > ported to novaclient V2.1 or it can be removed? > I think it can be removed, as we are not going forward with the V3 API. But I will defer to Christopher Yeoh/Ken'ichi Ohmichi for the details.
> > -- > Best regards, > Andrey Kurilin. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkaminski at mirantis.com Wed Dec 3 11:45:41 2014 From: pkaminski at mirantis.com (Przemyslaw Kaminski) Date: Wed, 03 Dec 2014 12:45:41 +0100 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> Message-ID: <547EF7E5.7020509@mirantis.com> The only useful paradigm to write in Flask is MethodView's for me [1] because decorators seem hard to refactor for large projects. Please look at adding URLs -- one has to additionally specify methods to match those from the MethodView -- this is code duplication and looks ugly. It seems though that Flask-RESTful [2] fixes this but then we're dependent on 2 projects. I don't like that Flask uses a global request object [3]. From Flask documentation "Basically you can completely ignore that this is the case unless you are doing something like unit testing. You will notice that code which depends on a request object will suddenly break because there is no request object. The solution is creating a request object yourself and binding it to the context." Yeah, let's make testing even harder... Pecan looks better in respect of RESTful services [4]. POST parameters are cleanly passed as arguments to the post method. It also provides custom JSON serialization hooks [5] so we can forget about explicit serialization in handlers. So from these 2 choices I'm for Pecan. [1] http://flask.pocoo.org/docs/0.10/views/#method-views-for-apis [2] https://flask-restful.readthedocs.org/en/0.3.0/ [3] http://flask.pocoo.org/docs/0.10/quickstart/#accessing-request-data [4] http://pecan.readthedocs.org/en/latest/rest.html [5] http://pecan.readthedocs.org/en/latest/jsonify.html P.
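To make the comparison above concrete: both Flask's MethodView and Pecan's RestController ultimately dispatch a request to a handler method named after the HTTP verb. The following is a framework-free sketch of that dispatch pattern, not real Flask or Pecan API (class and method names here are invented for illustration):

```python
class MethodDispatcher:
    """Minimal stand-in for a class-based REST view: one class per
    resource, one handler method per HTTP verb."""

    def dispatch(self, method, *args, **kwargs):
        # Route by looking up a handler named after the verb, the way
        # MethodView and RestController do internally.
        handler = getattr(self, method.lower(), None)
        if handler is None:
            return 405, "Method Not Allowed"
        return 200, handler(*args, **kwargs)


class NodeView(MethodDispatcher):
    """Toy resource keeping its state in a dict instead of a DB."""

    def __init__(self):
        self.nodes = {}

    def get(self, node_id):
        return self.nodes.get(node_id, "not found")

    def post(self, node_id, data):
        self.nodes[node_id] = data
        return "created"


view = NodeView()
print(view.dispatch("POST", "node-1", {"role": "controller"}))  # (200, 'created')
print(view.dispatch("GET", "node-1"))     # (200, {'role': 'controller'})
print(view.dispatch("DELETE", "node-1"))  # (405, 'Method Not Allowed')
```

The duplication complaint above is that, in the Flask docs' pattern of splitting one MethodView across several URL rules, the allowed verbs must be listed again per rule in `add_url_rule(..., methods=[...])`, even though the class body already declares them; Pecan's RestController derives them from the handler names alone.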
On 12/03/2014 10:57 AM, Alexander Kislitsky wrote:
> We had used Flask in fuel-stats. It was easy and pleasant, and all project requirements were satisfied. And I saw difficulties and workarounds with Pecan when Nick integrated it into Nailgun.
> So +1 for Flask.
>
> On Tue, Dec 2, 2014 at 11:00 PM, Nikolay Markov wrote:
> > Michael, we already solved all the issues I described, and I just don't want to solve them once again after moving to another framework. Also, I think, nothing of these wishes contradicts good API design.
> >
> > On Tue, Dec 2, 2014 at 10:49 PM, Michael Krotscheck wrote:
> > > This sounds more like you need to pay off technical debt and clean up your API.
> > >
> > > Michael
> > >
> > > On Tue Dec 02 2014 at 10:58:43 AM Nikolay Markov wrote:
> > >> Hello all,
> > >>
> > >> I actually tried to use Pecan and even created a couple of PoCs, but due to historical reasons of how our API is organized it will take much more time to implement all the workarounds we need for issues Pecan doesn't solve out of the box, like working with non-RESTful URLs, reverse URL lookup, returning a custom body in a 404 response, wrapping errors in JSON automatically, etc.
> > >>
> > >> As far as I see, each OpenStack project implements its own workarounds for these issues, but it still requires far fewer people and hours for us to move to Flask-RESTful instead of Pecan, because all these problems are already solved there.
> > >>
> > >> BTW, I know a lot of pretty big projects using Flask (it's the second most popular Web framework after Django in the Python Web community); they even have their own "hall of fame": http://flask.pocoo.org/community/poweredby/ .
> > >>
> > >> On Tue, Dec 2, 2014 at 7:13 PM, Ryan Brown wrote:
> > >> > On 12/02/2014 09:55 AM, Igor Kalnitsky wrote:
> > >> >> Hi, Sebastian,
> > >> >>
> > >> >> Thank you for raising this topic again.
> > >> >>
> > >> >> [snip]
> > >> >>
> > >> >> Personally, I'd like to use Flask instead of Pecan, because the first one is a more production-ready tool and I like its design. But I believe this should be resolved by voting.
> > >> >>
> > >> >> Thanks,
> > >> >> Igor
> > >> >>
> > >> >> On Tue, Dec 2, 2014 at 4:19 PM, Sebastian Kalinowski wrote:
> > >> >>> Hi all,
> > >> >>>
> > >> >>> [snip explanation+history]
> > >> >>>
> > >> >>> Best,
> > >> >>> Sebastian
> > >> >
> > >> > Given that Pecan is used for other OpenStack projects and has plenty of builtin functionality (REST support, sessions, etc.) I'd prefer it for a number of reasons.
> > >> >
> > >> > 1) Wouldn't have to pull in plugins for standard (in Pecan) things
> > >> > 2) Pecan is built for high traffic, where Flask is aimed at much smaller projects
> > >> > 3) Already used by other OpenStack projects, so common patterns can be reused as oslo libs
> > >> >
> > >> > Of course, the Flask community seems larger (though the average Flask project seems pretty small).
> > >> >
> > >> > I'm not sure what determines "production readiness", but it seems to me like Fuel developers fall more in Pecan's target audience than in Flask's.
> > >> >
> > >> > My $0.02,
> > >> > Ryan
> > >> >
> > >> > --
> > >> > Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
> >> > > >> > _______________________________________________ > >> > OpenStack-dev mailing list > >> > OpenStack-dev at lists.openstack.org > > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > >> > >> -- > >> Best regards, > >> Nick Markov > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Best regards, > Nick Markov > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From cbkyeoh at gmail.com Wed Dec 3 11:59:05 2014 From: cbkyeoh at gmail.com (Christopher Yeoh) Date: Wed, 3 Dec 2014 22:29:05 +1030 Subject: [openstack-dev] [python-novaclient] Status of novaclient V3 In-Reply-To: References: Message-ID: <3A4E8032-991C-4273-95A1-FBF6B5E67E03@gmail.com> > On 3 Dec 2014, at 10:00 pm, Joe Gordon wrote: > > > >> On Tue, Dec 2, 2014 at 5:21 PM, Andrey Kurilin wrote: >> Hi! >> >> While working on fixing wrong import in novaclient v3 shell, I have found that a lot of commands, which are listed in V3 shell(novaclient.v3.shell), are broken, because appropriate managers are missed from V3 client(novaclient.V3.client.Client). 
>> The error template is "ERROR (AttributeError): 'Client' object has no attribute '<attribute>'", where <attribute> can be "floating_ip_pools", "floating_ip", "security_groups", "dns_entries", etc.
>>
>> I know that novaclient V3 is not finished yet, and I guess it will not be finished. So the main question is:
>> What should we do with the implemented code of novaclient V3? Should it be ported to novaclient V2.1, or can it be removed?
>
> I think it can be removed, as we are not going forward with the V3 API. But I will defer to Christopher Yeoh/Ken'ichi Ohmichi for the details.

I think it can all just be removed now. We're going to need to enhance novaclient to understand microversions instead. But for now v2.1 should look just like v2.

Chris

>> --
>> Best regards,
>> Andrey Kurilin.
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From scheuran at linux.vnet.ibm.com Wed Dec 3 12:01:20 2014
From: scheuran at linux.vnet.ibm.com (Andreas Scheuring)
Date: Wed, 03 Dec 2014 13:01:20 +0100
Subject: [openstack-dev] [Neutron] DVR Slides from Paris Summit
Message-ID: <1417608080.4484.4.camel@oc5515017671.ibm.com>

Hi,
is there a way to get access to the slides from the DVR session of the Paris summit? Unfortunately the slides in the video are not readable. The speakers were Swaminathan Vasudevan, Jack McCann, Vivekanandan Narasimhan, Rajeev Grover, and Michael Smith. So maybe one of you can post them on slideshare or somewhere else? That would be great.
This is the link to the related video: https://www.openstack.org/summit/openstack-paris-summit-2014/session-videos/presentation/architectural-overview-of-distributed-virtual-routers-in-openstack-neutron

Thanks

--
Andreas (irc: scheuran)

From zfx0906 at gmail.com Wed Dec 3 12:20:45 2014
From: zfx0906 at gmail.com (zfx0906 at gmail.com)
Date: Wed, 3 Dec 2014 20:20:45 +0800
Subject: [openstack-dev] How to get openstack bugs data for research?
Message-ID: <2014120320204186973010@gmail.com>

Hi all,

I am a graduate student at Peking University; our lab does some research on open source projects. This is our introduction: https://passion-lab.org/

Now we need OpenStack issues data for research. I found the issues list: https://bugs.launchpad.net/openstack/
I want to download the OpenStack issues data. Could anyone tell me how to download the data? Or is there some link or API for downloading the data?

And I found 9464 bugs in https://bugs.launchpad.net/openstack/ - is this all? Why so few?

Many thanks!

Best regards,
Feixue Zhang
zfx0906 at gmail.com

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ksambor at mirantis.com Wed Dec 3 12:21:52 2014
From: ksambor at mirantis.com (Kamil Sambor)
Date: Wed, 3 Dec 2014 13:21:52 +0100
Subject: [openstack-dev] [Fuel][Nailgun] Web framework
In-Reply-To: <547EF7E5.7020509@mirantis.com>
References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com>
Message-ID:

Additionally to what Przemek wrote: Pecan is also released more often, IMHO its documentation is better written and describes a lot of possibilities for modification, and, as Lukasz wrote in a previous thread, Pecan is used in OpenStack.

So I'm also for Pecan.

Best regards,
Kamil S.

On Wed, Dec 3, 2014 at 12:45 PM, Przemyslaw Kaminski wrote:
> The only useful paradigm to write in Flask is MethodViews for me [1], because decorators seem hard to refactor for large projects.
> Please look at adding URLs -- one has to additionally specify methods to match those from the MethodView -- this is code duplication and looks ugly.
>
> [snip rest of quoted thread]

From akurilin at mirantis.com Wed Dec 3 12:30:12 2014
From: akurilin at mirantis.com (Andrey Kurilin)
Date: Wed, 3 Dec 2014 14:30:12 +0200
Subject: [openstack-dev] [python-novaclient] Status of novaclient V3
In-Reply-To: <3A4E8032-991C-4273-95A1-FBF6B5E67E03@gmail.com>
References: <3A4E8032-991C-4273-95A1-FBF6B5E67E03@gmail.com>
Message-ID:

Thanks for the answers.
I sent a patch to novaclient: https://review.openstack.org/#/c/138694/

On Wed, Dec 3, 2014 at 1:59 PM, Christopher Yeoh wrote:
[snip quoted thread]

--
Best regards,
Andrey Kurilin.

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From kragniz at gmail.com Wed Dec 3 12:41:47 2014
From: kragniz at gmail.com (Louis Taylor)
Date: Wed, 3 Dec 2014 12:41:47 +0000
Subject: [openstack-dev] How to get openstack bugs data for research?
In-Reply-To: <2014120320204186973010@gmail.com>
References: <2014120320204186973010@gmail.com>
Message-ID: <20141203124146.GA1521@gmail.com>

On Wed, Dec 03, 2014 at 08:20:45PM +0800, zfx0906 at gmail.com wrote:
> [snip]
>
> And I found 9464 bugs in https://bugs.launchpad.net/openstack/ - is this all? Why so few?

That is the number of currently open bugs. There are 47733 bugs including closed ones. Launchpad has an API [1], which can probably list the bugs filed against a project.

[1] https://help.launchpad.net/API

-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: Digital signature URL:

From ikalnitsky at mirantis.com Wed Dec 3 12:47:10 2014
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Wed, 3 Dec 2014 14:47:10 +0200
Subject: [openstack-dev] [Fuel][Nailgun] Web framework
In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com>
Message-ID:

> I don't like that Flask uses a global request object [3].

Przemyslaw, actually Pecan does use global objects too. BTW, what's wrong with global objects? They are thread-safe in both Pecan and Flask.

> IMHO documentation is better written, and described a lot of possibilities of modification

I disagree.
Flask has rich documentation and is more flexible, while Pecan forces us to use only its patterns and code organization. There is no way to avoid this. I'm afraid that with Pecan we will have to rewrite a lot of code.

On Wed, Dec 3, 2014 at 2:21 PM, Kamil Sambor wrote:
> [snip quoted thread]
From skalinowski at mirantis.com Wed Dec 3 13:03:40 2014
From: skalinowski at mirantis.com (Sebastian Kalinowski)
Date: Wed, 3 Dec 2014 14:03:40 +0100
Subject: [openstack-dev] [Fuel][Nailgun] Web framework
In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com>
Message-ID:

2014-12-03 13:47 GMT+01:00 Igor Kalnitsky :
> > I don't like that Flask uses a global request object [3].
> > Przemyslaw, actually Pecan does use global objects too. BTW, what's wrong with global objects? They are thread-safe in both Pecan and Flask.

To be fair, Pecan could also pass the request and response explicitly to a method [1]

[1] http://pecan.readthedocs.org/en/latest/contextlocals.html

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From angelo.matarazzo at dektech.com.au Wed Dec 3 13:21:26 2014
From: angelo.matarazzo at dektech.com.au (Angelo Matarazzo)
Date: Wed, 03 Dec 2014 14:21:26 +0100
Subject: [openstack-dev] How to get openstack bugs data for research?
In-Reply-To: <20141203124146.GA1521@gmail.com>
References: <2014120320204186973010@gmail.com> <20141203124146.GA1521@gmail.com>
Message-ID: <547F0E56.2070406@dektech.com.au>

Hi,
you could take a look at Stackalytics and how this website communicates with Launchpad: https://wiki.openstack.org/wiki/Stackalytics/HowToRun

Best,
Angelo

On 03/12/2014 13:41, Louis Taylor wrote:
> [snip]
>
> That is the number of currently open bugs. There are 47733 bugs including closed ones. Launchpad has an API [1], which can probably list the bugs filed against a project.
> [1] https://help.launchpad.net/API

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From rcresswe at cisco.com Wed Dec 3 13:21:51 2014
From: rcresswe at cisco.com (Rob Cresswell (rcresswe))
Date: Wed, 3 Dec 2014 13:21:51 +0000
Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon
In-Reply-To: References: Message-ID:

+1 to changing the behaviour to 'static'. A modal inside a modal is potentially slightly more useful, but looks messy and inconsistent, which I think outweighs the functionality.

Rob

On 2 Dec 2014, at 12:21, Timur Sufiev wrote:

Hello, Horizoneers and UX-ers!

The default behavior of modals in Horizon (defined in turn by Bootstrap defaults) regarding their closing is to simply close the modal once the user clicks somewhere outside of it (on the backdrop element below and around the modal). This is not very convenient for modal forms containing a lot of input - when the form is closed without a warning, all the data the user has already provided is lost.

Keeping this in mind, I've made a patch [1] changing the default Bootstrap 'modal_backdrop' parameter to 'static', which means that forms are not closed once the user clicks on the backdrop, while it's still possible to close them by pressing 'Esc' or clicking the 'X' link at the top right border of the form. The patch [1] also allows customizing this behavior (between 'true' - the current behavior / 'false' - no backdrop element / 'static') on a per-form basis.

What I didn't know at the moment I was uploading my patch is that David Lyle had been working on a similar solution [2] some time ago.
It's a bit more elaborate than mine: if the user has already filled some inputs in the form, a confirmation dialog is shown; otherwise the form is silently dismissed, as happens now.

The whole point of writing about this on the ML is to gather opinions on which approach is better:
* stick to the current behavior;
* change the default behavior to 'static';
* use David's solution with a confirmation dialog (once it is rebased onto the current codebase).

What do you think?

[1] https://review.openstack.org/#/c/113206/
[2] https://review.openstack.org/#/c/23037/

P.S. I remember that I promised to write this email a week ago, but better late than never :).

--
Timur Sufiev

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pkaminski at mirantis.com Wed Dec 3 13:22:05 2014
From: pkaminski at mirantis.com (Przemyslaw Kaminski)
Date: Wed, 03 Dec 2014 14:22:05 +0100
Subject: [openstack-dev] [Fuel][Nailgun] Web framework
In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com>
Message-ID: <547F0E7D.5010201@mirantis.com>

Yeah, didn't notice that. Honestly, I'd prefer both to be accessible as instance attributes, just like in [1], but it's more a matter of taste, I guess.

[1] http://tornado.readthedocs.org/en/latest/web.html#tornado.web.RequestHandler.request

P.

On 12/03/2014 02:03 PM, Sebastian Kalinowski wrote:
> 2014-12-03 13:47 GMT+01:00 Igor Kalnitsky:
> > > I don't like that Flask uses a global request object [3].
> >
> > Przemyslaw, actually Pecan does use global objects too. BTW, what's wrong with global objects? They are thread-safe in both Pecan and Flask.
> > > To be fair, Pecan could also pass request and response explicit to > method [1] > > [1] http://pecan.readthedocs.org/en/latest/contextlocals.html > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From nmarkov at mirantis.com Wed Dec 3 13:22:41 2014 From: nmarkov at mirantis.com (Nikolay Markov) Date: Wed, 3 Dec 2014 17:22:41 +0400 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> Message-ID: Dear colleagues, We surely may take into account the beauty of the code in both cases (as for me, Pecan loses here, too) and argue about global objects and stuff, but I'm trying to look at the amount of people and time we need to move to one of these frameworks. I wouldn't say our API is badly designed; nevertheless, Pecan still has a lot of issues that need to be fixed by hand. We don't want to spend much time on this task, because it is mostly a matter of convenience and simplicity for developers; it changes nothing in features or customer-facing behavior. And if we take into account the amount of hours we need for the move, based on my experience Flask definitely wins here. On Wed, Dec 3, 2014 at 4:03 PM, Sebastian Kalinowski wrote: > > 2014-12-03 13:47 GMT+01:00 Igor Kalnitsky : >> >> > I don't like that Flask uses a global request object [3]. >> >> Przemyslaw, actually Pecan does use global objects too. BTW, what's >> wrong with global objects? They are thread-safe in both Pecan and >> Flask.
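The "thread-safe global" being debated in this exchange is worth unpacking: both Flask's `flask.request` and Pecan's `pecan.request` are context-local proxies rather than ordinary globals. A minimal stdlib-only sketch of that mechanism follows; the names here are illustrative, not either framework's actual implementation:

```python
import threading

# Per-thread storage backing the "global" request object.
_local = threading.local()

class Request:
    def __init__(self, path):
        self.path = path

class RequestProxy:
    """Module-level object that forwards attribute access to the request
    bound to the *current* thread, like flask.request or pecan.request."""
    def __getattr__(self, name):
        return getattr(_local.request, name)

request = RequestProxy()  # importable from anywhere

def bind(req):
    # The framework does this at the start of each request.
    _local.request = req

def handler():
    # Handler code reads the proxy as if it were a plain global...
    return "GET %s" % request.path

def serve(path, results):
    # ...but threads serving different requests never see each other's data.
    bind(Request(path))
    results.append(handler())

results = []
threads = [threading.Thread(target=serve, args=(p, results))
           for p in ("/clusters", "/nodes")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # -> ['GET /clusters', 'GET /nodes']
```

Because each thread resolves the proxy against its own storage, handlers can import one `request` object without concurrent requests interfering, which is why both camps in the thread call the pattern thread-safe.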
> > > To be fair, Pecan could also pass request and response explicit to method > [1] > > [1] http://pecan.readthedocs.org/en/latest/contextlocals.html > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Nick Markov From sean.k.mooney at intel.com Wed Dec 3 13:28:36 2014 From: sean.k.mooney at intel.com (Mooney, Sean K) Date: Wed, 3 Dec 2014 13:28:36 +0000 Subject: [openstack-dev] [nova] [libvirt] enabling per node filtering of mempage sizes In-Reply-To: <20141203101233.GG10160@redhat.com> References: <4B1BB321037C0849AAE171801564DFA630E602DE@IRSMSX107.ger.corp.intel.com> <20141203090306.GA8186@redhat.redhat.com> <20141203101233.GG10160@redhat.com> Message-ID: <4B1BB321037C0849AAE171801564DFA630E6FCB8@IRSMSX108.ger.corp.intel.com> Hi Unfortunately a flavor + aggregate is not enough for our use case, as it is still possible for the tenant to misconfigure a VM. The edge case not covered by flavor + aggregate that we are trying to prevent is as follows. The operator creates an aggregate containing the nodes that require all VMs to use large pages. The operator creates flavors with and without memory backing specified. The tenant selects the aggregate containing nodes that only support hugepages and a flavor that requires small or any.
It would be possible, however, to introduce a new filter (AggregateMemoryBackingFilter), which would work as follows. The AggregateMemoryBackingFilter will compare the extra specifications associated with the instance and enforce the constraints set in the aggregate metadata. A new MemoryBacking attribute will be added to the aggregate metadata. The MemoryBacking attribute can be set to one or more of the following: small, large, 4, 2048, 1048576. The syntax is SizeA,SizeB, e.g. 2048,1048576. If small is set then the host will only be passed if the VM requests small or 4k pages. If large is set then the host will only be passed if the VM requests 2MB or 1GB pages. If the MemoryBacking element is not set for an aggregate, the AggregateMemoryBackingFilter will pass all hosts. With this new filter, the (flavor or image properties) + aggregate approach would work for all drivers, not just libvirt. If this alternative is preferred, I can resubmit it as a new blueprint and mark the old blueprint as superseded. Regards Sean. -----Original Message----- From: Daniel P. Berrange [mailto:berrange at redhat.com] Sent: Wednesday, December 3, 2014 10:13 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova] [libvirt] enabling per node filtering of mempage sizes On Wed, Dec 03, 2014 at 10:03:06AM +0100, Sahid Orentino Ferdjaoui wrote: > On Tue, Dec 02, 2014 at 07:44:23PM +0000, Mooney, Sean K wrote: > > Hi all > > > > I have submitted a small blueprint to allow filtering of available > > memory pages Reported by libvirt. > > Can you address this with aggregate? this will also avoid to do > something specific in the driver libvirt. Which will have to be > extended to other drivers at the end. Agreed, I think you can address this by setting up host aggregates and then setting the desired page size on the flavour.
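For illustration only, the filter semantics Sean describes above (small passes 4k requests, large passes 2MB/1GB requests, unset metadata passes every host) can be sketched in a few lines. The function name and data shapes below are assumptions made for the sketch, not code from the blueprint:

```python
# Page-size groups as described in the proposal: "small" covers 4k pages,
# "large" covers 2MB (2048) and 1GB (1048576) pages.
SMALL = {"small", "4"}
LARGE = {"large", "2048", "1048576"}

def host_passes(aggregate_memory_backing, requested):
    """aggregate_memory_backing: set parsed from the aggregate's
    MemoryBacking metadata, e.g. {"2048", "1048576"}.
    requested: page size the flavor/image asks for, e.g. "small"."""
    if not aggregate_memory_backing:
        return True  # no MemoryBacking metadata -> pass all hosts
    allowed = set()
    for entry in aggregate_memory_backing:
        if entry == "small":
            allowed |= SMALL
        elif entry == "large":
            allowed |= LARGE
        else:
            allowed.add(entry)  # an explicit size such as "2048"
    return requested in allowed

# A large-pages-only host rejects a flavor asking for small pages, which
# is exactly the undefined-behaviour case described in the thread.
print(host_passes({"large"}, "small"))  # -> False
print(host_passes({"large"}, "2048"))   # -> True
print(host_passes(set(), "small"))      # -> True
```

With a check of this shape in the scheduler, the misconfigured-tenant cases above would be filtered out before a VM ever lands on a large-pages-only node.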
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mscherbakov at mirantis.com Wed Dec 3 13:31:18 2014 From: mscherbakov at mirantis.com (Mike Scherbakov) Date: Wed, 3 Dec 2014 16:31:18 +0300 Subject: [openstack-dev] [Nova][Cinder] Operations: adding new nodes in "disabled" state, allowed for test tenant only Message-ID: Hi all, enable_new_services in nova.conf seems to allow adding new compute nodes in a disabled state: https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L507-L508, so it would allow checking everything first, before allowing production workloads to be hosted on the node. I've filed a bug to Fuel to use this by default when we scale up the env (add more computes) [1]. A few questions: 1. Can we somehow enable the compute service for a test tenant first? So a cloud administrator would be able to run test VMs on the node and, after ensuring that everything is fine, enable the service for all tenants. 2. What about Cinder? Is there a similar option / ability? 3. What about other OpenStack projects? What is your opinion, how should we approach the problem (if there is a problem)? [1] https://bugs.launchpad.net/fuel/+bug/1398817 -- Mike Scherbakov #mihgen -------------- next part -------------- An HTML attachment was scrubbed...
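The enable_new_services behaviour Mike points at amounts to a single check at service-creation time. The following is a simplified, renamed sketch of the idea behind the linked api.py lines, not the actual nova code:

```python
# Simplified sketch of how an enable_new_services-style flag works:
# when the flag is off, a freshly registered service record is marked
# disabled, so the scheduler never places workloads on the new node
# until an operator explicitly enables it.
def service_create(values, enable_new_services=True):
    service = dict(values)
    if not enable_new_services:
        service["disabled"] = True
    return service

svc = service_create({"host": "compute-7", "binary": "nova-compute"},
                     enable_new_services=False)
print(svc["disabled"])  # -> True
```

This covers Mike's "check first, enable later" workflow, but note it is cloud-wide: the per-test-tenant enablement he asks about in question 1 is not something this flag provides.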
URL: From tsedovic at redhat.com Wed Dec 3 13:42:53 2014 From: tsedovic at redhat.com (Tomas Sedovic) Date: Wed, 03 Dec 2014 14:42:53 +0100 Subject: [openstack-dev] [tripleo] Managing no-mergepy template duplication In-Reply-To: <20141203101059.GA23017@t430slt.redhat.com> References: <20141203101059.GA23017@t430slt.redhat.com> Message-ID: <547F135D.6060805@redhat.com> On 12/03/2014 11:11 AM, Steven Hardy wrote: > Hi all, > > Lately I've been spending more time looking at tripleo and doing some > reviews. I'm particularly interested in helping the no-mergepy and > subsequent puppet-software-config implementations mature (as well as > improving overcloud updates via heat). > > Since Tomas's patch landed[1] to enable --no-mergepy in > tripleo-heat-templates, it's become apparent that frequently patches are > submitted which only update overcloud-source.yaml, so I've been trying to > catch these and ask for a corresponding change to e.g controller.yaml. > You beat me to this. Thanks for writing it up! > This raises the following questions: > > 1. Is it reasonable to -1 a patch and ask folks to update in both places? I'm in favour. > 2. How are we going to handle this duplication and divergence? I'm not sure we can. get_file doesn't handle structured data and I don't know what else we can do. Maybe we could split out all SoftwareConfig resources to separate files (like Dan did in [nova config])? But the SoftwareDeployments, nova servers, etc. have a different structure. [nova config] https://review.openstack.org/#/c/130303/ > 3. What's the status of getting gating CI on the --no-mergepy templates? Derek, can we add a job that's identical to "check-tripleo-ironic-overcloud-{f20,precise}-nonha" except it passes "--no-mergepy" to devtest.sh? > 4. What barriers exist (now that I've implemented[2] the eliding functionality > requested[3] for ResourceGroup) to moving to the --no-mergepy > implementation by default? 
I'm about to post a patch that moves us from ResourceGroup to AutoScalingGroup (for rolling updates), which is going to complicate this a bit. Barring that, I think you've identified all the requirements: CI job, parity between the merge/non-merge templates and a process that maintains it going forward (or puts the old ones in a maintenance-only mode). Anyone have anything else that's missing? > > Thanks for any clarification you can provide! :) > > Steve > > [1] https://review.openstack.org/#/c/123100/ > [2] https://review.openstack.org/#/c/128365/ > [3] https://review.openstack.org/#/c/123713/ > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From berrange at redhat.com Wed Dec 3 13:38:05 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Wed, 3 Dec 2014 13:38:05 +0000 Subject: [openstack-dev] [nova] [libvirt] enabling per node filtering of mempage sizes In-Reply-To: <4B1BB321037C0849AAE171801564DFA630E6FCB8@IRSMSX108.ger.corp.intel.com> References: <4B1BB321037C0849AAE171801564DFA630E602DE@IRSMSX107.ger.corp.intel.com> <20141203090306.GA8186@redhat.redhat.com> <20141203101233.GG10160@redhat.com> <4B1BB321037C0849AAE171801564DFA630E6FCB8@IRSMSX108.ger.corp.intel.com> Message-ID: <20141203133805.GQ10160@redhat.com> On Wed, Dec 03, 2014 at 01:28:36PM +0000, Mooney, Sean K wrote: > Hi > > Unfortunately a flavor + aggregate is not enough for our use case as it is still possible for the tenant to misconfigure a vm. > > The edge case not covered by flavor + aggregate that we are trying to prevent is as follows. > > The operator creates an aggregate containing the nodes that require all VMs to use large pages. > The operator creates flavors with and without memory backing specified. > > The tenant selects the aggregate containing nodes that only supports hugepages and a flavor that requires small or any.
> Or > The tenant selects a flavor that requires small or any and does not select an aggregate. The tenant isn't responsible for selecting the aggregate. The operator should be associating the aggregate directly to the flavour. So the tenant merely has to select the right flavour. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From e0ne at e0ne.info Wed Dec 3 14:02:24 2014 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 3 Dec 2014 16:02:24 +0200 Subject: [openstack-dev] [Nova][Cinder] Operations: adding new nodes in "disabled" state, allowed for test tenant only In-Reply-To: References: Message-ID: Hi Mike, We've got a similar option in Cinder too: https://github.com/openstack/cinder/blob/master/cinder/db/api.py#L58 Regards, Ivan Kolodyazhny On Wed, Dec 3, 2014 at 3:31 PM, Mike Scherbakov wrote: > Hi all, > enable_new_services in nova.conf seems to allow add new compute nodes in > disabled state: > > https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L507-L508, > so it would allow to check everything first, before allowing production > workloads host a VM on it. I've filed a bug to Fuel to use this by default > when we scale up the env (add more computes) [1]. > > A few questions: > > 1. can we somehow enable compute service for test tenant first? So > cloud administrator would be able to run test VMs on the node, and after > ensuring that everything is fine - to enable service for all tenants > 2. What about Cinder? Is there a similar option / ability? > 3. What about other OpenStack projects? > > What is your opinion, how we should approach the problem (if there is a > problem)?
> > [1] https://bugs.launchpad.net/fuel/+bug/1398817 > -- > Mike Scherbakov > #mihgen > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mestery at mestery.com Wed Dec 3 14:25:50 2014 From: mestery at mestery.com (Kyle Mestery) Date: Wed, 3 Dec 2014 08:25:50 -0600 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: Mathieu: The peer review proposal was NOT about removing core reviewers; that is very clear in the proposal. The peer review proposal was about deciding as a team what it means to be a core reviewer, and ensuring core reviewers are doing that. I still plan to try out the peer review process in the coming weeks. But even with that process, reviews are the main thing a core reviewer must be doing. If you're not doing reviews upstream, especially for long stretches, you're not really a core reviewer. Thanks, Kyle On Wed, Dec 3, 2014 at 4:41 AM, Mathieu Rohon wrote: > Hi all, > > It seems that a process with a survey for neutron core > election/removal was about to take place [1]. Has it been applied for > this proposal? > This proposal was discussed at length during neutron meetings > [2][3]. Many cores agree that the number of reviews shouldn't be the > only metric. And this statement is reflected in the Survey Questions. > So I'm surprised to see such a proposal based on stackalytics figures. > > [1]https://etherpad.openstack.org/p/neutron-peer-review > [2]http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-10-13-21.02.log.html > [3]http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-10-21-14.00.log.html > > On Wed, Dec 3, 2014 at 9:44 AM, Oleg Bondarev wrote: >> +1! Congrats, Henry and Kevin!
>> >> On Tue, Dec 2, 2014 at 6:59 PM, Kyle Mestery wrote: >>> >>> Now that we're in the thick of working hard on Kilo deliverables, I'd >>> like to make some changes to the neutron core team. Reviews are the >>> most important part of being a core reviewer, so we need to ensure >>> cores are doing reviews. The stats for the 180 day period [1] indicate >>> some changes are needed for cores who are no longer reviewing. >>> >>> First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from >>> neutron-core. Bob and Nachi have been core members for a while now. >>> They have contributed to Neutron over the years in reviews, code and >>> leading sub-teams. I'd like to thank them for all that they have done >>> over the years. I'd also like to propose that should they start >>> reviewing more going forward the core team looks to fast track them >>> back into neutron-core. But for now, their review stats place them >>> below the rest of the team for 180 days. >>> >>> As part of the changes, I'd also like to propose two new members to >>> neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have >>> been very active in reviews, meetings, and code for a while now. Henry >>> lead the DB team which fixed Neutron DB migrations during Juno. Kevin >>> has been actively working across all of Neutron, he's done some great >>> work on security fixes and stability fixes in particular. Their >>> comments in reviews are insightful and they have helped to onboard new >>> reviewers and taken the time to work with people on their patches. >>> >>> Existing neutron cores, please vote +1/-1 for the addition of Henry >>> and Kevin to the core team. >>> >>> Thanks! 
>>> Kyle >>> >>> [1] http://stackalytics.com/report/contribution/neutron-group/180 >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Wed Dec 3 14:26:07 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 3 Dec 2014 09:26:07 -0500 Subject: [openstack-dev] oslo.concurrency 0.3.0 released In-Reply-To: <547E5DE4.6090800@linux.vnet.ibm.com> References: <547DE8F7.7090206@nemebean.com> <547E05E1.6010008@nemebean.com> <547E5DE4.6090800@linux.vnet.ibm.com> Message-ID: On Dec 2, 2014, at 7:48 PM, Matt Riedemann wrote: > > > On 12/2/2014 12:33 PM, Ben Nemec wrote: >> We've discovered a couple of problems as a result of this release. pep8 >> in most/all of the projects using oslo.concurrency is failing due to the >> move out of the oslo namespace package and the fact that hacking doesn't >> know how to handle it, and nova unit tests are failing due to a problem >> with the way some mocking was done. >> >> Fixes for both of these problems are in progress and should hopefully be >> available soon. >> >> -Ben >> >> On 12/02/2014 10:29 AM, Ben Nemec wrote: >>> The Oslo team is pleased to announce the release of oslo.concurrency 0.3.0. >>> >>> This release includes a number of fixes for problems found during the >>> initial adoptions of the library, as well as some functionality >>> improvements. 
>>> >>> For more details, please see the git log history below and >>> https://launchpad.net/oslo.concurrency/+milestone/0.3.0 >>> >>> Please report issues through launchpad: >>> https://launchpad.net/oslo.concurrency >>> >>> openstack/oslo.concurrency 0.2.0..HEAD >>> >>> 54c84da Add external lock fixture >>> 19f07c6 Add a TODO for retrying pull request #20 >>> 46c836e Allow the lock delay to be provided >>> 3bda65c Allow for providing a customized semaphore container >>> 656f908 Move locale files to proper place >>> faa30f8 Flesh out the README >>> bca4a0d Move out of the oslo namespace package >>> 58de317 Improve testing in py3 environment >>> fa52a63 Only modify autoindex.rst if it exists >>> 63e618b Imported Translations from Transifex >>> d5ea62c lockutils-wrapper cleanup >>> 78ba143 Don't use variables that aren't initialized >>> >>> diffstat (except docs and test files): >>> >>> .gitignore | 1 + >>> .testr.conf | 2 +- >>> README.rst | 4 +- >>> .../locale/en_GB/LC_MESSAGES/oslo.concurrency.po | 16 +- >>> oslo.concurrency/locale/oslo.concurrency.pot | 16 +- >>> oslo/concurrency/__init__.py | 29 ++ >>> oslo/concurrency/_i18n.py | 32 -- >>> oslo/concurrency/fixture/__init__.py | 13 + >>> oslo/concurrency/fixture/lockutils.py | 51 -- >>> oslo/concurrency/lockutils.py | 376 -------------- >>> oslo/concurrency/openstack/__init__.py | 0 >>> oslo/concurrency/openstack/common/__init__.py | 0 >>> oslo/concurrency/openstack/common/fileutils.py | 146 ------ >>> oslo/concurrency/opts.py | 45 -- >>> oslo/concurrency/processutils.py | 340 ------------ >>> oslo_concurrency/__init__.py | 0 >>> oslo_concurrency/_i18n.py | 32 ++ >>> oslo_concurrency/fixture/__init__.py | 0 >>> oslo_concurrency/fixture/lockutils.py | 76 +++ >>> oslo_concurrency/lockutils.py | 502 ++++++++++++++++++ >>> oslo_concurrency/openstack/__init__.py | 0 >>> oslo_concurrency/openstack/common/__init__.py | 0 >>> oslo_concurrency/openstack/common/fileutils.py | 146 ++++++ >>> oslo_concurrency/opts.py | 45 
++ >>> oslo_concurrency/processutils.py | 340 ++++++++++++ >>> requirements-py3.txt | 1 + >>> requirements.txt | 1 + >>> setup.cfg | 9 +- >>> tests/test_lockutils.py | 575 >>> ++++++++++++++++++++ >>> tests/test_processutils.py | 519 >>> +++++++++++++++++++ >>> tests/test_warning.py | 29 ++ >>> tests/unit/__init__.py | 0 >>> tests/unit/test_lockutils.py | 543 >>> ------------------- >>> tests/unit/test_lockutils_eventlet.py | 59 --- >>> tests/unit/test_processutils.py | 518 ------------------ >>> tox.ini | 8 +- >>> 42 files changed, 3515 insertions(+), 2135 deletions(-) >>> >>> Requirements updates: >>> >>> diff --git a/requirements-py3.txt b/requirements-py3.txt >>> index b1a8722..a27b434 100644 >>> --- a/requirements-py3.txt >>> +++ b/requirements-py3.txt >>> @@ -13,0 +14 @@ six>=1.7.0 >>> +retrying>=1.2.2,!=1.3.0 # Apache-2.0 >>> diff --git a/requirements.txt b/requirements.txt >>> index b1a8722..a27b434 100644 >>> --- a/requirements.txt >>> +++ b/requirements.txt >>> @@ -13,0 +14 @@ six>=1.7.0 >>> +retrying>=1.2.2,!=1.3.0 # Apache-2.0 >>> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > Was a bug reported for the nova unit tests that were mocking out external_lock in lockutils? I didn't see one so I opened a bug [1] and wrote the elastic-recheck query against that. I'm working on fixing the tests in the meantime but I'll gladly stop if someone else has a fix up for review. 
> > [1] https://bugs.launchpad.net/nova/+bug/1398624 Ben has a patch up to use the fixtures in the library instead: https://review.openstack.org/138463 > > -- > > Thanks, > > Matt Riedemann > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From skalinowski at mirantis.com Wed Dec 3 14:35:37 2014 From: skalinowski at mirantis.com (Sebastian Kalinowski) Date: Wed, 3 Dec 2014 15:35:37 +0100 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> Message-ID: I have never used Flask or Pecan personally, so I can only rely on what I saw in this thread and in both projects' docs. I don't have a strong opinion, I just want to share some thoughts. I think that as a part of the OpenStack community we should stick with Pecan, and, for that same reason, we can have a bigger impact on how future versions of Pecan will look. If we choose Flask, we will not only need to choose a framework, but also decide which extension will be used to provide REST support (I do not like that we just assume "flask-restful will be used"). To be honest, right now I'm more convinced that we should choose Pecan. 2014-12-03 14:22 GMT+01:00 Nikolay Markov : > Dear colleagues, > > We surely may take into account the beauty of the code in both cases > (as for me, Pecan loses here, too) and argue about global objects and > stuff, but I'm trying to look at amount of men and time we need to > move to one of these frameworks. > I agree that we should look at the man-hours for implementation, but I think that is as important as all the small things like the global object etc., since they could make future development painful or pleasant. > I wouldn't say our API is badly designed, nevertheless Pecan still has > a lot of issues needed to be fixed by hand.
We don't want to spend > much time to this task, because it is mostly the matter of convenience > and simplicity for developers, it changes nothing in features or > customer-facing behavior. > > And if we take in account the amount of hours we need to move, based > on my experience Flask definitely wins here. > Cannot we reuse the PoC ([1]) with Pecan that was created? There was a lot of work put into that piece of code. [1] https://review.openstack.org/#/c/99069/6 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rprikhodchenko at mirantis.com Wed Dec 3 14:41:41 2014 From: rprikhodchenko at mirantis.com (Roman Prykhodchenko) Date: Wed, 3 Dec 2014 15:41:41 +0100 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: <547F0E7D.5010201@mirantis.com> References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F0E7D.5010201@mirantis.com> Message-ID: I don't have an opinion for now but do have some thoughts instead. We use Pecan in Ironic. I could say that it's pretty nice when one needs to make something simple, but it requires some manual work to be done in more or less sophisticated cases. On the other hand, the Pecan team is quite agile and open to other developers, so we were able to merge our patches without any problems. However, if there is a framework that fits Nailgun's needs without being patched, it may be easier for us to just use it. There's a political side of the question as well, but I'd rather touch it only if both Flask and Pecan have the same pros and cons. - romcheg > On 03 Dec 2014, at 14:22, Przemyslaw Kaminski wrote: > > Yeah, didn't notice that. Honestly, I'd prefer both to be accessible as instance attributes just like in [1] but it's more of taste I guess. > > [1] http://tornado.readthedocs.org/en/latest/web.html#tornado.web.RequestHandler.request > > P.
> > On 12/03/2014 02:03 PM, Sebastian Kalinowski wrote: >> >> 2014-12-03 13:47 GMT+01:00 Igor Kalnitsky >: >> > I don't like that Flask uses a global request object [3]. >> >> Przemyslaw, actually Pecan does use global objects too. BTW, what's >> wrong with global objects? They are thread-safe in both Pecan and >> Flask. >> >> To be fair, Pecan could also pass request and response explicit to method [1] >> >> [1] http://pecan.readthedocs.org/en/latest/contextlocals.html >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vsergeyev at mirantis.com Wed Dec 3 14:43:03 2014 From: vsergeyev at mirantis.com (Victor Sergeyev) Date: Wed, 3 Dec 2014 16:43:03 +0200 Subject: [openstack-dev] oslo.db 1.2.0 released Message-ID: Hello Folks The Oslo team is pleased to announce the release of oslo.db 1.2.0. 
This release includes several bug fixes as well as many other changes: $ git log --abbrev-commit --pretty=oneline --no-merges 1.1.0..1.2.0 f740e3b Imported Translations from Transifex ca1ad56 Make test_models pass on py3 549ba15 Repair include_object to accommodate new objects 10e8d15 Add table name to foreign keys diff ddd11df Updated from global requirements 2269848 Handle Galera deadlock on SELECT FOR UPDATE 4b2058b Add exception filter for _sqlite_dupe_key_error b6d363d Add info on how to run unit tests 7f755bf Ensure is_backend_avail() doesn't leave open connections c54d3a9 Updated from global requirements 2099177 Add pbr to installation requirements 135701b Fix python3.x scoping issues with removed 'de' variable Thanks Doug Hellmann, Joshua Harlow, Mike Bayer, Oleksii Chuprykov, Roman Podoliaka for contributing to this release. For more details, please see the git log history below and https://launchpad.net/oslo.db/+milestone/1.2.0 Please report issues through launchpad: https://launchpad.net/oslo.db diffstat (except docs and test files): CONTRIBUTING.rst | 39 +++++++++++++++ .../locale/en_GB/LC_MESSAGES/oslo.db-log-info.po | 8 ++-- oslo.db/locale/fr/LC_MESSAGES/oslo.db-log-error.po | 56 ++++++++++++++++++++++ oslo.db/locale/fr/LC_MESSAGES/oslo.db-log-info.po | 33 +++++++++++++ .../locale/fr/LC_MESSAGES/oslo.db-log-warning.po | 40 ++++++++++++++++ oslo/db/sqlalchemy/exc_filters.py | 21 ++++++-- oslo/db/sqlalchemy/session.py | 10 ++-- oslo/db/sqlalchemy/test_migrations.py | 2 +- oslo/db/sqlalchemy/utils.py | 3 +- requirements.txt | 1 + test-requirements-py2.txt | 2 +- test-requirements-py3.txt | 2 +- tests/sqlalchemy/test_exc_filters.py | 13 +++++ tests/sqlalchemy/test_migrations.py | 7 ++- tests/sqlalchemy/test_models.py | 31 +++++------- 15 files changed, 233 insertions(+), 35 deletions(-) Requirements updates: diff --git a/requirements.txt b/requirements.txt index cc50660..f8a0d8c 100644 --- a/requirements.txt +++ b/requirements.txt @@ -4,0 +5 @@ 
+pbr>=0.6,!=0.7,<1.0 diff --git a/test-requirements-py2.txt b/test-requirements-py2.txt index 13cea90..ac5c18a 100644 --- a/test-requirements-py2.txt +++ b/test-requirements-py2.txt @@ -19 +19 @@ testscenarios>=0.4 -testtools>=0.9.36 +testtools>=0.9.36,!=1.2.0 diff --git a/test-requirements-py3.txt b/test-requirements-py3.txt index 4f195da..58b9a3d 100644 --- a/test-requirements-py3.txt +++ b/test-requirements-py3.txt @@ -18 +18 @@ testscenarios>=0.4 -testtools>=0.9.36 +testtools>=0.9.36,!=1.2.0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkranz at redhat.com Wed Dec 3 14:43:41 2014 From: dkranz at redhat.com (David Kranz) Date: Wed, 03 Dec 2014 09:43:41 -0500 Subject: [openstack-dev] [qa] branchless tempest and the use of 'all' for extensions in tempest.conf Message-ID: <547F219D.3090600@redhat.com> A recently proposed test for tempest was making explicit calls to the nova extension discovery api rather than using test.requires_ext. The reason was that we configure tempest.conf in the gate with 'all' for extensions, and the test involved an extension that was new in Juno. So the icehouse run failed. Since the methodology of branchless tempest requires that new conf flags be added for new features, we should stop having devstack configure with 'all'. Does anyone disagree with that, or have a better solution? -David From jaypipes at gmail.com Wed Dec 3 14:53:31 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 03 Dec 2014 09:53:31 -0500 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> Message-ID: <547F23EB.7010402@gmail.com> On 12/03/2014 09:35 AM, Sebastian Kalinowski wrote: > I think that as a part of OpenStack community we should stick with > Pecan and because of the same reason we can have a bigger impact how > future versions of Pecan will look. Yes, this.
++ -jay From nmarkov at mirantis.com Wed Dec 3 15:16:45 2014 From: nmarkov at mirantis.com (Nikolay Markov) Date: Wed, 3 Dec 2014 19:16:45 +0400 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: <547F23EB.7010402@gmail.com> References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> Message-ID: It would be great to look at some obvious points where Pecan is better than Flask despite the fact that it's used by the community. I still don't see a single one, and I don't think the principle "jump from the cliff if everyone does" works well in such cases. On Wed, Dec 3, 2014 at 5:53 PM, Jay Pipes wrote: > On 12/03/2014 09:35 AM, Sebastian Kalinowski wrote: >> >> I think that as a part of OpenStack community we should stick with >> Pecan and because of the same reason we can have a bigger impact how >> future versions of Pecan will look. > > > Yes, this. ++ > > -jay > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Nick Markov From rprikhodchenko at mirantis.com Wed Dec 3 15:16:59 2014 From: rprikhodchenko at mirantis.com (Roman Prykhodchenko) Date: Wed, 3 Dec 2014 16:16:59 +0100 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: <547F23EB.7010402@gmail.com> References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> Message-ID: Being able to make some impact on Pecan is an advantage for sure. But there are other aspects in choosing a web framework, and I'd rather discuss them. Let's not think about what is used in other OpenStack projects for a moment and discuss technical details.
> On 03 Dec 2014, at 15:53, Jay Pipes wrote: > > On 12/03/2014 09:35 AM, Sebastian Kalinowski wrote: >> I think that as a part of OpenStack community we should stick with >> Pecan and because of the same reason we can have a bigger impact how >> future versions of Pecan will look. > > Yes, this. ++ > > -jay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From sean at dague.net Wed Dec 3 15:18:16 2014 From: sean at dague.net (Sean Dague) Date: Wed, 03 Dec 2014 10:18:16 -0500 Subject: [openstack-dev] [all] bugs with paste pipelines and multiple projects and upgrading Message-ID: <547F29B8.9050309@dague.net> We've hit two interesting issues this week around multiple projects installing into the paste pipeline of a server. 1) the pkg_resources explosion in grenade. Basically ceilometer modified swift's paste.ini to add its own code into swift (that's part of the normal ceilometer install in devstack - https://github.com/openstack-dev/devstack/blob/master/lib/swift#L376-L381 This meant when we upgraded and started swift, it turns out that we were actually running old ceilometer code. A requirements mismatch caused an explosion (which we've since worked around), but it demonstrates a clear problem with installing code in another project's pipeline. 2) keystone is having issues dropping XML api support. It turns out that parts of its paste pipeline are actually provided by keystone middleware, which means that keystone can't provide a sane "this is not supported" message in a proxy class for older paste config files.
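[Editorial note: the kind of project-owned proxy class that could deliver a sane "this is not supported" message once a feature like the XML API is dropped can be sketched as plain WSGI middleware. This is a hypothetical illustration under invented names, not keystone's actual code.]

```python
# Hypothetical sketch: a project-owned shim kept at the old paste.ini
# entry point. Once the wrapped feature (e.g. XML request bodies) is
# removed, the shim answers dependent requests with a clear "not
# supported" response instead of failing at import time.

class RemovedFeatureMiddleware:
    """WSGI middleware standing in for functionality that was dropped."""

    def __init__(self, app, feature="XML request bodies"):
        self.app = app
        self.feature = feature

    def __call__(self, environ, start_response):
        if environ.get("CONTENT_TYPE", "").startswith("application/xml"):
            body = ("%s are no longer supported" % self.feature).encode()
            start_response("400 Bad Request",
                           [("Content-Type", "text/plain"),
                            ("Content-Length", str(len(body)))])
            return [body]
        # Anything else passes through to the wrapped application.
        return self.app(environ, start_response)


def demo_app(environ, start_response):
    """Trivial downstream WSGI app used for illustration."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]
```

Because the shim is owned by the project whose paste.ini names it, the project controls the upgrade story instead of depending on another package's import-time behavior.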
I'm wondering if we need to be a lot more strict about paste manipulations, and require that all classes in the paste pipeline are owned by the project in question. They could be proxy classes to external code, but at least that would allow the project to smooth out upgrades. Otherwise everything with code in the paste.ini needs to be atomically upgraded, and we're trying to get away from atomic upgrades. -Sean -- Sean Dague http://dague.net From jaypipes at gmail.com Wed Dec 3 15:32:24 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 03 Dec 2014 10:32:24 -0500 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> Message-ID: <547F2D08.3030705@gmail.com> On 12/03/2014 10:16 AM, Nikolay Markov wrote: > It would be great to look at some obvious points where Pecan is better > than Flask despite of the fact that it's used by the community. I > still don't see a single and I don't think the principle "jump from > the cliff if everyone does" works well in such cases. This is part of why the Fuel development team is viewed as not working with the OpenStack community in many ways. The Fuel team is doing a remarkable job in changing previously-all-internal-to-Mirantis communication patterns to instead be on a transparent basis in the mailing lists and on IRC. I sincerely applaud the Fuel team for that. However, the OpenStack community is also about a shared set of tools, development methodologies, and common perspectives. It's expected that when you have an OpenStack REST API project, you try to use the tools that the shared community uses, builds, and supports. Otherwise, you aren't being a team player. In the past, certain teams have chosen to use something other than Pecan due to technical reasons. For example, Zaqar's team chose to use the Falcon framework instead of the Pecan framework.
Zaqar, like Swift, is a data API, not a control API, and raw performance is critical to the project's API endpoint. This is, incidentally, why the Swift team chose to use its swob framework over Webob (which Pecan uses). However, the reason that these were chosen was definitely not "it doesn't support the coding patterns I like". There's something that comes from being a team player. And one of those things is "going with the flow" when there isn't a real technical reason not to. All of us can and do find things we don't like about *all* of the projects that we work on. The difference between team players and non-team players is that team players strongly weigh their decisions and opinions based on what the team is doing and how the team can improve. Best, -jay From mtreinish at kortar.org Wed Dec 3 15:37:01 2014 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 3 Dec 2014 10:37:01 -0500 Subject: [openstack-dev] [qa] branchless tempest and the use of 'all' for extensions in tempest.conf In-Reply-To: <547F219D.3090600@redhat.com> References: <547F219D.3090600@redhat.com> Message-ID: <20141203153701.GA672@Sazabi.treinish> On Wed, Dec 03, 2014 at 09:43:41AM -0500, David Kranz wrote: > A recent proposed test to tempest was making explicit calls to the nova > extension discovery api rather than using test.requires_ext. The reason was > because we configure tempest.conf in the gate as 'all' for extensions, and > the test involved an extension that was new in Juno. So the icehouse run > failed. Since the methodology of branchless tempest requires that new conf > flags be added for new features, we should stop having devstack configure > with 'all'. Does any one disagree with that, or have a better solution?
> There is a BP in progress for doing this: https://blueprints.launchpad.net/tempest/+spec/branchless-tempest-extensions The patches are in progress here: https://review.openstack.org/#/c/126422/ and https://review.openstack.org/#/c/116129/ The solution is that we use 'all' on master devstack, but we want to have a static list set for the stable branch devstacks. Until the bp is finished patches that add tests which use new extensions will be blocked. -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From Calum.Loudon at metaswitch.com Wed Dec 3 15:40:37 2014 From: Calum.Loudon at metaswitch.com (Calum Loudon) Date: Wed, 3 Dec 2014 15:40:37 +0000 Subject: [openstack-dev] [NFV][Telco] Use case discussion In-Reply-To: <547ED500.1070008@dektech.com.au> References: <20141125223355.Horde.8UWcgskXSBVvU-RJrCgbtw7@mail.dektech.com.au> <693765041.19659732.1417044729676.JavaMail.zimbra@redhat.com> <547ED500.1070008@dektech.com.au> Message-ID: Hello all Unfortunately, my complete inability to process daylight savings time changes meant I was one hour late for today's TelcoWG meeting so couldn't participate in the discussion of use cases, including the one I submitted on Session Border Control. Thanks to eavesdrop.openstack.org I've been able to catch up. Just to chip in to the discussion, I agree entirely that we should try to keep the wiki vendor neutral, but at the same time I think use cases based on real world implementations (which could of course be open source rather than from a vendor) are more powerful illustrations for the Dev community of why particular bps are needed than the more abstract presentation of use cases in the ETSI docs, which are aimed at a different purpose. 
I don't think it's particularly convincing or compelling as a developer to hear that some theoretical implementation of a network function you may have never heard of might need feature X, whereas understanding that there are real products out there that genuinely depend on it brings the requirement home. One of the goals of this group is to help the rest of the OpenStack community understand NFV, and I think concrete beats abstract for that. However, as Steve noted, I did try to draw out general characteristics and relate them to specific gaps and requirements, and I think it's important to try to draw out as much as possible that is general and vendor-neutral from the specific cases. Someone asked about the ability to test; I imagine most people in this group will know of the OPNFV initiative, and they are working to put together test frameworks and cases which may include real VNFs, including this specific Perimeta example. regards Calum Calum Loudon Director, Architecture +44 (0)208 366 1177 METASWITCH NETWORKS THE BRAINS OF THE NEW GLOBAL NETWORK www.metaswitch.com From nmarkov at mirantis.com Wed Dec 3 15:53:08 2014 From: nmarkov at mirantis.com (Nikolay Markov) Date: Wed, 3 Dec 2014 19:53:08 +0400 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: <547F2D08.3030705@gmail.com> References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> Message-ID: > However, the OpenStack community is also about a shared set of tools, > development methodologies, and common perspectives. I completely agree with you, Jay, but the same principle may be applied much wider. Why did the OpenStack community decide to use its own unstable project instead of an existing solution which is widely used in the Python community? To avoid being a team player? Or, at least, why is it the recommended way even if it doesn't provide the same features other frameworks have had for a long time already?
I mean, there is no doubt everyone would use a stable and technically advanced tool, but forcing everyone to use it in the simple hope that it will become better as a result is usually a bad approach. I personally would surely contribute to Pecan if we decide to use it and there are gaps and uncovered cases. I'm just curious: is it worth it? On Wed, Dec 3, 2014 at 6:32 PM, Jay Pipes wrote: > On 12/03/2014 10:16 AM, Nikolay Markov wrote: >> >> It would be great to look at some obvious points where Pecan is better >> than Flask despite of the fact that it's used by the community. I >> still don't see a single and I don't think the principle "jump from >> the cliff if everyone does" works well in such cases. > > > This is part of why the Fuel development team is viewed as not working with > the OpenStack community in many ways. The Fuel team is doing a remarkable > job in changing previously-all-internal-to-Mirantis communication patterns > to instead be on a transparent basis in the mailing lists and on IRC. I > sincerely applaud the Fuel team for that. > > However, the OpenStack community is also about a shared set of tools, > development methodologies, and common perspectives. It's expected that when > you have an OpenStack REST API project, that you try to use the tools that > the shared community uses, builds, and supports. Otherwise, you aren't being > a team player. > > In the past, certain teams have chosen to use something other than Pecan due > to technical reasons. For example, Zaqar's team chose to use the Falcon > framework instead of the Pecan framework. Zaqar, like Swift, is a data API, > not a control API, and raw performance is critical to the project's API > endpoint). This is, incidentally, why the Swift team chose to use its swob > framework over Webob (which Pecan uses). > > However, the reason that these were chosen was definitely not "it doesn't > support the coding patterns I like".
There's something that comes from being > a team player. And one of those things is "going with the flow" when there > isn't a real technical reason not to. All of us can and do find things we > don't like about *all* of the projects that we work on. The difference > between team players and non-team players is that team players strongly > weigh their decisions and opinions based on what the team is doing and how > the team can improve. > > Best, > > -jay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Nick Markov From mtreinish at kortar.org Wed Dec 3 15:54:21 2014 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 3 Dec 2014 10:54:21 -0500 Subject: [openstack-dev] [QA] Meeting Thursday December 4th at 17:00 UTC Message-ID: <20141203155421.GB672@Sazabi.treinish> Hi everyone, Just a quick reminder that the weekly OpenStack QA team IRC meeting will be tomorrow Thursday, December 4th at 17:00 UTC in the #openstack-meeting channel. The agenda for tomorrow's meeting can be found here: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting Anyone is welcome to add an item to the agenda. It's also worth noting that a few weeks ago we started having a regular dedicated Devstack topic during the meetings. So if anyone is interested in Devstack development please join the meetings to be a part of the discussion. To help people figure out what time 17:00 UTC is in other timezones tomorrow's meeting will be at: 12:00 EST 02:00 JST 03:30 ACDT 18:00 CET 11:00 CST 9:00 PST -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From lbragstad at gmail.com Wed Dec 3 15:57:09 2014 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 3 Dec 2014 09:57:09 -0600 Subject: [openstack-dev] [all] bugs with paste pipelines and multiple projects and upgrading In-Reply-To: <547F29B8.9050309@dague.net> References: <547F29B8.9050309@dague.net> Message-ID: On Wed, Dec 3, 2014 at 9:18 AM, Sean Dague wrote: > We've hit two interesting issues this week around multiple projects > installing into the paste pipeline of a server. > > 1) the pkg_resources explosion in grenade. Basically ceilometer modified > swift paste.ini to add it's own code into swift (that's part of normal > ceilometer install in devstack - > https://github.com/openstack-dev/devstack/blob/master/lib/swift#L376-L381 > > This meant when we upgraded and started swift, it turns out that we're > actually running old ceilometer code. A requirements mismatch caused an > explosion (which we've since worked around), however demonstrates a > clear problem with installing code in another project's pipeline. > > 2) keystone is having issues dropping XML api support. It turns out that > parts of it's paste pipeline are actually provided by keystone > middleware, which means that keystone can't provide a sane "this is not > supported" message in a proxy class for older paste config files. > > I made an attempt to capture some of the information on the specific grenade case we were hitting for XML removal in a bug report [1]. We can keep the classes in keystone/middleware/core.py as a workaround for now with essentially another deprecation message, but at some point we should pull the plug on defining XmlBodyMiddleware in our keystone-paste.ini [2] as it won't do anything anyway and shouldn't be in the configuration. Since this deals with a configuration change, this could "always" break a customer. What criteria should we follow for cases like this? 
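[Editorial note: the workaround described above — keeping the old class in place as a no-op that only emits a deprecation message — might look roughly like the following sketch. The names here (XmlBodyMiddlewareShim, filter_factory's wiring) are illustrative, not keystone's actual code.]

```python
# Hypothetical sketch of the workaround: keep a class at the old
# location that does nothing except warn and pass requests through,
# so existing keystone-paste.ini files keep working until operators
# remove the filter themselves. Illustrative names only.
import warnings


class XmlBodyMiddlewareShim:
    """No-op stand-in for a removed paste filter."""

    def __init__(self, app):
        warnings.warn(
            "XmlBodyMiddleware has been removed and no longer does "
            "anything; drop it from your paste pipeline.",
            DeprecationWarning)
        self.app = app

    def __call__(self, environ, start_response):
        # Pass through untouched: the XML translation this filter used
        # to perform is gone, and JSON requests never needed it.
        return self.app(environ, start_response)


def filter_factory(global_conf, **local_conf):
    """Paste-style factory so the shim can be named in paste.ini."""
    def _filter(app):
        return XmlBodyMiddlewareShim(app)
    return _filter
```

The operator-visible behavior is unchanged apart from the warning, which gives a release's worth of notice before the entry is dropped from the sample configuration.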
From visiting with Sean in -qa, typically service configurations don't change for the grenade target on upgrade, but if we have to make a change on upgrade (to clean out old cruft for example), how do we go about that? [1] https://bugs.launchpad.net/grenade/+bug/1398833 [2] https://github.com/openstack/keystone/blob/d82a3caa329e9b42c588e28f694bf847261d63d1/etc/keystone-paste.ini#L15-L22 > I'm wondering if we need to be a lot more strict about paste > manipulations, and require that all classes in the paste pipeline are > owned by the project in question. They could be proxy classes to > external code, but at least that would allow the project to smooth out > upgrades. Otherwise everything with code in the paste.ini needs to be > atomically upgraded, and we're trying to get away from atomic upgrades. > > -Sean > > -- > Sean Dague > http://dague.net > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nmarkov at mirantis.com Wed Dec 3 15:58:38 2014 From: nmarkov at mirantis.com (Nikolay Markov) Date: Wed, 3 Dec 2014 19:58:38 +0400 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> Message-ID:
> > I completely agree with you, Jay, but the same principle may be > applied much wider. Why Openstack Community decided to use its own > unstable project instead of existing solution, which is widely used in > Python community? To avoid being a team player? Or, at least, why it's > recommended way even if it doesn't provide the same features other > frameworks have for a long time already? I mean, there is no doubt > everyone would use stable and technically advanced tool, but imposing > everyone to use it by force with a simple hope that it'll become > better from this is usually a bad approach. > > I personally would surely contribute to Pecan in case we decide to use > it and there will be some gaps and uncovered cases. I'm just curious, > does it worth it? > > On Wed, Dec 3, 2014 at 6:32 PM, Jay Pipes wrote: >> On 12/03/2014 10:16 AM, Nikolay Markov wrote: >>> >>> It would be great to look at some obvious points where Pecan is better >>> than Flask despite of the fact that it's used by the community. I >>> still don't see a single and I don't think the principle "jump from >>> the cliff if everyone does" works well in such cases. >> >> >> This is part of why the Fuel development team is viewed as not working with >> the OpenStack community in many ways. The Fuel team is doing a remarkable >> job in changing previously-all-internal-to-Mirantis communication patterns >> to instead be on a transparent basis in the mailing lists and on IRC. I >> sincerely applaud the Fuel team for that. >> >> However, the OpenStack community is also about a shared set of tools, >> development methodologies, and common perspectives. It's expected that when >> you have an OpenStack REST API project, that you try to use the tools that >> the shared community uses, builds, and supports. Otherwise, you aren't being >> a team player. >> >> In the past, certain teams have chosen to use something other than Pecan due >> to technical reasons. 
For example, Zaqar's team chose to use the Falcon >> framework instead of the Pecan framework. Zaqar, like Swift, is a data API, >> not a control API, and raw performance is critical to the project's API >> endpoint). This is, incidentally, why the Swift team chose to use its swob >> framework over Webob (which Pecan uses). >> >> However, the reason that these were chosen was definitely not "it doesn't >> support the coding patterns I like". There's something that comes from being >> a team player. And one of those things is "going with the flow" when there >> isn't a real technical reason not to. All of us can and do find things we >> don't like about *all* of the projects that we work on. The difference >> between team players and non-team players is that team players strongly >> weigh their decisions and opinions based on what the team is doing and how >> the team can improve. >> >> Best, >> >> -jay >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Best regards, > Nick Markov -- Best regards, Nick Markov From Neil.Jerram at metaswitch.com Wed Dec 3 16:08:21 2014 From: Neil.Jerram at metaswitch.com (Neil Jerram) Date: Wed, 3 Dec 2014 16:08:21 +0000 Subject: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup? Message-ID: <87ppc090ne.fsf@metaswitch.com> Hi there. I've been looking into a tricky point, and hope I can succeed in expressing it clearly here... I believe it is the case, even when using a committedly Neutron-based networking implementation, that Nova is still involved a little bit in the networking setup logic. Specifically I mean the plug() and unplug() operations, whose implementations are provided by *VIFDriver classes for the various possible hypervisors. 
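[Editorial note: as a rough illustration of what a plug()/unplug() pair is responsible for — creating a TAP device and attaching it to an L2 bridge — here is a hypothetical sketch. To stay runnable without root privileges it only builds the commands such a driver would issue rather than executing them; the real Nova code is structured differently.]

```python
# Rough sketch of the host-side work a libvirt-style plug() performs:
# create a TAP device, bring it up, and attach it to an L2 bridge.
# This illustration only *builds* the commands it would run, so it is
# runnable without root. Hypothetical and simplified, not Nova code.

def plug_commands(tap_name, bridge_name):
    """Return the shell commands a plug() of this style would issue."""
    return [
        ["ip", "tuntap", "add", tap_name, "mode", "tap"],
        ["ip", "link", "set", tap_name, "up"],
        ["brctl", "addif", bridge_name, tap_name],
    ]


def unplug_commands(tap_name, bridge_name):
    """Return the commands for the matching unplug() teardown."""
    return [
        ["brctl", "delif", bridge_name, tap_name],
        ["ip", "link", "set", tap_name, "down"],
        ["ip", "tuntap", "del", tap_name, "mode", "tap"],
    ]
```

The point of the sketch is simply that this wiring is ordinary host-side plumbing, so in principle either Nova or a Neutron agent on the compute node could perform it.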
For example, for the libvirt hypervisor, LibvirtGenericVIFDriver typically implements plug() by calling create_tap_dev() to create the TAP device, and then plugging into some form of L2 bridge. Does this logic actually have to be in Nova? For a Neutron-based networking implementation, it seems to me that it should also be possible to do this in a Neutron agent (obviously running on the compute node concerned), and that - if so - that would be preferable because it would enable more Neutron-based experimentation without having to modify any Nova code. Specifically, therefore, I wonder if we could/should add a "do-nothing" value to the set of Nova VIF types (VIF_TYPE_NOOP?), and implement plug()/unplug() for that value to do nothing at all, leaving all setup to the Neutron agent? And then hopefully it should never be necessary to introduce further Nova VIF type support ever again... Am I missing something that really makes that not fly? Two possible objections occur to me, as follows, but I think they're both surmountable.

1. When the port is created in the Neutron DB, and handled (bound etc.) by the plugin and/or mechanism driver, the TAP device name is already present at that time. I think this is still OK because Neutron knows anyway what the TAP device name _will_ be, even if the actual TAP device hasn't been created yet.

2. With some agent implementations, there isn't a direct instruction, from the plugin to the agent, to say "now look after this VM / port". Instead the agent polls the OS for new TAP devices appearing. Clearly, then, if there isn't something other than the agent that creates the TAP device, any logic in the agent will never be triggered. This is certainly a problem. For new networking experimentation, however, we can write agent code that is directly instructed by the plugin, and hence (a) doesn't need to poll and (b) doesn't require the TAP device to have been previously created by Nova - which I'd argue is preferable.

Thoughts?
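[Editorial note: the "do-nothing" driver proposed above could be as small as the following sketch. VIF_TYPE_NOOP is the suggested name from the email rather than an existing Nova constant, and the surrounding names are illustrative, not real Nova code.]

```python
# Hypothetical sketch of the proposal: a VIF type whose plug()/unplug()
# deliberately do nothing, leaving all host-side wiring to a Neutron
# agent running on the compute node. All names are illustrative.

VIF_TYPE_NOOP = "noop"


class NoopVIFDriver:
    """VIF driver that defers all setup/teardown to a Neutron agent."""

    def plug(self, instance, vif):
        # Intentionally empty: the Neutron agent on this compute node
        # is expected to create and configure the TAP device itself.
        pass

    def unplug(self, instance, vif):
        # Teardown is likewise the agent's responsibility.
        pass


_VIF_DRIVERS = {VIF_TYPE_NOOP: NoopVIFDriver}


def get_vif_driver(vif_type):
    """Look up and instantiate the driver for a VIF type."""
    return _VIF_DRIVERS[vif_type]()
```

Because the driver never touches the host, any bridged, unbridged, or experimental datapath can be implemented entirely on the Neutron side.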
(FYI my context is that I've been working on a networking implementation where the TAP device to/from a VM should _not_ be plugged into a bridge - and for that I've had to make a Nova change even though my team's aim was to do the whole thing in Neutron. I've proposed a spec for the Nova change that plugs a TAP interface without bridging it (https://review.openstack.org/#/c/130732/), but that set me wondering about this wider question of whether such Nova changes should still be necessary...) Many thanks, Neil From sean at dague.net Wed Dec 3 16:26:14 2014 From: sean at dague.net (Sean Dague) Date: Wed, 03 Dec 2014 11:26:14 -0500 Subject: [openstack-dev] [all] bugs with paste pipelines and multiple projects and upgrading In-Reply-To: References: <547F29B8.9050309@dague.net> Message-ID: <547F39A6.6050204@dague.net> On 12/03/2014 10:57 AM, Lance Bragstad wrote: > > > On Wed, Dec 3, 2014 at 9:18 AM, Sean Dague > wrote: > > We've hit two interesting issues this week around multiple projects > installing into the paste pipeline of a server. > > 1) the pkg_resources explosion in grenade. Basically ceilometer modified > swift paste.ini to add it's own code into swift (that's part of normal > ceilometer install in devstack - > https://github.com/openstack-dev/devstack/blob/master/lib/swift#L376-L381 > > This meant when we upgraded and started swift, it turns out that we're > actually running old ceilometer code. A requirements mismatch caused an > explosion (which we've since worked around), however demonstrates a > clear problem with installing code in another project's pipeline. > > 2) keystone is having issues dropping XML api support. It turns out that > parts of it's paste pipeline are actually provided by keystone > middleware, which means that keystone can't provide a sane "this is not > supported" message in a proxy class for older paste config files. 
> > > I made an attempt to capture some of the information on the specific > grenade case we were hitting for XML removal in a bug report [1]. We can > keep the classes in keystone/middleware/core.py as a workaround for now > with essentially another deprecation message, but at some point we > should pull the plug on defining XmlBodyMiddleware in our > keystone-paste.ini [2] as it won't do anything anyway and shouldn't be > in the configuration. Since this deals with a configuration change, this > could "always" break a customer. What criteria should we follow for > cases like this? > > From visiting with Sean in -qa, typically service configurations don't > change for the grenade target on upgrade, but if we have to make a > change on upgrade (to clean out old cruft for example), how do we go > about that? Add content here - https://github.com/openstack-dev/grenade/tree/master/from-juno Note: you'll get a -2 unless you provide a link to Release Notes somewhere that highlights this as an Upgrade Impact for users for the next release. -Sean -- Sean Dague http://dague.net From lyz at princessleia.com Wed Dec 3 16:27:12 2014 From: lyz at princessleia.com (Elizabeth K. Joseph) Date: Wed, 3 Dec 2014 08:27:12 -0800 Subject: [openstack-dev] [Infra] Infra-manual documentation Sprint, December 1-2 Message-ID: On Fri, Nov 7, 2014 at 5:57 AM, Elizabeth K. Joseph wrote: > The OpenStack Infrastructure team will be hosting a virtual sprint in > the Freenode IRC channel #openstack-sprint for the Infrastructure User > Manual on December 1st starting at 15:00 UTC and going for 48 hours. Thanks to everyone who participated in our online documentation sprint this week! We made major progress with our manual (published here: http://docs.openstack.org/infra/manual/) and hope this push for core content will help us continue to refine and update the content for this valuable community resource moving forward. 
Now, some numbers:

Sprint start:
Patches open for review: 10
Patches merged in repo history: 13

Sprint end:
Patches open for review: 3 (plus 2 WIP) See: https://review.openstack.org/#/q/status:open+project:openstack-infra/infra-manual,n,z
Patches merged during sprint: 30 See: https://review.openstack.org/#/q/status:merged+project:openstack-infra/infra-manual,n,z

Reviews: Over 200 See http://stackalytics.com/?module=infra-manual&release=kilo

We also have 16 patches for documentation in flight that were initiated or reviewed elsewhere in the openstack-infra project during this sprint, including the important reorganization of the git-review documentation. See: https://review.openstack.org/#/q/status:open+file:%255E.*%255C.rst+project:%255Eopenstack-infra/.*,n,z

Sprint participants (sorted chronologically by reviews): Elizabeth Krumbach Joseph, Andreas Jaeger, James E. Blair, Anita Kuno, Clark Boylan, Spencer Krum, Jeremy Stanley, Doug Hellmann, Khai Do, Antoine Musso, Stefano Maffulli, Thierry Carrez and Yolanda Robla -- Elizabeth Krumbach Joseph || Lyz || pleia2 From jaypipes at gmail.com Wed Dec 3 16:41:44 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 03 Dec 2014 11:41:44 -0500 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> Message-ID: <547F3D48.5020401@gmail.com> On 12/03/2014 10:53 AM, Nikolay Markov wrote: >> However, the OpenStack community is also about a shared set of tools, >> development methodologies, and common perspectives. > > I completely agree with you, Jay, but the same principle may be > applied much wider. Why Openstack Community decided to use its own > unstable project instead of existing solution, which is widely used in > Python community? To avoid being a team player?
Or, at least, why it's > recommended way even if it doesn't provide the same features other > frameworks have for a long time already? I mean, there is no doubt > everyone would use stable and technically advanced tool, but imposing > everyone to use it by force with a simple hope that it'll become > better from this is usually a bad approach. This conversation was had a long time ago, was thoroughly thought-out and discussed at prior summits and the ML: https://etherpad.openstack.org/p/grizzly-common-wsgi-frameworks https://etherpad.openstack.org/p/havana-common-wsgi I think it's unfair to suggest that the OpenStack community decided "to use its own unstable project instead of existing solution". Best, -jay From nmarkov at mirantis.com Wed Dec 3 17:00:21 2014 From: nmarkov at mirantis.com (Nikolay Markov) Date: Wed, 3 Dec 2014 21:00:21 +0400 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: <547F3D48.5020401@gmail.com> References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> <547F3D48.5020401@gmail.com> Message-ID: I didn't participate in that discussion, but here are topics on Flask cons from your link. I added some comments. - Cons - db transactions a little trickier to manage, but possible # what is trickier? Flask uses pure SQLalchemy or a very thin wrapper - JSON built-in but not XML # the only one I agree with, but does Pecan have it? - some issues, not updated in a while # last commit was 3 days ago - No Python 3 support # full Python 3 support fro a year or so already - Not WebOb # can it even be considered as a con? I'm not trying to argue with you or community principles, I'm just trying to choose the right instrument for the job. On Wed, Dec 3, 2014 at 7:41 PM, Jay Pipes wrote: > On 12/03/2014 10:53 AM, Nikolay Markov wrote: >>> >>> However, the OpenStack community is also about a shared set of tools, >>> development methodologies, and common perspectives. 
>> >> >> I completely agree with you, Jay, but the same principle may be >> applied much wider. Why Openstack Community decided to use its own >> unstable project instead of existing solution, which is widely used in >> Python community? To avoid being a team player? Or, at least, why it's >> recommended way even if it doesn't provide the same features other >> frameworks have for a long time already? I mean, there is no doubt >> everyone would use stable and technically advanced tool, but imposing >> everyone to use it by force with a simple hope that it'll become >> better from this is usually a bad approach. > > > This conversation was had a long time ago, was thoroughly thought-out and > discussed at prior summits and the ML: > > https://etherpad.openstack.org/p/grizzly-common-wsgi-frameworks > https://etherpad.openstack.org/p/havana-common-wsgi > > I think it's unfair to suggest that the OpenStack community decided "to use > its own unstable project instead of existing solution". > > > Best, > -jay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Nick Markov From ryan.petrello at dreamhost.com Wed Dec 3 17:09:01 2014 From: ryan.petrello at dreamhost.com (Ryan Petrello) Date: Wed, 3 Dec 2014 12:09:01 -0500 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> Message-ID: <20141203170901.GA90782@Ryans-MBP> I've left some comments/corrections in this document re: pecan and what is supports. On 12/03/14 07:58 PM, Nikolay Markov wrote: > A month or two ago I started gathering differencies between Flask and > Pecan, let's take a look at technical details. Maybe there are some > things that are already fixed in current versions of Pecan, feel free > to comment. 
https://docs.google.com/document/d/1QR7YphyfN64m-e9b5rKC_U8bMtx4zjfW943BfLTqTao/edit?usp=sharing > > On Wed, Dec 3, 2014 at 6:53 PM, Nikolay Markov wrote: > >> However, the OpenStack community is also about a shared set of tools, > >> development methodologies, and common perspectives. > > > > I completely agree with you, Jay, but the same principle may be > > applied much wider. Why Openstack Community decided to use its own > > unstable project instead of existing solution, which is widely used in > > Python community? To avoid being a team player? Or, at least, why it's > > recommended way even if it doesn't provide the same features other > > frameworks have for a long time already? I mean, there is no doubt > > everyone would use stable and technically advanced tool, but imposing > > everyone to use it by force with a simple hope that it'll become > > better from this is usually a bad approach. > > > > I personally would surely contribute to Pecan in case we decide to use > > it and there will be some gaps and uncovered cases. I'm just curious, > > does it worth it? > > > > On Wed, Dec 3, 2014 at 6:32 PM, Jay Pipes wrote: > >> On 12/03/2014 10:16 AM, Nikolay Markov wrote: > >>> > >>> It would be great to look at some obvious points where Pecan is better > >>> than Flask despite of the fact that it's used by the community. I > >>> still don't see a single and I don't think the principle "jump from > >>> the cliff if everyone does" works well in such cases. > >> > >> > >> This is part of why the Fuel development team is viewed as not working with > >> the OpenStack community in many ways. The Fuel team is doing a remarkable > >> job in changing previously-all-internal-to-Mirantis communication patterns > >> to instead be on a transparent basis in the mailing lists and on IRC. I > >> sincerely applaud the Fuel team for that. > >> > >> However, the OpenStack community is also about a shared set of tools, > >> development methodologies, and common perspectives. 
It's expected that when > >> you have an OpenStack REST API project, that you try to use the tools that > >> the shared community uses, builds, and supports. Otherwise, you aren't being > >> a team player. > >> > >> In the past, certain teams have chosen to use something other than Pecan due > >> to technical reasons. For example, Zaqar's team chose to use the Falcon > >> framework instead of the Pecan framework. Zaqar, like Swift, is a data API, > >> not a control API, and raw performance is critical to the project's API > >> endpoint). This is, incidentally, why the Swift team chose to use its swob > >> framework over Webob (which Pecan uses). > >> > >> However, the reason that these were chosen was definitely not "it doesn't > >> support the coding patterns I like". There's something that comes from being > >> a team player. And one of those things is "going with the flow" when there > >> isn't a real technical reason not to. All of us can and do find things we > >> don't like about *all* of the projects that we work on. The difference > >> between team players and non-team players is that team players strongly > >> weigh their decisions and opinions based on what the team is doing and how > >> the team can improve. 
> >> > >> Best, > >> > >> -jay > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > -- > > Best regards, > > Nick Markov > > > > -- > Best regards, > Nick Markov > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Ryan Petrello Senior Developer, DreamHost ryan.petrello at dreamhost.com From Kevin.Fox at pnnl.gov Wed Dec 3 17:07:44 2014 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 3 Dec 2014 17:07:44 +0000 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: <547F2D08.3030705@gmail.com> References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> , <547F2D08.3030705@gmail.com> Message-ID: <1A3C52DFCD06494D8528644858247BF017812DC3@EX10MBOX03.pnnl.gov> +1. Well said. I second the applauding of the Fuel's development team's for their changing of their communications patterns (that's never easy) and also the desire for closer integration with the rest of the OpenStack community. ________________________________________ From: Jay Pipes [jaypipes at gmail.com] Sent: Wednesday, December 03, 2014 7:32 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [Fuel][Nailgun] Web framework On 12/03/2014 10:16 AM, Nikolay Markov wrote: > It would be great to look at some obvious points where Pecan is better > than Flask despite of the fact that it's used by the community. I > still don't see a single and I don't think the principle "jump from > the cliff if everyone does" works well in such cases. This is part of why the Fuel development team is viewed as not working with the OpenStack community in many ways. 
The Fuel team is doing a remarkable job in changing previously-all-internal-to-Mirantis communication patterns to instead be on a transparent basis in the mailing lists and on IRC. I sincerely applaud the Fuel team for that. However, the OpenStack community is also about a shared set of tools, development methodologies, and common perspectives. It's expected that when you have an OpenStack REST API project, that you try to use the tools that the shared community uses, builds, and supports. Otherwise, you aren't being a team player. In the past, certain teams have chosen to use something other than Pecan due to technical reasons. For example, Zaqar's team chose to use the Falcon framework instead of the Pecan framework. Zaqar, like Swift, is a data API, not a control API, and raw performance is critical to the project's API endpoint). This is, incidentally, why the Swift team chose to use its swob framework over Webob (which Pecan uses). However, the reason that these were chosen was definitely not "it doesn't support the coding patterns I like". There's something that comes from being a team player. And one of those things is "going with the flow" when there isn't a real technical reason not to. All of us can and do find things we don't like about *all* of the projects that we work on. The difference between team players and non-team players is that team players strongly weigh their decisions and opinions based on what the team is doing and how the team can improve. Best, -jay _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amotoki at gmail.com Wed Dec 3 17:14:30 2014 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 4 Dec 2014 02:14:30 +0900 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: +1 for Kevin and Henry. 
Thanks for your continuous efforts so far; you would be great additions to our community. I also would like to say great thanks to Bob and Nachi. It would be really nice to see their review activity again soon. We already understand your contributions and skills, so I believe you both can be back as cores whatever the conclusion is (as the nova team usually does). Thanks, Akihiro On Wed, Dec 3, 2014 at 12:59 AM, Kyle Mestery wrote: > Now that we're in the thick of working hard on Kilo deliverables, I'd > like to make some changes to the neutron core team. Reviews are the > most important part of being a core reviewer, so we need to ensure > cores are doing reviews. The stats for the 180 day period [1] indicate > some changes are needed for cores who are no longer reviewing. > > First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from > neutron-core. Bob and Nachi have been core members for a while now. > They have contributed to Neutron over the years in reviews, code and > leading sub-teams. I'd like to thank them for all that they have done > over the years. I'd also like to propose that should they start > reviewing more going forward the core team looks to fast track them > back into neutron-core. But for now, their review stats place them > below the rest of the team for 180 days. > > As part of the changes, I'd also like to propose two new members to > neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have > been very active in reviews, meetings, and code for a while now. Henry > lead the DB team which fixed Neutron DB migrations during Juno. Kevin > has been actively working across all of Neutron, he's done some great > work on security fixes and stability fixes in particular. Their > comments in reviews are insightful and they have helped to onboard new > reviewers and taken the time to work with people on their patches. > > Existing neutron cores, please vote +1/-1 for the addition of Henry > and Kevin to the core team.
> > Thanks! > Kyle > > [1] http://stackalytics.com/report/contribution/neutron-group/180 > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Akihiro Motoki From Kevin.Fox at pnnl.gov Wed Dec 3 17:25:28 2014 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 3 Dec 2014 17:25:28 +0000 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> <547F3D48.5020401@gmail.com>, Message-ID: <1A3C52DFCD06494D8528644858247BF017812DEB@EX10MBOX03.pnnl.gov> Choosing the right instrument for the job in an open source community involves choosing technologies that the community is familiar/comfortable with as well, as it will allow you access to a greater pool of developers. With that in mind then, I'd add: Pro Pecan, blessed by the OpenStack community, con Flask, not. Kevin ________________________________________ From: Nikolay Markov [nmarkov at mirantis.com] Sent: Wednesday, December 03, 2014 9:00 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Fuel][Nailgun] Web framework I didn't participate in that discussion, but here are topics on Flask cons from your link. I added some comments. - Cons - db transactions a little trickier to manage, but possible # what is trickier? Flask uses pure SQLalchemy or a very thin wrapper - JSON built-in but not XML # the only one I agree with, but does Pecan have it? - some issues, not updated in a while # last commit was 3 days ago - No Python 3 support # full Python 3 support fro a year or so already - Not WebOb # can it even be considered as a con? I'm not trying to argue with you or community principles, I'm just trying to choose the right instrument for the job. 
On Wed, Dec 3, 2014 at 7:41 PM, Jay Pipes wrote: > On 12/03/2014 10:53 AM, Nikolay Markov wrote: >>> >>> However, the OpenStack community is also about a shared set of tools, >>> development methodologies, and common perspectives. >> >> >> I completely agree with you, Jay, but the same principle may be >> applied much wider. Why Openstack Community decided to use its own >> unstable project instead of existing solution, which is widely used in >> Python community? To avoid being a team player? Or, at least, why it's >> recommended way even if it doesn't provide the same features other >> frameworks have for a long time already? I mean, there is no doubt >> everyone would use stable and technically advanced tool, but imposing >> everyone to use it by force with a simple hope that it'll become >> better from this is usually a bad approach. > > > This conversation was had a long time ago, was thoroughly thought-out and > discussed at prior summits and the ML: > > https://etherpad.openstack.org/p/grizzly-common-wsgi-frameworks > https://etherpad.openstack.org/p/havana-common-wsgi > > I think it's unfair to suggest that the OpenStack community decided "to use > its own unstable project instead of existing solution". 
> > > Best, > -jay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Nick Markov _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedem at linux.vnet.ibm.com Wed Dec 3 17:41:53 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Wed, 03 Dec 2014 11:41:53 -0600 Subject: [openstack-dev] [nova] Can the Kilo nova controller conduct the Juno compute nodes In-Reply-To: References: Message-ID: <547F4B61.3000502@linux.vnet.ibm.com> On 12/3/2014 3:49 AM, Li Junhong wrote: > Hi Joe, > > Thank you for your confirmative answer and the wonderful gate testing > pipeline. > > On Wed, Dec 3, 2014 at 5:38 PM, Joe Gordon > wrote: > > > > On Wed, Dec 3, 2014 at 11:09 AM, Li Junhong > wrote: > > Hi All, > > Is it possible for Kilo nova controller to control the Juno > compute nodes? Is this scenario supported naturally by the nova > mechanism in the design and codes level? > > > Yes, > > We gate on making sure we can run Kilo nova with Juno compute nodes. > > > > -- > Best Regards! > ------------------------------------------------------------------------ > Junhong, Li > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Best Regards! 
> ------------------------------------------------------------------------ > Junhong, Li > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > FYI, there are also documented upgrade plans now [1]. http://docs.openstack.org/developer/nova/devref/upgrade.html -- Thanks, Matt Riedemann From clint at fewbar.com Wed Dec 3 18:41:01 2014 From: clint at fewbar.com (Clint Byrum) Date: Wed, 03 Dec 2014 10:41:01 -0800 Subject: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023) In-Reply-To: <3C6FDAE6-9BEB-4E9B-91A8-BD5AE724C6D7@tenshu.net> References: <54755DE7.2070107@redhat.com> <547607C3.3090604@nemebean.com> <1417495283-sup-444@fewbar.com> <547E1177.4030705@redhat.com> <1417559171-sup-8883@fewbar.com> <547E98F0.8010406@redhat.com> <3C6FDAE6-9BEB-4E9B-91A8-BD5AE724C6D7@tenshu.net> Message-ID: <1417631988-sup-5421@fewbar.com> Excerpts from Chris Jones's message of 2014-12-03 02:47:30 -0800: > Hi > > I am very sympathetic to this view. We have a patch in hand that improves the situation. We also have disagreement about the ideal situation. > > I +2'd Ian's patch because it makes things work better than they do now. If we can arrive at an ideal solution later, great, but the more I think about logging from a multitude of bash scripts, and tricks like XTRACE_FD, the more I think it's crazy and we should just incrementally improve the non-trace logging as a separate exercise, leaving working tracing for true debugging situations. > Forgive me, I am not pushing for an ideal situation, but I don't want a regression. Running without -x right now has authors xtracing as a rule. Meaning that the moment this merges, the amount of output goes to almost nil compared to what it is now. Basically this is just more of the same OpenStack wrong-headed idea, "you have to run in DEBUG logging mode to be able to understand any issue". 
I'm totally willing to compromise on the ideal for something that is good enough, but I'm saying this is not good enough _if_ it turns off tracing for all scripts. What if the patch is reworked to leave the current trace-all-the-time mode in place, and we iterate on each script to make tracing conditional as we add proper logging? From sean.k.mooney at intel.com Wed Dec 3 16:37:15 2014 From: sean.k.mooney at intel.com (Mooney, Sean K) Date: Wed, 3 Dec 2014 16:37:15 +0000 Subject: [openstack-dev] [nova] [libvirt] enabling per node filtering of mempage sizes References: <4B1BB321037C0849AAE171801564DFA630E602DE@IRSMSX107.ger.corp.intel.com> <20141203090306.GA8186@redhat.redhat.com> <20141203101233.GG10160@redhat.com> Message-ID: <4B1BB321037C0849AAE171801564DFA630E71237@IRSMSX108.ger.corp.intel.com> Hi Daniel, thanks for your feedback. After reading up a little more http://docs.openstack.org/openstack-ops/content/scaling.html#segragation_methods I now understand your original suggestion. I believe that if the operator associates the aggregate directly to the flavor as you suggested, then yes, this will cover my use case too, as the tenant is selecting availability zones, not host aggregates. Sorry, I had misconstrued the relationship between availability zones and host aggregates. I believed that there was a one-to-one mapping, so when you select an availability zone you were selecting the host aggregate directly. Regards Sean. > -----Original Message----- > From: Daniel P. Berrange [mailto:berrange at redhat.com] > Sent: Wednesday, December 03, 2014 1:38 PM > To: Mooney, Sean K > Cc: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [nova] [libvirt] enabling per node > filtering of mempage sizes > > On Wed, Dec 03, 2014 at 01:28:36PM +0000, Mooney, Sean K wrote: > > Hi > > > > Unfortunately a flavor + aggregate is not enough for our use case as > > it is still > possible for the tenant to misconfigure a vm.
> > > > The edge case not covered by flavor + aggregate that we are trying > > to prevent > is as follows. > > > > The operator creates an aggregate containing the nodes that require > > all VMs > to use large pages. > > The operator creates flavors with and without memory backing specified. > > > > The tenant selects the aggregate containing nodes that only supports > hugepages and a flavor that requires small or any. > > Or > > The tenant selects a flavor that requires small or any and does not > > select an > aggregate. > > The tenant isn't responsible for selecting the aggregate. The operator > should be associating the aggregate directly to the flavour. So the > tenant merely has to select the right flavour. > > Regards, > Daniel > -- > |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| > |: http://libvirt.org -o- http://virt-manager.org :| > |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| > |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From apevec at gmail.com Wed Dec 3 19:17:23 2014 From: apevec at gmail.com (Alan Pevec) Date: Wed, 3 Dec 2014 20:17:23 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: <547DE593.9090709@electronicjungle.net> References: <547DE593.9090709@electronicjungle.net> Message-ID: 2014-12-02 17:15 GMT+01:00 Jay S. Bryant : >> Cinder >> https://review.openstack.org/137537 - small change and limited to the >> VMWare driver > > +1 I think this is fine to make an exception for. 
one more Cinder exception proposal was added in StableJuno etherpad * https://review.openstack.org/#/c/138526/ (This is currently the master version but I will be proposing to stable/juno as soon as it is approved in Master) The Brocade FS San Lookup facility is currently broken and this revert is necessary to get it working again. Jay, what's the status there, I see master change failed in gate? Cheers, Alan From apevec at gmail.com Wed Dec 3 19:22:57 2014 From: apevec at gmail.com (Alan Pevec) Date: Wed, 3 Dec 2014 20:22:57 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: References: Message-ID: > Horizon > standing-after-freeze translation update, coming on Dec 3 This is now posted https://review.openstack.org/138798 David, Matthias, I'd appreciate one of you to have a quick look before approving. Cheers, Alan From lucasagomes at gmail.com Wed Dec 3 19:26:59 2014 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Wed, 3 Dec 2014 19:26:59 +0000 Subject: [openstack-dev] [Ironic] A mascot for Ironic In-Reply-To: References: <546BA197.9040605@redhat.com> Message-ID: Hi, Poll is closed! Everyone please welcome: Pixie Boots, the new Ironic mascot! Thanks for everyone that voted! Cheers, Lucas On Mon, Dec 1, 2014 at 4:47 PM, Lucas Alvares Gomes wrote: > Ah forgot to say, > > Please add your launchpad ID on the Name Field. And I will close the poll > on Wednesday at 18:00 UTC (I think it's enough time to everyone take a look > at it) > > Cheers, > Lucas > > On Mon, Dec 1, 2014 at 4:44 PM, Lucas Alvares Gomes > wrote: > >> Hi all, >> >> I'm sorry for the long delay on this I've been dragged into some other >> stuff :) But anyway, now it's time!!!! >> >> I've asked the core Ironic team to narrow down the name options (we had >> too many, thanks to everyone that contributed) the list of finalists is in >> the poll right here: http://doodle.com/9h4ncgx4etkyfgdw. 
So please vote >> and help us choose the best name for the new mascot! >> >> Cheers, >> Lucas >> >> On Tue, Nov 18, 2014 at 7:44 PM, Nathan Kinder >> wrote: >> >>> >>> >>> On 11/16/2014 10:51 AM, David Shrewsbury wrote: >>> > >>> >> On Nov 16, 2014, at 8:57 AM, Chris K >> >> > wrote: >>> >> >>> >> How cute. >>> >> >>> >> maybe we could call him bear-thoven. >>> >> >>> >> Chris >>> >> >>> > >>> > I like Blaze Bearly, lead singer for Ironic Maiden. :) >>> > >>> > https://en.wikipedia.org/wiki/Blaze_Bayley >>> >>> Good call! I never thought I'd see a Blaze Bayley reference on this >>> list. :) Just watch out for imposters... >>> >>> http://en.wikipedia.org/wiki/Slow_Riot_for_New_Zer%C3%B8_Kanada#BBF3 >>> >>> > >>> > >>> >> >>> >> On Sun, Nov 16, 2014 at 5:14 AM, Lucas Alvares Gomes >>> >> > wrote: >>> >> >>> >> Hi Ironickers, >>> >> >>> >> I was thinking this weekend: All the cool projects does have a >>> mascot >>> >> so I thought that we could have one for Ironic too. >>> >> >>> >> The idea about what the mascot would be was easy because the RAX >>> guys >>> >> put "bear metal" their presentation[1] and that totally rocks! So >>> I >>> >> drew a bear. It also needed an instrument, at first I thought >>> about a >>> >> guitar, but drums is actually my favorite instrument so I drew a >>> pair >>> >> of drumsticks instead. >>> >> >>> >> The drawing thing wasn't that hard, the problem was to digitalize >>> it. >>> >> So I scanned the thing and went to youtube to watch some tutorials >>> >> about gimp and inkspace to learn how to vectorize it. Magic, it >>> >> worked! >>> >> >>> >> Attached in the email there's the original draw, the vectorized >>> >> version without colors and the final version of it (with colors). 
>>> >> >>> >> Of course, I know some people does have better skills than I do, >>> so I >>> >> also attached the inkspace file of the final version in case >>> people >>> >> want to tweak it :) >>> >> >>> >> So, what you guys think about making this little drummer bear the >>> >> mascot of the Ironic project? >>> >> >>> >> Ahh he also needs a name. So please send some suggestions and we >>> can >>> >> vote on the best name for him. >>> >> >>> >> [1] http://www.youtube.com/watch?v=2Oi2T2pSGDU#t=90 >>> >> >>> >> Lucas >>> >> >>> >> _______________________________________________ >>> >> OpenStack-dev mailing list >>> >> OpenStack-dev at lists.openstack.org >>> >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >>> >> >>> >> _______________________________________________ >>> >> OpenStack-dev mailing list >>> >> OpenStack-dev at lists.openstack.org >>> >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> > >>> > >>> > _______________________________________________ >>> > OpenStack-dev mailing list >>> > OpenStack-dev at lists.openstack.org >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From apevec at gmail.com Wed Dec 3 19:32:15 2014 From: apevec at gmail.com (Alan Pevec) Date: Wed, 3 Dec 2014 20:32:15 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: <547EDD4A.8040600@openstack.org> References: <547EDD4A.8040600@openstack.org> Message-ID: >> Neutron >> https://review.openstack.org/136294 - default SNAT, see review for >> details, I cannot distil 1liner :) > -1: I would rather fix the doc to match behavior, than change behavior > to match the doc and lose people that were relying on it. Consensus is not to merge this and keep behavior. >> https://review.openstack.org/136275 - self-contained to the vendor >> code, extensively tested in several deployments > +0: Feels a bit large for a last-minute exception. Kyle, Ihar, I'd like to see +2 from Neutron stable-maint before approving exception. Cheers, Alan From edgar.magana at workday.com Wed Dec 3 19:48:18 2014 From: edgar.magana at workday.com (Edgar Magana) Date: Wed, 3 Dec 2014 19:48:18 +0000 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: I give +2 to Henry and Kevin. So, Congratulations Folks! I have been working with both of them and great quality reviews are always coming out from them. Many thanks to Nachi and Bob for their hard work! Edgar On 12/2/14, 7:59 AM, "Kyle Mestery" wrote: >Now that we're in the thick of working hard on Kilo deliverables, I'd >like to make some changes to the neutron core team. Reviews are the >most important part of being a core reviewer, so we need to ensure >cores are doing reviews. The stats for the 180 day period [1] indicate >some changes are needed for cores who are no longer reviewing. > >First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from >neutron-core. Bob and Nachi have been core members for a while now. >They have contributed to Neutron over the years in reviews, code and >leading sub-teams. 
I'd like to thank them for all that they have done >over the years. I'd also like to propose that should they start >reviewing more going forward the core team looks to fast track them >back into neutron-core. But for now, their review stats place them >below the rest of the team for 180 days. > >As part of the changes, I'd also like to propose two new members to >neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have >been very active in reviews, meetings, and code for a while now. Henry >lead the DB team which fixed Neutron DB migrations during Juno. Kevin >has been actively working across all of Neutron, he's done some great >work on security fixes and stability fixes in particular. Their >comments in reviews are insightful and they have helped to onboard new >reviewers and taken the time to work with people on their patches. > >Existing neutron cores, please vote +1/-1 for the addition of Henry >and Kevin to the core team. > >Thanks! >Kyle > >[1] http://stackalytics.com/report/contribution/neutron-group/180 > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mrunge at redhat.com Wed Dec 3 19:52:09 2014 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 03 Dec 2014 20:52:09 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: References: Message-ID: <547F69E9.5080805@redhat.com> On 03/12/14 20:22, Alan Pevec wrote: >> Horizon >> standing-after-freeze translation update, coming on Dec 3 > > This is now posted https://review.openstack.org/138798 > David, Matthias, I'd appreciate one of you to have a quick look before > approving. > > Cheers, > Alan > Alan, thanks for the heads-up here. Approved. 
Matthias From cmsj at tenshu.net Wed Dec 3 20:02:08 2014 From: cmsj at tenshu.net (Chris Jones) Date: Wed, 3 Dec 2014 20:02:08 +0000 Subject: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023) In-Reply-To: <1417631988-sup-5421@fewbar.com> References: <54755DE7.2070107@redhat.com> <547607C3.3090604@nemebean.com> <1417495283-sup-444@fewbar.com> <547E1177.4030705@redhat.com> <1417559171-sup-8883@fewbar.com> <547E98F0.8010406@redhat.com> <3C6FDAE6-9BEB-4E9B-91A8-BD5AE724C6D7@tenshu.net> <1417631988-sup-5421@fewbar.com> Message-ID: Hi > On 3 Dec 2014, at 18:41, Clint Byrum wrote: > > What if the patch is reworked to leave the current trace-all-the-time > mode in place, and we iterate on each script to make tracing conditional > as we add proper logging? +1 Cheers, -- Chris Jones From daniel.nobusada at gmail.com Wed Dec 3 20:31:31 2014 From: daniel.nobusada at gmail.com (Daniel Nobusada) Date: Wed, 3 Dec 2014 18:31:31 -0200 Subject: [openstack-dev] [Neutron] [Devstack] Devstack quantum-agent vlan error Message-ID: Hi guys, I'm using Devstack with vlan on the parameter Q_ML2_TENANT_NETWORK_TYPE and while stacking, the quantum-agent (q-agt) breaks. I'm running it on an Ubuntu Server 14.04. For more details, here's my local.conf: http://pastebin.com/4scBGtpf. The error that I receive is the following: http://pastebin.com/TLq0safe Could anyone help me? Thanks in advance, Daniel Nobusada -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.griffith8 at gmail.com Wed Dec 3 20:44:56 2014 From: john.griffith8 at gmail.com (John Griffith) Date: Wed, 3 Dec 2014 13:44:56 -0700 Subject: [openstack-dev] [OpenStack-Dev] Config Options and OSLO libs Message-ID: Hey, So this is a long-running topic, but I want to bring it up again. First, YES Cinder is still running a sample.conf. A lot of Operators spoke up and provided feedback that this was valuable and they objected strongly to taking it away.
That being said, we're going to go the route of removing it from our unit tests and generating/publishing periodically outside of tests. That being said, one of the things that's driving me crazy and breaking things on a regular basis is other OpenStack libs having a high rate of change of config options. This revolves around things like fixing typos in the comments, reformatting of text etc. All of these things are good in the long run, but I wonder if we could consider batching these sorts of efforts and communicating them? The other issue that we hit today was a flat-out removal of an option in the oslo.messaging lib with no deprecation. This patch here [1] does a number of things that are probably great in terms of cleanup and housekeeping, but now that we're all in on shared/common libs I think we should be a bit more careful about the changes we make. Also to me the commit message doesn't really make it easy for me to search git logs to try and figure out what happened when things blew up. Anyway, just wanted to send a note out asking people to keep in mind the impact of conf changes, and a gentle reminder about deprecation periods for the removal of options. [1]: https://github.com/openstack/oslo.messaging/commit/bcb3b23b8f6e7d01e38fdc031982558711bb7586 From sean at dague.net Wed Dec 3 20:45:10 2014 From: sean at dague.net (Sean Dague) Date: Wed, 03 Dec 2014 15:45:10 -0500 Subject: [openstack-dev] [oslo] oslo.messaging config option deprecation Message-ID: <547F7656.80708@dague.net> So this - https://github.com/openstack/oslo.messaging/commit/bcb3b23b8f6e7d01e38fdc031982558711bb7586 was clearly a violation of our 1 cycle for deprecation of config options. I think that should be reverted, an oops release put out to fix it, and then deprecate for 1.6. If oslo libraries are going to include config options, they have to follow the same config deprecation as that's a contract that projects project up.
Otherwise we need to rethink the ability for libraries to use oslo config (which, honestly, is worth rethinking). -Sean -- Sean Dague http://dague.net From anteaya at anteaya.info Wed Dec 3 20:56:56 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Wed, 03 Dec 2014 15:56:56 -0500 Subject: [openstack-dev] [third-party] Third-party CI account creation is now self-serve Message-ID: <547F7918.6000700@anteaya.info> As of now, third-party CI account creation is self-serve. I think this makes everybody happy. What does this mean? Well, for a new third-party account this means you follow the new process, outlined here: http://ci.openstack.org/third_party.html#creating-a-service-account If you don't have enough information from these docs, please contact the infra team; once you have learned what you needed, we will work on a patch to fill in the holes for others. If you currently have a third-party CI account on Gerrit, this is what will happen with your account: http://ci.openstack.org/third_party.html#permissions-on-your-third-party-system Short story is we will be moving voting accounts into project-specific voting groups. Your voting rights will not change, but will be directly managed by project release groups. Non-voting accounts will be removed from the now-redundant Third-Party CI group and otherwise will not be changed. If you are a member of a <project>-release group for a project currently receiving third-party CI votes, you will find that you have access to manage membership in a new group in Gerrit called <project>-ci. To allow a CI system to vote on your project, add it to the <project>-ci group, and to disable voting on your project, remove it from that group. We hope you are as excited about this change as we are. Let us know if you have questions, and do try to work with third-party project representatives as much as you can. Thank you, Anita.
From morgan.fainberg at gmail.com Wed Dec 3 20:58:52 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Wed, 3 Dec 2014 12:58:52 -0800 Subject: [openstack-dev] [OpenStack-Dev] Config Options and OSLO libs In-Reply-To: References: Message-ID: <7B9A1DDC-BD6A-4C6D-A8CF-BD87C69A0A75@gmail.com> Hi John, Let me say first off that I 100% agree with the value of the sample config being in-tree. Keystone has not removed it due to similar feedback I've received. However, the issue is that *gating* on config changes for all libraries that are included in the sample config is just a process that leads to this frustration / breakage. I have thought about this, and I think the right answer is twofold: 1) Immediately stop gating on sample config changes. I know the cinder team uses it as a "did we break some compat" and "are you changing config" check in a patch that could adversely affect deployers/other systems. I don't think you're going to win the "don't change config values in libraries we don't control" (or even controlled by a separate project) argument. It's very hard to release an updated oslo lib, clients, or keystonemiddleware. 2) Implement a check (I think I have a way of doing this, I'll run it by Doug Hellman and you on IRC) that programmatically checks *only* for in-tree config values. Alternative: a non-voting gate job that says "config has changed" [should be *really* easy to add] so at least you know the config has changed. This should likely be something easy to get through the door (either the programmatic one or the simple non-voting job). This, however, needs the infra team's buy-in as acceptable. I know that most projects have moved away from gating on this since we now consume a lot of libraries that provide config options that the individual server-projects don't control (it is the reason Keystone doesn't gate explicitly on this).
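Morgan's point 2 (a check that looks only at in-tree config options, ignoring anything registered by external libraries) could reduce, at its core, to a filtered set comparison. A minimal sketch, with hypothetical module names and plain dicts standing in for real option introspection:

```python
# Sketch only: compare the config options a project registers itself
# against a committed baseline, skipping options owned by libraries.
# The module names and option names below are made up for illustration.

def diff_in_tree_options(baseline, current, in_tree_prefixes=("cinder.",)):
    """Return (added, removed) option names, considering only options
    whose registering module lives in the project's own tree."""
    def in_tree(opts):
        return {name for name, module in opts.items()
                if module.startswith(in_tree_prefixes)}
    base, cur = in_tree(baseline), in_tree(current)
    return sorted(cur - base), sorted(base - cur)

baseline = {"volume_driver": "cinder.volume", "rpc_backend": "oslo.messaging"}
current = {"volume_driver": "cinder.volume", "backup_driver": "cinder.backup"}
added, removed = diff_in_tree_options(baseline, current)
# rpc_backend disappearing is ignored here: it belongs to oslo.messaging,
# not to the project's tree, so it would not fail this check.
```

A gate job built around something like this would only break when the project's own options change, which is the property Morgan is after.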
Just my quick $0.002 on the topic, --Morgan > On Dec 3, 2014, at 12:44 PM, John Griffith wrote: > > Hey, > > So this is a long running topic, but I want to bring it up again. > First, YES Cinder is still running a sample.conf. A lot of Operators > spoke up and provided feedback that this was valuable and they > objected strongly to taking it away. That being said we're going to > go the route of removing it from our unit tests and > generating/publishing periodically outside of tests. > > That being said, one of the things that's driving me crazy and > breaking things on a regular basis is other OpenStack libs having a > high rate of change of config options. This revolves around things > like fixing typos in the comments, reformatting of text etc. All of > these things are good in the long run, but I wonder if we could > consider batching these sorts of efforts and communicating them? > > The other issue that we hit today was a flat out removal of an option > in the oslo.messaging lib with no deprecation. This patch here [1] > does a number of things that are probably great in terms of clean up > and housekeeping, but now that we're all in on shared/common libs I > think we should be a bit more careful about the changes we make. Also > to me the commit message doesn't really make it easy for me to search > git logs to try and figure out what happened when things blew up. > > Anyway, just wanted to send a note out asking people to keep in mind > the impact of conf changes, and a gentle reminder about deprecation > periods for the removal of options.
> > [1]: https://github.com/openstack/oslo.messaging/commit/bcb3b23b8f6e7d01e38fdc031982558711bb7586 > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ben at swartzlander.org Wed Dec 3 21:07:37 2014 From: ben at swartzlander.org (Ben Swartzlander) Date: Wed, 03 Dec 2014 16:07:37 -0500 Subject: [openstack-dev] [Manila] Manila project use-cases In-Reply-To: References: <1B324A03-7768-4603-9544-01EF53CA7BBC@koderer.com> <99BBEF13-EE7C-4B0C-9EA5-D9A941D40AC4@koderer.com> Message-ID: <547F7B99.3040109@swartzlander.org> Reply to Valeriy below and to Marc further below... On 12/03/2014 02:39 AM, Valeriy Ponomaryov wrote: > According to (2) - yes, an analog of Cinder's "manage/unmanage" is not > implemented in Manila yet. Manage/unmanage is a feature I'm very interested in seeing in Manila. I suspect it will be harder to get right in Manila than it was for Cinder, however, and more importantly, getting it right will depend a lot on the work that's going on right now to support pools and driver modes. For Manila core it won't actually be that much work, but for individual drivers implementing manage/unmanage can be a huge amount of work, so we should try to define the semantics of manage/unmanage at the project level to strike a good balance between usefulness to administrators and making it practical to implement. > On Wed, Dec 3, 2014 at 9:03 AM, Marc Koderer > wrote: > > Hi Valeriy, > > thanks for the feedback; see my answers below. > > On 02.12.2014 at 16:44, Valeriy Ponomaryov > wrote: > > > Hello Marc, > > > > Here I tried to cover the mentioned use cases with "implemented or > not" notes: > > > > 1) Implemented, but the details of the implementation differ between > share drivers. > > 2) Not clear to me. If you mean the possibility to mount one share > on any number of VMs, then yes.
> > That means you have an existing shared volume in your storage > system and import > it to manila (like cinder manage). I guess this is not implemented > yet. > > 3) Nova is used only in one case - the Generic Driver that uses > Cinder volumes. So it can be said that the Manila interface does > allow use of a "flat" network, and a share driver just needs > an implementation for it. I will assume you mean use of the generic > driver and the possibility to mount shares on machines > other than Nova VMs. - In that case the network architecture should allow > the connection in general. If it is allowed, then there should not > be any problems with mounting on any machine. Just access-allow > operations need to be performed. > > This point was actually a copy from the wiki [1]. I just removed > the Bare-metal point > since for me it doesn't matter whether the infrastructure service > is a Bare-metal machine or not. > > 4) Access can be shared, but it is not as flexible as one could > want. The owner of a share can grant access to all, and if there is > network connectivity between the user and the share host, then the user will > be able to mount it once access has been granted. > > Also a copy from the wiki. > > 5) Manila cannot remove a particular "mount" of a share; it can only > remove access, i.e. the possibility to mount at all. So this looks like it is > not implemented. > > 6) Implemented. > > 7) Not implemented yet. > > 8) No "cloning", but we have a snapshot approach, as for volumes in > cinder. > > Regards > Marc > > > > Regards, > > Valeriy Ponomaryov > > Mirantis > > > > On Tue, Dec 2, 2014 at 4:22 PM, Marc Koderer > wrote: > > Hello Manila Team, > > > > We identified use cases for Manila during an internal workshop > > with our operators. I would like to share them with you and > > update the wiki [1] since it seems to be outdated. > > > > Before that I would like to gather feedback, and you might help me > > with identifying things that aren't implemented yet. > > > > Our list: > > > > 1.)
Create a share and use it in a tenant > > Initial creation of a shared storage volume and assign it > to several VMs > This is the basic use case for Manila and I hope it's obvious that this works. > > 2.) Assign a preexisting share to a VM with Manila > > Import an existing share with data and assign it to several VMs, > in the case of > > migrating existing production services to OpenStack. > Covered above. > > 3.) External consumption of a share > > Accommodate and provide mechanisms for last-mile > consumption of shares by > > consumers of the service that aren't mediated by Nova. > Depending on how you look at this, it either already works or it's out of scope for Manila. Now that we're looking at mount automation we may be more involved in this area, but nothing about Manila prevents the use of shares by something other than nova. > > 4.) Cross-tenant sharing > > Coordinate shares across tenants > As above, this is considered out of scope, but we believe it's easy to make this work with no changes to Manila. > > 5.) Detach a share and don't destroy data (deactivate) > > The share is flagged as inactive and the data is not destroyed, for > later > > usage or in case of legal requirements. > Can't this be achieved by simply removing all access? By default, the shares manila creates are not accessible to anyone. Access must be granted explicitly. > > 6.) Unassign and delete data of a share > > Destroy the entire share with all data and free the space for > further usage. > This is another core feature that already works. > > 7.) Resize share > > Resize an existing and assigned share on the fly. > Similar to manage/unmanage, this is very easy to understand conceptually, but not always easy to implement, due to the vagaries of real storage systems. There are some storage systems that can easily do this (such as NetApp) but others would find it quite challenging. Interestingly, for those that have difficulty resizing shares, resizing larger is often easier than resizing smaller.
Cinder has made the design choice to support expanding volumes but NOT to support shrinking volumes. This is an area where we should consider making the resize feature optional, or at least making the shrinking optional if we decide to support expanding across the board. > > 8.) Copy existing share > > Copy an existing share between different storage technologies > Is this an analog for the cinder migrate feature? Hopefully it's obvious that anyone can copy a share to another share with the "cp -ar" command from a host that's connected to both shares. For copying across technologies, I suspect you can't do much better than this. For copying within the same family of backends, we already have snapshot and create-share-from-snapshot, and we could add optimized migration paths if we did implement a manila-managed migration feature. > > Regards > > Marc > > Deutsche Telekom > > > > [1]: https://wiki.openstack.org/wiki/Manila/usecases > > -- > > Kind Regards > > Valeriy Ponomaryov > > www.mirantis.com > > vponomaryov at mirantis.com From openstack at nemebean.com Wed Dec 3 21:10:30 2014 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 03 Dec 2014 15:10:30 -0600 Subject: [openstack-dev] [oslo] oslo.messaging config option deprecation In-Reply-To: <547F7656.80708@dague.net> References: <547F7656.80708@dague.net> Message-ID: <547F7C46.7010304@nemebean.com> On 12/03/2014 02:45 PM, Sean Dague wrote: > So this - > https://github.com/openstack/oslo.messaging/commit/bcb3b23b8f6e7d01e38fdc031982558711bb7586 > was clearly a violation of our 1 cycle for deprecation of config options. > > I think that should be reverted, an oops release put out to fix it, and > then deprecate for 1.6. +1. That was definitely a no-no. > > If oslo libraries are going to include config options, they have to > follow the same config deprecation, as that's a contract that projects > expose upward. Otherwise we need to rethink the ability for libraries to > use oslo config (which, honestly, is worth rethinking). I don't see how that would help with this sort of thing. Instead of one project mistakenly removing an undeprecated opt, you would have dozens potentially making the same error, which would also then make their available configuration options inconsistent with all of the other projects using oslo.messaging. That way lies madness. From sean at dague.net Wed Dec 3 21:18:41 2014 From: sean at dague.net (Sean Dague) Date: Wed, 03 Dec 2014 16:18:41 -0500 Subject: [openstack-dev] [OpenStack-Dev] Config Options and OSLO libs In-Reply-To: <7B9A1DDC-BD6A-4C6D-A8CF-BD87C69A0A75@gmail.com> References: <7B9A1DDC-BD6A-4C6D-A8CF-BD87C69A0A75@gmail.com> Message-ID: <547F7E31.40209@dague.net> On 12/03/2014 03:58 PM, Morgan Fainberg wrote: > Hi John, > > Let me say first off that I 100% agree with the value of the sample config being in-tree. Keystone has not removed it due to similar feedback I've received.
However, the issue is that *gating* on config changes for all libraries that are included in the sample config is just a process that leads to this frustration / breakage. I have thought about this, and I think the right answer is twofold: > > 1) Immediately stop gating on sample config changes. I know the cinder team uses it as a "did we break some compat" and "are you changing config" check in a patch that could adversely affect deployers/other systems. I don't think you're going to win the "don't change config values in libraries we don't control" (or even controlled by a separate project) argument. It's very hard to release an updated oslo lib, clients, or keystonemiddleware. > > 2) Implement a check (I think I have a way of doing this, I'll run it by Doug Hellman and you on IRC) that programmatically checks *only* for in-tree config values. > > Alternative: a non-voting gate job that says "config has changed" [should be *really* easy to add] so at least you know the config has changed. > > This should likely be something easy to get through the door (either the programmatic one or the simple non-voting job). This, however, needs the infra team's buy-in as acceptable. > > I know that most projects have moved away from gating on this since we now consume a lot of libraries that provide config options that the individual server-projects don't control (it is the reason Keystone doesn't gate explicitly on this). So I think there is a better way. The end game here is you want an up-to-date sample config in your tree. Ok, so as a post-merge step figure out if you need a config change, and if so proposal-bot that back in. Better yet, publish these somewhere on the web so people can browse samples. Maybe even for a few different kinds of configs. Make it part of the release checklist for a milestone that the tool which generates the config is run manually and make sure that the in-tree config is up to date.
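The comparison step of the post-merge job Sean describes could be made insensitive to the comment churn John complains about by stripping comments before diffing. A toy sketch (the function is hypothetical, not an existing tool):

```python
def config_drifted(committed_text, regenerated_text):
    """Compare two sample-config texts while ignoring blank lines and
    comments, so typo/reformat-only changes in libraries don't count."""
    def effective_lines(text):
        return [line.strip() for line in text.splitlines()
                if line.strip() and not line.strip().startswith("#")]
    return effective_lines(committed_text) != effective_lines(regenerated_text)

committed = "[DEFAULT]\nvolume_driver = lvm\n"
comment_churn = "# reworded help text\n[DEFAULT]\nvolume_driver = lvm\n"
real_change = "[DEFAULT]\nvolume_driver = rbd\n"
# Only the real_change comparison would trigger a proposal-bot update.
```

Real sample configs are mostly commented-out defaults, so an actual job would more likely diff the generator's parsed option list than raw text; this only illustrates the filtering idea.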
Which might mean in master it's behind a bit, but at least it will be right for any releases. -Sean > > Just my quick $0.002 on the topic, > --Morgan > >> On Dec 3, 2014, at 12:44 PM, John Griffith wrote: >> >> Hey, >> >> So this is a long running topic, but I want to bring it up again. >> First, YES Cinder is still running a sample.conf. A lot of Operators >> spoke up and provided feedback that this was valuable and they >> objected strongly to taking it away. That being said we're going to >> go the route of removing it from our unit tests and >> generating/publishing periodically outside of tests. >> >> That being said, one of the things that's driving me crazy and >> breaking things on a regular basis is other OpenStack libs having a >> high rate of change of config options. This revolves around things >> like fixing typos in the comments, reformatting of text etc. All of >> these things are good in the long run, but I wonder if we could >> consider batching these sorts of efforts and communicating them? >> >> The other issue that we hit today was a flat out removal of an option >> in the oslo.messaging lib with no deprecation. This patch here [1] >> does a number of things that are probably great in terms of clean up >> and housekeeping, but now that we're all in on shared/common libs I >> think we should be a bit more careful about the changes we make. Also >> to me the commit message doesn't really make it easy for me to search >> git logs to try and figure out what happened when things blew up. >> >> Anyway, just wanted to send a note out asking people to keep in mind >> the impact of conf changes, and a gentle reminder about deprecation >> periods for the removal of options.
>> >> [1]: https://github.com/openstack/oslo.messaging/commit/bcb3b23b8f6e7d01e38fdc031982558711bb7586 > -- Sean Dague http://dague.net From morgan.fainberg at gmail.com Wed Dec 3 21:36:24 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Wed, 3 Dec 2014 13:36:24 -0800 Subject: [openstack-dev] [OpenStack-Dev] Config Options and OSLO libs In-Reply-To: <547F7E31.40209@dague.net> References: <7B9A1DDC-BD6A-4C6D-A8CF-BD87C69A0A75@gmail.com> <547F7E31.40209@dague.net> Message-ID: <690F27D4-B694-49F2-82C2-E766A4C8A1AC@gmail.com> > On Dec 3, 2014, at 1:18 PM, Sean Dague wrote: > > On 12/03/2014 03:58 PM, Morgan Fainberg wrote: >> Hi John, >> >> Let me say first off that I 100% agree with the value of the sample config being in-tree. Keystone has not removed it due to similar feedback I've received. However, the issue is that *gating* on config changes for all libraries that are included in the sample config is just a process that leads to this frustration / breakage. I have thought about this, and I think the right answer is twofold: >> >> 1) Immediately stop gating on sample config changes. I know the cinder team uses it as a "did we break some compat" and "are you changing config" check in a patch that could adversely affect deployers/other systems. I don't think you're going to win the "don't change config values in libraries we don't control" (or even controlled by a separate project) argument. It's very hard to release an updated oslo lib, clients, or keystonemiddleware.
>> >> 2) Implement a check (I think I have a way of doing this, I'll run it by Doug Hellman and you on IRC) that programmatically checks *only* for in-tree config values. >> >> Alternative: a non-voting gate job that says "config has changed" [should be *really* easy to add] so at least you know the config has changed. >> >> This should likely be something easy to get through the door (either the programmatic one or the simple non-voting job). This, however, needs the infra team's buy-in as acceptable. >> >> I know that most projects have moved away from gating on this since we now consume a lot of libraries that provide config options that the individual server-projects don't control (it is the reason Keystone doesn't gate explicitly on this). > > So I think there is a better way. The end game here is you want an up to > date sample config in your tree. > > Ok, so as a post-merge step figure out if you need a config change, and if so > proposal-bot that back in. Better yet, publish these somewhere on the > web so people can browse samples. Maybe even for a few different kinds > of configs. > > Make it part of the release checklist for a milestone that the tool > which generates the config is run manually and make sure that the in-tree > config is up to date. Which might mean in master it's behind a bit, > but at least it will be right for any releases. > This is actually what Keystone does. But last time I talked to John, the cinder team used it for reviewing config changes. I'm happy to be corrected, and I'm a big advocate of "it's part of the release checklist". --Morgan > -Sean > > >> >> Just my quick $0.002 on the topic, >> --Morgan >> >>> On Dec 3, 2014, at 12:44 PM, John Griffith wrote: >>> >>> Hey, >>> >>> So this is a long running topic, but I want to bring it up again. >>> First, YES Cinder is still running a sample.conf. A lot of Operators >>> spoke up and provided feedback that this was valuable and they >>> objected strongly to taking it away.
That being said we're going to >>> go the route of removing it from our unit tests and >>> generating/publishing periodically outside of tests. >>> >>> That being said, one of the things that's driving me crazy and >>> breaking things on a regular basis is other OpenStack libs having a >>> high rate of change of config options. This revolves around things >>> like fixing typos in the comments, reformatting of text etc. All of >>> these things are good in the long run, but I wonder if we could >>> consider batching these sorts of efforts and communicating them? >>> >>> The other issue that we hit today was a flat out removal of an option >>> in the oslo.messaging lib with no deprecation. This patch here [1] >>> does a number of things that are probably great in terms of clean up >>> and housekeeping, but now that we're all in on shared/common libs I >>> think we should be a bit more careful about the changes we make. Also >>> to me the commit message doesn't really make it easy for me to search >>> git logs to try and figure out what happened when things blew up. >>> >>> Anyway, just wanted to send a note out asking people to keep in mind >>> the impact of conf changes, and a gentle reminder about deprecation >>> periods for the removal of options.
>>> >>> [1]: https://github.com/openstack/oslo.messaging/commit/bcb3b23b8f6e7d01e38fdc031982558711bb7586 > -- > Sean Dague > http://dague.net From mbayer at redhat.com Wed Dec 3 22:17:10 2014 From: mbayer at redhat.com (Mike Bayer) Date: Wed, 3 Dec 2014 17:17:10 -0500 Subject: [openstack-dev] [neutron] alembic 0.7.1 will break neutron's "heal" feature which assumes a fixed set of potential autogenerate types In-Reply-To: References: Message-ID: <37EAEDF9-F496-4B56-9CEF-9AA29A4CFA3F@redhat.com> So folks, I had to put Alembic 0.7.1 out as I realized that the "batch" mode was being turned on for autogenerate across the board in 0.7.0, and that was not the plan. So it is now out, and the builds are failing due to https://launchpad.net/bugs/1397796 . There are some nits being discussed on the review https://review.openstack.org/#/c/137989/ , so I'm hoping someone with some Neutron cred can adjust the patch to their liking and get it merged. I'm just the messenger on this. - mike > On Dec 1, 2014, at 5:43 PM, Salvatore Orlando wrote: > > Thanks Mike! > > I've left some comments on the patch. > Just out of curiosity, since now alembic can autogenerate foreign keys, are we able to remove the logic for identifying foreign keys to add/remove [1]?
> > Salvatore > > [1] http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/migration/alembic_migrations/heal_script.py#n205 > > On 1 December 2014 at 20:35, Mike Bayer > wrote: > hey neutron - > > Just an FYI, I've added https://review.openstack.org/#/c/137989/ / https://launchpad.net/bugs/1397796 to refer to an issue in neutron's "heal" script that is going to start failing when I put out Alembic 0.7.1, which is potentially later today / this week. > > The issue is pretty straightforward: Alembic 0.7.1 is adding foreign key autogenerate (and really, could add more types of autogenerate at any time), and as these new commands are revealed within execute_alembic_command(), they are not accounted for, so it fails. I'd recommend folks try to push this one through or otherwise decide how this issue (which should be expected to occur many more times) should be handled. > > Just a heads up in case you start seeing builds failing! > > - mike From harlowja at outlook.com Wed Dec 3 22:21:02 2014 From: harlowja at outlook.com (Joshua Harlow) Date: Wed, 3 Dec 2014 14:21:02 -0800 Subject: [openstack-dev] [oslo] oslo.messaging config option deprecation In-Reply-To: <547F7C46.7010304@nemebean.com> References: <547F7656.80708@dague.net> <547F7C46.7010304@nemebean.com> Message-ID: Ben Nemec wrote: > On 12/03/2014 02:45 PM, Sean Dague wrote: >> So this - >> https://github.com/openstack/oslo.messaging/commit/bcb3b23b8f6e7d01e38fdc031982558711bb7586 >> was clearly a violation of our 1 cycle for deprecation of config options. >> >> I think that should be reverted, an oops release put out to fix it, and >> then deprecate for 1.6. > > +1. That was definitely a no-no. > >> If oslo libraries are going to include config options, they have to >> follow the same config deprecation, as that's a contract that projects >> expose upward. Otherwise we need to rethink the ability for libraries to >> use oslo config (which, honestly, is worth rethinking). > > I don't see how that would help with this sort of thing. Instead of one > project mistakenly removing an undeprecated opt, you would have dozens > potentially making the same error, which would also then make their > available configuration options inconsistent with all of the other > projects using oslo.messaging. That way lies madness. Isn't this the way most other libraries that don't use oslo.config operate? Aka, the API they expose would follow the deprecation strategies that libraries/apps commonly use. It has always been sort of tricky/hidden (IMHO) that the usage of oslo.config is really a part of those libraries' public API.
Example that shows what people might do:

# Using oslo-config as the API
def a(cfg):
    if cfg.b:
        print("hi")

# Using an actual kwarg
def a(b=False):
    if b:
        print("hi")

Now to deprecate the kwarg-using function there needs to be some kind of warning logged when 'b' is provided that recommends the new name (say 'c'); this is similar to what the oslo-config-using API also has to do, but I feel it's less obvious and easier to break, since it isn't clear from the function definition that this is happening; the function still gets just a 'cfg' object, and this just gets worse if you continue passing around the cfg object to other functions/methods/classes... Basically the only way to really know that someone is using 'cfg.b' is by grepping for it (which makes the API very malleable, for better or worse; it also seems to make it harder to use tools like sphinx that can interpret function pydocs to denote functions that have deprecations and such...). IMHO it doesn't seem like the larger pypi/python world has gone mad (aka those projects, apps, and libraries that haven't used oslo.config)? But maybe I missed all the mad people, or I'm in the crowd that is mad and I can't tell ;) To me the inconsistency that you state is a policy/people problem and need not always be addressed by oslo.config (it could just be addressed at, say, review time). I know not everyone agrees with this, but it's useful to think of different ways and at least think about them...
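For the kwarg flavor, the rename Joshua describes ('b' becoming, say, 'c') is usually handled with an explicit warning, which at least makes the deprecation visible in the function body. A sketch using his toy names:

```python
import warnings

_UNSET = object()  # sentinel so "b not passed" differs from "b=False"

def a(c=False, b=_UNSET):
    if b is not _UNSET:
        warnings.warn("'b' is deprecated; use 'c' instead",
                      DeprecationWarning, stacklevel=2)
        c = b  # honor the old name for one deprecation cycle
    return "hi" if c else ""

# Old callers keep working, but now get a pointer at the new name:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = a(b=True)
```

oslo.config's analogous mechanism for options is the deprecated-name support on option definitions; the difference Joshua points at is mainly how discoverable the deprecation is from the function signature versus from grepping for 'cfg.b'.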
From anteaya at anteaya.info Wed Dec 3 22:27:15 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Wed, 03 Dec 2014 17:27:15 -0500 Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: <547F8DC1.5000202@anteaya.info> References: <547F8DC1.5000202@anteaya.info> Message-ID: <547F8E43.1030803@anteaya.info> Whoops, too many mailing lists. -------- Forwarded Message -------- Subject: Re: [OpenStack-Infra] [openstack-dev] [third-party]Time for Additional Meeting for third-party Date: Wed, 03 Dec 2014 17:25:05 -0500 From: Anita Kuno To: openstack-infra at lists.openstack.org On 12/03/2014 03:15 AM, Omri Marcovitch wrote: > Hello Anteaya, > > A meeting between 8:00 - 16:00 UTC would be great (Israel). > > > Thanks > Omri > > -----Original Message----- > From: Joshua Hesketh [mailto:joshua.hesketh at rackspace.com] > Sent: Wednesday, December 03, 2014 9:04 AM > To: He, Yongli; OpenStack Development Mailing List (not for usage questions); openstack-infra at lists.openstack.org > Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party > > Hey, > > 0700 -> 1000 UTC would work for me most weeks fwiw. > > Cheers, > Josh > > Rackspace Australia > > On 12/3/14 11:17 AM, He, Yongli wrote: >> anteaya, >> >> UTC 7:00 to UTC 9:00, or UTC 11:30 to UTC 13:00, is an ideal time for China. >> >> if there is no time slot there, just pick any time between UTC 7:00 and UTC 13:00. (UTC 9:00 to UTC 11:30 is on the road home and dinner.)
>> >> Yongli He >> -----Original Message----- >> From: Anita Kuno [mailto:anteaya at anteaya.info] >> Sent: Tuesday, December 02, 2014 4:07 AM >> To: openstack Development Mailing List; openstack-infra at lists.openstack.org >> Subject: [openstack-dev] [third-party]Time for Additional Meeting for third-party >> >> One of the actions from the Kilo Third-Party CI summit session was to start up an additional meeting for CI operators to participate from non-North American time zones. >> >> Please reply to this email with times/days that would work for you. The current third party meeting is on Mondays at 1800 UTC which works well since Infra meetings are on Tuesdays. If we could find a time that works for Europe and APAC that is also on Monday that would be ideal. >> >> Josh Hesketh has said he will try to be available for these meetings, he is in Australia. >> >> Let's get a sense of what days and timeframes work for those interested and then we can narrow it down and pick a channel. >> >> Thanks everyone, >> Anita. >> Okay first of all thanks to everyone who replied. Again, to clarify, the purpose of this thread has been to find a suitable additional third-party meeting time geared towards folks in EU and APAC. We live on a sphere; there is no time that will suit everyone. It looks like we are converging on 0800 UTC as a time and I am going to suggest Tuesdays. We have very little competition for space at that date + time combination so we can use #openstack-meeting (I have already booked the space on the wiki page). So barring further discussion, see you then! Thanks everyone, Anita.
From asahlin at linux.vnet.ibm.com Wed Dec 3 22:30:19 2014 From: asahlin at linux.vnet.ibm.com (Aaron Sahlin) Date: Wed, 03 Dec 2014 16:30:19 -0600 Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon In-Reply-To: References: Message-ID: <547F8EFB.4030302@linux.vnet.ibm.com> I would be happy with either of the two proposed solutions (both improvements over what we have now). Any thoughts on combining them? Only close if Esc or 'X' is clicked, but also warn them if data was entered. On 12/3/2014 7:21 AM, Rob Cresswell (rcresswe) wrote: > +1 to changing the behaviour to 'static'. Modal inside a modal is > potentially slightly more useful, but looks messy and inconsistent, > which I think outweighs the functionality. > > Rob > > > On 2 Dec 2014, at 12:21, Timur Sufiev > wrote: > >> Hello, Horizoneers and UX-ers! >> >> The default behavior of modals in Horizon (defined in turn by >> Bootstrap defaults) regarding their closing is to simply close the >> modal once the user clicks somewhere outside of it (on the backdrop >> element below and around the modal). This is not very convenient for >> modal forms containing a lot of input - when such a form is closed without >> a warning, all the data the user has already provided is lost. Keeping >> this in mind, I've made a patch [1] changing the default Bootstrap >> 'modal_backdrop' parameter to 'static', which means that forms are >> not closed once the user clicks on the backdrop, while it's still >> possible to close them by pressing 'Esc' or clicking on the 'X' link >> at the top right border of the form. Also the patch [1] allows >> customizing this behavior (between 'true' - the current behavior/'false' - no >> backdrop element/'static') on a per-form basis. >> >> What I didn't know at the moment I was uploading my patch is that >> David Lyle had been working on a similar solution [2] some time ago.
>> It's a bit more elaborate than mine: if the user has already filled >> some inputs in the form, then a confirmation dialog is shown; >> otherwise the form is silently dismissed as happens now. >> >> The whole point of writing about this in the ML is to gather opinions on >> which approach is better: >> * stick to the current behavior; >> * change the default behavior to 'static'; >> * use David's solution with a confirmation dialog (once it'll be >> rebased to the current codebase). >> >> What do you think? >> >> [1] https://review.openstack.org/#/c/113206/ >> [2] https://review.openstack.org/#/c/23037/ >> >> P.S. I remember that I promised to write this email a week ago, but >> better late than never :). >> >> -- >> Timur Sufiev >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From blak111 at gmail.com Wed Dec 3 22:44:57 2014 From: blak111 at gmail.com (Kevin Benton) Date: Wed, 3 Dec 2014 14:44:57 -0800 Subject: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup? In-Reply-To: <87ppc090ne.fsf@metaswitch.com> References: <87ppc090ne.fsf@metaswitch.com> Message-ID: What you are proposing sounds very reasonable. If I understand correctly, the idea is to make Nova just create the TAP device and get it attached to the VM and leave it 'unplugged'. This would work well and might eliminate the need for some drivers. I see no reason to block adding a VIF type that does this. However, there is a good reason that the VIF type for some OVS-based deployments requires this type of setup.
The vSwitches are connected to a central controller using OpenFlow (or OVSDB) which configures forwarding rules/etc. Therefore they don't have any agents running on the compute nodes from the Neutron side to perform the step of getting the interface plugged into the vSwitch in the first place. For this reason, we will still need both types of VIFs. > 1. When the port is created in the Neutron DB, and handled (bound etc.) by the plugin and/or mechanism driver, the TAP device name is already present at that time. This is backwards. The tap device name is derived from the port ID, so the port has already been created in Neutron at that point. It is just unbound. The steps are roughly as follows: Nova calls Neutron for a port; Nova creates/plugs the VIF based on the port; Nova updates the port on Neutron; Neutron binds the port and notifies the agent/plugin/whatever to finish the plumbing; Neutron notifies Nova that the port is active; Nova unfreezes the VM. None of that should be affected by what you are proposing. The only difference is that your Neutron agent would also perform the 'plugging' operation. For your second point, scanning the integration bridge for new ports is currently used, but that's an implementation detail of the reference OVS driver. It doesn't block your work directly since OVS wouldn't use your NOOP VIF type anyway. Cheers, Kevin Benton On Wed, Dec 3, 2014 at 8:08 AM, Neil Jerram wrote: > Hi there. I've been looking into a tricky point, and hope I can succeed > in expressing it clearly here... > > I believe it is the case, even when using a committedly Neutron-based > networking implementation, that Nova is still involved a little bit in > the networking setup logic. Specifically I mean the plug() and unplug() > operations, whose implementations are provided by *VIFDriver classes for > the various possible hypervisors.
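Kevin's point that the TAP name is knowable before the device exists can be illustrated with a small sketch (hypothetical code; the 'tap' prefix plus truncated-port-UUID convention shown here is the commonly used one, but the exact derivation is an implementation detail of the driver):

```python
def tap_device_name(port_id, prefix="tap"):
    # Linux interface names are limited to 15 characters, so only a prefix
    # of the port UUID is used. Because the name is a pure function of the
    # port ID, Neutron can predict it before the device is ever created.
    return prefix + port_id[:11]


port = "3c8f4a2e-9d1b-4e6a-8f2d-0a1b2c3d4e5f"
print(tap_device_name(port))  # tap3c8f4a2e-9d
```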
> > For example, for the libvirt hypervisor, LibvirtGenericVIFDriver > typically implements plug() by calling create_tap_dev() to create the > TAP device, and then plugging into some form of L2 bridge. > > Does this logic actually have to be in Nova? For a Neutron-based > networking implementation, it seems to me that it should also be > possible to do this in a Neutron agent (obviously running on the compute > node concerned), and that - if so - that would be preferable because it > would enable more Neutron-based experimentation without having to modify > any Nova code. > > Specifically, therefore, I wonder if we could/should add a "do-nothing" > value to the set of Nova VIF types (VIF_TYPE_NOOP?), and implement > plug()/unplug() for that value to do nothing at all, leaving all setup > to the Neutron agent? And then hopefully it should never be necessary > to introduce further Nova VIF type support ever again... > > Am I missing something that really makes that not fly? Two possible > objections occur to me, as follows, but I think they're both > surmountable. > > 1. When the port is created in the Neutron DB, and handled (bound etc.) > by the plugin and/or mechanism driver, the TAP device name is already > present at that time. > > I think this is still OK because Neutron knows anyway what the TAP > device name _will_ be, even if the actual TAP device hasn't been > created yet. > > 2. With some agent implementations, there isn't a direct instruction, > from the plugin to the agent, to say "now look after this VM / > port". Instead the agent polls the OS for new TAP devices > appearing. Clearly, then, if there isn't something other than the > agent that creates the TAP device, any logic in the agent will never > be triggered. > > This is certainly a problem.
For new networking experimentation, > however, we can write agent code that is directly instructed by the > plugin, and hence (a) doesn't need to poll (b) doesn't require the > TAP device to have been previously created by Nova - which I'd argue > is preferable. > > Thoughts? > > (FYI my context is that I've been working on a networking implementation > where the TAP device to/from a VM should _not_ be plugged into a bridge > - and for that I've had to make a Nova change even though my team's aim > was to do the whole thing in Neutron. > > I've proposed a spec for the Nova change that plugs a TAP interface > without bridging it (https://review.openstack.org/#/c/130732/), but that > set me wondering about this wider question of whether such Nova changes > should still be necessary...) > > Many thanks, > Neil > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Kevin Benton -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorlando at nicira.com Wed Dec 3 22:52:28 2014 From: sorlando at nicira.com (Salvatore Orlando) Date: Wed, 3 Dec 2014 23:52:28 +0100 Subject: [openstack-dev] [neutron] alembic 0.7.1 will break neutron's "heal" feature which assumes a fixed set of potential autogenerate types In-Reply-To: <37EAEDF9-F496-4B56-9CEF-9AA29A4CFA3F@redhat.com> References: <37EAEDF9-F496-4B56-9CEF-9AA29A4CFA3F@redhat.com> Message-ID: Thanks Mike, I already gave your patch a +2. When the gate is broken there's no time for nitpicking - those can come with a follow-up patch. The patch is now spinning again against Jenkins checks. As soon as it's done we'll send it through the gate - and hopefully get a promotion for it. Salvatore On 3 December 2014 at 23:17, Mike Bayer wrote: > So folks, I had to put Alembic 0.7.1 out as I realized that the 'batch'
> mode was being turned on for autogenerate across the board in 0.7.0, and > that was not the plan. > > So it is now out, and the builds are failing due to > https://launchpad.net/bugs/1397796. > > There's some nits happening on the review > https://review.openstack.org/#/c/137989/, so I'm hoping someone with some > Neutron cred adjusts the patch to their liking and gets it merged. I'm > just the messenger on this. > > - mike > > > > On Dec 1, 2014, at 5:43 PM, Salvatore Orlando wrote: > > Thanks Mike! > > I've left some comments on the patch. > Just out of curiosity, since now alembic can autogenerate foreign keys, > will we be able to remove the logic for identifying foreign keys to > add/remove [1]? > > Salvatore > > [1] > http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/migration/alembic_migrations/heal_script.py#n205 > > > On 1 December 2014 at 20:35, Mike Bayer wrote: > >> hey neutron - >> >> Just an FYI, I've added https://review.openstack.org/#/c/137989/ / >> https://launchpad.net/bugs/1397796 to refer to an issue in neutron's >> 'heal' script that is going to start failing when I put out Alembic 0.7.1, >> which is potentially later today / this week. >> >> The issue is pretty straightforward: Alembic 0.7.1 is adding foreign key >> autogenerate (and really, could add more types of autogenerate at any >> time), and as these new commands are revealed within >> execute_alembic_command(), they are not accounted for, so it fails. I'd >> recommend folks try to push this one through or otherwise decide how this >> issue (which should be expected to occur many more times) should be handled. >> >> Just a heads up in case you start seeing builds failing!
>> >> - mike >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.clevenger at RACKSPACE.COM Wed Dec 3 23:02:19 2014 From: ryan.clevenger at RACKSPACE.COM (Ryan Clevenger) Date: Wed, 3 Dec 2014 23:02:19 +0000 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration Message-ID: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> Hi, At Rackspace, we have a need to create a higher-level networking service, primarily for the purpose of creating a Floating IP solution in our environment. The current solutions for Floating IPs, being tied to plugin implementations, do not meet our needs at scale, for the following reasons:

1. Limited endpoint H/A, mainly targeting failover only and not multi-active endpoints.
2. Lack of noisy-neighbor and DDoS mitigation.
3. IP fragmentation (with cells, public connectivity is terminated inside each cell, leading to fragmentation and IP stranding when cell CPU/memory use doesn't line up with allocated IP blocks. Abstracting public connectivity away from nova installations allows for much more efficient use of those precious IPv4 blocks).
4. Diversity in transit (multiple encapsulation and transit types on a per-floating-IP basis).

We realize that network infrastructures are often unique and such a solution would likely diverge from provider to provider.
However, we would love to collaborate with the community to see if such a project could be built that would meet the needs of providers at scale. We believe that, at its core, this solution would boil down to terminating north<->south traffic temporarily at a massively horizontally scalable centralized core and then encapsulating traffic east<->west to a specific host based on the association set up via the current L3 router extension's 'floatingips' resource. Our current idea involves using Open vSwitch for header rewriting and tunnel encapsulation, combined with a set of Ryu applications for management: https://i.imgur.com/bivSdcC.png The Ryu application uses Ryu's BGP support to announce up to the Public Routing layer individual floating IPs (/32's or /128's) which are then summarized and announced to the rest of the datacenter. If a particular floating IP is experiencing unusually large traffic (DDoS, Slashdot effect, etc.), the Ryu application could change the announcements up to the Public layer to shift that traffic to dedicated hosts set up for that purpose. It also announces a single /32 "Tunnel Endpoint" IP downstream to the TunnelNet Routing system which provides transit to and from the cells and their hypervisors. Since traffic from either direction can then end up on any of the FLIP hosts, a simple flow table to modify the MAC and IP in either the SRC or DST fields (depending on traffic direction) allows the system to be completely stateless. We have proven this out (with static routing and flows) to work reliably in a small lab setup. On the hypervisor side, we currently plumb networks into separate OVS bridges. Another Ryu application would control the bridge that handles overlay networking to selectively divert traffic destined for the default gateway up to the FLIP NAT systems, taking into account any configured logical routing, and let local L2 traffic pass out into the existing overlay fabric undisturbed.
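The stateless rewrite described above can be sketched in a few lines (a hypothetical illustration with made-up addresses, not Rackspace's implementation): because the floating-IP association is static configuration rather than per-connection state, any FLIP host can rewrite any packet.

```python
# floating_ip -> (vm_mac, vm_fixed_ip); assumed to come from the L3
# extension's 'floatingips' resource mentioned above.
FLIP_TABLE = {
    "203.0.113.10": ("fa:16:3e:00:00:01", "10.0.0.5"),
}


def rewrite(pkt, direction):
    """Rewrite DST fields on ingress (north->south), SRC on egress."""
    if direction == "ingress":
        mac, fixed_ip = FLIP_TABLE[pkt["dst_ip"]]
        return dict(pkt, dst_mac=mac, dst_ip=fixed_ip)
    fixed_to_flip = {fixed: flip for flip, (_, fixed) in FLIP_TABLE.items()}
    return dict(pkt, src_ip=fixed_to_flip[pkt["src_ip"]])
```

Since the lookup table is identical on every FLIP host, no connection tracking is required, which is what makes the multi-active (rather than failover-only) design possible.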
Adding in support for L2VPN EVPN (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to the Ryu BGP speaker will allow the hypervisor-side Ryu application to advertise reachability information up to the FLIP system, taking into account VM failover, live migration, and supported encapsulation types. We believe that decoupling the tunnel endpoint discovery from the control plane (Nova/Neutron) will provide for a more robust solution as well as allow for use outside of OpenStack if desired. ________________________________________ Ryan Clevenger Manager, Cloud Engineering - US m: 678.548.7261 e: ryan.clevenger at rackspace.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikal at stillhq.com Wed Dec 3 23:16:07 2014 From: mikal at stillhq.com (Michael Still) Date: Thu, 4 Dec 2014 10:16:07 +1100 Subject: [openstack-dev] [Compute][Nova] Mid-cycle meetup details for kilo In-Reply-To: References: Message-ID: I've just created the signup page for this event. It's here: https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-14767182039 Cheers, Michael On Wed, Oct 15, 2014 at 3:45 PM, Michael Still wrote: > Hi. > > I am pleased to announce details for the Kilo Compute mid-cycle > meetup, but first some background about how we got here. > > Two companies actively involved in OpenStack came forward with offers > to host the Compute meetup. However, one of those companies has > gracefully decided to wait until the L release because of the cold > conditions at their proposed location (think several feet of snow). > > So instead, we're left with California! > > The mid-cycle meetup will be from 26 to 28 January 2015, at the VMware > offices in Palo Alto, California. > > Thanks to VMware for stepping up and offering to host. It sure does > make my life easy.
> > More details will be forthcoming closer to the event, but I wanted to > give people as much notice as possible about dates and location so > they can start negotiating travel if they want to come. > > Cheers, > Michael > > -- > Rackspace Australia -- Rackspace Australia From mikal at stillhq.com Wed Dec 3 23:18:07 2014 From: mikal at stillhq.com (Michael Still) Date: Thu, 4 Dec 2014 10:18:07 +1100 Subject: [openstack-dev] [Compute][Nova] Mid-cycle meetup details for kilo In-Reply-To: References: Message-ID: Sigh, sorry. It is of course the Kilo meetup: https://www.eventbrite.com.au/e/openstack-nova-kilo-mid-cycle-developer-meetup-tickets-14767182039 Michael On Thu, Dec 4, 2014 at 10:16 AM, Michael Still wrote: > I've just created the signup page for this event. Its here: > > https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-14767182039 > > Cheers, > Michael > > On Wed, Oct 15, 2014 at 3:45 PM, Michael Still wrote: >> Hi. >> >> I am pleased to announce details for the Kilo Compute mid-cycle >> meetup, but first some background about how we got here. >> >> Two companies actively involved in OpenStack came forward with offers >> to host the Compute meetup. However, one of those companies has >> gracefully decided to wait until the L release because of the cold >> conditions are their proposed location (think several feet of snow). >> >> So instead, we're left with California! >> >> The mid-cycle meetup will be from 26 to 28 January 2015, at the VMWare >> offices in Palo Alto California. >> >> Thanks for VMWare for stepping up and offering to host. It sure does >> make my life easy. >> >> More details will be forthcoming closer to the event, but I wanted to >> give people as much notice as possible about dates and location so >> they can start negotiating travel if they want to come. 
>> >> Cheers, >> Michael >> >> -- >> Rackspace Australia > > > > -- > Rackspace Australia -- Rackspace Australia From sukhdevkapur at gmail.com Wed Dec 3 23:24:41 2014 From: sukhdevkapur at gmail.com (Sukhdev Kapur) Date: Wed, 3 Dec 2014 15:24:41 -0800 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: Congratulations Henry and Kevin. It has always been a pleasure working with you guys..... If I may express my opinion, Bob's contribution to ML2 has been quite substantial. The kind of stability ML2 has achieved makes a statement of his dedication to this work. I have worked very closely with Bob on several issues, co-chaired the ML2 subteam with him, and have developed tremendous respect for his dedication. Reading his email reply makes me believe he wants to continue to contribute as a core developer. Therefore, I would like to take this opportunity to appeal to the core team to consider granting him his wish - i.e. vote -1 on his removal. regards.. -Sukhdev On Wed, Dec 3, 2014 at 11:48 AM, Edgar Magana wrote: > I give +2 to Henry and Kevin. So, Congratulations Folks! > I have been working with both of them and great quality reviews are always > coming out from them. > > Many thanks to Nachi and Bob for their hard work! > > Edgar > > On 12/2/14, 7:59 AM, "Kyle Mestery" wrote: > > >Now that we're in the thick of working hard on Kilo deliverables, I'd > >like to make some changes to the neutron core team. Reviews are the > >most important part of being a core reviewer, so we need to ensure > >cores are doing reviews. The stats for the 180 day period [1] indicate > >some changes are needed for cores who are no longer reviewing. > > > >First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from > >neutron-core. Bob and Nachi have been core members for a while now. > >They have contributed to Neutron over the years in reviews, code and > >leading sub-teams.
I'd like to thank them for all that they have done > >over the years. I'd also like to propose that, should they start > >reviewing more going forward, the core team looks to fast-track them > >back into neutron-core. But for now, their review stats place them > >below the rest of the team for 180 days. > > > >As part of the changes, I'd also like to propose two new members to > >neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have > >been very active in reviews, meetings, and code for a while now. Henry > >led the DB team which fixed Neutron DB migrations during Juno. Kevin > >has been actively working across all of Neutron; he's done some great > >work on security fixes and stability fixes in particular. Their > >comments in reviews are insightful and they have helped to onboard new > >reviewers and taken the time to work with people on their patches. > > > >Existing neutron cores, please vote +1/-1 for the addition of Henry > >and Kevin to the core team. > > > >Thanks! > >Kyle > > > >[1] http://stackalytics.com/report/contribution/neutron-group/180 > > > >_______________________________________________ > >OpenStack-dev mailing list > >OpenStack-dev at lists.openstack.org > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From john.griffith8 at gmail.com Wed Dec 3 23:28:14 2014 From: john.griffith8 at gmail.com (John Griffith) Date: Wed, 3 Dec 2014 16:28:14 -0700 Subject: [openstack-dev] [OpenStack-Dev] Config Options and OSLO libs In-Reply-To: <547F7E31.40209@dague.net> References: <7B9A1DDC-BD6A-4C6D-A8CF-BD87C69A0A75@gmail.com> <547F7E31.40209@dague.net> Message-ID: On Wed, Dec 3, 2014 at 2:18 PM, Sean Dague wrote: > On 12/03/2014 03:58 PM, Morgan Fainberg wrote: >> Hi John, >> >> Let me say first off that I 100% agree with the value of the sample config being in-tree. Keystone has not removed it due to similar feedback I've received. However, the issue is that *gating* on config changes for all libraries that are included in the sample config is just a process that leads to this frustration / breakage. I have thought about this, and I think the right answer is twofold: >> >> 1) Immediately stop gating on sample config changes. I know the cinder team uses it as a "did we break some compat" and "are you changing config" check in a patch that could adversely affect deployers/other systems. I don't think you're going to win the "don't change config values in libraries we don't control" (or even ones controlled by a separate project) argument. It's very hard to release an updated oslo lib, clients, or keystonemiddleware. >> >> 2) Implement a check (I think I have a way of doing this, I'll run it by Doug Hellmann and you on IRC) that is programmatically checking *only* for in-tree config values. >> >> Alternative: A non-voting gate job that says "config has changed" [should be *really* easy to add] so at least you know the config has changed. >> >> This should likely be something easy to get through the door (either the programmatic one or the simple non-voting job). This, however, needs the infra team buy-in as acceptable.
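Morgan's "in-tree only" check could look roughly like the following (a regex-based sketch with hypothetical helper names; a real implementation would more likely introspect the oslo.config option registry than grep source):

```python
import re
from pathlib import Path

# Matches option registrations such as cfg.StrOpt('volume_driver', ...)
OPT_RE = re.compile(r"cfg\.\w+Opt\(\s*['\"](\w+)['\"]")


def in_tree_options(source_dir):
    """Collect option names registered in the project's own source tree."""
    names = set()
    for path in Path(source_dir).rglob("*.py"):
        names.update(OPT_RE.findall(path.read_text()))
    return names


def unexpected_sample_entries(sample_text, tree_opts):
    """Sample-config entries that no in-tree registration accounts for.

    Options pulled in from external libraries never appear in tree_opts,
    so they are simply ignored rather than gated on.
    """
    sample = set(re.findall(r"^#?\s*(\w+)\s*=", sample_text, re.M))
    return sample - tree_opts
```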
>> >> I know that most projects have moved away from gating on this since we now consume a lot of libraries that provide config options that the individual server projects don't control (it is the reason Keystone doesn't gate explicitly on this). > So I think there is a better way. The end game here is you want an up to > date sample config in your tree. > > Ok, so as a post-merge step figure out if you need a config change, and if so > proposal-bot that back in. Better yet, publish these somewhere on the > web so people can browse samples. Maybe even for a few different kinds > of configs. The above is what we're going to do (in conjunction with a milestone check); thanks for the great input, Sean and Morgan. > > Make it part of the release checklist for a milestone that the tool > which generates the config is run manually, and make sure that the in-tree > config is up to date. Which might mean in master it's behind a bit, > but at least it will be right for any releases. > > -Sean > > >> >> Just my quick $0.02 on the topic, >> --Morgan >> >>> On Dec 3, 2014, at 12:44 PM, John Griffith wrote: >>> >>> Hey, >>> >>> So this is a long-running topic, but I want to bring it up again. >>> First, YES, Cinder is still running a sample.conf. A lot of operators >>> spoke up and provided feedback that this was valuable and they >>> objected strongly to taking it away. That being said, we're going to >>> go the route of removing it from our unit tests and >>> generating/publishing it periodically outside of tests. >>> >>> That being said, one of the things that's driving me crazy and >>> breaking things on a regular basis is other OpenStack libs having a >>> high rate of change of config options. This revolves around things >>> like fixing typos in the comments, reformatting of text, etc. All of >>> these things are good in the long run, but I wonder if we could >>> consider batching these sorts of efforts and communicating them?
>>> The other issue that we hit today was a flat-out removal of an option >>> in the oslo.messaging lib with no deprecation. This patch here [1] >>> does a number of things that are probably great in terms of cleanup >>> and housekeeping, but now that we're all in on shared/common libs I >>> think we should be a bit more careful about the changes we make. Also, >>> to me the commit message doesn't really make it easy to search >>> git logs to try and figure out what happened when things blew up. >>> >>> Anyway, just wanted to send a note out asking people to keep in mind >>> the impact of conf changes, and a gentle reminder about deprecation >>> periods for the removal of options. >>> >>> [1]: https://github.com/openstack/oslo.messaging/commit/bcb3b23b8f6e7d01e38fdc031982558711bb7586 >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -- > Sean Dague > http://dague.net > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jgb at bitergia.com Wed Dec 3 23:42:09 2014 From: jgb at bitergia.com (Jesus M. Gonzalez-Barahona) Date: Thu, 04 Dec 2014 00:42:09 +0100 Subject: [openstack-dev] How to get openstack bugs data for research?
In-Reply-To: <2014120320204186973010@gmail.com> References: <2014120320204186973010@gmail.com> Message-ID: <1417650129.19489.14.camel@bitergia.com> You can download a database dump with (hopefully) information for all Launchpad tickets corresponding to OpenStack, already organized and ready to be queried: http://activity.openstack.org/dash/browser/data/db/tickets.mysql.7z (linked from http://activity.openstack.org/dash/browser/data_sources.html ) It is a 7z-zipped file which includes a MySQL dump of the actual database, produced by Bicho (one of the tools in MetricsGrimoire). It is updated daily. You can just feed it to MySQL and start to do your queries. There is some information about Bicho and the database schema used at https://github.com/MetricsGrimoire/Bicho/wiki https://github.com/MetricsGrimoire/Bicho/wiki/Database-Schema Please, let me know if you need further info. Saludos, Jesus. On Wed, 2014-12-03 at 20:20 +0800, zfx0906 at gmail.com wrote: > Hi, all > > > I am a graduate student at Peking University; our lab does some research > on open source projects. > This is our introduction: https://passion-lab.org/ > > > Now we need openstack issues data for research. I found the issues > list: https://bugs.launchpad.net/openstack/ > I want to download the openstack issues data. Could anyone tell me how > to download the data? Or is there some link or API for downloading the > data? > > > And I found 9464 bugs in https://bugs.launchpad.net/openstack/ - is > this all? Why so few?
> > Many thanks! > > Best regards, > Feixue, Zhang > > > > ______________________________________________________________________ > zfx0906 at gmail.com > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Bitergia: http://bitergia.com /me at Twitter: https://twitter.com/jgbarah From dprince at redhat.com Thu Dec 4 02:35:15 2014 From: dprince at redhat.com (Dan Prince) Date: Wed, 03 Dec 2014 21:35:15 -0500 Subject: [openstack-dev] [tripleo] Managing no-mergepy template duplication In-Reply-To: <20141203101059.GA23017@t430slt.redhat.com> References: <20141203101059.GA23017@t430slt.redhat.com> Message-ID: <1417660515.836.3.camel@dovetail.localdomain> On Wed, 2014-12-03 at 10:11 +0000, Steven Hardy wrote: > Hi all, > > Lately I've been spending more time looking at tripleo and doing some > reviews. I'm particularly interested in helping the no-mergepy and > subsequent puppet-software-config implementations mature (as well as > improving overcloud updates via heat). > > Since Tomas's patch landed[1] to enable --no-mergepy in > tripleo-heat-templates, it's become apparent that frequently patches are > submitted which only update overcloud-source.yaml, so I've been trying to > catch these and ask for a corresponding change to e.g. controller.yaml. > > This raises the following questions: > > 1. Is it reasonable to -1 a patch and ask folks to update in both places? Yes! In fact until we abandon merge.py we shouldn't land anything that doesn't make the change in both places. Probably more important to make sure things go into the new (no-mergepy) templates though. > 2. How are we going to handle this duplication and divergence? Move as quickly as possible to the new without-mergepy variants? That is my vote anyways. > 3. What's the status of getting gating CI on the --no-mergepy templates?
Devtest already supports it by simply setting an option (which sets an ENV variable). Just need to update tripleo-ci to do that and then make the switch. > 4. What barriers exist (now that I've implemented[2] the eliding functionality > requested[3] for ResourceGroup) to moving to the --no-mergepy > implementation by default? None that I know of. > > Thanks for any clarification you can provide! :) > > Steve > > [1] https://review.openstack.org/#/c/123100/ > [2] https://review.openstack.org/#/c/128365/ > [3] https://review.openstack.org/#/c/123713/ > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From clint at fewbar.com Thu Dec 4 02:54:48 2014 From: clint at fewbar.com (Clint Byrum) Date: Wed, 03 Dec 2014 18:54:48 -0800 Subject: [openstack-dev] [tripleo] Managing no-mergepy template duplication In-Reply-To: <1417660515.836.3.camel@dovetail.localdomain> References: <20141203101059.GA23017@t430slt.redhat.com> <1417660515.836.3.camel@dovetail.localdomain> Message-ID: <1417661497-sup-1964@fewbar.com> Excerpts from Dan Prince's message of 2014-12-03 18:35:15 -0800: > On Wed, 2014-12-03 at 10:11 +0000, Steven Hardy wrote: > > Hi all, > > > > Lately I've been spending more time looking at tripleo and doing some > > reviews. I'm particularly interested in helping the no-mergepy and > > subsequent puppet-software-config implementations mature (as well as > > improving overcloud updates via heat). > > > > Since Tomas's patch landed[1] to enable --no-mergepy in > > tripleo-heat-templates, it's become apparent that frequently patches are > > submitted which only update overcloud-source.yaml, so I've been trying to > > catch these and ask for a corresponding change to e.g controller.yaml. > > > > This raises the following questions: > > > > 1. Is it reasonable to -1 a patch and ask folks to update in both places? > > Yes! 
In fact until we abandon merge.py we shouldn't land anything that > doesn't make the change in both places. Probably more important to make > sure things go into the new (no-mergepy) templates though. > > > 2. How are we going to handle this duplication and divergence? > > Move as quickly as possible to the new without-mergepy varients? That is > my vote anyways. > > > 3. What's the status of getting gating CI on the --no-mergepy templates? > > Devtest already supports it by simply setting an option (which sets an > ENV variable). Just need to update tripleo-ci to do that and then make > the switch. > > > 4. What barriers exist (now that I've implemented[2] the eliding functionality > > requested[3] for ResourceGroup) to moving to the --no-mergepy > > implementation by default? > > None that I know of. > I concur with Dan. Elide was the last reason not to use this. One thing to consider is that there is no actual upgrade path from non-autoscaling-group based clouds, to auto-scaling-group based templates. We should consider how we'll do that before making it the default. So, I suggest we discuss possible upgrade paths and then move forward with switching one of the CI jobs to using the new templates. From nikhil.komawar at RACKSPACE.COM Thu Dec 4 02:56:10 2014 From: nikhil.komawar at RACKSPACE.COM (Nikhil Komawar) Date: Thu, 4 Dec 2014 02:56:10 +0000 Subject: [openstack-dev] [glance] Deprecating osprofiler option 'enabled' in favour of 'profiler_enabled' In-Reply-To: References: , Message-ID: <0FBF5631AB7B504D89C7E6929695B6249302849F@ORD1EXD02.RACKSPACE.CORP> Don't care either way, let's be consistent with other projects and raise this concern in next weekly cross-project meeting [1] to see what "all" of the projects mutually agree on. If there is no consensus, let's stick to what we have. @Louis: Can you please add that to the agenda of the cross-project meeting? 
[1] https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting Thanks, -Nikhil ________________________________________ From: Ian Cordasco [ian.cordasco at RACKSPACE.COM] Sent: Tuesday, December 02, 2014 10:32 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [glance] Deprecating osprofiler option 'enabled' in favour of 'profiler_enabled' Except for the fact that the person who implemented this was told to change the option name in other projects because it conflicted with a different option. We can keep this if we're worried about being too obvious (to the point of becoming the Department of Redundancy Department) with our naming. I don't think other projects will be very happy having to change their naming especially if the original name was already a problem. On 12/2/14, 06:12, "Zhi Yan Liu" wrote: >I totally agreed to make it to be consistent cross all projects, so I >propose to change other projects. > >But I think keeping it as-it is clear enough for both developer and >operator/configuration, for example: > >[profiler] >enable = True > >instead of: > >[profiler] >profiler_enable = True > >Tbh, the "profiler" prefix is redundant to me still from the >perspective of operator/configuration. > >zhiyan > > >On Tue, Dec 2, 2014 at 7:44 PM, Louis Taylor wrote: >> On Tue, Dec 02, 2014 at 12:16:44PM +0800, Zhi Yan Liu wrote: >>> Why not change other services instead of glance? I see one reason is >>> "glance is the only one service use this option name", but to me one >>> reason to keep it as-it in glance is that original name makes more >>> sense due to the option already under "profiler" group, adding >>> "profiler" prefix to it is really redundant, imo, and in other >>> existing config group there's no one go this naming way. 
Then in the >>> code we can just use a clear way: >>> >>> CONF.profiler.enabled >>> >>> instead of: >>> >>> CONF.profiler.profiler_enabled >>> >>> thanks, >>> zhiyan >> >> I agree this looks nicer in the code. However, the primary consumer of >>this >> option is someone editing it in the configuration files. In this case, I >> believe having something more verbose and consistent is better than the >>Glance >> code being slightly more elegant. >> >> One name or the other doesn't make all that much difference, but >>consistency in >> how we turn osprofiler on and off across projects would be best. >> >> - Louis >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From samsong8610 at gmail.com Thu Dec 4 04:11:14 2014 From: samsong8610 at gmail.com (sam song) Date: Thu, 04 Dec 2014 12:11:14 +0800 Subject: [openstack-dev] [Ceilometer]Unit test failed on branch stable/icehouse In-Reply-To: References: <547ED93B.8070407@gmail.com> Message-ID: <547FDEE2.4090602@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Dina, As you say, it was caused by legacy .pyc files. I removed them all and all tests passed. 
Thanks Sam On 12/03/2014 05:47 PM, Dina Belova wrote: > have unit tests failing in the new just cloned reposi -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (GNU/Linux) iQEcBAEBAgAGBQJUf97iAAoJEPDWvv0r0XeRovYH/3/N3XYMeXBT+EEZ25D4xy3N 7JMZzJhR01isuPL3koY6pMoP8hBSfSd0Fb7fD5fIZiUcuwzLvsUA2rl2CvBnb9HK TZMhasLGUdfWtvIz01NAO8/8kMN22e/yKa4vrtdeVaxEvVTE3pzoe6XihSQQDZ4j oiuHjgKIBNERS/5BPSHnYEnIfOhDo+bykEmB6TYn+HfLZmOKozpcx7sXvVKf32Pr EbuOQHBBLQYaIGRAQshqrNIihGFvY10KKfp2t4I1fM1g5DLqLQ3Ud0rb52E8FlQo yPhge/A6H/qIdOTSVi9VpPpWXVP6BdMFkPiRH2MBIgejbmgaTxOeDxMqbOenPQc= =WhO0 -----END PGP SIGNATURE----- From mikal at stillhq.com Thu Dec 4 04:30:46 2014 From: mikal at stillhq.com (Michael Still) Date: Thu, 4 Dec 2014 15:30:46 +1100 Subject: [openstack-dev] [nova] NUMA Cells Message-ID: Hi, so just having read a bunch of the libvirt driver numa code, I have a concern. At first I thought it was a little thing, but I am starting to think it's more of a big deal... We use the term "cells" to describe numa cells. However, that term has a specific meaning in nova, and I worry that overloading the term is confusing. (Yes, I know the numa people had it first, but hey). So, what do people think about trying to move the numa code to use something like "numa cell" or "numacell" based on context? Michael -- Rackspace Australia From steven at wedontsleep.org Thu Dec 4 04:47:19 2014 From: steven at wedontsleep.org (Steve Kowalik) Date: Thu, 04 Dec 2014 15:47:19 +1100 Subject: [openstack-dev] [TripleO] Do we want to remove Nova-bm support? Message-ID: <547FE757.9030701@wedontsleep.org> Hi all, I'm becoming increasingly concerned about all of the code paths in tripleo-incubator that check $USE_IRONIC -eq 0 -- that is, use nova-baremetal rather than Ironic. We have not checked nova-bm support in CI for at least a month, and I'm concerned that parts of it may be slowly bit-rotting. 
I think our documentation is fairly clear that nova-baremetal is deprecated and Ironic is the way forward, and I know it flies in the face of backwards-compatibility, but do we want to bite the bullet and remove nova-bm support? Cheers, -- Steve Oh, in case you got covered in that Repulsion Gel, here's some advice the lab boys gave me: [paper rustling] DO NOT get covered in the Repulsion Gel. - Cave Johnson, CEO of Aperture Science From mriedem at linux.vnet.ibm.com Thu Dec 4 04:57:30 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Wed, 03 Dec 2014 22:57:30 -0600 Subject: [openstack-dev] [nova] global or per-project specific ssl config options, or both? Message-ID: <547FE9BA.1010700@linux.vnet.ibm.com> I've posted this to the 12/4 nova meeting agenda but figured I'd socialize it here also. SSL options - do we make them per-project or global, or both? Neutron and Cinder have config-group-specific SSL options in nova, while Glance has used the oslo sslutils global options since Juno, which was contentious for a time in a separate review in Icehouse [1]. Now [2] wants to break that out for Glance, but we also have a patch [3] for Keystone to use the global oslo SSL options. We should be consistent, but does that require a blueprint now? In the Icehouse patch, markmc suggested using a DictOpt where the default value is the global value, which could be coming from the oslo [ssl] group, and then you could override that with a project-specific key, e.g. cinder, neutron, glance, keystone. 
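The DictOpt fallback markmc suggested can be sketched in a few lines of plain Python. This is only an illustration of the lookup behaviour - the paths, values and the ssl_option() helper below are invented for the example, not real nova or oslo.config code:

```python
# Illustrative sketch of "global default with per-project override".
# GLOBAL_SSL stands in for the oslo [ssl] group; PER_PROJECT_SSL stands
# in for a DictOpt keyed by project name. All values here are invented.

GLOBAL_SSL = {"ca_file": "/etc/ssl/global-ca.pem", "insecure": False}

PER_PROJECT_SSL = {
    "glance": {"ca_file": "/etc/ssl/glance-ca.pem"},
    "neutron": {"insecure": True},
}

def ssl_option(project, key):
    """Return the project-specific value if set, else the global one."""
    return PER_PROJECT_SSL.get(project, {}).get(key, GLOBAL_SSL[key])

print(ssl_option("glance", "ca_file"))   # project override wins
print(ssl_option("cinder", "ca_file"))   # no override, falls back to global
```

The same shape works whether the globals come from the oslo [ssl] group or elsewhere; projects without an entry simply inherit the global values.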
[1] https://review.openstack.org/#/c/84522/ [2] https://review.openstack.org/#/c/131066/ [3] https://review.openstack.org/#/c/124296/ -- Thanks, Matt Riedemann From ayoung at redhat.com Thu Dec 4 05:07:44 2014 From: ayoung at redhat.com (Adam Young) Date: Thu, 04 Dec 2014 00:07:44 -0500 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: <547FEC20.7000308@redhat.com> On 12/03/2014 06:24 PM, Sukhdev Kapur wrote: > Congratulations Henry and Kevin. It has always been pleasure working > with you guys..... > > > If I may express my opinion, Bob's contribution to ML2 has been quite > substantial. The kind of stability ML2 has achieved makes a statement > of his dedication to this work. I have worked very closely with Bob on > several issues and co-chaired ML2-Subteam with him and have developed > tremendous respect for his dedication. > Reading his email reply makes me believe he wants to continue to > contribute as core developer. Therefore, I would like to take an > opportunity to appeal to the core team to consider granting him his > wish - i.e. vote -1 on his removal. If I might venture an outside voice in support of Bob: you don't want to chase away the continuity. Yes, sometimes the day job makes us focus on things other than upstream work for a while, but I would say that you should err on the side of keeping someone that is otherwise still engaged. Especially when that core has been as fundamental on a project as I know Bob to have been on Quantum....er Neutron. > > regards.. > -Sukhdev > > > > > > > On Wed, Dec 3, 2014 at 11:48 AM, Edgar Magana > > wrote: > > I give +2 to Henry and Kevin. So, Congratulations Folks! > I have been working with both of them and great quality reviews > are always > coming out from them. > > Many thanks to Nachi and Bob for their hard work! 
> > Edgar > > On 12/2/14, 7:59 AM, "Kyle Mestery" > wrote: > > >Now that we're in the thick of working hard on Kilo deliverables, I'd > >like to make some changes to the neutron core team. Reviews are the > >most important part of being a core reviewer, so we need to ensure > >cores are doing reviews. The stats for the 180 day period [1] > indicate > >some changes are needed for cores who are no longer reviewing. > > > >First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from > >neutron-core. Bob and Nachi have been core members for a while now. > >They have contributed to Neutron over the years in reviews, code and > >leading sub-teams. I'd like to thank them for all that they have done > >over the years. I'd also like to propose that should they start > >reviewing more going forward the core team looks to fast track them > >back into neutron-core. But for now, their review stats place them > >below the rest of the team for 180 days. > > > >As part of the changes, I'd also like to propose two new members to > >neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin > have > >been very active in reviews, meetings, and code for a while now. > Henry > >lead the DB team which fixed Neutron DB migrations during Juno. Kevin > >has been actively working across all of Neutron, he's done some great > >work on security fixes and stability fixes in particular. Their > >comments in reviews are insightful and they have helped to > onboard new > >reviewers and taken the time to work with people on their patches. > > > >Existing neutron cores, please vote +1/-1 for the addition of Henry > >and Kevin to the core team. > > > >Thanks! 
> >Kyle > > > >[1] http://stackalytics.com/report/contribution/neutron-group/180 > > > >_______________________________________________ > >OpenStack-dev mailing list > >OpenStack-dev at lists.openstack.org > > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Thu Dec 4 05:19:39 2014 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 04 Dec 2014 00:19:39 -0500 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <547C1285.7090909@hp.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> Message-ID: <547FEEEB.3070507@redhat.com> On 01/12/14 02:02, Anant Patil wrote: > On GitHub:https://github.com/anantpatil/heat-convergence-poc I'm trying to review this code at the moment, and finding some stuff I don't understand: https://github.com/anantpatil/heat-convergence-poc/blob/master/heat/engine/stack.py#L911-L916 This appears to loop through all of the resources *prior* to kicking off any actual updates to check if the resource will change. This is impossible to do in general, since a resource may obtain a property value from an attribute of another resource and there is no way to know whether an update to said other resource would cause a change in the attribute value. In addition, no attempt to catch UpdateReplace is made. Although that looks like a simple fix, I'm now worried about the level to which this code has been tested. 
I'm also trying to wrap my head around how resources are cleaned up in dependency order. If I understand correctly, you store in the ResourceGraph table the dependencies between various resource names in the current template (presumably there could also be some left around from previous templates too?). For each resource name there may be a number of rows in the Resource table, each with an incrementing version. As far as I can tell though, there's nowhere that the dependency graph for _previous_ templates is persisted? So if the dependency order changes in the template we have no way of knowing the correct order to clean up in any more? (There's not even a mechanism to associate a resource version with a particular template, which might be one avenue by which to recover the dependencies.) I think this is an important case we need to be able to handle, so I added a scenario to my test framework to exercise it and discovered that my implementation was also buggy. Here's the fix: https://github.com/zaneb/heat-convergence-prototype/commit/786f367210ca0acf9eb22bea78fd9d51941b0e40 > It was difficult, for me personally, to completely understand Zane's PoC > and how it would lay the foundation for aforementioned design goals. It > would be very helpful to have Zane's understanding here. I could > understand that there are ideas like async message passing and notifying > the parent which we also subscribe to. So I guess the thing to note is that there are essentially two parts to my PoC: 1) A simulation framework that takes what will be, in the final implementation, multiple tasks running in parallel in separate processes and talking to a database, and replaces it with an event loop that runs the tasks sequentially in a single process with an in-memory data store. I could have built a more realistic simulator using Celery or something, but I preferred this way as it offers deterministic tests. 2) A toy implementation of Heat on top of this framework. 
The files map roughly to Heat something like this:

converge.engine       -> heat.engine.service
converge.stack        -> heat.engine.stack
converge.resource     -> heat.engine.resource
converge.template     -> heat.engine.template
converge.dependencies -> actually is heat.engine.dependencies
converge.sync_point   -> no equivalent
converge.converger    -> no equivalent (this is the convergence "worker")
converge.reality      -> represents the actual OpenStack services

For convenience, I just use the @asynchronous decorator to turn an ordinary method call into a simulated message. The concept is essentially as follows: At the start of a stack update (creates and deletes are also just updates) we create any new resources in the DB and calculate the dependency graph for the update from the data in the DB and template. This graph is the same one used by updates in Heat currently, so it contains both the forward and reverse (cleanup) dependencies. The stack update then kicks off checks of all the leaf nodes, passing the pre-calculated dependency graph. Each resource check may result in a call to the create(), update() or delete() methods of a Resource plugin. The resource also reads any attributes that will be required from it. Once this is complete, it triggers any dependent resources that are ready, or updates a SyncPoint in the database if there are dependent resources that have multiple requirements. The message triggering the next resource will contain the dependency graph again, as well as the RefIds and required attributes of any resources it depends on. The new dependencies thus created are added to the resource itself in the database at the time it is checked, allowing it to record the changes caused by a requirement being unexpectedly replaced without needing a global lock on anything. When cleaning up resources, we also endeavour to remove any that are successfully deleted from the dependencies graph. 
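The leaf-first check/trigger flow described above can be condensed into a toy single-process sketch. The example graph and all names here are invented for illustration; in the real design each trigger is a message to a worker and each counter is a SyncPoint row in the database, but the counting logic is the same:

```python
# Toy sketch of leaf-first traversal with sync-point counting.
# deps maps each resource to the resources it requires.
from collections import deque

deps = {
    "server": {"port", "volume"},
    "port": {"net"},
    "volume": set(),
    "net": set(),
}

# A "sync point" here is just the count of unsatisfied requirements.
pending = {res: len(req) for res, req in deps.items()}
dependents = {}
for res, req in deps.items():
    for d in req:
        dependents.setdefault(d, set()).add(res)

order = []
ready = deque(res for res, n in pending.items() if n == 0)  # leaf nodes
while ready:
    res = ready.popleft()
    order.append(res)                 # "check" the resource here
    for dep in dependents.get(res, ()):
        pending[dep] -= 1             # update the sync point
        if pending[dep] == 0:         # all requirements now satisfied
            ready.append(dep)         # trigger the dependent resource

print(order)
```

Cleanup runs the same loop over the reversed graph, which is why persisting the old dependency order matters when the template changes.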
Each traversal has a unique ID that is both stored in the stack and passed down through the resource check triggers. (At present this is the template ID, but it may make more sense to have a unique ID since old template IDs can be resurrected in the case of a rollback.) As soon as these fail to match the resource checks stop propagating, so only an update of a single field is required (rather than locking an entire table) before beginning a new stack update. Hopefully that helps a little. Please let me know if you have specific questions. I'm *very* happy to incorporate other ideas into it, since it's pretty quick to change, has tests to check for regressions, and is intended to be thrown away anyhow (so I genuinely don't care if some bits get thrown away earlier than others). > In retrospective, we had to struggle a lot to understand the existing > Heat engine. We couldn't have done justice by just creating another > project in GitHub and without any concrete understanding of existing > state-of-affairs. I completely agree, and you guys did the right thing by starting out looking at Heat. But remember, the valuable thing isn't the code, it's what you learned. My concern is that now that you have Heat pretty well figured out, you won't be able to continue to learn nearly as fast trying to wrestle with the Heat codebase as you could with the simulator. We don't want to fall into the trap of just shipping whatever we have because it's too hard to explore the other options, we want to identify a promising design and iterate it as quickly as possible. cheers, Zane. From nader.lahouti at gmail.com Thu Dec 4 06:12:04 2014 From: nader.lahouti at gmail.com (Nader Lahouti) Date: Wed, 3 Dec 2014 22:12:04 -0800 Subject: [openstack-dev] [Neutron] [Devstack] Devstack quantun-agent vlan error In-Reply-To: References: Message-ID: What do you have for tunnel_types in ml2_conf.ini? 
On Wed, Dec 3, 2014 at 12:31 PM, Daniel Nobusada wrote: > Hi guys, > > I'm using Devstack with vlan on the parameter Q_ML2_TENANT_NETWORK_TYPE > and while stacking, the quantun-agent (q-agt) breaks. I'm running it on an > Ubuntu Server 14.04. > > For more details, here's my local.conf: http://pastebin.com/4scBGtpf. > The error that I recieve is the following: http://pastebin.com/TLq0safe > > Could anyone help me? > > Thanks in advance, > Daniel Nobusada > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sumitnaiksatam at gmail.com Thu Dec 4 07:38:54 2014 From: sumitnaiksatam at gmail.com (Sumit Naiksatam) Date: Wed, 3 Dec 2014 23:38:54 -0800 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: <547FEC20.7000308@redhat.com> References: <547FEC20.7000308@redhat.com> Message-ID: On Wed, Dec 3, 2014 at 9:07 PM, Adam Young wrote: > On 12/03/2014 06:24 PM, Sukhdev Kapur wrote: > > Congratulations Henry and Kevin. It has always been pleasure working with > you guys..... > > > If I may express my opinion, Bob's contribution to ML2 has been quite > substantial. The kind of stability ML2 has achieved makes a statement of his > dedication to this work. I have worked very closely with Bob on several > issues and co-chaired ML2-Subteam with him and have developed tremendous > respect for his dedication. > Reading his email reply makes me believe he wants to continue to contribute > as core developer. Therefore, I would like to take an opportunity to appeal > to the core team to consider granting him his wish - i.e. vote -1 on his > removal. > > If I might venture an outside voice in support of Bob: you don't want to > chase away the continuity. 
Yes, sometimes the day job makes us focus on > things other than upstream work for a while, but I would say that you should > err on the side of keeping someone that is otherwise still engaged. > Especially when that core has been as fundamental on a project as I know Bob > to have been on Quantum....er Neutron. I would definitely echo the above sentiments; Bob has continually made valuable design contributions to ML2 and Neutron that go beyond the review count metric. Kindly consider keeping him as a part of the core team. That said, a big +1 to both, Henry and Kevin, as additions to the core team! Welcome!! Thanks, ~Sumit. > > > > > > > regards.. > -Sukhdev > > > > > > > On Wed, Dec 3, 2014 at 11:48 AM, Edgar Magana > wrote: >> >> I give +2 to Henry and Kevin. So, Congratulations Folks! >> I have been working with both of them and great quality reviews are always >> coming out from them. >> >> Many thanks to Nachi and Bob for their hard work! >> >> Edgar >> >> On 12/2/14, 7:59 AM, "Kyle Mestery" wrote: >> >> >Now that we're in the thick of working hard on Kilo deliverables, I'd >> >like to make some changes to the neutron core team. Reviews are the >> >most important part of being a core reviewer, so we need to ensure >> >cores are doing reviews. The stats for the 180 day period [1] indicate >> >some changes are needed for cores who are no longer reviewing. >> > >> >First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from >> >neutron-core. Bob and Nachi have been core members for a while now. >> >They have contributed to Neutron over the years in reviews, code and >> >leading sub-teams. I'd like to thank them for all that they have done >> >over the years. I'd also like to propose that should they start >> >reviewing more going forward the core team looks to fast track them >> >back into neutron-core. But for now, their review stats place them >> >below the rest of the team for 180 days. 
>> > >> >As part of the changes, I'd also like to propose two new members to >> >neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have >> >been very active in reviews, meetings, and code for a while now. Henry >> >lead the DB team which fixed Neutron DB migrations during Juno. Kevin >> >has been actively working across all of Neutron, he's done some great >> >work on security fixes and stability fixes in particular. Their >> >comments in reviews are insightful and they have helped to onboard new >> >reviewers and taken the time to work with people on their patches. >> > >> >Existing neutron cores, please vote +1/-1 for the addition of Henry >> >and Kevin to the core team. >> > >> >Thanks! >> >Kyle >> > >> >[1] http://stackalytics.com/report/contribution/neutron-group/180 >> > >> >_______________________________________________ >> >OpenStack-dev mailing list >> >OpenStack-dev at lists.openstack.org >> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ndipanov at redhat.com Thu Dec 4 08:25:00 2014 From: ndipanov at redhat.com (=?UTF-8?B?Tmlrb2xhIMSQaXBhbm92?=) Date: Thu, 04 Dec 2014 09:25:00 +0100 Subject: [openstack-dev] [nova] NUMA Cells In-Reply-To: References: Message-ID: <54801A5C.5070006@redhat.com> On 12/04/2014 05:30 AM, Michael Still wrote: > Hi, > > so just having read a bunch of the libvirt driver numa code, I have a > concern. 
At first I thought it was a little thing, but I am starting > to think its more of a big deal... > > We use the term "cells" to describe numa cells. However, that term has > a specific meaning in nova, and I worry that overloading the term is > confusing. > > (Yes, I know the numa people had it first, but hey). > > So, what do people think about trying to move the numa code to use > something like "numa cell" or "numacell" based on context? > Seeing that "node" is also not exactly unambiguous in this space - I am fine with either "numanode" or "numacell", with a slight preference for "numacell". A small issue will be renaming it in objects though - as this will require adding a new field for use in Kilo while still remaining backwards compatible with Juno, resulting in even more compatibility code (we already added some for the slightly different data format). The whole name is quite clear in context there, but we would use it like:

for cell in numa_topology.cells:
    # awesome algo here with cell :(

but if we were to rename it just in places where it's used to:

for numacell in numa_topology.cells:
    # awesome algo here with numacell :)

We would achieve a lot of the disambiguation without renaming object attributes (but not really make it future proof). Thoughts? N. 
From shardy at redhat.com Thu Dec 4 09:09:18 2014 From: shardy at redhat.com (Steven Hardy) Date: Thu, 4 Dec 2014 09:09:18 +0000 Subject: [openstack-dev] [tripleo] Managing no-mergepy template duplication In-Reply-To: <1417661497-sup-1964@fewbar.com> References: <20141203101059.GA23017@t430slt.redhat.com> <1417660515.836.3.camel@dovetail.localdomain> <1417661497-sup-1964@fewbar.com> Message-ID: <20141204090918.GA8949@t430slt.redhat.com> On Wed, Dec 03, 2014 at 06:54:48PM -0800, Clint Byrum wrote: > Excerpts from Dan Prince's message of 2014-12-03 18:35:15 -0800: > > On Wed, 2014-12-03 at 10:11 +0000, Steven Hardy wrote: > > > Hi all, > > > > > > Lately I've been spending more time looking at tripleo and doing some > > > reviews. I'm particularly interested in helping the no-mergepy and > > > subsequent puppet-software-config implementations mature (as well as > > > improving overcloud updates via heat). > > > > > > Since Tomas's patch landed[1] to enable --no-mergepy in > > > tripleo-heat-templates, it's become apparent that frequently patches are > > > submitted which only update overcloud-source.yaml, so I've been trying to > > > catch these and ask for a corresponding change to e.g controller.yaml. > > > > > > This raises the following questions: > > > > > > 1. Is it reasonable to -1 a patch and ask folks to update in both places? > > > > Yes! In fact until we abandon merge.py we shouldn't land anything that > > doesn't make the change in both places. Probably more important to make > > sure things go into the new (no-mergepy) templates though. > > > > > 2. How are we going to handle this duplication and divergence? > > > > Move as quickly as possible to the new without-mergepy varients? That is > > my vote anyways. > > > > > 3. What's the status of getting gating CI on the --no-mergepy templates? > > > > Devtest already supports it by simply setting an option (which sets an > > ENV variable). 
Just need to update tripleo-ci to do that and then make > > the switch. > > > > > 4. What barriers exist (now that I've implemented[2] the eliding functionality > > > requested[3] for ResourceGroup) to moving to the --no-mergepy > > > implementation by default? > > > > None that I know of. > > > > I concur with Dan. Elide was the last reason not to use this. That's great news! :) > One thing to consider is that there is no actual upgrade path from > non-autoscaling-group based clouds, to auto-scaling-group based > templates. We should consider how we'll do that before making it the > default. So, I suggest we discuss possible upgrade paths and then move > forward with switching one of the CI jobs to using the new templates. This is probably going to be really hard :( The sort of pattern which might work is:

1. Abandon the mergepy-based stack
2. Have a helper script to reformat the abandon data into no-mergepy-based adopt data
3. Adopt the stack

Unfortunately there are several abandon/adopt bugs we'll have to fix if we decide this is the way to go (the original author hasn't maintained it, but we can pick up the slack if it's on the critical path for TripleO). An alternative could be the external resource feature Angus is looking at: https://review.openstack.org/#/c/134848/ This would be more limited (we just reference rather than manage the existing resources), but potentially safer. The main risk here is import (or subsequent update) operations becoming destructive and replacing things, but I guess to some extent this is a risk with any change to tripleo-heat-templates. Has any thought been given to upgrade CI testing? I'm thinking grenade or grenade-style testing here, where we test maintaining a deployed overcloud over an upgrade of (some subset of) changes. I know the upgrade testing thing will be hard, but to me it's a key requirement to mature heat-driven updates vs those driven by external tooling. 
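Step 2 of the pattern above could be a small script that rewrites resource names in the abandon output before it is fed back in as adopt data. The payload shape and the name mapping below are invented for illustration - a real helper would derive the mapping from the two template trees and handle the rest of the abandon data:

```python
import json

# Hypothetical mergepy -> no-mergepy resource name mapping (invented).
MERGEPY_TO_NOMERGEPY = {
    "NovaCompute0": "Compute.0.NovaCompute",
    "controller0": "Controller.0.Server",
}

def reshape_abandon_data(abandon_json):
    """Rename resources in abandon data so adopt matches the new templates."""
    data = json.loads(abandon_json)
    data["resources"] = {
        MERGEPY_TO_NOMERGEPY.get(name, name): res
        for name, res in data["resources"].items()
    }
    return json.dumps(data)

# Minimal abandon-style payload, heavily simplified for the sketch.
abandoned = json.dumps({
    "id": "stack-uuid",
    "resources": {"NovaCompute0": {"resource_id": "server-uuid"}},
})
print(reshape_abandon_data(abandoned))
```

The point of the reshaping is that adopt re-associates the existing physical resources (by resource_id) with the new template's logical names instead of recreating them.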
Steve From jp at jamezpolley.com Thu Dec 4 09:11:42 2014 From: jp at jamezpolley.com (James Polley) Date: Thu, 4 Dec 2014 10:11:42 +0100 Subject: [openstack-dev] [TripleO] Meeting purpose Message-ID: Hi all, The other topic that has come up a few times in our meetings is: what value do we get from these meetings? There's no doubt that regular meetings provide value - but that doesn't mean that everything we do in them is valuable. As an example of something that I think doesn't add much value in the meeting - DerekH has already been giving semi-regular CI/CD status reports via email. I'd like to make these weekly update emails regular, and take the update off the meeting agenda. I'm offering to share the load with him to make this easier to achieve. Are there other things on our regular agenda that you feel aren't offering much value? Are there things you'd like to see moved onto, or off, the agenda? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp at jamezpolley.com Thu Dec 4 09:40:36 2014 From: jp at jamezpolley.com (James Polley) Date: Thu, 4 Dec 2014 10:40:36 +0100 Subject: [openstack-dev] [TripleO] Alternate meeting time In-Reply-To: <547DD9F5.6010108@redhat.com> References: <547DD0A2.3030102@redhat.com> <547DD6E1.8070103@redhat.com> <547DD9F5.6010108@redhat.com> Message-ID: Just taking a look at http://doodle.com/27ffgkdm5gxzr654 again - we've had 10 people respond so far. The winning time so far is Monday 2100UTC - 7 "yes" and one "If I have to". ... but the 2 people who can't make that time work in UTC or CET. Finding a time that includes those people rules out people who work in Eastern Australia and New Zealand. Purely in terms of getting the biggest numbers in, Monday 2100UTC seems like the most workable time so far. 
On Tue, Dec 2, 2014 at 4:25 PM, Derek Higgins wrote: > On 02/12/14 15:12, Giulio Fidente wrote: > > On 12/02/2014 03:45 PM, Derek Higgins wrote: > >> On 02/12/14 14:10, James Polley wrote: > >>> Months ago, I pushed for us to alternate meeting times to something > that > >>> was friendlier to me, so we started doing alternate weeks at 0700UTC. > >>> That worked well for me, but wasn't working so well for a few people in > >>> Europe, so we decided to give 0800UTC a try. Then DST changes happened, > >>> and wiki pages got out of sync, and there was confusion about what time > >>> the meeting is at.. > >>> > >>> The alternate meeting hasn't been very well attended for the last ~3 > >>> meetings. Partly I think that's due to summit and travel plans, but it > >>> seems like the 0800UTC time doesn't work very well for quite a few > >>> people. > >>> > >>> So, instead of trying things at random, I've > >>> created > https://etherpad.openstack.org/p/tripleo-alternate-meeting-time > >>> as a starting point for figuring out what meeting time might work well > >>> for the most people. Obviously the world is round, and people have > >>> different schedules, and we're never going to get a meeting time that > >>> works well for everyone - but it'd be nice to try to maximise > attendance > >>> (and minimise inconvenience) as much as we can. > >>> > >>> If you regularly attend, or would like to attend, the meeting, please > >>> take a moment to look at the etherpad to register your vote for which > >>> time works best for you. There's even a section for you to cast your > >>> vote if the UTC1900 meeting (aka the "main" or "US-Friendly" meeting) > >>> works better for you! 
> >> > >> > >> Can I suggest an alternative data gathering method, I've put each hour > >> in a week in a poll, for each slot you have 3 options > > > > I think it is great, but would be even better if we could trim it to > > just a *single* day and once we agreed on the timeframe, we decide on > > the day as that probably won't count much so long as it is a weekday I > > suppose > > I think leaving the whole week in there is better, some people may have > different schedules on different weekdays, me for example ;-) > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mathieu.rohon at gmail.com Thu Dec 4 10:21:45 2014 From: mathieu.rohon at gmail.com (Mathieu Rohon) Date: Thu, 4 Dec 2014 11:21:45 +0100 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: <547FEC20.7000308@redhat.com> Message-ID: On Thu, Dec 4, 2014 at 8:38 AM, Sumit Naiksatam wrote: > On Wed, Dec 3, 2014 at 9:07 PM, Adam Young wrote: >> On 12/03/2014 06:24 PM, Sukhdev Kapur wrote: >> >> Congratulations Henry and Kevin. It has always been pleasure working with >> you guys..... >> >> >> If I may express my opinion, Bob's contribution to ML2 has been quite >> substantial. The kind of stability ML2 has achieved makes a statement of his >> dedication to this work. I have worked very closely with Bob on several >> issues and co-chaired ML2-Subteam with him and have developed tremendous >> respect for his dedication. >> Reading his email reply makes me believe he wants to continue to contribute >> as core developer. Therefore, I would like to take an opportunity to appeal >> to the core team to consider granting him his wish - i.e. vote -1 on his >> removal. 
>> >> If I might venture an outside voice in support of Bob: you don't want to >> chase away the continuity. Yes, sometimes the day job makes us focus on >> things other than upstream work for a while, but I would say that you should >> err on the side of keeping someone that is otherwise still engaged. >> Especially when that core has been as fundamental on a project as I know Bob >> to have been on Quantum....er Neutron. > > I would definitely echo the above sentiments; Bob has continually made > valuable design contributions to ML2 and Neutron that go beyond the > review count metric. Kindly consider keeping him as a part of the core > team. Working with Bob in the ML2 sub team was a real pleasure. He provides good technical and community leadership. His reviews are really valuable, since he always reviews a patch in the context of the overall project and other work in progress. This takes more time. > That said, a big +1 to both, Henry and Kevin, as additions to the core > team! Welcome!! > > Thanks, > ~Sumit. > >> >> >> >> >> >> >> regards.. >> -Sukhdev >> >> >> >> >> >> >> On Wed, Dec 3, 2014 at 11:48 AM, Edgar Magana >> wrote: >>> >>> I give +2 to Henry and Kevin. So, Congratulations Folks! >>> I have been working with both of them and great quality reviews are always >>> coming out from them. >>> >>> Many thanks to Nachi and Bob for their hard work! >>> >>> Edgar >>> >>> On 12/2/14, 7:59 AM, "Kyle Mestery" wrote: >>> >>> >Now that we're in the thick of working hard on Kilo deliverables, I'd >>> >like to make some changes to the neutron core team. Reviews are the >>> >most important part of being a core reviewer, so we need to ensure >>> >cores are doing reviews. The stats for the 180 day period [1] indicate >>> >some changes are needed for cores who are no longer reviewing. >>> > >>> >First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from >>> >neutron-core. Bob and Nachi have been core members for a while now.
>>> >They have contributed to Neutron over the years in reviews, code and >>> >leading sub-teams. I'd like to thank them for all that they have done >>> >over the years. I'd also like to propose that should they start >>> >reviewing more going forward the core team looks to fast track them >>> >back into neutron-core. But for now, their review stats place them >>> >below the rest of the team for 180 days. >>> > >>> >As part of the changes, I'd also like to propose two new members to >>> >neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have >>> >been very active in reviews, meetings, and code for a while now. Henry >>> >lead the DB team which fixed Neutron DB migrations during Juno. Kevin >>> >has been actively working across all of Neutron, he's done some great >>> >work on security fixes and stability fixes in particular. Their >>> >comments in reviews are insightful and they have helped to onboard new >>> >reviewers and taken the time to work with people on their patches. >>> > >>> >Existing neutron cores, please vote +1/-1 for the addition of Henry >>> >and Kevin to the core team. >>> > >>> >Thanks! 
>>> >Kyle >>> > >>> >[1] http://stackalytics.com/report/contribution/neutron-group/180 >>> > >>> >_______________________________________________ >>> >OpenStack-dev mailing list >>> >OpenStack-dev at lists.openstack.org >>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From akilesh1597 at gmail.com Thu Dec 4 10:26:21 2014 From: akilesh1597 at gmail.com (Akilesh K) Date: Thu, 4 Dec 2014 15:56:21 +0530 Subject: [openstack-dev] [neutron][sriov] PciDeviceRequestFailed error Message-ID: Hi, I am using neutron-plugin-sriov-agent. I have configured pci_whitelist in nova.conf I have configured ml2_conf_sriov.ini. But when I launch instance I get the exception in subject. On further checking with the help of some forum messages, I discovered that pci_stats are empty. mysql> select hypervisor_hostname,pci_stats from compute_nodes; +---------------------+-----------+ | hypervisor_hostname | pci_stats | +---------------------+-----------+ | openstack | [] | +---------------------+-----------+ 1 row in set (0.00 sec) Further to this I found that PciDeviceStats.pools is an empty list too. Can anyone tell me what I am missing. 
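For comparison, a minimal Juno-era SR-IOV setup usually involves entries along these lines — the device name, vendor/product ID, and physical network below are placeholders, not values from this environment, and a whitelist entry that matches no actual device is one common reason pci_stats stays empty:

```ini
# nova.conf (compute node) - whitelist the SR-IOV capable device
[DEFAULT]
pci_passthrough_whitelist = {"devname": "eth3", "physical_network": "physnet2"}

# ml2_conf_sriov.ini - VF vendor/product IDs the mechanism driver accepts
[ml2_sriov]
supported_pci_vendor_devs = 8086:10ca
```

The scheduler on the controller also needs PciPassthroughFilter enabled, if I remember correctly, for the PCI request to be satisfiable at all.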
Thank you, Ageeleshwar K -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougal at redhat.com Thu Dec 4 10:33:04 2014 From: dougal at redhat.com (Dougal Matthews) Date: Thu, 4 Dec 2014 05:33:04 -0500 (EST) Subject: [openstack-dev] [TripleO] Alternate meeting time In-Reply-To: References: <547DD0A2.3030102@redhat.com> <547DD6E1.8070103@redhat.com> <547DD9F5.6010108@redhat.com> Message-ID: <1747953760.6493821.1417689184026.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "James Polley" > To: "OpenStack Development Mailing List (not for usage questions)" > Sent: Thursday, 4 December, 2014 9:40:36 AM > Subject: Re: [openstack-dev] [TripleO] Alternate meeting time > > Just taking a look at http://doodle.com/27ffgkdm5gxzr654 again - we've had > 10 people respond so far. The winning time so far is Monday 2100UTC - 7 > "yes" and one "If I have to". > > ... but the 2 people who can't make that time work in UTC or CET. Finding a > time that includes those people rules out people who work in Eastern > Australia and New Zealand. Purely in terms of getting the biggest numbers > in, Monday 2100UTC seems like the most workable time so far. Yeah, one of those people is me. I realise I'm not that flexible, so it's fine if I can't make it. However, I'm still generally +1 to having alternating meeting times. Dougal From marios at redhat.com Thu Dec 4 10:40:23 2014 From: marios at redhat.com (marios) Date: Thu, 04 Dec 2014 12:40:23 +0200 Subject: [openstack-dev] [TripleO] Alternate meeting time In-Reply-To: References: <547DD0A2.3030102@redhat.com> <547DD6E1.8070103@redhat.com> <547DD9F5.6010108@redhat.com> Message-ID: <54803A17.3040403@redhat.com> On 04/12/14 11:40, James Polley wrote: > Just taking a look at http://doodle.com/27ffgkdm5gxzr654 again - we've > had 10 people respond so far. The winning time so far is Monday 2100UTC > - 7 "yes" and one "If I have to". for me it currently shows 1200 UTC as the preferred time. 
So to be clear, we are voting here for the alternate meeting. The 'original' meeting is at 1900UTC. If in fact 2100UTC ends up being the most popular, what would be the point of an alternating meeting that is only 2 hours apart in time? > > ... but the 2 people who can't make that time work in UTC or CET. > Finding a time that includes those people rules out people who work in > Eastern Australia and New Zealand. Purely in terms of getting the > biggest numbers in, Monday 2100UTC seems like the most workable time so far. > > On Tue, Dec 2, 2014 at 4:25 PM, Derek Higgins > wrote: > > On 02/12/14 15:12, Giulio Fidente wrote: > > On 12/02/2014 03:45 PM, Derek Higgins wrote: > >> On 02/12/14 14:10, James Polley wrote: > >>> Months ago, I pushed for us to alternate meeting times to > something that > >>> was friendlier to me, so we started doing alternate weeks at > 0700UTC. > >>> That worked well for me, but wasn't working so well for a few > people in > >>> Europe, so we decided to give 0800UTC a try. Then DST changes > happened, > >>> and wiki pages got out of sync, and there was confusion about > what time > >>> the meeting is at.. > >>> > >>> The alternate meeting hasn't been very well attended for the last ~3 > >>> meetings. Partly I think that's due to summit and travel plans, > but it > >>> seems like the 0800UTC time doesn't work very well for quite a few > >>> people. > >>> > >>> So, instead of trying things at random, I've > >>> created > https://etherpad.openstack.org/p/tripleo-alternate-meeting-time > >>> as a starting point for figuring out what meeting time might > work well > >>> for the most people. Obviously the world is round, and people have > >>> different schedules, and we're never going to get a meeting time > that > >>> works well for everyone - but it'd be nice to try to maximise > attendance > >>> (and minimise inconvenience) as much as we can. 
> >>> > >>> If you regularly attend, or would like to attend, the meeting, > please > >>> take a moment to look at the etherpad to register your vote for > which > >>> time works best for you. There's even a section for you to cast your > >>> vote if the UTC1900 meeting (aka the "main" or "US-Friendly" > meeting) > >>> works better for you! > >> > >> > >> Can I suggest an alternative data gathering method, I've put each > hour > >> in a week in a poll, for each slot you have 3 options > > > > I think it is great, but would be even better if we could trim it to > > just a *single* day and once we agreed on the timeframe, we decide on > > the day as that probably won't count much so long as it is a weekday I > > suppose > > I think leaving the whole week in there is better, some people may have > different schedules on different weekdays, me for example ;-) > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From itzikb at redhat.com Thu Dec 4 10:53:11 2014 From: itzikb at redhat.com (Itzik Brown) Date: Thu, 04 Dec 2014 12:53:11 +0200 Subject: [openstack-dev] [neutron][sriov] PciDeviceRequestFailed error In-Reply-To: References: Message-ID: <54803D17.2070801@redhat.com> Hi, I think it's better to ask this question at ask.openstack.org or the openstack mailing list. BR, Itzik On 12/04/2014 12:26 PM, Akilesh K wrote: > Hi, > I am using neutron-plugin-sriov-agent. > > I have configured pci_whitelist in nova.conf > > I have configured ml2_conf_sriov.ini. > > But when I launch instance I get the exception in subject. > > On further checking with the help of some forum messages, I discovered > that pci_stats are empty. 
> mysql> select hypervisor_hostname,pci_stats from compute_nodes; > +---------------------+-----------+ > | hypervisor_hostname | pci_stats | > +---------------------+-----------+ > | openstack | [] | > +---------------------+-----------+ > 1 row in set (0.00 sec) > > > Further to this I found that PciDeviceStats.pools is an empty list too. > > Can anyone tell me what I am missing. > > > Thank you, > Ageeleshwar K > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at etc.gen.nz Thu Dec 4 11:11:16 2014 From: andrew at etc.gen.nz (Andrew Ruthven) Date: Fri, 05 Dec 2014 00:11:16 +1300 Subject: [openstack-dev] [neutron] Private external network In-Reply-To: References: <891761EAFA335D44AD1FFDB9B4A8C063C7C080@G4W3216.americas.hpqcorp.net> <1417600749.20200.5.camel@etc.gen.nz> Message-ID: <1417691476.18966.2.camel@etc.gen.nz> On Wed, 2014-12-03 at 02:08 -0800, Kevin Benton wrote: > There is a current blueprint under discussion[1] which would have > covered the external network access control as well, however it looks > like the scope is going to have to be reduced for this cycle so it > will be limited to shared networks if it's accepted at all. > > > 1. https://review.openstack.org/#/c/132661/ Great, that looks to cover off what I need to achieve. Thanks Kevin! 
-- Andrew Ruthven, Wellington, New Zealand andrew at etc.gen.nz | linux.conf.au 2015 New Zealand's only Cloud: | BeAwesome in Auckland, NZ https://catalyst.net.nz/cloud | http://lca2015.linux.org.au From apevec at gmail.com Thu Dec 4 11:24:01 2014 From: apevec at gmail.com (Alan Pevec) Date: Thu, 4 Dec 2014 12:24:01 +0100 Subject: [openstack-dev] [oslo] oslo.messaging config option deprecation In-Reply-To: <547F7C46.7010304@nemebean.com> References: <547F7656.80708@dague.net> <547F7C46.7010304@nemebean.com> Message-ID: 2014-12-03 22:10 GMT+01:00 Ben Nemec : > On 12/03/2014 02:45 PM, Sean Dague wrote: >> So this - >> https://github.com/openstack/oslo.messaging/commit/bcb3b23b8f6e7d01e38fdc031982558711bb7586 >> was clearly a violation of our 1 cycle for deprecation of config options. >> >> I think that should be reverted, an oops release put out to fix it, and >> then deprecate for 1.6. > > +1. That was definitely a no-no. Filed as https://bugs.launchpad.net/oslo.messaging/+bug/1399085 Since this is blocking stable/juno I've pushed partial revert of Revert "Cap Oslo and client library versions" https://review.openstack.org/138963 as a quickfix before the 2014.2.1 release today. We'll of course need to revisit that, once oslo.messaging is fixed. 
Cheers, Alan From mrunge at redhat.com Thu Dec 4 11:30:34 2014 From: mrunge at redhat.com (Matthias Runge) Date: Thu, 04 Dec 2014 12:30:34 +0100 Subject: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard In-Reply-To: <5469BBFC.9030601@redhat.com> References: <54522B7C.8000208@redhat.com> <5469BBFC.9030601@redhat.com> Message-ID: <548045DA.2080101@redhat.com> On 17/11/14 10:12, Matthias Runge wrote: > On 30/10/14 13:13, Matthias Runge wrote: >> Hi, >> >> tl;dr: how to progreed in separating horizon and openstack_dashboard > Options so far: In yesterday's horizon meeting, we canceled the repo split[1] In the light of moving to a more angular based implementation, splitting the repo will just slow down development, and will unnecessarily bind resources. Matthias [1] https://blueprints.launchpad.net/horizon/+spec/separate-horizon-from-dashboard From derekh at redhat.com Thu Dec 4 11:51:01 2014 From: derekh at redhat.com (Derek Higgins) Date: Thu, 04 Dec 2014 11:51:01 +0000 Subject: [openstack-dev] [TripleO] CI report : 1/11/2014 - 4/12/2014 Message-ID: <54804AA5.9090902@redhat.com> A month since my last update, sorry my bad since the last email we've had 5 incidents causing ci failures 26/11/2014 : Lots of ubuntu jobs failed over 24 hours (maybe half) - We seem to suffer any time an ubuntu mirror isn't in sync causing hash mismatch errors. 
For now I've pinned DNS on our proxy to a specific server so we stop DNS round robining 21/11/2014 : All tripleo jobs failed for about 16 hours - Neutron started asserting that local_ip be set to a valid ip address, on the seed we had been leaving it blank - Cinder moved to using oslo.concurreny which in turn requires that lock_path be set, we are now setting it 8/11/2014 : All fedora tripleo jobs failed for about 60 hours (over a weekend) - A url being accessed on https://bzr.linuxfoundation.org is no longer available, we removed the dependency 7/11/2014 : All tripleo tests failed for about 24 hours - Options were removed from nova.conf that had been deprecated (although no deprecation warnings were being reported), we were still using these in tripleo as always more details can be found here https://etherpad.openstack.org/p/tripleo-ci-breakages thanks, Derek. From davanum at gmail.com Thu Dec 4 12:02:17 2014 From: davanum at gmail.com (Davanum Srinivas) Date: Thu, 4 Dec 2014 07:02:17 -0500 Subject: [openstack-dev] [nova] global or per-project specific ssl config options, or both? In-Reply-To: <547FE9BA.1010700@linux.vnet.ibm.com> References: <547FE9BA.1010700@linux.vnet.ibm.com> Message-ID: +1 to @markmc's "default is global value and override for project specific key" suggestion. -- dims On Wed, Dec 3, 2014 at 11:57 PM, Matt Riedemann wrote: > I've posted this to the 12/4 nova meeting agenda but figured I'd socialize > it here also. > > SSL options - do we make them per-project or global, or both? Neutron and > Cinder have config-group specific SSL options in nova, Glance is using oslo > sslutils global options since Juno which was contentious for a time in a > separate review in Icehouse [1]. > > Now [2] wants to break that out for Glance, but we also have a patch [3] for > Keystone to use the global oslo SSL options, we should be consistent, but > does that require a blueprint now? 
> > In the Icehouse patch, markmc suggested using a DictOpt where the default > value is the global value, which could be coming from the oslo [ssl] group > and then you could override that with a project-specific key, e.g. cinder, > neutron, glance, keystone. > > [1] https://review.openstack.org/#/c/84522/ > [2] https://review.openstack.org/#/c/131066/ > [3] https://review.openstack.org/#/c/124296/ > > -- > > Thanks, > > Matt Riedemann > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From doug at doughellmann.com Thu Dec 4 12:09:07 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 4 Dec 2014 07:09:07 -0500 Subject: [openstack-dev] [OpenStack-Dev] Config Options and OSLO libs In-Reply-To: References: Message-ID: On Dec 3, 2014, at 3:44 PM, John Griffith wrote: > Hey, > > So this is a long running topic, but I want to bring it up again. > First, YES Cinder is still running a sample.conf. A lot of Operators > spoke up and provided feedback that this was valuable and they > objected strongly to taking it away. That being said we're going to > go the route of removing it from our unit tests and > generating/publishing periodically outside of tests. > > That being said, one of the things that's driving me crazy and > breaking things on a regular basis is other OpenStack libs having a > high rate of change of config options. This revolves around things > like fixing typos in the comments, reformatting of text etc. All of > these things are good in the long run, but I wonder if we could > consider batching these sorts of efforts and communicating them? > > The other issue that we hit today was a flat out removal of an option > in the oslo.messaging lib with no deprecation. 
This patch here [1] > does a number of things that are probably great in terms of clean up > and housekeeping, but now that we're all in on shared/common libs I > think we should be a bit more careful about the changes we make. Also > to me the commit message doesn't really make it easy for me to search > git logs to try and figure out what happened when things blew up. Yes, this was a mistake. We believed that option was only used in the oslo.messaging library tests, and didn?t check widely enough to verify that we were right. There is a patch merging now to restore the option [1], and when that lands we will release a new version today. As we move ahead with the libraries, I would like for us to stop having apps set configuration options in their unit tests. As you rightly point out, your unit tests shouldn?t break if we change options (tempest and grenade would still use the options). We should provide fixtures and APIs that can be used to configure the libraries in the necessary ways, without relying on the configuration options. This shift is going to take a long time, and we might not start it until next cycle, but we could have the liaisons help us put together the list of ways those options are already being used in unit tests. I?ll be bringing that up in the next Oslo meeting [2]. Please make sure all of your liaisons are present. Doug [1] https://review.openstack.org/#/c/138973/ [2] https://wiki.openstack.org/wiki/Meetings/Oslo > > Anyway, just wanted to send a note out asking people to keep in mind > the impact of conf changes, and a gentle reminder about depreciation > periods for the removal of options. 
> > [1]: https://github.com/openstack/oslo.messaging/commit/bcb3b23b8f6e7d01e38fdc031982558711bb7586 > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Thu Dec 4 12:16:46 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 4 Dec 2014 07:16:46 -0500 Subject: [openstack-dev] [oslo] oslo.messaging config option deprecation In-Reply-To: <547F7656.80708@dague.net> References: <547F7656.80708@dague.net> Message-ID: On Dec 3, 2014, at 3:45 PM, Sean Dague wrote: > So this - > https://github.com/openstack/oslo.messaging/commit/bcb3b23b8f6e7d01e38fdc031982558711bb7586 > was clearly a violation of our 1 cycle for deprecation of config options. > > I think that should be reverted, an oops release put out to fix it, and > then deprecate for 1.6. See https://review.openstack.org/#/c/138973/ > > If oslo libraries are going to include config options, they have to > follow the same config deprecation as that's a contract that projects > project up. Otherwise we need to rethink the ability for libraries to > use oslo config (which, honestly is worth rethinking). We do, generally. We believed this option was used only by the oslo.messaging unit tests. Obviously that?s incorrect, and we are fixing the problem. The configuration options for the rabbit driver inside of oslo.messaging need to be owned by that driver code. We can?t have them set using different names in every app, and we don?t want to expose those option definitions to the applications. It shouldn?t matter to the application which messaging driver is used or how it is configured. As I mentioned in John?s thread (?Config Options and OSLO Libs?), we will be looking into ways to further isolate applications from the configuration options, especially in their unit tests by providing fixtures and other APIs to set up the libraries for the tests. 
Doug From doug at doughellmann.com Thu Dec 4 12:20:27 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 4 Dec 2014 07:20:27 -0500 Subject: [openstack-dev] [oslo] sprinting today Message-ID: As a (late) reminder, the Oslo team is sprinting today to clear out our bug triage and review backlog. Please join us in #openstack-oslo to participate! Doug From rcresswe at cisco.com Thu Dec 4 12:30:00 2014 From: rcresswe at cisco.com (Rob Cresswell (rcresswe)) Date: Thu, 4 Dec 2014 12:30:00 +0000 Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon In-Reply-To: <547F8EFB.4030302@linux.vnet.ibm.com> References: <547F8EFB.4030302@linux.vnet.ibm.com> Message-ID: <3A30C604-A730-4A85-9AFB-789CE8744AB7@cisco.com> While clicking off the modal is relatively easy to do my accident, hitting Esc or ?X? are fairly distinct actions. I don?t personally think there is any need to warn the user in that instance :) Rob On 3 Dec 2014, at 22:30, Aaron Sahlin > wrote: I would be happy with either the two proposed solutions (both improvements over the what we have now). Any thoughts on combining them? Only close if esc or 'x' is clicked, but also warn them if data was entered. On 12/3/2014 7:21 AM, Rob Cresswell (rcresswe) wrote: +1 to changing the behaviour to ?static'. Modal inside a modal is potentially slightly more useful, but looks messy and inconsistent, which I think outweighs the functionality. Rob On 2 Dec 2014, at 12:21, Timur Sufiev > wrote: Hello, Horizoneers and UX-ers! The default behavior of modals in Horizon (defined in turn by Bootstrap defaults) regarding their closing is to simply close the modal once user clicks somewhere outside of it (on the backdrop element below and around the modal). This is not very convenient for the modal forms containing a lot of input - when it is closed without a warning all the data the user has already provided is lost. 
Keeping this in mind, I've made a patch [1] changing default Bootstrap 'modal_backdrop' parameter to 'static', which means that forms are not closed once the user clicks on a backdrop, while it's still possible to close them by pressing 'Esc' or clicking on the 'X' link at the top right border of the form. Also the patch [1] allows to customize this behavior (between 'true'-current one/'false' - no backdrop element/'static') on a per-form basis. What I didn't know at the moment I was uploading my patch is that David Lyle had been working on a similar solution [2] some time ago. It's a bit more elaborate than mine: if the user has already filled some some inputs in the form, then a confirmation dialog is shown, otherwise the form is silently dismissed as it happens now. The whole point of writing about this in the ML is to gather opinions which approach is better: * stick to the current behavior; * change the default behavior to 'static'; * use the David's solution with confirmation dialog (once it'll be rebased to the current codebase). What do you think? [1] https://review.openstack.org/#/c/113206/ [2] https://review.openstack.org/#/c/23037/ P.S. I remember that I promised to write this email a week ago, but better late than never :). -- Timur Sufiev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From berrange at redhat.com Thu Dec 4 12:43:22 2014 From: berrange at redhat.com (Daniel P. 
Berrange) Date: Thu, 4 Dec 2014 12:43:22 +0000 Subject: [openstack-dev] [nova] NUMA Cells In-Reply-To: References: Message-ID: <20141204124321.GG16269@redhat.com> On Thu, Dec 04, 2014 at 03:30:46PM +1100, Michael Still wrote: > Hi, > > so just having read a bunch of the libvirt driver numa code, I have a > concern. At first I thought it was a little thing, but I am starting > to think its more of a big deal... > > We use the term "cells" to describe numa cells. However, that term has > a specific meaning in nova, and I worry that overloading the term is > confusing. I don't really think it is a big deal. It is pretty obvious which meaning is relevant, based on the context of the code your looking at. eg nothing in the libvirt driver code deals with Nova cells at all, so it is always going to be referring to NUMA cells. > So, what do people think about trying to move the numa code to use > something like "numa cell" or "numacell" based on context? The only places where I think it is important to have an explicit 'numa' prefix is in places where we're in a global namespace eg, Nova Object names, database table names, or flavour properties. I think we're aleady using a numa prefix in all those cases. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From berrange at redhat.com Thu Dec 4 12:46:00 2014 From: berrange at redhat.com (Daniel P. 
Berrange) Date: Thu, 4 Dec 2014 12:46:00 +0000 Subject: [openstack-dev] [nova] NUMA Cells In-Reply-To: <54801A5C.5070006@redhat.com> References: <54801A5C.5070006@redhat.com> Message-ID: <20141204124600.GH16269@redhat.com> On Thu, Dec 04, 2014 at 09:25:00AM +0100, Nikola Đipanov wrote: > On 12/04/2014 05:30 AM, Michael Still wrote: > > Hi, > > > > so just having read a bunch of the libvirt driver numa code, I have a > > concern. At first I thought it was a little thing, but I am starting > > to think it's more of a big deal... > > > > We use the term "cells" to describe numa cells. However, that term has > > a specific meaning in nova, and I worry that overloading the term is > > confusing. > > > > (Yes, I know the numa people had it first, but hey). > > > > So, what do people think about trying to move the numa code to use > > something like "numa cell" or "numacell" based on context? > > > > Seeing that "node" is also not exactly unambiguous in this space - I am > fine with either "numanode" or "numacell", with a slight > preference for "numacell". > > A small issue will be renaming it in objects though - as this will > require adding a new field for use in Kilo while still remaining > backwards compatible with Juno, resulting in even more compatibility > code (we already added some for the slightly different data format). The > whole name is quite in context there, but we would use it like: > > for cell in numa_topology.cells: > # awesome algo here with cell :( > > but if we were to rename it just in places where it's used to: > > for numacell in numa_topology.cells: > # awesome algo here with numacell :) I think renaming local variables like this is really a solution in search of a problem. It is pretty blindingly obvious that the 'cell' variable refers to a NUMA cell here, without having to spell it out as 'numacell'.
Likewise I think the object property name is just fine as 'cell' because the context again makes it obvious what it is referring to. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From slukjanov at mirantis.com Thu Dec 4 12:49:01 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Thu, 4 Dec 2014 16:49:01 +0400 Subject: [openstack-dev] [sahara] team meeting Dec 4 1400 UTC Message-ID: Hi folks, We'll be having the Sahara team meeting in #openstack-meeting-3 channel. Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20141204T14 NOTE: There is another time slot and meeting room. -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From snikitin at mirantis.com Thu Dec 4 12:50:17 2014 From: snikitin at mirantis.com (Sergey Nikitin) Date: Thu, 4 Dec 2014 16:50:17 +0400 Subject: [openstack-dev] [nova] V3 API support Message-ID: Hi, Christopher, I'm working on an API extension for instance tags ( https://review.openstack.org/#/c/128940/). Recently one reviewer asked me to add V3 API support. I talked with Jay Pipes about it and he told me that the V3 API became useless. So I wanted to ask you and our community: "Do we need to support the v3 API in future nova patches?" -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ikalnitsky at mirantis.com Thu Dec 4 13:01:31 2014 From: ikalnitsky at mirantis.com (Igor Kalnitsky) Date: Thu, 4 Dec 2014 15:01:31 +0200 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: <1A3C52DFCD06494D8528644858247BF017812DEB@EX10MBOX03.pnnl.gov> References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> <547F3D48.5020401@gmail.com> <1A3C52DFCD06494D8528644858247BF017812DEB@EX10MBOX03.pnnl.gov> Message-ID: Ok, guys, It became obvious that most of us either vote for Pecan or abstain from voting. So I propose to stop fighting this battle (Flask vs Pecan) and start thinking about moving to Pecan. You know, there are many questions that need to be discussed (such as 'should we change API version' or 'should be it done iteratively or as one patchset'). - Igor On Wed, Dec 3, 2014 at 7:25 PM, Fox, Kevin M wrote: > Choosing the right instrument for the job in an open source community involves choosing technologies that the community is familiar/comfortable with as well, as it will allow you access to a greater pool of developers. > > With that in mind then, I'd add: > Pro Pecan, blessed by the OpenStack community, con Flask, not. > > Kevin > ________________________________________ > From: Nikolay Markov [nmarkov at mirantis.com] > Sent: Wednesday, December 03, 2014 9:00 AM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [Fuel][Nailgun] Web framework > > I didn't participate in that discussion, but here are topics on Flask > cons from your link. I added some comments. > > - Cons > - db transactions a little trickier to manage, but possible # > what is trickier? Flask uses pure SQLalchemy or a very thin wrapper > - JSON built-in but not XML # the only one I agree with, but does > Pecan have it? 
> - some issues, not updated in a while # last commit was 3 days ago > - No Python 3 support # full Python 3 support for a year or so already > - Not WebOb # can it even be considered as a con? > > I'm not trying to argue with you or community principles, I'm just > trying to choose the right instrument for the job. > > On Wed, Dec 3, 2014 at 7:41 PM, Jay Pipes wrote: >> On 12/03/2014 10:53 AM, Nikolay Markov wrote: >>>> >>>> However, the OpenStack community is also about a shared set of tools, >>>> development methodologies, and common perspectives. >>> >>> >>> I completely agree with you, Jay, but the same principle may be >>> applied much wider. Why did the OpenStack community decide to use its own >>> unstable project instead of existing solution, which is widely used in >>> Python community? To avoid being a team player? Or, at least, why it's >>> recommended way even if it doesn't provide the same features other >>> frameworks have for a long time already? I mean, there is no doubt >>> everyone would use stable and technically advanced tool, but imposing >>> everyone to use it by force with a simple hope that it'll become >>> better from this is usually a bad approach. >> >> >> This conversation was had a long time ago, was thoroughly thought-out and >> discussed at prior summits and the ML: >> >> https://etherpad.openstack.org/p/grizzly-common-wsgi-frameworks >> https://etherpad.openstack.org/p/havana-common-wsgi >> >> I think it's unfair to suggest that the OpenStack community decided "to use >> its own unstable project instead of existing solution".
>> >> >> Best, >> -jay >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Best regards, > Nick Markov > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dprince at redhat.com Thu Dec 4 13:09:56 2014 From: dprince at redhat.com (Dan Prince) Date: Thu, 04 Dec 2014 08:09:56 -0500 Subject: [openstack-dev] [TripleO] Do we want to remove Nova-bm support? In-Reply-To: <547FE757.9030701@wedontsleep.org> References: <547FE757.9030701@wedontsleep.org> Message-ID: <1417698596.2112.1.camel@dovetail.localdomain> On Thu, 2014-12-04 at 15:47 +1100, Steve Kowalik wrote: > Hi all, > > I'm becoming increasingly concerned about all of the code paths > in tripleo-incubator that check $USE_IRONIC -eq 0 -- that is, use > nova-baremetal rather than Ironic. We do not check nova-bm support in > CI, haven't for at least a month, and I'm concerned that parts of it > may be slowly bit-rotting. > > I think our documentation is fairly clear that nova-baremetal is > deprecated and Ironic is the way forward, and I know it flies in the > face of backwards-compatibility, but do we want to bite the bullet and > remove nova-bm support? I'd vote yes. Given that our CI jobs all currently use Ironic I think it is safe to move forwards and remove the old Nova BM configuration. 
Dan > > Cheers, From rprikhodchenko at mirantis.com Thu Dec 4 13:10:24 2014 From: rprikhodchenko at mirantis.com (Roman Prykhodchenko) Date: Thu, 4 Dec 2014 14:10:24 +0100 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> <547F3D48.5020401@gmail.com> <1A3C52DFCD06494D8528644858247BF017812DEB@EX10MBOX03.pnnl.gov> Message-ID: I'd rather suggest doing it in several iterations, replacing a few resources at a time with Pecan implementations. Doing that in one big patch-set will make reviewing very painful, so some bad things might not be noticed. > On 04 Dec 2014, at 14:01, Igor Kalnitsky wrote: > > Ok, guys, > > It became obvious that most of us either vote for Pecan or abstain from voting. > > So I propose to stop fighting this battle (Flask vs Pecan) and start > thinking about moving to Pecan. You know, there are many questions > that need to be discussed (such as 'should we change API version' or > 'should it be done iteratively or as one patchset'). > > - Igor > > On Wed, Dec 3, 2014 at 7:25 PM, Fox, Kevin M wrote: >> Choosing the right instrument for the job in an open source community involves choosing technologies that the community is familiar/comfortable with as well, as it will allow you access to a greater pool of developers. >> >> With that in mind then, I'd add: >> Pro Pecan, blessed by the OpenStack community, con Flask, not. >> >> Kevin >> ________________________________________ >> From: Nikolay Markov [nmarkov at mirantis.com] >> Sent: Wednesday, December 03, 2014 9:00 AM >> To: OpenStack Development Mailing List (not for usage questions) >> Subject: Re: [openstack-dev] [Fuel][Nailgun] Web framework >> >> I didn't participate in that discussion, but here are topics on Flask >> cons from your link. I added some comments.
>> >> - Cons >> - db transactions a little trickier to manage, but possible # >> what is trickier? Flask uses pure SQLalchemy or a very thin wrapper >> - JSON built-in but not XML # the only one I agree with, but does >> Pecan have it? >> - some issues, not updated in a while # last commit was 3 days ago >> - No Python 3 support # full Python 3 support fro a year or so already >> - Not WebOb # can it even be considered as a con? >> >> I'm not trying to argue with you or community principles, I'm just >> trying to choose the right instrument for the job. >> >> On Wed, Dec 3, 2014 at 7:41 PM, Jay Pipes wrote: >>> On 12/03/2014 10:53 AM, Nikolay Markov wrote: >>>>> >>>>> However, the OpenStack community is also about a shared set of tools, >>>>> development methodologies, and common perspectives. >>>> >>>> >>>> I completely agree with you, Jay, but the same principle may be >>>> applied much wider. Why Openstack Community decided to use its own >>>> unstable project instead of existing solution, which is widely used in >>>> Python community? To avoid being a team player? Or, at least, why it's >>>> recommended way even if it doesn't provide the same features other >>>> frameworks have for a long time already? I mean, there is no doubt >>>> everyone would use stable and technically advanced tool, but imposing >>>> everyone to use it by force with a simple hope that it'll become >>>> better from this is usually a bad approach. >>> >>> >>> This conversation was had a long time ago, was thoroughly thought-out and >>> discussed at prior summits and the ML: >>> >>> https://etherpad.openstack.org/p/grizzly-common-wsgi-frameworks >>> https://etherpad.openstack.org/p/havana-common-wsgi >>> >>> I think it's unfair to suggest that the OpenStack community decided "to use >>> its own unstable project instead of existing solution". 
>>> >>> >>> Best, >>> -jay >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> Best regards, >> Nick Markov >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pasquale.porreca at dektech.com.au Thu Dec 4 13:16:44 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Thu, 04 Dec 2014 14:16:44 +0100 Subject: [openstack-dev] [nova] V3 API support In-Reply-To: References: Message-ID: <54805EBC.6040206@dektech.com.au> The v3 API is now moving to v2.1, but the definition of the api module and the json schema for v2.1 still go in the v3 folder. I posted a question about the v2 vs v3 API on this mailing list not long ago and got useful tips from Christopher Yeoh; maybe it will be useful for you too to take a look at them: http://lists.openstack.org/pipermail/openstack-dev/2014-November/050711.html On 12/04/14 13:50, Sergey Nikitin wrote: > Hi, Christopher, > > I'm working on an API extension for instance tags > (https://review.openstack.org/#/c/128940/). Recently one reviewer > asked me to add V3 API support. I talked with Jay Pipes about it and > he told me that the V3 API became useless.
So I wanted to ask you and our > community: "Do we need to support v3 API in future nova patches?" > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr -------------- next part -------------- An HTML attachment was scrubbed... URL: From dprince at redhat.com Thu Dec 4 13:24:20 2014 From: dprince at redhat.com (Dan Prince) Date: Thu, 04 Dec 2014 08:24:20 -0500 Subject: [openstack-dev] [TripleO] Meeting purpose In-Reply-To: References: Message-ID: <1417699460.2112.13.camel@dovetail.localdomain> On Thu, 2014-12-04 at 10:11 +0100, James Polley wrote: > Hi all, > > > The other topic that has come up a few times in our meetings is: what > value do we get from these meetings? Often. Not much. :( Partially my own fault too since I haven't been good about adding items to the agenda myself ahead of time. > > > There's no doubt that regular meetings provide value - but that > doesn't mean that everything we do in them is valuable. > > > As an example of something that I think doesn't add much value in the > meeting - DerekH has already been giving semi-regular CI/CD status > reports via email. I'd like to make these weekly update emails > regular, and take the update off the meeting agenda. I'm offering to > share the load with him to make this easier to achieve. > > > Are there other things on our regular agenda that you feel aren't > offering much value? I'd propose we axe the regular agenda entirely and let people promote things in open discussion if they need to. In fact the regular agenda often seems like a bunch of motions we go through... to the extent that while the TripleO meeting was going on we've actually discussed what was in my opinion the most important things in the normal #tripleo IRC channel. 
Is getting through our review stats really that important!? > Are there things you'd like to see moved onto, or off, the agenda? Perhaps a streamlined agenda like this would work better: * Bugs * Projects needing releases * Open Discussion (including important SPECs, CI, or anything needing attention). ** Leader might have to drive this ** > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From joe.gordon0 at gmail.com Thu Dec 4 13:26:59 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Thu, 4 Dec 2014 15:26:59 +0200 Subject: [openstack-dev] [Nova][Cinder] Operations: adding new nodes in "disabled" state, allowed for test tenant only In-Reply-To: References: Message-ID: On Wed, Dec 3, 2014 at 3:31 PM, Mike Scherbakov wrote: > Hi all, > enable_new_services in nova.conf seems to allow adding new compute nodes in > disabled state: > > https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L507-L508, > so it would allow checking everything first, before allowing production > workloads to be hosted on it. I've filed a bug to Fuel to use this by default > when we scale up the env (add more computes) [1]. > > A few questions: > > 1. can we somehow enable compute service for test tenant first? So > cloud administrator would be able to run test VMs on the node, and after > ensuring that everything is fine - to enable service for all tenants > > Although there may be more than one way to set this up in nova, this can definitely be done via nova host aggregates. Put new compute services into an aggregate that only specific tenants can access (controlled via scheduler filter). > > 2. What about Cinder? Is there a similar option / ability? > 3. What about other OpenStack projects? > > What is your opinion, how should we approach the problem (if there is a > problem)?
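Joe's host-aggregate suggestion boils down to a per-host membership check in the scheduler. The sketch below is purely illustrative, not nova's code; the `filter_tenant_id` metadata key and `host_passes` name echo nova's AggregateMultiTenancyIsolation filter, but every detail here is an assumption made for the example:

```python
def host_passes(host_aggregates, tenant_id):
    """Toy version of an aggregate-based tenant-isolation check.

    host_aggregates: list of dicts describing the aggregates a host
    belongs to; a 'filter_tenant_id' metadata entry restricts the host
    to the comma-separated tenants it names.
    """
    for agg in host_aggregates:
        allowed = agg.get("metadata", {}).get("filter_tenant_id")
        if allowed is not None:
            # Host sits in a restricted aggregate: only listed tenants pass.
            return tenant_id in allowed.split(",")
    # Hosts in no restricted aggregate accept workloads from any tenant.
    return True

# A freshly added compute node, visible only to the test tenant
# until the operator is happy with it:
new_node = [{"name": "burn-in", "metadata": {"filter_tenant_id": "test-tenant"}}]
print(host_passes(new_node, "test-tenant"))  # True
print(host_passes(new_node, "production"))   # False
```

Once the node checks out, dropping it from the restricted aggregate (or clearing the metadata key) opens it to all tenants, which is the workflow Mike is asking about.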
> > [1] https://bugs.launchpad.net/fuel/+bug/1398817 > -- > Mike Scherbakov > #mihgen > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dprince at redhat.com Thu Dec 4 13:37:33 2014 From: dprince at redhat.com (Dan Prince) Date: Thu, 04 Dec 2014 08:37:33 -0500 Subject: [openstack-dev] [TripleO] CI report : 1/11/2014 - 4/12/2014 In-Reply-To: <54804AA5.9090902@redhat.com> References: <54804AA5.9090902@redhat.com> Message-ID: <1417700253.2112.23.camel@dovetail.localdomain> On Thu, 2014-12-04 at 11:51 +0000, Derek Higgins wrote: > A month since my last update, sorry my bad > > since the last email we've had 5 incidents causing ci failures > > 26/11/2014 : Lots of ubuntu jobs failed over 24 hours (maybe half) > - We seem to suffer any time an ubuntu mirror isn't in sync causing hash > mismatch errors. For now I've pinned DNS on our proxy to a specific > server so we stop DNS round robining This sounds fine to me. I personally like the model where you pin to a specific mirror, perhaps one that is geographically closer to your datacenter. This also makes Squid caching (in the rack) happier in some cases. > > 21/11/2014 : All tripleo jobs failed for about 16 hours > - Neutron started asserting that local_ip be set to a valid ip address, > on the seed we had been leaving it blank > - Cinder moved to using oslo.concurrency which in turn requires that > lock_path be set, we are now setting it I've been thinking about how we might catch these ahead of time with our limited resources ATM. These sorts of failures all seem related to configuration and/or requirements changes. I wonder if we could selectively (automatically) run check experimental jobs on all reviews whose associated tickets either have doc changes or modify requirements.txt.
Probably a bit of work to pull this off but if we had a report containing these results "coming down the pike" we might be able to catch them ahead of time. > > 8/11/2014 : All fedora tripleo jobs failed for about 60 hours (over a > weekend) > - A url being accessed on https://bzr.linuxfoundation.org is no longer > available, we removed the dependency > > 7/11/2014 : All tripleo tests failed for about 24 hours > - Options were removed from nova.conf that had been deprecated (although > no deprecation warnings were being reported), we were still using these > in tripleo > > as always more details can be found here > https://etherpad.openstack.org/p/tripleo-ci-breakages Thanks for sending this out! Very useful. Dan > > thanks, > Derek. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From alexisl at hp.com Thu Dec 4 13:49:35 2014 From: alexisl at hp.com (Alexis Lee) Date: Thu, 4 Dec 2014 13:49:35 +0000 Subject: [openstack-dev] [TripleO] Do we want to remove Nova-bm support? In-Reply-To: <1417698596.2112.1.camel@dovetail.localdomain> References: <547FE757.9030701@wedontsleep.org> <1417698596.2112.1.camel@dovetail.localdomain> Message-ID: <20141204134935.GD21416@hp.com> Dan Prince said on Thu, Dec 04, 2014 at 08:09:56AM -0500: > > face of backwards-compatibility, but do we want to bite the bullet and > > remove nova-bm support? +1, FWIW. Alexis -- Nova Engineer, HP Cloud. AKA lealexis, lxsli. From majopela at redhat.com Thu Dec 4 14:06:04 2014 From: majopela at redhat.com (=?utf-8?Q?Miguel_=C3=81ngel_Ajo?=) Date: Thu, 4 Dec 2014 15:06:04 +0100 Subject: [openstack-dev] Deprecating old security groups code / RPC. Message-ID: During Juno, we introduced the enhanced security groups rpc (security_groups_info_for_devices) instead of (security_group_rules_for_devices), and the ipset functionality to offload iptable chains a bit. 
Here I propose to: 1) Remove the old security_group_info_for_devices, which was left to ease the operators' upgrade path from I to J (allowing running old openvswitch agents as we upgrade) Doing this we can clean up the current iptables firewall driver a bit from unused code paths. I suppose this would require a major RPC version bump. 2) Remove the option to disable ipset (now it's enabled by default and seems to be working without problems), and make it a standard way to handle 'IP' groups from the iptables perspective. Thoughts?, Best regards, Miguel Ángel Ajo -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsufiev at mirantis.com Thu Dec 4 14:08:46 2014 From: tsufiev at mirantis.com (Timur Sufiev) Date: Thu, 4 Dec 2014 18:08:46 +0400 Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon In-Reply-To: <547F8EFB.4030302@linux.vnet.ibm.com> References: <547F8EFB.4030302@linux.vnet.ibm.com> Message-ID: Hi Aaron, The only way to combine the two aforementioned solutions I've been thinking of is to implement David's solution as the 4th option (in addition to true|false|static) on a per-form basis, leaving the possibility to change the default value in configs. I guess this sort of combining would be as simple as just putting both patches together (perhaps, changing David's js-code for catching the 'click' event a bit - to work only for the modal forms with [data-modal-backdrop='confirm']). On Thu, Dec 4, 2014 at 1:30 AM, Aaron Sahlin wrote: > I would be happy with either of the two proposed solutions (both > improvements over what we have now). > Any thoughts on combining them? Only close if esc or 'x' is clicked, but > also warn them if data was entered. > > > > On 12/3/2014 7:21 AM, Rob Cresswell (rcresswe) wrote: > > +1 to changing the behaviour to 'static'. Modal inside a modal is > potentially slightly more useful, but looks messy and inconsistent, which I > think outweighs the functionality.
> > Rob > > > On 2 Dec 2014, at 12:21, Timur Sufiev wrote: > > Hello, Horizoneers and UX-ers! > > The default behavior of modals in Horizon (defined in turn by Bootstrap > defaults) regarding their closing is to simply close the modal once user > clicks somewhere outside of it (on the backdrop element below and around > the modal). This is not very convenient for the modal forms containing a > lot of input - when it is closed without a warning all the data the user > has already provided is lost. Keeping this in mind, I've made a patch [1] > changing default Bootstrap 'modal_backdrop' parameter to 'static', which > means that forms are not closed once the user clicks on a backdrop, while > it's still possible to close them by pressing 'Esc' or clicking on the 'X' > link at the top right border of the form. Also the patch [1] allows to > customize this behavior (between 'true'-current one/'false' - no backdrop > element/'static') on a per-form basis. > > What I didn't know at the moment I was uploading my patch is that David > Lyle had been working on a similar solution [2] some time ago. It's a bit > more elaborate than mine: if the user has already filled some some inputs > in the form, then a confirmation dialog is shown, otherwise the form is > silently dismissed as it happens now. > > The whole point of writing about this in the ML is to gather opinions > which approach is better: > * stick to the current behavior; > * change the default behavior to 'static'; > * use the David's solution with confirmation dialog (once it'll be > rebased to the current codebase). > > What do you think? > > [1] https://review.openstack.org/#/c/113206/ > [2] https://review.openstack.org/#/c/23037/ > > P.S. I remember that I promised to write this email a week ago, but > better late than never :). 
> > -- > Timur Sufiev > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing listOpenStack-dev at lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Timur Sufiev -------------- next part -------------- An HTML attachment was scrubbed... URL: From majopela at redhat.com Thu Dec 4 14:10:50 2014 From: majopela at redhat.com (=?utf-8?Q?Miguel_=C3=81ngel_Ajo?=) Date: Thu, 4 Dec 2014 15:10:50 +0100 Subject: [openstack-dev] [neutron] Deprecating old security groups code / RPC. In-Reply-To: References: Message-ID: Sorry, adding [neutron] to the subject. Miguel ?ngel Ajo On Thursday, 4 de December de 2014 at 15:06, Miguel ?ngel Ajo wrote: > > > During Juno, we introduced the enhanced security groups rpc (security_groups_info_for_devices) instead of (security_group_rules_for_devices), > and the ipset functionality to offload iptable chains a bit. > > > Here I propose to: > > 1) Remove the old security_group_info_for_devices, which was left to ease operators upgrade > path from I to J (allowing running old openvswitch agents as we upgrade) > > Doing this we can cleanup the current iptables firewall driver a bit from unused code paths. > > > I suppose this would require a major RPC version bump. > > 2) Remove the option to disable ipset (now it?s enabled by default and seems > to be working without problems), and make it an standard way to handle ?IP? groups > from the iptables perspective. 
> > > Thoughts?, > > Best regards, > Miguel ?ngel Ajo > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean at dague.net Thu Dec 4 14:13:24 2014 From: sean at dague.net (Sean Dague) Date: Thu, 04 Dec 2014 09:13:24 -0500 Subject: [openstack-dev] profiling code in tests that include eventlet? Message-ID: <54806C04.1030300@dague.net> I'd like to get a bit better data on where we're spending all our time in tests in Nova (and just in general). It doesn't appear there is a clear way to do that with testr. The following wraps the tests runs in cProfile - https://review.openstack.org/#/c/138440/2/tools/profile.py,cm However, as was correctly pointed out by marun, eventlet causes some determinism issues here. My attempts to use GreenletProfiler end in segfaults - http://emptysqua.re/blog/greenletprofiler/ I also looked a bit into the profiler that is shipped with eventlet, however that's triggering import errors that are a bit wonky. Mostly, I'm trying to figure out if anyone in our community has developed a pattern here before I keep banging my head into this one. We can do a lot of guessing without real profiling about where time is spent, but I'd like to be a little more data driven. -Sean -- Sean Dague http://dague.net From sileht at sileht.net Thu Dec 4 14:17:51 2014 From: sileht at sileht.net (Mehdi Abaakouk) Date: Thu, 04 Dec 2014 15:17:51 +0100 Subject: [openstack-dev] oslo.messaging 1.5.1 released Message-ID: <0687aad87a9c2f2d105c22ac232e8aad@sileht.net> The Oslo team is pleased to announce the release of oslo.messaging 1.5.1: Oslo Messaging API This release reintroduces the 'fake_rabbit' config option. 
For more details, please see the git log history below and https://launchpad.net/oslo.messaging//+milestone/1.5.1 Please report issues through launchpad: https://bugs.launchpad.net/oslo.messaging/ ---------------------------------------- Changes in openstack/oslo.messaging 1.5.0..1.5.1 712f6e3 Reintroduces fake_rabbit config option 554ad9d Imported Translations from Transifex diffstat (except docs and test files): .../locale/de/LC_MESSAGES/oslo.messaging.po | 8 ++++++-- .../locale/en_GB/LC_MESSAGES/oslo.messaging.po | 8 ++++++-- .../locale/fr/LC_MESSAGES/oslo.messaging.po | 8 ++++++-- oslo.messaging/locale/oslo.messaging.pot | 8 ++++++-- oslo/messaging/_drivers/impl_rabbit.py | 13 ++++++++++++- tests/drivers/test_impl_rabbit.py | 19 +++++++++++++++++++ 6 files changed, 55 insertions(+), 9 deletions(-) Requirements updates: N/A -- Mehdi Abaakouk mail: sileht at sileht.net irc: sileht From ihrachys at redhat.com Thu Dec 4 14:19:15 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 04 Dec 2014 15:19:15 +0100 Subject: [openstack-dev] [neutron] Deprecating old security groups code / RPC. In-Reply-To: References: Message-ID: <54806D63.1030002@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 > On Thursday, 4 de December de 2014 at 15:06, Miguel ?ngel Ajo > wrote: > >> >> >> During Juno, we introduced the enhanced security groups rpc >> (security_groups_info_for_devices) instead of >> (security_group_rules_for_devices), and the ipset functionality >> to offload iptable chains a bit. >> >> >> Here I propose to: >> >> 1) Remove the old security_group_info_for_devices, which was left >> to ease operators upgrade path from I to J (allowing running old >> openvswitch agents as we upgrade) >> >> Doing this we can cleanup the current iptables firewall driver a >> bit from unused code paths. >> +1. >> >> I suppose this would require a major RPC version bump. 
>> >> 2) Remove the option to disable ipset (now it?s enabled by >> default and seems to be working without problems), and make it an >> standard way to handle ?IP? groups from the iptables >> perspective. Is ipset support present in all supported distributions? >> >> >> Thoughts?, >> >> Best regards, Miguel ?ngel Ajo >> >> _______________________________________________ OpenStack-dev >> mailing list OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUgG1jAAoJEC5aWaUY1u57aK4H/1G0R0NgURf1l7WCx27VqRDR jdFlYzecMk2E6h84Fv5tJgGqAm6mGEFUrLf8MJ9+kDB33Syb+zvxJc9v6CvMw7br o+Qjk4lbHiiko1W8kDmq+onjUDHExapTR1+PsSX0HmuEvwV8yrAm/VJyccAAiqB6 XPrWG4Xft2zEp004/uT9jzJPeW4YhRNY84Sa2C1ghemzKn43QYlu8U3DfuDzfQFP 2MjzTwdP1FfBIX0jhXHrMlnHGuuxAscL9v6DM7Np2Iro6ExXK1ry9ex4/NWbdcIY sP9MkuA2wAMYE8pN1UM4LwSPg2rpEZEuwJfXyTohshcVHDoyPk81F4Q6R+ABPqM= =xzY6 -----END PGP SIGNATURE----- From ihrachys at redhat.com Thu Dec 4 14:21:29 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 04 Dec 2014 15:21:29 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: References: Message-ID: <54806DE9.3030608@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 02/12/14 14:22, Alan Pevec wrote: > Hi all, > > here are exception proposal I have collected when preparing for > the 2014.2.1 release, stable-maint members please have a look! > > > General: cap Oslo and client library versions - sync from > openstack/requirements stable/juno, would be good to include in > the release. 
> https://review.openstack.org/#/q/status:open+branch:stable/juno+topic:openstack/requirements,n,z > > Ceilometer (all proposed by Ceilo PTL) > https://review.openstack.org/138315 > https://review.openstack.org/138317 > https://review.openstack.org/138320 > https://review.openstack.org/138321 > https://review.openstack.org/138322 > > Cinder https://review.openstack.org/137537 - small change and > limited to the VMWare driver > > Glance https://review.openstack.org/137704 - glance_store is > backward compatible, but not sure about forcing version bump on > stable https://review.openstack.org/137862 - Disable osprofiler by > default to prevent upgrade issues, disabled by default in other > services > > Horizon standing-after-freeze translation update, coming on Dec 3 > https://review.openstack.org/138018 - visible issue, no > translation string changes https://review.openstack.org/138313 - > low risk patch for a highly problematic issue > > Neutron https://review.openstack.org/136294 - default SNAT, see > review for details, I cannot distil 1liner :) > https://review.openstack.org/136275 - self-contained to the vendor > code, extensively tested in several deployments I'd like to request another exception for Neutron to avoid introducing regression in hostname validation for DNS nameservers: https://review.openstack.org/#/c/139061/ > > Nova > https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/juno+topic:1386236/juno,n,z > > - - soaked more than a week in master, makes numa actually work in Juno > > Sahara https://review.openstack.org/135549 - fix for auto security > groups, there were some concerns, see review for details > > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) 
iQEcBAEBCgAGBQJUgG3pAAoJEC5aWaUY1u57E7gH/Ry5zrN5/1bA8M2NkkhmJ1D4 ks9sX4ivZYjSeV5/4qfLsrHAnO2+R5H3CmVHmS2Us3s5ZTU4CTiKCt9TAmWDy5pd f6YmpjzOnszusXMMibi1K7sWZGmC0P5zOSxyPZh1W3rljy7WZSeR/P3NtAqqBw3M eMb0gEDBDkyn8JNWSxM5mjYHYsYnfU6CddZWT/EoOQExttFKRyAd35ugOJnvbQoR F3Cu3k/BgsxSPYZIuLzf+W1YQBsnGpH9AK7LwTzVfsYJgX1ggiVGU8Rdh3FvccIx hV84PKMwNccIgwIMxqK83QHtV4p1FJ/TN4UmActXSgR1J+QUEFgH32zAolIMLlw= =n/d7 -----END PGP SIGNATURE----- From jason.dobies at redhat.com Thu Dec 4 14:27:02 2014 From: jason.dobies at redhat.com (Jay Dobies) Date: Thu, 04 Dec 2014 09:27:02 -0500 Subject: [openstack-dev] [TripleO] Do we want to remove Nova-bm support? In-Reply-To: <20141204134935.GD21416@hp.com> References: <547FE757.9030701@wedontsleep.org> <1417698596.2112.1.camel@dovetail.localdomain> <20141204134935.GD21416@hp.com> Message-ID: <54806F36.20508@redhat.com> > +1, FWIW. > > > Alexis +1 This is similar to the no merge.py discussion. If something isn't covered by CI, it's going to grow stale pretty quickly. From apevec at gmail.com Thu Dec 4 14:29:44 2014 From: apevec at gmail.com (Alan Pevec) Date: Thu, 4 Dec 2014 15:29:44 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: <54806DE9.3030608@redhat.com> References: <54806DE9.3030608@redhat.com> Message-ID: > I'd like to request another exception for Neutron to avoid introducing > regression in hostname validation for DNS nameservers: > https://review.openstack.org/#/c/139061/ Nice solution for this regression, I think it's worthy last-minute exception. Should VMT also send this officially to the vendors as a followup to OSSA 2014-039 ? 
Cheers, Alan From nmarkov at mirantis.com Thu Dec 4 14:32:14 2014 From: nmarkov at mirantis.com (Nikolay Markov) Date: Thu, 4 Dec 2014 18:32:14 +0400 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> <547F3D48.5020401@gmail.com> <1A3C52DFCD06494D8528644858247BF017812DEB@EX10MBOX03.pnnl.gov> Message-ID: Ryan, can you please provide some more links on how these features what I described are implemented in Pecan? Some working examples, maybe? As far as I see now, each OpenStack project uses its own approach to integration with Pecan, so what will you recommend to look at? On Thu, Dec 4, 2014 at 4:10 PM, Roman Prykhodchenko wrote: > I?d rather suggest doing in several iteration by replacing several resources by Pecan?s implementations. > Doing that in one big patch-set will make reviewing very painful, so some bad things might be not noticed. > > >> On 04 Dec 2014, at 14:01, Igor Kalnitsky wrote: >> >> Ok, guys, >> >> It became obvious that most of us either vote for Pecan or abstain from voting. >> >> So I propose to stop fighting this battle (Flask vs Pecan) and start >> thinking about moving to Pecan. You know, there are many questions >> that need to be discussed (such as 'should we change API version' or >> 'should be it done iteratively or as one patchset'). >> >> - Igor >> >> On Wed, Dec 3, 2014 at 7:25 PM, Fox, Kevin M wrote: >>> Choosing the right instrument for the job in an open source community involves choosing technologies that the community is familiar/comfortable with as well, as it will allow you access to a greater pool of developers. >>> >>> With that in mind then, I'd add: >>> Pro Pecan, blessed by the OpenStack community, con Flask, not. 
>>> Kevin >>> ________________________________________ >>> From: Nikolay Markov [nmarkov at mirantis.com] >>> Sent: Wednesday, December 03, 2014 9:00 AM >>> To: OpenStack Development Mailing List (not for usage questions) >>> Subject: Re: [openstack-dev] [Fuel][Nailgun] Web framework >>> >>> I didn't participate in that discussion, but here are topics on Flask >>> cons from your link. I added some comments. >>> >>> - Cons >>> - db transactions a little trickier to manage, but possible # >>> what is trickier? Flask uses pure SQLAlchemy or a very thin wrapper >>> - JSON built-in but not XML # the only one I agree with, but does >>> Pecan have it? >>> - some issues, not updated in a while # last commit was 3 days ago >>> - No Python 3 support # full Python 3 support for a year or so already >>> - Not WebOb # can it even be considered as a con? >>> >>> I'm not trying to argue with you or community principles, I'm just >>> trying to choose the right instrument for the job. >>> >>> On Wed, Dec 3, 2014 at 7:41 PM, Jay Pipes wrote: >>>> On 12/03/2014 10:53 AM, Nikolay Markov wrote: >>>>>> >>>>>> However, the OpenStack community is also about a shared set of tools, >>>>>> development methodologies, and common perspectives. 
>>>> >>>> >>>> This conversation was had a long time ago, was thoroughly thought-out and >>>> discussed at prior summits and the ML: >>>> >>>> https://etherpad.openstack.org/p/grizzly-common-wsgi-frameworks >>>> https://etherpad.openstack.org/p/havana-common-wsgi >>>> >>>> I think it's unfair to suggest that the OpenStack community decided "to use >>>> its own unstable project instead of existing solution". >>>> >>>> >>>> Best, >>>> -jay >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> -- >>> Best regards, >>> Nick Markov >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Nick Markov From jason.dobies at redhat.com Thu Dec 4 14:35:50 2014 From: jason.dobies at redhat.com (Jay Dobies) Date: Thu, 04 Dec 2014 09:35:50 -0500 Subject: [openstack-dev] [TripleO] Meeting purpose In-Reply-To: <1417699460.2112.13.camel@dovetail.localdomain> References: <1417699460.2112.13.camel@dovetail.localdomain> Message-ID: <54807146.7070605@redhat.com> >> As an example of something that I think doesn't add much value in the >> meeting - DerekH has already been giving semi-regular CI/CD 
status >> reports via email. I'd like to make these weekly update emails >> regular, and take the update off the meeting agenda. I'm offering to >> share the load with him to make this easier to achieve. The Tuskar item is the same way. Not sure how that was added as an explicit agenda item, but I don't see why we'd call out to one particular project within TripleO. Anything we'd need eyes on should be covered when we chime in about specs or reviews needing eyes. >> Are there other things on our regular agenda that you feel aren't >> offering much value? > > I'd propose we axe the regular agenda entirely and let people promote > things in open discussion if they need to. In fact the regular agenda > often seems like a bunch of motions we go through... to the extent that > while the TripleO meeting was going on we've actually discussed what was > in my opinion the most important things in the normal #tripleo IRC > channel. Is getting through our review stats really that important!? I think the review stats would be better handled in e-mail format like Derek's CI status e-mails. We don't want the reviews to get out of hand, but the time spent pasting in the links and everyone looking at the stats during the meeting itself are wasteful. I could see bringing it up if it's becoming a problem, but the number crunching doesn't need to be part of the meeting. >> Are there things you'd like to see moved onto, or off, the agenda? > > Perhaps a streamlined agenda like this would work better: > > * Bugs This one is valuable and I like the idea of keeping it. > * Projects needing releases Is this even needed as well? It feels like for months now the answer is always "Yes, release the world". I think our cadence on those release can be slowed down as well (the last few releases I've done have had minimal churn at best), but I'm not trying to thread jack into that discussion. 
I bring it up because we could remove that from the meeting and do an entirely new model where we get the release volunteer through other means on a (potentially) less frequent release basis. > * Open Discussion (including important SPECs, CI, or anything needing > attention). ** Leader might have to drive this ** I like the idea of a specific Specs/Reviews section. It should be quick, but a specific point in time where people can #info a review they need eyes on. I think it appeals to my OCD to have this more structured than interspersed with other topics in open discussion. From majopela at redhat.com Thu Dec 4 14:40:45 2014 From: majopela at redhat.com (=?utf-8?Q?Miguel_=C3=81ngel_Ajo?=) Date: Thu, 4 Dec 2014 15:40:45 +0100 Subject: [openstack-dev] [neutron] Deprecating old security groups code / RPC. In-Reply-To: <54806D63.1030002@redhat.com> References: <54806D63.1030002@redhat.com> Message-ID: On Thursday, 4 de December de 2014 at 15:19, Ihar Hrachyshka wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > > On Thursday, 4 de December de 2014 at 15:06, Miguel ?ngel Ajo > > wrote: > > > > > > > > > > > During Juno, we introduced the enhanced security groups rpc > > > (security_groups_info_for_devices) instead of > > > (security_group_rules_for_devices), and the ipset functionality > > > to offload iptable chains a bit. > > > > > > > > > Here I propose to: > > > > > > 1) Remove the old security_group_info_for_devices, which was left > > > to ease operators upgrade path from I to J (allowing running old > > > openvswitch agents as we upgrade) > > > > > > Doing this we can cleanup the current iptables firewall driver a > > > bit from unused code paths. > > > > > > > > > > +1. > > > > > > > I suppose this would require a major RPC version bump. > > > > > > 2) Remove the option to disable ipset (now it?s enabled by > > > default and seems to be working without problems), and make it an > > > standard way to handle ?IP? 
groups from the iptables > > > perspective. > > > > > > > > Is ipset support present in all supported distributions? > It is from Red Hat perspective, not sure Ubuntu, and the others, I think Juno was targeted to ubuntu 14.04 only (which does have ipset kernel support and its tool). Ipset was in kernel since 2.4.x, but RHEL6/CentOS6 didn't ship the tools nor enable it in the kernel (AFAIK). > > > > > > > > > > Thoughts?, > > > Best regards, Miguel Ángel Ajo > > > > > > _______________________________________________ OpenStack-dev > > > mailing list OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > _______________________________________________ OpenStack-dev > > mailing list OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > > iQEcBAEBCgAGBQJUgG1jAAoJEC5aWaUY1u57aK4H/1G0R0NgURf1l7WCx27VqRDR > jdFlYzecMk2E6h84Fv5tJgGqAm6mGEFUrLf8MJ9+kDB33Syb+zvxJc9v6CvMw7br > o+Qjk4lbHiiko1W8kDmq+onjUDHExapTR1+PsSX0HmuEvwV8yrAm/VJyccAAiqB6 > XPrWG4Xft2zEp004/uT9jzJPeW4YhRNY84Sa2C1ghemzKn43QYlu8U3DfuDzfQFP > 2MjzTwdP1FfBIX0jhXHrMlnHGuuxAscL9v6DM7Np2Iro6ExXK1ry9ex4/NWbdcIY > sP9MkuA2wAMYE8pN1UM4LwSPg2rpEZEuwJfXyTohshcVHDoyPk81F4Q6R+ABPqM= > =xzY6 > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
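To see why the ipset approach "offloads iptables chains": without it, each remote security-group member becomes one iptables rule, so membership changes rewrite the chain; with ipset, the chain carries a single match-set rule and only the kernel set changes. A back-of-the-envelope model of the rule counts (an illustration, not the actual firewall driver logic):

```python
def rules_without_ipset(member_ips):
    # One iptables rule per remote member IP in the instance chain.
    return ["-s %s -j RETURN" % ip for ip in member_ips]

def rules_with_ipset(set_name, member_ips):
    # Membership lives in a kernel ipset; the chain needs a single
    # rule that matches against the whole set.
    ipset_entries = set(member_ips)
    chain = ["-m set --match-set %s src -j RETURN" % set_name]
    return chain, ipset_entries

members = ["10.0.0.%d" % i for i in range(1, 101)]
assert len(rules_without_ipset(members)) == 100
chain, entries = rules_with_ipset("sg-default-src", members)
assert len(chain) == 1 and len(entries) == 100
# Adding a member now means one `ipset add`, not rewriting the chain.
```

The set name `sg-default-src` is made up for the example; the point is only the 100-rules-versus-1-rule difference.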
URL: From rsblendido at suse.com Thu Dec 4 14:41:09 2014 From: rsblendido at suse.com (Rossella Sblendido) Date: Thu, 04 Dec 2014 15:41:09 +0100 Subject: [openstack-dev] [neutron] Deprecating old security groups code / RPC. In-Reply-To: <54806D63.1030002@redhat.com> References: <54806D63.1030002@redhat.com> Message-ID: <54807285.9020903@suse.com> On 12/04/2014 03:19 PM, Ihar Hrachyshka wrote: > Is ipset support present in all supported distributions? SUSE distributions support ipset. +1 for Miguel Angel's proposal cheers, Rossella From mestery at mestery.com Thu Dec 4 14:50:18 2014 From: mestery at mestery.com (Kyle Mestery) Date: Thu, 4 Dec 2014 08:50:18 -0600 Subject: [openstack-dev] [neutron] Deprecating old security groups code / RPC. In-Reply-To: References: <54806D63.1030002@redhat.com> Message-ID: On Thu, Dec 4, 2014 at 8:40 AM, Miguel Ángel Ajo wrote: > > > On Thursday, 4 de December de 2014 at 15:19, Ihar Hrachyshka wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > On Thursday, 4 de December de 2014 at 15:06, Miguel Ángel Ajo > wrote: > > > > During Juno, we introduced the enhanced security groups rpc > (security_groups_info_for_devices) instead of > (security_group_rules_for_devices), and the ipset functionality > to offload iptable chains a bit. > > > Here I propose to: > > 1) Remove the old security_group_info_for_devices, which was left > to ease operators upgrade path from I to J (allowing running old > openvswitch agents as we upgrade) > > Doing this we can cleanup the current iptables firewall driver a > bit from unused code paths. > > > +1. > > > I suppose this would require a major RPC version bump. > > 2) Remove the option to disable ipset (now it's enabled by > default and seems to be working without problems), and make it a > standard way to handle 'IP' groups from the iptables > perspective. > > > Is ipset support present in all supported distributions? 
> > > It is from Red Hat perspective, not sure Ubuntu, and the others, I think > Juno was targeted to ubuntu 14.04 only (which does have ipset kernel > support and it?s tool). > > Ipset was in kernel since 2.4.x, but RHEL6/Centos6 didn?t ship > the tools neither enabled it on kernel (AFAIK). > Once we verify Ubuntu's support for ipset (kernel and user tools), I'm +1 to this proposal. RHEL/CentOS/Fedora and SuSe look good. Thanks, Kyle > > > > > Thoughts?, > > Best regards, Miguel ?ngel Ajo > > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > > iQEcBAEBCgAGBQJUgG1jAAoJEC5aWaUY1u57aK4H/1G0R0NgURf1l7WCx27VqRDR > jdFlYzecMk2E6h84Fv5tJgGqAm6mGEFUrLf8MJ9+kDB33Syb+zvxJc9v6CvMw7br > o+Qjk4lbHiiko1W8kDmq+onjUDHExapTR1+PsSX0HmuEvwV8yrAm/VJyccAAiqB6 > XPrWG4Xft2zEp004/uT9jzJPeW4YhRNY84Sa2C1ghemzKn43QYlu8U3DfuDzfQFP > 2MjzTwdP1FfBIX0jhXHrMlnHGuuxAscL9v6DM7Np2Iro6ExXK1ry9ex4/NWbdcIY > sP9MkuA2wAMYE8pN1UM4LwSPg2rpEZEuwJfXyTohshcVHDoyPk81F4Q6R+ABPqM= > =xzY6 > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ryan.petrello at dreamhost.com Thu Dec 4 14:57:01 2014 From: ryan.petrello at dreamhost.com (Ryan Petrello) Date: Thu, 4 Dec 2014 09:57:01 -0500 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: 
References: <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> <547F3D48.5020401@gmail.com> <1A3C52DFCD06494D8528644858247BF017812DEB@EX10MBOX03.pnnl.gov> Message-ID: <20141204145701.GA58583@Ryans-MacBook-Pro.local> Nikolay, I'd recommend taking a look at the work I did for the Barbican project in converting their falcon-based API to pecan: https://review.openstack.org/#/c/89746 ...because it makes use of a pecan feature called "generic controllers" for defining RESTful interfaces: https://review.openstack.org/#/c/138887/ The approach that most people in the OpenStack community have taken (probably because Ceilometer did so first, and provided a canonical implementation), the use of `pecan.rest.RestController`, is something we added to pecan to support compatibility with TurboGears2. In my opinion, RestController is quite confusing, and overkill for what most APIs need - I recommend that people *not* use RestController due to its complexity, which mainly stems from TG2 compatibility. I'm also working today on improving pecan's documentation as a result of some of the questions I've seen floating around in this thread. On 12/04/14 06:32 PM, Nikolay Markov wrote: > Ryan, can you please provide some more links on how these features > what I described are implemented in Pecan? Some working examples, > maybe? As far as I see now, each OpenStack project uses its own > approach to integration with Pecan, so what will you recommend to look > at? > > On Thu, Dec 4, 2014 at 4:10 PM, Roman Prykhodchenko > wrote: > > I'd rather suggest doing it in several iterations by replacing several resources by Pecan's implementations. > > Doing that in one big patch-set will make reviewing very painful, so some bad things might be not noticed. > > > > > >> On 04 Dec 2014, at 14:01, Igor Kalnitsky wrote: > >> > >> Ok, guys, > >> > >> It became obvious that most of us either vote for Pecan or abstain from voting. 
> >> > >> So I propose to stop fighting this battle (Flask vs Pecan) and start > >> thinking about moving to Pecan. You know, there are many questions > >> that need to be discussed (such as 'should we change API version' or > >> 'should be it done iteratively or as one patchset'). > >> > >> - Igor > >> > >> On Wed, Dec 3, 2014 at 7:25 PM, Fox, Kevin M wrote: > >>> Choosing the right instrument for the job in an open source community involves choosing technologies that the community is familiar/comfortable with as well, as it will allow you access to a greater pool of developers. > >>> > >>> With that in mind then, I'd add: > >>> Pro Pecan, blessed by the OpenStack community, con Flask, not. > >>> > >>> Kevin > >>> ________________________________________ > >>> From: Nikolay Markov [nmarkov at mirantis.com] > >>> Sent: Wednesday, December 03, 2014 9:00 AM > >>> To: OpenStack Development Mailing List (not for usage questions) > >>> Subject: Re: [openstack-dev] [Fuel][Nailgun] Web framework > >>> > >>> I didn't participate in that discussion, but here are topics on Flask > >>> cons from your link. I added some comments. > >>> > >>> - Cons > >>> - db transactions a little trickier to manage, but possible # > >>> what is trickier? Flask uses pure SQLalchemy or a very thin wrapper > >>> - JSON built-in but not XML # the only one I agree with, but does > >>> Pecan have it? > >>> - some issues, not updated in a while # last commit was 3 days ago > >>> - No Python 3 support # full Python 3 support fro a year or so already > >>> - Not WebOb # can it even be considered as a con? > >>> > >>> I'm not trying to argue with you or community principles, I'm just > >>> trying to choose the right instrument for the job. > >>> > >>> On Wed, Dec 3, 2014 at 7:41 PM, Jay Pipes wrote: > >>>> On 12/03/2014 10:53 AM, Nikolay Markov wrote: > >>>>>> > >>>>>> However, the OpenStack community is also about a shared set of tools, > >>>>>> development methodologies, and common perspectives. 
> >>>>> > >>>>> > >>>>> I completely agree with you, Jay, but the same principle may be > >>>>> applied much wider. Why Openstack Community decided to use its own > >>>>> unstable project instead of existing solution, which is widely used in > >>>>> Python community? To avoid being a team player? Or, at least, why it's > >>>>> recommended way even if it doesn't provide the same features other > >>>>> frameworks have for a long time already? I mean, there is no doubt > >>>>> everyone would use stable and technically advanced tool, but imposing > >>>>> everyone to use it by force with a simple hope that it'll become > >>>>> better from this is usually a bad approach. > >>>> > >>>> > >>>> This conversation was had a long time ago, was thoroughly thought-out and > >>>> discussed at prior summits and the ML: > >>>> > >>>> https://etherpad.openstack.org/p/grizzly-common-wsgi-frameworks > >>>> https://etherpad.openstack.org/p/havana-common-wsgi > >>>> > >>>> I think it's unfair to suggest that the OpenStack community decided "to use > >>>> its own unstable project instead of existing solution". 
> >>>> > >>>> > >>>> Best, > >>>> -jay > >>>> > >>>> _______________________________________________ > >>>> OpenStack-dev mailing list > >>>> OpenStack-dev at lists.openstack.org > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >>> > >>> > >>> -- > >>> Best regards, > >>> Nick Markov > >>> > >>> _______________________________________________ > >>> OpenStack-dev mailing list > >>> OpenStack-dev at lists.openstack.org > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >>> _______________________________________________ > >>> OpenStack-dev mailing list > >>> OpenStack-dev at lists.openstack.org > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Best regards, > Nick Markov > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Ryan Petrello Senior Developer, DreamHost ryan.petrello at dreamhost.com From joe.gordon0 at gmail.com Thu Dec 4 15:03:45 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Thu, 4 Dec 2014 17:03:45 +0200 Subject: [openstack-dev] [nova] Kilo Project Priorities Message-ID: Hi all, Note: Cross posting with operators After a long double slot summit session, the nova team has come up with its list of efforts to prioritize for Kilo: http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html What does this mean? 
* This is a list of items we think are important to accomplish for Kilo * We are trying to prioritize work that fits under those categories. * If you would like to help with one of those priorities, please contact the owner. thoughts, comments and feedback are appreciated. best, Joe Gordon Summit etherpad: https://etherpad.openstack.org/p/kilo-nova-priorities -------------- next part -------------- An HTML attachment was scrubbed... URL: From slukjanov at mirantis.com Thu Dec 4 15:05:51 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Thu, 4 Dec 2014 19:05:51 +0400 Subject: [openstack-dev] [sahara] team meeting Dec 4 1400 UTC In-Reply-To: References: Message-ID: Minutes: http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-12-04-14.02.html Log: http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-12-04-14.02.log.html On Thu, Dec 4, 2014 at 3:49 PM, Sergey Lukjanov wrote: > Hi folks, > > We'll be having the Sahara team meeting in > #openstack-meeting-3 channel. > > Agenda: > https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings > > > http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20141204T14 > > NOTE: There is another time slot and meeting room. > > -- > Sincerely yours, > Sergey Lukjanov > Sahara Technical Lead > (OpenStack Data Processing) > Principal Software Engineer > Mirantis Inc. > -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kurt.r.taylor at gmail.com Thu Dec 4 15:22:52 2014 From: kurt.r.taylor at gmail.com (Kurt Taylor) Date: Thu, 4 Dec 2014 09:22:52 -0600 Subject: [openstack-dev] [third-party] Third-party CI documentation WG forming Message-ID: At the kilo summit third-party session we discussed the need for a complete overhaul of the third-party CI documentation. 
Several in attendance volunteered to help with this effort. To help get us started, I have created an etherpad to help organize the thoughts of those involved. https://etherpad.openstack.org/p/third-party-ci-documentation Please thoroughly review the existing documentation and also add yourself as a contact to this etherpad if you wish to participate. I have put some initial thoughts and links there, but please feel free to add to it. When we get a group signed up, we can decide how to proceed. Since the Infra manual virtual sprint was such a success, how would everyone feel about a 2 day third-party CI documentation virtual sprint? I think we could bang out a pretty nice doc in that timeframe. Thanks! Kurt Taylor (krtaylor) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tristan.cacqueray at enovance.com Thu Dec 4 15:33:40 2014 From: tristan.cacqueray at enovance.com (Tristan Cacqueray) Date: Thu, 04 Dec 2014 10:33:40 -0500 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: References: <54806DE9.3030608@redhat.com> Message-ID: <54807ED4.3050501@enovance.com> On 12/04/2014 09:29 AM, Alan Pevec wrote: >> I'd like to request another exception for Neutron to avoid introducing >> regression in hostname validation for DNS nameservers: >> https://review.openstack.org/#/c/139061/ > > Nice solution for this regression, I think it's worthy last-minute exception. > Should VMT also send this officially to the vendors as a followup to > OSSA 2014-039 ? > > Cheers, > Alan > Thanks Alan for bringing this up, Well the security fix did cause a regression so this fix should warrant an OSSA ERRATA. However, I now wonder what use-case requires an hostname as nameserver, considering this will cause an extra lookup for name resolution... Tristan -------------- next part -------------- A non-text attachment was scrubbed... 
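For reference on the hostname-validation regression discussed above: a hostname-only pattern, such as the minimal backport candidate Angus Lees quoted earlier in the thread, necessarily rejects nameservers given as plain IP addresses, which is exactly the behaviour change being fixed. A quick check (pattern re-joined across the mail's line wrapping; behaviour shown is that of the quoted regex, not of the final fix in review 139061):

```python
import re

# Minimal backport regex as quoted earlier in the thread (Angus Lees).
HOSTNAME_RE = re.compile(
    r'(?=^.{1,254}$)'
    r'(^(?:[a-zA-Z0-9_](?:[a-zA-Z0-9_-]{,61}[a-zA-Z0-9])\.)*'
    r'(?:[a-zA-Z]{2,})$)'
)

def looks_like_hostname(value):
    return HOSTNAME_RE.match(value) is not None

assert looks_like_hostname("example.com")
assert looks_like_hostname("123.com")      # all-digit labels pass, as intended
# A plain IPv4 nameserver fails a hostname-only check -- hence the
# regression when the same pattern was applied to dns_nameservers.
assert not looks_like_hostname("8.8.8.8")
```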
Name: signature.asc Type: application/pgp-signature Size: 538 bytes Desc: OpenPGP digital signature URL: From daniel.genin at jhuapl.edu Thu Dec 4 15:34:02 2014 From: daniel.genin at jhuapl.edu (Dan Genin) Date: Thu, 04 Dec 2014 10:34:02 -0500 Subject: [openstack-dev] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks In-Reply-To: References: Message-ID: <54807EEA.7030303@jhuapl.edu> Hello Dmitry, This is off the topic of original email but related to RBD. Looking through the code of LibvirtDriver._get_instance_disk_info() I noticed that it only returns data for disks with or but not so RBD backed instances would be incorrectly reported to have no ephemeral disks. I don't know if this is already handled somehow by RBD code but just wanted to bring it to your attention since you seem to be working on RBD support. Best regards, Dan On 07/16/2014 05:17 PM, Dmitry Borodaenko wrote: > This message has been archived. View the original item > > I've got a bit of good news and bad news about the state of landing > the rbd-ephemeral-clone patch series for Nova in Juno. > > The good news is that the first patch in the series > (https://review.openstack.org/91722 fixing a data loss inducing bug > with live migrations of instances with RBD backed ephemeral drives) > was merged yesterday. > > The bad news is that after 2 months of sitting in review queue and > only getting its first a +1 from a core reviewer on the spec approval > freeze day, the spec for the blueprint rbd-clone-image-handler > (https://review.openstack.org/91486) wasn't approved in time. Because > of that, today the blueprint was rejected along with the rest of the > commits in the series, even though the code itself was reviewed and > approved a number of times. 
> > Our last chance to avoid putting this work on hold for yet another > OpenStack release cycle is to petition for a spec freeze exception in > the next Nova team meeting: > https://wiki.openstack.org/wiki/Meetings/Nova > > If you're using Ceph RBD as backend for ephemeral disks in Nova and > are interested this patch series, please speak up. Since the biggest > concern raised about this spec so far has been lack of CI coverage, > please let us know if you're already using this patch series with > Juno, Icehouse, or Havana. > > I've put together an etherpad with a summary of where things are with > this patch series and how we got here: > https://etherpad.openstack.org/p/nova-ephemeral-rbd-clone-status > > Previous thread about this patch series on ceph-users ML: > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-March/028097.html > > -- > Dmitry Borodaenko > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3449 bytes Desc: S/MIME Cryptographic Signature URL: From mestery at mestery.com Thu Dec 4 15:52:58 2014 From: mestery at mestery.com (Kyle Mestery) Date: Thu, 4 Dec 2014 09:52:58 -0600 Subject: [openstack-dev] [neutron] Neutron Priorities for Kilo Message-ID: Note: Similar to Nova, cross-posting to operators. We've published the list of priorities for Neutron during the Kilo cycle, and it's available here [1]. The team has been discussing these since before the Paris Summit, and we're in the process of approving individual specs around these right now. I'm sending this email to let the broader community know what the priorities will be and to keep everyone aware. 
If you're interested in helping with some of these, please reach out to us in #openstack-neutron. Thanks! Kyle [1] http://specs.openstack.org/openstack/neutron-specs/priorities/kilo-priorities.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From vadivel.openstack at gmail.com Thu Dec 4 15:59:09 2014 From: vadivel.openstack at gmail.com (Vadivel Poonathan) Date: Thu, 4 Dec 2014 07:59:09 -0800 Subject: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository? In-Reply-To: References: Message-ID: Hi Kyle and all, Was there any conclusion in the design summit or the meetings afterward about splitting the vendor plugins/drivers from the mainstream neutron and documentation of out-of-tree plugins/drivers?... Thanks, Vad -- On Thu, Oct 23, 2014 at 11:27 AM, Kyle Mestery wrote: > On Thu, Oct 23, 2014 at 12:35 PM, Vadivel Poonathan > wrote: > > Hi Kyle and Anne, > > > > Thanks for the clarifications... understood and it makes sense. > > > > However, per my understanding, the drivers (aka plugins) are meant to be > > developed and supported by third-party vendors, outside of the OpenStack > > community, and they are supposed to work as plug-n-play... they are not > part > > of the core OpenStack development, nor any of its components. If that is > the > > case, then why should the OpenStack community include and maintain them as > part > > of it, for every release?... Wouldn't it be enough to limit the scope > with > > the plugin framework and built-in drivers such as LinuxBridge or OVS > etc?... > > not extending to commercial vendors?... (It is just a curious question, > > forgive me if I missed something and correct me!). > > > You haven't misunderstood anything, we're in the process of splitting > these things out, and this will be a prime focus of the Neutron design > summit track at the upcoming summit.
> > Thanks, > Kyle > > > At the same time, IMHO, there must be some reference or a page within the > > scope of OpenStack documentation (not necessarily the core docs, but some > > wiki page or reference link or so - as Anne suggested) to mention the > list > > of the drivers/plugins supported as of a given release and may be an > external > > link to know more details about the driver, if the link is provided by > > respective vendor. > > > > > > Anyway, besides my opinion, the wiki page similar to hypervisor driver > would > > be good for now at least, until the direction/policy level decision is > made > > to maintain out-of-tree plugins/drivers. > > > > > > Thanks, > > Vad > > -- > > > > > > > > > > On Thu, Oct 23, 2014 at 9:46 AM, Edgar Magana > > wrote: > >> > >> I second Anne's and Kyle's comments. Actually, I like very much the wiki > >> part to provide some visibility for out-of-tree plugins/drivers but not > into > >> the official documentation. > >> > >> Thanks, > >> > >> Edgar > >> > >> From: Anne Gentle > >> Reply-To: "OpenStack Development Mailing List (not for usage questions)" > >> > >> Date: Thursday, October 23, 2014 at 8:51 AM > >> To: Kyle Mestery > >> Cc: "OpenStack Development Mailing List (not for usage questions)" > >> > >> Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update > >> about new vendor plugin, but without code in repository? > >> > >> > >> > >> On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery > >> wrote: > >>> > >>> Vad: > >>> > >>> The third-party CI is required for your upstream driver. I think > >>> what's different from my reading of this thread is the question of > >>> what is the requirement to have a driver listed in the upstream > >>> documentation which is not in the upstream codebase. To my knowledge, > >>> we haven't done this. Thus, IMHO, we should NOT be utilizing upstream > >>> documentation to document drivers which are themselves not upstream.
> >>> When we split out the drivers which are currently upstream in neutron > >>> into a separate repo, they will still be upstream. So my opinion here > >>> is that if your driver is not upstream, it shouldn't be in the > >>> upstream documentation. But I'd like to hear others opinions as well. > >>> > >> > >> This is my sense as well. > >> > >> The hypervisor drivers are documented on the wiki, sometimes they're > >> in-tree, sometimes they're not, but the state of testing is documented > on > >> the wiki. I think we could take this approach for network and storage > >> drivers as well. > >> > >> https://wiki.openstack.org/wiki/HypervisorSupportMatrix > >> > >> Anne > >> > >>> > >>> Thanks, > >>> Kyle > >>> > >>> On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan > >>> wrote: > >>> > Kyle, > >>> > Gentle reminder... when you get a chance!.. > >>> > > >>> > Anne, > >>> > In case, if i need to send it to different group or email-id to reach > >>> > Kyle > >>> > Mestery, pls. let me know. Thanks for your help. > >>> > > >>> > Regards, > >>> > Vad > >>> > -- > >>> > > >>> > > >>> > On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan > >>> > wrote: > >>> >> > >>> >> Hi Kyle, > >>> >> > >>> >> Can you pls. comment on this discussion and confirm the requirements > >>> >> for > >>> >> getting out-of-tree mechanism_driver listed in the supported > >>> >> plugin/driver > >>> >> list of the Openstack Neutron docs. 
> >>> >> > >>> >> Thanks, > >>> >> Vad > >>> >> -- > >>> >> > >>> >> On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle > >>> >> wrote: > >>> >>> > >>> >>> > >>> >>> > >>> >>> On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan > >>> >>> wrote: > >>> >>>> > >>> >>>> Hi, > >>> >>>> > >>> >>>> >>>> On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton > >>> >>>> >>>> > >>> >>>> >>>> wrote: > >>> >>>> >>>>> > >>> >>>> >>>>> I think you will probably have to wait until after the > summit > >>> >>>> >>>>> so > >>> >>>> >>>>> we can > >>> >>>> >>>>> see the direction that will be taken with the rest of the > >>> >>>> >>>>> in-tree > >>> >>>> >>>>> drivers/plugins. It seems like we are moving towards > removing > >>> >>>> >>>>> all > >>> >>>> >>>>> of them so > >>> >>>> >>>>> we would definitely need a solution to documenting > out-of-tree > >>> >>>> >>>>> drivers as > >>> >>>> >>>>> you suggested. > >>> >>>> > >>> >>>> [Vad] while i 'm waiting for the conclusion on this subject, i 'm > >>> >>>> trying > >>> >>>> to setup the third-party CI/Test system and meet its requirements > to > >>> >>>> get my > >>> >>>> mechanism_driver listed in the Kilo's documentation, in parallel. > >>> >>>> > >>> >>>> Couple of questions/confirmations before i proceed further on this > >>> >>>> direction... > >>> >>>> > >>> >>>> 1) Is there anything more required other than the third-party > >>> >>>> CI/Test > >>> >>>> requirements ??.. like should I still need to go-through the > entire > >>> >>>> development process of submit/review/approval of the blue-print > and > >>> >>>> code of > >>> >>>> my ML2 driver which was already developed and in-use?... > >>> >>>> > >>> >>> > >>> >>> The neutron PTL Kyle Mestery can answer if there are any additional > >>> >>> requirements. > >>> >>> > >>> >>>> > >>> >>>> 2) Who is the authority to clarify and confirm the above (and how > do > >>> >>>> i > >>> >>>> contact them)?... 
> >>> >>> > >>> >>> > >>> >>> Elections just completed, and the newly elected PTL is Kyle > Mestery, > >>> >>> > >>> >>> > http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html. > >>> >>> > >>> >>>> > >>> >>>> > >>> >>>> Thanks again for your inputs... > >>> >>>> > >>> >>>> Regards, > >>> >>>> Vad > >>> >>>> -- > >>> >>>> > >>> >>>> On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle > >>> >>>> wrote: > >>> >>>>> > >>> >>>>> > >>> >>>>> > >>> >>>>> On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan > >>> >>>>> wrote: > >>> >>>>>> > >>> >>>>>> Agreed on the requirements of test results to qualify the vendor > >>> >>>>>> plugin to be listed in the upstream docs. > >>> >>>>>> Is there any procedure/infrastructure currently available for > this > >>> >>>>>> purpose?.. > >>> >>>>>> Pls. fwd any link/pointers on those info. > >>> >>>>>> > >>> >>>>> > >>> >>>>> Here's a link to the third-party testing setup information. > >>> >>>>> > >>> >>>>> http://ci.openstack.org/third_party.html > >>> >>>>> > >>> >>>>> Feel free to keep asking questions as you dig deeper. > >>> >>>>> Thanks, > >>> >>>>> Anne > >>> >>>>> > >>> >>>>>> > >>> >>>>>> Thanks, > >>> >>>>>> Vad > >>> >>>>>> -- > >>> >>>>>> > >>> >>>>>> On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki > >>> >>>>>> > >>> >>>>>> wrote: > >>> >>>>>>> > >>> >>>>>>> I agree with Kevin and Kyle. Even if we decided to use separate > >>> >>>>>>> tree > >>> >>>>>>> for neutron > >>> >>>>>>> plugins and drivers, they still will be regarded as part of the > >>> >>>>>>> upstream. > >>> >>>>>>> These plugins/drivers need to prove they are well integrated > with > >>> >>>>>>> Neutron master > >>> >>>>>>> in some way and gating integration proves it is well tested and > >>> >>>>>>> integrated. > >>> >>>>>>> I believe it is a reasonable assumption and requirement that a > >>> >>>>>>> vendor > >>> >>>>>>> plugin/driver > >>> >>>>>>> is listed in the upstream docs. 
This is a same kind of question > >>> >>>>>>> as > >>> >>>>>>> what vendor plugins > >>> >>>>>>> are tested and worth documented in the upstream docs. > >>> >>>>>>> I hope you work with the neutron team and run the third party > >>> >>>>>>> requirements. > >>> >>>>>>> > >>> >>>>>>> Thanks, > >>> >>>>>>> Akihiro > >>> >>>>>>> > >>> >>>>>>> On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery > >>> >>>>>>> > >>> >>>>>>> wrote: > >>> >>>>>>> > On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton > >>> >>>>>>> > > >>> >>>>>>> > wrote: > >>> >>>>>>> >>>The OpenStack dev and docs team dont have to worry about > >>> >>>>>>> >>> gating/publishing/maintaining the vendor specific > >>> >>>>>>> >>> plugins/drivers. > >>> >>>>>>> >> > >>> >>>>>>> >> I disagree about the gating part. If a vendor wants to have > a > >>> >>>>>>> >> link > >>> >>>>>>> >> that > >>> >>>>>>> >> shows they are compatible with openstack, they should be > >>> >>>>>>> >> reporting > >>> >>>>>>> >> test > >>> >>>>>>> >> results on all patches. A link to a vendor driver in the > docs > >>> >>>>>>> >> should signify > >>> >>>>>>> >> some form of testing that the community is comfortable with. > >>> >>>>>>> >> > >>> >>>>>>> > I agree with Kevin here. If you want to play upstream, in > >>> >>>>>>> > whatever > >>> >>>>>>> > form that takes by the end of Kilo, you have to work with the > >>> >>>>>>> > existing > >>> >>>>>>> > third-party requirements and team to take advantage of being > a > >>> >>>>>>> > part > >>> >>>>>>> > of > >>> >>>>>>> > things like upstream docs. 
> >>> >>>>>>> > > >>> >>>>>>> > Thanks, > >>> >>>>>>> > Kyle > >>> >>>>>>> > > >>> >>>>>>> >> On Mon, Oct 13, 2014 at 11:33 AM, Vadivel Poonathan > >>> >>>>>>> >> wrote: > >>> >>>>>>> >>> > >>> >>>>>>> >>> Hi, > >>> >>>>>>> >>> > >>> >>>>>>> >>> If the plan is to move ALL existing vendor specific > >>> >>>>>>> >>> plugins/drivers > >>> >>>>>>> >>> out-of-tree, then having a place-holder within the > OpenStack > >>> >>>>>>> >>> domain would > >>> >>>>>>> >>> suffice, where the vendors can list their plugins/drivers > >>> >>>>>>> >>> along > >>> >>>>>>> >>> with their > >>> >>>>>>> >>> documentation as how to install and use etc. > >>> >>>>>>> >>> > >>> >>>>>>> >>> The main Openstack Neutron documentation page can explain > the > >>> >>>>>>> >>> plugin > >>> >>>>>>> >>> framework (ml2 type drivers, mechanism drivers, serviec > >>> >>>>>>> >>> plugin > >>> >>>>>>> >>> and so on) > >>> >>>>>>> >>> and its purpose/usage etc, then provide a link to refer the > >>> >>>>>>> >>> currently > >>> >>>>>>> >>> supported vendor specific plugins/drivers for more details. > >>> >>>>>>> >>> That > >>> >>>>>>> >>> way the > >>> >>>>>>> >>> documentation will be accurate to what is "in-tree" and > limit > >>> >>>>>>> >>> the > >>> >>>>>>> >>> documentation of external plugins/drivers to have just a > >>> >>>>>>> >>> reference link. So > >>> >>>>>>> >>> its now vendor's responsibility to keep their driver's > >>> >>>>>>> >>> up-to-date and their > >>> >>>>>>> >>> documentation accurate. The OpenStack dev and docs team > dont > >>> >>>>>>> >>> have > >>> >>>>>>> >>> to worry > >>> >>>>>>> >>> about gating/publishing/maintaining the vendor specific > >>> >>>>>>> >>> plugins/drivers. > >>> >>>>>>> >>> > >>> >>>>>>> >>> The built-in drivers such as LinuxBridge or OpenVSwitch etc > >>> >>>>>>> >>> can > >>> >>>>>>> >>> continue > >>> >>>>>>> >>> to be "in-tree" and their documentation will be part of > main > >>> >>>>>>> >>> Neutron's docs. 
> >>> >>>>>>> >>> So the Neutron is guaranteed to work with built-in > >>> >>>>>>> >>> plugins/drivers as per > >>> >>>>>>> >>> the documentation and the user is informed to refer the > >>> >>>>>>> >>> "external > >>> >>>>>>> >>> vendor > >>> >>>>>>> >>> plug-in page" for additional/specific plugins/drivers. > >>> >>>>>>> >>> > >>> >>>>>>> >>> > >>> >>>>>>> >>> Thanks, > >>> >>>>>>> >>> Vad > >>> >>>>>>> >>> -- > >>> >>>>>>> >>> > >>> >>>>>>> >>> > >>> >>>>>>> >>> On Fri, Oct 10, 2014 at 8:10 PM, Anne Gentle > >>> >>>>>>> >>> > >>> >>>>>>> >>> wrote: > >>> >>>>>>> >>>> > >>> >>>>>>> >>>> > >>> >>>>>>> >>>> > >>> >>>>>>> >>>> On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton > >>> >>>>>>> >>>> wrote: > >>> >>>>>>> >>>>> > >>> >>>>>>> >>>>> I think you will probably have to wait until after the > >>> >>>>>>> >>>>> summit > >>> >>>>>>> >>>>> so we can > >>> >>>>>>> >>>>> see the direction that will be taken with the rest of the > >>> >>>>>>> >>>>> in-tree > >>> >>>>>>> >>>>> drivers/plugins. It seems like we are moving towards > >>> >>>>>>> >>>>> removing > >>> >>>>>>> >>>>> all of them so > >>> >>>>>>> >>>>> we would definitely need a solution to documenting > >>> >>>>>>> >>>>> out-of-tree > >>> >>>>>>> >>>>> drivers as > >>> >>>>>>> >>>>> you suggested. > >>> >>>>>>> >>>>> > >>> >>>>>>> >>>>> However, I think the minimum requirements for having a > >>> >>>>>>> >>>>> driver > >>> >>>>>>> >>>>> being > >>> >>>>>>> >>>>> documented should be third-party testing of Neutron > >>> >>>>>>> >>>>> patches. > >>> >>>>>>> >>>>> Otherwise the > >>> >>>>>>> >>>>> docs will become littered with a bunch of links to > >>> >>>>>>> >>>>> drivers/plugins with no > >>> >>>>>>> >>>>> indication of what actually works, which ultimately makes > >>> >>>>>>> >>>>> Neutron look bad. 
> >>> >>>>>>> >>>> > >>> >>>>>>> >>>> > >>> >>>>>>> >>>> This is my line of thinking as well, expanded to > "ultimately > >>> >>>>>>> >>>> makes > >>> >>>>>>> >>>> OpenStack docs look bad" -- a perception I want to avoid. > >>> >>>>>>> >>>> > >>> >>>>>>> >>>> Keep the viewpoints coming. We have a crucial balancing > act > >>> >>>>>>> >>>> ahead: users > >>> >>>>>>> >>>> need to trust docs and trust the drivers. Ultimately the > >>> >>>>>>> >>>> responsibility for > >>> >>>>>>> >>>> the docs is in the hands of the driver contributors so it > >>> >>>>>>> >>>> seems > >>> >>>>>>> >>>> those should > >>> >>>>>>> >>>> be on a domain name where drivers control publishing and > >>> >>>>>>> >>>> OpenStack docs are > >>> >>>>>>> >>>> not a gatekeeper, quality checker, reviewer, or publisher. > >>> >>>>>>> >>>> > >>> >>>>>>> >>>> We have documented the status of hypervisor drivers on an > >>> >>>>>>> >>>> OpenStack wiki > >>> >>>>>>> >>>> page. [1] To me, that type of list could be maintained on > >>> >>>>>>> >>>> the > >>> >>>>>>> >>>> wiki page > >>> >>>>>>> >>>> better than in the docs themselves. Thoughts? Feelings? > More > >>> >>>>>>> >>>> discussion, > >>> >>>>>>> >>>> please. And thank you for the responses so far. > >>> >>>>>>> >>>> Anne > >>> >>>>>>> >>>> > >>> >>>>>>> >>>> [1] > https://wiki.openstack.org/wiki/HypervisorSupportMatrix > >>> >>>>>>> >>>> > >>> >>>>>>> >>>>> > >>> >>>>>>> >>>>> > >>> >>>>>>> >>>>> On Fri, Oct 10, 2014 at 1:28 PM, Vadivel Poonathan > >>> >>>>>>> >>>>> wrote: > >>> >>>>>>> >>>>>> > >>> >>>>>>> >>>>>> Hi Anne, > >>> >>>>>>> >>>>>> > >>> >>>>>>> >>>>>> Thanks for your immediate response!... > >>> >>>>>>> >>>>>> > >>> >>>>>>> >>>>>> Just to clarify... I have developed and maintaining a > >>> >>>>>>> >>>>>> Neutron > >>> >>>>>>> >>>>>> plug-in > >>> >>>>>>> >>>>>> (ML2 mechanism_driver) since Grizzly and now it is > >>> >>>>>>> >>>>>> up-to-date > >>> >>>>>>> >>>>>> with Icehouse. 
> >>> >>>>>>> >>>>>> But it was never listed nor part of the main Openstack > >>> >>>>>>> >>>>>> releases. Now i would > >>> >>>>>>> >>>>>> like to have my plugin mentioned as "supported > >>> >>>>>>> >>>>>> plugin/mechanism_driver for > >>> >>>>>>> >>>>>> so and so vendor equipments" in the docs.openstack.org, > >>> >>>>>>> >>>>>> but > >>> >>>>>>> >>>>>> without having > >>> >>>>>>> >>>>>> the actual plugin code to be posted in the main > Openstack > >>> >>>>>>> >>>>>> GIT > >>> >>>>>>> >>>>>> repository. > >>> >>>>>>> >>>>>> > >>> >>>>>>> >>>>>> Reason is that I dont have plan/bandwidth to go thru the > >>> >>>>>>> >>>>>> entire process > >>> >>>>>>> >>>>>> of new plugin blue-print/development/review/testing etc > as > >>> >>>>>>> >>>>>> required by the > >>> >>>>>>> >>>>>> Openstack development community. Bcos this is already > >>> >>>>>>> >>>>>> developed, tested and > >>> >>>>>>> >>>>>> released to some customers directly. Now I just want to > >>> >>>>>>> >>>>>> get it > >>> >>>>>>> >>>>>> to the > >>> >>>>>>> >>>>>> official Openstack documentation, so that more people > can > >>> >>>>>>> >>>>>> get > >>> >>>>>>> >>>>>> this and use. > >>> >>>>>>> >>>>>> > >>> >>>>>>> >>>>>> The plugin package is made available to public from > Ubuntu > >>> >>>>>>> >>>>>> repository > >>> >>>>>>> >>>>>> along with necessary documentation. So people can > directly > >>> >>>>>>> >>>>>> get > >>> >>>>>>> >>>>>> it from > >>> >>>>>>> >>>>>> Ubuntu repository and use it. All i need is to get > listed > >>> >>>>>>> >>>>>> in > >>> >>>>>>> >>>>>> the > >>> >>>>>>> >>>>>> docs.openstack.org so that people knows that it exists > and > >>> >>>>>>> >>>>>> can > >>> >>>>>>> >>>>>> be used with > >>> >>>>>>> >>>>>> any Openstack. > >>> >>>>>>> >>>>>> > >>> >>>>>>> >>>>>> Pls. confrim whether this is something possible?... > >>> >>>>>>> >>>>>> > >>> >>>>>>> >>>>>> Thanks again!.. 
> >>> >>>>>>> >>>>>> > >>> >>>>>>> >>>>>> Vad > >>> >>>>>>> >>>>>> -- > >>> >>>>>>> >>>>>> > >>> >>>>>>> >>>>>> On Fri, Oct 10, 2014 at 12:18 PM, Anne Gentle > >>> >>>>>>> >>>>>> > >>> >>>>>>> >>>>>> wrote: > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> On Fri, Oct 10, 2014 at 2:11 PM, Vadivel Poonathan > >>> >>>>>>> >>>>>>> wrote: > >>> >>>>>>> >>>>>>>> > >>> >>>>>>> >>>>>>>> Hi, > >>> >>>>>>> >>>>>>>> > >>> >>>>>>> >>>>>>>> How to include a new vendor plug-in (aka > >>> >>>>>>> >>>>>>>> mechanism_driver in > >>> >>>>>>> >>>>>>>> ML2 > >>> >>>>>>> >>>>>>>> framework) into the Openstack documentation?.. In > other > >>> >>>>>>> >>>>>>>> words, is it > >>> >>>>>>> >>>>>>>> possible to include a new plug-in in the Openstack > >>> >>>>>>> >>>>>>>> documentation page > >>> >>>>>>> >>>>>>>> without having the actual plug-in code as part of the > >>> >>>>>>> >>>>>>>> Openstack neutron > >>> >>>>>>> >>>>>>>> repository?... The actual plug-in is posted and > >>> >>>>>>> >>>>>>>> available > >>> >>>>>>> >>>>>>>> for the public to > >>> >>>>>>> >>>>>>>> download as Ubuntu package. But i need to mention > >>> >>>>>>> >>>>>>>> somewhere > >>> >>>>>>> >>>>>>>> in the Openstack > >>> >>>>>>> >>>>>>>> documentation that this new plugin is available for > the > >>> >>>>>>> >>>>>>>> public to use along > >>> >>>>>>> >>>>>>>> with its documentation. > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> We definitely want you to include pointers to vendor > >>> >>>>>>> >>>>>>> documentation in > >>> >>>>>>> >>>>>>> the OpenStack docs, but I'd prefer make sure they're > gate > >>> >>>>>>> >>>>>>> tested before they > >>> >>>>>>> >>>>>>> get listed on docs.openstack.org. Drivers change > enough > >>> >>>>>>> >>>>>>> release-to-release > >>> >>>>>>> >>>>>>> that it's difficult to keep up maintenance. 
> >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> Lately I've been talking to driver contributors > >>> >>>>>>> >>>>>>> (hypervisor, > >>> >>>>>>> >>>>>>> storage, > >>> >>>>>>> >>>>>>> networking) about the out-of-tree changes possible. I'd > >>> >>>>>>> >>>>>>> like > >>> >>>>>>> >>>>>>> to encourage > >>> >>>>>>> >>>>>>> even out-of-tree drivers to get listed, but to store > >>> >>>>>>> >>>>>>> their > >>> >>>>>>> >>>>>>> main documents > >>> >>>>>>> >>>>>>> outside of docs.openstack.org, if they are > gate-tested. > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> Anyone have other ideas here? > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> Looping in the OpenStack-docs mailing list also. > >>> >>>>>>> >>>>>>> Anne > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>>> > >>> >>>>>>> >>>>>>>> Pls. provide some insights into whether it is > >>> >>>>>>> >>>>>>>> possible?.. > >>> >>>>>>> >>>>>>>> and any > >>> >>>>>>> >>>>>>>> further info on this?.. > >>> >>>>>>> >>>>>>>> > >>> >>>>>>> >>>>>>>> Thanks, > >>> >>>>>>> >>>>>>>> > >>> >>>>>>> >>>>>>>> Vad > >>> >>>>>>> >>>>>>>> > >>> >>>>>>> >>>>>>>> -- > >>> >>>>>>> >>>>>>>> > >>> >>>>>>> >>>>>>>> > >>> >>>>>>> >>>>>>>> _______________________________________________ > >>> >>>>>>> >>>>>>>> OpenStack-dev mailing list > >>> >>>>>>> >>>>>>>> OpenStack-dev at lists.openstack.org > >>> >>>>>>> >>>>>>>> > >>> >>>>>>> >>>>>>>> > >>> >>>>>>> >>>>>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> >>>>>>> >>>>>>>> > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> _______________________________________________ > >>> >>>>>>> >>>>>>> OpenStack-dev mailing list > >>> >>>>>>> >>>>>>> OpenStack-dev at lists.openstack.org > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> >>>>>>> >>>>>>> > >>> >>>>>>> >>>>>> > >>> >>>>>>> >>>>>> > >>> >>>>>>> >>>>>> _______________________________________________ > >>> 
From Neil.Jerram at metaswitch.com Thu Dec 4 16:00:08 2014 From: Neil.Jerram at metaswitch.com (Neil Jerram) Date: Thu, 4 Dec 2014 16:00:08 +0000 Subject: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup? In-Reply-To: (Kevin Benton's message of "Wed, 3 Dec 2014 14:44:57 -0800") References: <87ppc090ne.fsf@metaswitch.com> Message-ID: <87d27z8kxj.fsf@metaswitch.com> Kevin Benton writes: > What you are proposing sounds very reasonable. If I understand > correctly, the idea is to make Nova just create the TAP device and get > it attached to the VM and leave it 'unplugged'. This would work well > and might eliminate the need for some drivers. I see no reason to > block adding a VIF type that does this. I was actually floating a slightly more radical option than that: the idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does absolutely _nothing_, not even create the TAP device. (My pending Nova spec at https://review.openstack.org/#/c/130732/ proposes VIF_TYPE_TAP, for which Nova _does_ create the TAP device, but then does nothing else - i.e. exactly what you've described just above. But in this email thread I was musing about going even further, towards providing a platform for future networking experimentation where Nova isn't involved at all in the networking setup logic.) > However, there is a good reason that the VIF type for some OVS-based > deployments requires this type of setup. The vSwitches are connected to > a central controller using openflow (or ovsdb) which configures > forwarding rules/etc. Therefore they don't have any agents running on > the compute nodes from the Neutron side to perform the step of getting > the interface plugged into the vSwitch in the first place. For this > reason, we will still need both types of VIFs. Thanks.
I'm not advocating that existing VIF types should be removed, though - rather wondering if similar functionality could in principle be implemented without Nova VIF plugging - or what that would take. For example, suppose someone came along and wanted to implement a new OVS-like networking infrastructure? In principle could they do that without having to enhance the Nova VIF driver code? I think at the moment they couldn't, but that they would be able to if VIF_TYPE_NOOP (or possibly VIF_TYPE_TAP) was already in place. In principle I think it would then be possible for the new implementation to specify VIF_TYPE_NOOP to Nova, and to provide a Neutron agent that does the kind of configuration and vSwitch plugging that you've described above. Does that sound correct, or am I missing something else? >> 1. When the port is created in the Neutron DB, and handled (bound > etc.) > by the plugin and/or mechanism driver, the TAP device name is already > present at that time. > > This is backwards. The tap device name is derived from the port ID, so > the port has already been created in Neutron at that point. It is just > unbound. The steps are roughly as follows: Nova calls neutron for a > port, Nova creates/plugs VIF based on port, Nova updates port on > Neutron, Neutron binds the port and notifies agent/plugin/whatever to > finish the plumbing, Neutron notifies Nova that port is active, Nova > unfreezes the VM. > > None of that should be affected by what you are proposing. The only > difference is that your Neutron agent would also perform the > 'plugging' operation. Agreed - but thanks for clarifying the exact sequence of events. I wonder if what I'm describing (either VIF_TYPE_NOOP or VIF_TYPE_TAP) might fit as part of the "Nova-network/Neutron Migration" priority that's just been announced for Kilo. I'm aware that a part of that priority is concerned with live migration, but perhaps it could also include the goal of future networking work not having to touch Nova code?
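For anyone following along, the boot-time sequence Kevin describes above can be sketched as a toy state machine. This is purely illustrative: `FakeNeutron` and all names below are invented stand-ins, not the real Nova/Neutron APIs; the only detail borrowed from reality is that the tap device name is derived from the start of the port ID.

```python
# Illustrative-only walk-through of the port workflow described above.
# None of this is real Nova or Neutron code.

class Port:
    def __init__(self, port_id):
        self.id = port_id
        self.status = "DOWN"          # port starts out inactive...
        self.vif_type = "unbound"     # ...and unbound
        self.host_id = None

class FakeNeutron:
    """Stand-in for the Neutron API plus its agent/controller."""

    def create_port(self):
        # Step 1: Nova asks Neutron for a port; the tap device name
        # will be derived from the returned port's ID.
        return Port(port_id="3f8c2a1e-0000-4000-8000-deadbeef0000")

    def update_port(self, port, host_id, vif_type):
        # Steps 3-4: Nova updates the port with its host; Neutron binds
        # the port and asks its agent (or controller) to finish the
        # plumbing, then reports the port ACTIVE back to Nova.
        port.host_id = host_id
        port.vif_type = vif_type
        port.status = "ACTIVE"
        return port

neutron = FakeNeutron()
port = neutron.create_port()
tap_name = "tap" + port.id[:11]       # tap name derived from port ID
# Step 2 would happen here: Nova creates/plugs the VIF -- or, with a
# hypothetical VIF_TYPE_NOOP, deliberately does nothing at all.
port = neutron.update_port(port, host_id="compute-1", vif_type="noop")
# Step 5: only once the port is ACTIVE does Nova unfreeze the VM.
print(tap_name, port.status)
```

The point of the sketch is just that an "unplugged" VIF type only moves the plugging work from step 2 (Nova) into the agent behind step 3-4 (Neutron); the ordering of the handshake doesn't change.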
Regards, Neil From brian.haley at hp.com Thu Dec 4 16:06:19 2014 From: brian.haley at hp.com (Brian Haley) Date: Thu, 04 Dec 2014 11:06:19 -0500 Subject: [openstack-dev] [neutron] Deprecating old security groups code / RPC. In-Reply-To: References: <54806D63.1030002@redhat.com> Message-ID: <5480867B.8020701@hp.com> On 12/04/2014 09:50 AM, Kyle Mestery wrote: > Is ipset support present in all supported distributions? > > > It is from the Red Hat perspective, not sure about Ubuntu and the others, I think > Juno was targeted to Ubuntu 14.04 only (which does have ipset kernel > support and its tools). > > Ipset has been in the kernel since 2.4.x, but RHEL6/CentOS6 didn't ship the tools > nor enable it in the kernel (AFAIK). > >> Once we verify Ubuntu's support for ipset (kernel and user tools), I'm +1 >> to this proposal. RHEL/CentOS/Fedora and SuSe look good. There is ipset support in at least 12.04 and later (packages site shows 10.04 too), so I think Ubuntu is good to go. -Brian From mzoeller at de.ibm.com Thu Dec 4 16:06:48 2014 From: mzoeller at de.ibm.com (Markus Zoeller) Date: Thu, 4 Dec 2014 17:06:48 +0100 Subject: [openstack-dev] [devstack] installation ERROR ImportError: No module named oslo_concurrency Message-ID: > Hello Frank, > > Sometimes, I've seen that modules are missing even in the stable > branches, try to install the module independently and this should > fix your issue.
> > Best regards, Venu > > > > On Thu, Dec 4, 2014 at 11:58 AM, Du Jun wrote: > > > > Hi all, > > > > I think I've found a bug in the latest devstack. When I attempt to > > install devstack, I get the error message about "ImportError: > > No module named oslo_concurrency". > > > > [... stacktrace here ...] > > > > In fact, I can find > > > > oslo.concurrency>=0.1.0 # Apache-2.0 > > > > in the global-requirements.txt. So, I wonder how I can fix the > > dependency? I have executed ./clean.sh and ./unstack.sh and then > > ./stack.sh. But, it doesn't work. > > > > -- Regards, Frank I face something similar with a devstack which is switched to branch `stable/juno`. For cloning `glance_store` devstack uses the `master` branch. For cloning `global requirements` devstack uses the `stable/juno` branch. `glance_store` (master) needs the `oslo.concurrency` package. This package is *not* listed in `global requirements` (stable/juno). The interesting traces are: | Cloning into '/opt/stack/glance_store'... | + cd /opt/stack/glance_store | + git checkout master | Already on 'master' [...] | Cloning into '/opt/stack/requirements'... | + cd /opt/stack/requirements | + git checkout stable/juno | Switched to a new branch 'stable/juno' [...] | + cd /opt/stack/requirements | + python update.py /opt/stack/glance_store | _sync_requirements_file({'glance-store': [...] [...] | Syncing /opt/stack/glance_store/requirements.txt | 'oslo.concurrency' is not in global-requirements.txt | + exit_trap From sean at dague.net Thu Dec 4 16:08:06 2014 From: sean at dague.net (Sean Dague) Date: Thu, 04 Dec 2014 11:08:06 -0500 Subject: [openstack-dev] [all] OpenStack Bootstrapping Hour - Keystone - Friday Dec 5th 20:00 UTC (15:00 Americas/New_York) Message-ID: <548086E6.4070407@dague.net> Sorry for the late announce, too much turkey and pie.... This Friday, Dec 5th, we'll be talking with Steve Martinelli and David Stanek about Keystone Authentication in OpenStack. The keys to the kingdom, Keystone!
For this bootstrapping hour Steve and David will talk about using Keystone, OpenStack's Identity Service. We'll dive into the code for examples on Authentication plugins, Token providers, and Identity backends. More details here - https://wiki.openstack.org/wiki/BootstrappingHour/Authentication_Flows#Authentication_Flows_in_OpenStack The YouTube live stream will be at: http://www.youtube.com/watch?v=Th61TgUVnzU Hope to see you there! -Sean -- Sean Dague http://dague.net From cmsj at tenshu.net Thu Dec 4 16:16:53 2014 From: cmsj at tenshu.net (Chris Jones) Date: Thu, 4 Dec 2014 16:16:53 +0000 Subject: [openstack-dev] [TripleO] Do we want to remove Nova-bm support? In-Reply-To: <547FE757.9030701@wedontsleep.org> References: <547FE757.9030701@wedontsleep.org> Message-ID: <25649A82-3CBA-4F8B-B0CE-BFAB07366AE2@tenshu.net> Hi AFAIK there are no products out there using tripleo & nova-bm, but maybe a quick post to -operators asking whether this would ruin anyone's day would be good? Cheers, -- Chris Jones > On 4 Dec 2014, at 04:47, Steve Kowalik wrote: > > Hi all, > > I'm becoming increasingly concerned about all of the code paths > in tripleo-incubator that check $USE_IRONIC -eq 0 -- that is, use > nova-baremetal rather than Ironic. We do not check nova-bm support in > CI, haven't for at least a month, and I'm concerned that parts of it > may be slowly bit-rotting. > > I think our documentation is fairly clear that nova-baremetal is > deprecated and Ironic is the way forward, and I know it flies in the > face of backwards-compatibility, but do we want to bite the bullet and > remove nova-bm support? > > Cheers, > -- > Steve > Oh, in case you got covered in that Repulsion Gel, here's some advice > the lab boys gave me: [paper rustling] DO NOT get covered in the > Repulsion Gel.
> - Cave Johnson, CEO of Aperture Science > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From asahlin at linux.vnet.ibm.com Thu Dec 4 17:00:13 2014 From: asahlin at linux.vnet.ibm.com (Aaron Sahlin) Date: Thu, 04 Dec 2014 11:00:13 -0600 Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon In-Reply-To: References: <547F8EFB.4030302@linux.vnet.ibm.com> Message-ID: <5480931D.7040302@linux.vnet.ibm.com> The more I think on it, the more I agree with Rob Cresswell's comment "While clicking off the modal is relatively easy to do by accident, hitting Esc or 'X' are fairly distinct actions." While there is nothing wrong with warning the user that they will lose data after they clicked 'X' or pressed Esc, that was a deliberate action by them, so we might be over-engineering this. My vote is to just keep it simple and go with changing the default behavior to 'static'. On 12/4/2014 8:08 AM, Timur Sufiev wrote: > Hi Aaron, > > The only way to combine the 2 aforementioned solutions I've been thinking > of is to implement David's solution as the 4th option (in addition to > true|false|static) on a per-form basis, leaving the possibility to > change the default value in configs. I guess this sort of combining > would be as simple as just putting both patches together (perhaps, > changing David's js-code a bit for catching the 'click' event - to work > only for the modal forms with [data-modal-backdrop='confirm']). > > On Thu, Dec 4, 2014 at 1:30 AM, Aaron Sahlin > > wrote: > > I would be happy with either of the two proposed solutions (both > improvements over what we have now). > Any thoughts on combining them? Only close if Esc or 'X' is > clicked, but also warn them if data was entered. > > > > On 12/3/2014 7:21 AM, Rob Cresswell (rcresswe) wrote: >> +1 to changing the behaviour to 'static'.
Modal inside a modal is >> potentially slightly more useful, but looks messy and >> inconsistent, which I think outweighs the functionality. >> >> Rob >> >> >> On 2 Dec 2014, at 12:21, Timur Sufiev > > wrote: >> >>> Hello, Horizoneers and UX-ers! >>> >>> The default behavior of modals in Horizon (defined in turn by >>> Bootstrap defaults) regarding their closing is to simply close >>> the modal once the user clicks somewhere outside of it (on the >>> backdrop element below and around the modal). This is not very >>> convenient for the modal forms containing a lot of input - when >>> one is closed without a warning, all the data the user has already >>> provided is lost. Keeping this in mind, I've made a patch [1] >>> changing the default Bootstrap 'modal_backdrop' parameter to >>> 'static', which means that forms are not closed once the user >>> clicks on a backdrop, while it's still possible to close them by >>> pressing 'Esc' or clicking on the 'X' link at the top right >>> border of the form. Also the patch [1] allows customizing this >>> behavior (between 'true'-current one/'false' - no backdrop >>> element/'static') on a per-form basis. >>> >>> What I didn't know at the moment I was uploading my patch is >>> that David Lyle had been working on a similar solution [2] some >>> time ago. It's a bit more elaborate than mine: if the user has >>> already filled some inputs in the form, then a confirmation >>> dialog is shown, otherwise the form is silently dismissed as it >>> happens now. >>> >>> The whole point of writing about this in the ML is to gather >>> opinions on which approach is better: >>> * stick to the current behavior; >>> * change the default behavior to 'static'; >>> * use David's solution with a confirmation dialog (once it'll >>> be rebased to the current codebase). >>> >>> What do you think? >>> >>> [1] https://review.openstack.org/#/c/113206/ >>> [2] https://review.openstack.org/#/c/23037/ >>> >>> P.S.
I remember that I promised to write this email a week ago, >>> but better late than never :). >>> >>> -- >>> Timur Sufiev >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.belanger at polybeacon.com Thu Dec 4 17:06:21 2014 From: paul.belanger at polybeacon.com (Paul Belanger) Date: Thu, 4 Dec 2014 12:06:21 -0500 Subject: [openstack-dev] Help needed to resolve "ImportError: No module named urllib" In-Reply-To: <3F78EB73E777F14187D60494F7A708F04BA7AA78@G4W3227.americas.hpqcorp.net> References: <3F78EB73E777F14187D60494F7A708F04BA7AA78@G4W3227.americas.hpqcorp.net> Message-ID: On Mon, Oct 6, 2014 at 6:58 AM, S M, Praveen Kumar wrote: > Hello All, > > > > We are trying to integrate openstack with java. > > > > While doing so we are facing "ImportError: No module named urllib" > > > > When we import from python console it works fine. > > > > However when we are trying to import using jython 2.7 interpreter we get the below error.
> > Caused by: Traceback (most recent call last): > > from oslo import messaging > > File "/site-packages/oslo/messaging/__init__.py", line 18, in > > from oslo.messaging.notify import * > > File "/site-packages/oslo/messaging/notify/__init__.py", line 25, in > > > from oslo.messaging.notify.logger import * > > File "/site-packages/oslo/messaging/notify/logger.py", line 21, in > > > from oslo.messaging import transport > > File "/site-packages/oslo/messaging/transport.py", line 31, in > > from six.moves.urllib import parse > > ImportError: No module named urllib > > > > > > > > We are using the Python package six, version 1.8. > > > > > > Please let me know your inputs on this. > Did you ever resolve this? I'm seeing this ImportError too. -- Paul Belanger | PolyBeacon, Inc. Jabber: paul.belanger at polybeacon.com | IRC: pabelanger (Freenode) Github: https://github.com/pabelanger | Twitter: https://twitter.com/pabelanger From jp at jamezpolley.com Thu Dec 4 17:17:29 2014 From: jp at jamezpolley.com (James Polley) Date: Thu, 4 Dec 2014 18:17:29 +0100 Subject: [openstack-dev] [TripleO] Alternate meeting time In-Reply-To: <54803A17.3040403@redhat.com> References: <547DD0A2.3030102@redhat.com> <547DD6E1.8070103@redhat.com> <547DD9F5.6010108@redhat.com> <54803A17.3040403@redhat.com> Message-ID: On Thu, Dec 4, 2014 at 11:40 AM, marios wrote: > On 04/12/14 11:40, James Polley wrote: > > Just taking a look at http://doodle.com/27ffgkdm5gxzr654 again - we've > > had 10 people respond so far. The winning time so far is Monday 2100UTC > > - 7 "yes" and one "If I have to". > > for me it currently shows 1200 UTC as the preferred time. > You're the 11th responder :) And yes, 1200/1400/1500 are now all leading with 8/0/3. > > So to be clear, we are voting here for the alternate meeting. The > 'original' meeting is at 1900UTC. If in fact 2100UTC ends up being the > most popular, what would be the point of an alternating meeting that is > only 2 hours apart in time?
> To me the point would be to get more people able to come along to the meeting. But if the difference *was* that small, I'd be wanting to ask if changing the format or content of the meeting could convince more people to join the 1900UTC meeting - I think that having just one meeting for the whole team would be preferable, if we could manage it. But at present, it looks like if we want to maximise attendance, we should be focusing on European early afternoon. That unfortunately means that it's going to be very hard for those of us in Australia/New Zealand/China/Japan to make it - 1400UTC is 1am Sydney, 10pm Beijing. It's 7:30pm New Delhi, which might be doable, but I don't know of anyone working there who would regularly attend. > > > > > > > ... but the 2 people who can't make that time work in UTC or CET. > > Finding a time that includes those people rules out people who work in > > Eastern Australia and New Zealand. Purely in terms of getting the > > biggest numbers in, Monday 2100UTC seems like the most workable time so > far. > > > > > > > On Tue, Dec 2, 2014 at 4:25 PM, Derek Higgins > > wrote: > > > > On 02/12/14 15:12, Giulio Fidente wrote: > > > On 12/02/2014 03:45 PM, Derek Higgins wrote: > > >> On 02/12/14 14:10, James Polley wrote: > > >>> Months ago, I pushed for us to alternate meeting times to > > something that > > >>> was friendlier to me, so we started doing alternate weeks at > > 0700UTC. > > >>> That worked well for me, but wasn't working so well for a few > > people in > > >>> Europe, so we decided to give 0800UTC a try. Then DST changes > > happened, > > >>> and wiki pages got out of sync, and there was confusion about > > what time > > >>> the meeting is at.. > > >>> > > >>> The alternate meeting hasn't been very well attended for the > last ~3 > > >>> meetings. Partly I think that's due to summit and travel plans, > > but it > > >>> seems like the 0800UTC time doesn't work very well for quite a > few > > >>> people. 
> > >>> > > >>> So, instead of trying things at random, I've > > >>> created > > https://etherpad.openstack.org/p/tripleo-alternate-meeting-time > > >>> as a starting point for figuring out what meeting time might > > work well > > >>> for the most people. Obviously the world is round, and people > have > > >>> different schedules, and we're never going to get a meeting time > > that > > >>> works well for everyone - but it'd be nice to try to maximise > > attendance > > >>> (and minimise inconvenience) as much as we can. > > >>> > > >>> If you regularly attend, or would like to attend, the meeting, > > please > > >>> take a moment to look at the etherpad to register your vote for > > which > > >>> time works best for you. There's even a section for you to cast > your > > >>> vote if the UTC1900 meeting (aka the "main" or "US-Friendly" > > meeting) > > >>> works better for you! > > >> > > >> > > >> Can I suggest an alternative data gathering method, I've put each > > hour > > >> in a week in a poll, for each slot you have 3 options > > > > > > I think it is great, but would be even better if we could trim it > to > > > just a *single* day and once we agreed on the timeframe, we decide > on > > > the day as that probably won't count much so long as it is a > weekday I > > > suppose > > > > I think leaving the whole week in there is better, some people may > have > > different schedules on different weekdays, me for example ;-) > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne at openstack.org Thu Dec 4 17:18:32 2014 From: anne at openstack.org (Anne Gentle) Date: Thu, 4 Dec 2014 11:18:32 -0600 Subject: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository? In-Reply-To: References: Message-ID: Hi Vadivel, We do have a blueprint in the docs-specs repo under review for driver documentation and I'd like to get your input. https://review.openstack.org/#/c/133372/ Here's a relevant excerpt: The documentation team will fully document the reference drivers as specified below and just add short sections for other drivers. Guidelines for drivers that will be documented fully in the OpenStack documentation: * The complete solution must be open source and use standard hardware * The driver must be part of the respective OpenStack repository * The driver is considered one of the reference drivers For documentation of other drivers, the following guidelines apply: * The Configuration Reference will contain a small section for each driver, see below for details * Only drivers are covered that are contained in the official OpenStack project repository for drivers (for example in the main project repository or the official "third party" repository). With this policy, the docs team will document in their guides the following: * For cinder: volume drivers: document LVM only (TBD later: Samba, glusterfs); backup drivers: document swift (TBD later: ceph) * For glance: Document local storage, cinder, and swift as backends * For neutron: document ML2 plug-in with the mechanisms drivers OpenVSwitch and LinuxBridge * For nova: document KVM (mostly), send Xen open source call for help * For sahara: apache hadoop * For trove: document all supported Open Source database engines like MySQL. 
Let us know in the review itself if this answers your question about third-party drivers not in an official repository. Thanks, Anne On Thu, Dec 4, 2014 at 9:59 AM, Vadivel Poonathan < vadivel.openstack at gmail.com> wrote: > Hi Kyle and all, > > Was there any conclusion in the design summit or the meetings afterward > about splitting the vendor plugins/drivers from the mainstream neutron and > documentation of out-of-tree plugins/drivers?... > > Thanks, > Vad > -- > > > On Thu, Oct 23, 2014 at 11:27 AM, Kyle Mestery > wrote: >> On Thu, Oct 23, 2014 at 12:35 PM, Vadivel Poonathan >> wrote: >> > Hi Kyle and Anne, >> > >> > Thanks for the clarifications... understood and it makes sense. >> > >> > However, per my understanding, the drivers (aka plugins) are meant to be >> > developed and supported by third-party vendors, outside of the OpenStack >> > community, and they are supposed to work as plug-n-play... they are not >> part >> > of the core OpenStack development, nor any of its components. If that >> is the >> > case, then why should the OpenStack community include and maintain them as >> part >> > of it, for every release?... Wouldn't it be enough to limit the scope >> with >> > the plugin framework and built-in drivers such as LinuxBridge or OVS >> etc?... >> > not extending to commercial vendors?... (It is just a curious question, >> > forgive me if i missed something and correct me!).
>> >> Thanks, >> Kyle >> >> > At the same time, IMHO, there must be some reference or a page within >> the >> > scope of OpenStack documentation (not necessarily the core docs, but >> some >> > wiki page or reference link or so - as Anne suggested) to mention the >> list >> > of the drivers/plugins supported as of given release and may be an >> external >> > link to know more details about the driver, if the link is provided by >> > respective vendor. >> > >> > >> > Anyway, besides my opinion, the wiki page similar to hypervisor driver >> would >> > be good for now atleast, until the direction/policy level decision is >> made >> > to maintain out-of-tree plugins/drivers. >> > >> > >> > Thanks, >> > Vad >> > -- >> > >> > >> > >> > >> > On Thu, Oct 23, 2014 at 9:46 AM, Edgar Magana > > >> > wrote: >> >> >> >> I second Anne?s and Kyle comments. Actually, I like very much the wiki >> >> part to provide some visibility for out-of-tree plugins/drivers but >> not into >> >> the official documentation. >> >> >> >> Thanks, >> >> >> >> Edgar >> >> >> >> From: Anne Gentle >> >> Reply-To: "OpenStack Development Mailing List (not for usage >> questions)" >> >> >> >> Date: Thursday, October 23, 2014 at 8:51 AM >> >> To: Kyle Mestery >> >> Cc: "OpenStack Development Mailing List (not for usage questions)" >> >> >> >> Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update >> >> about new vendor plugin, but without code in repository? >> >> >> >> >> >> >> >> On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery >> >> wrote: >> >>> >> >>> Vad: >> >>> >> >>> The third-party CI is required for your upstream driver. I think >> >>> what's different from my reading of this thread is the question of >> >>> what is the requirement to have a driver listed in the upstream >> >>> documentation which is not in the upstream codebase. To my knowledge, >> >>> we haven't done this. 
Thus, IMHO, we should NOT be utilizing upstream >> >>> documentation to document drivers which are themselves not upstream. >> >>> When we split out the drivers which are currently upstream in neutron >> >>> into a separate repo, they will still be upstream. So my opinion here >> >>> is that if your driver is not upstream, it shouldn't be in the >> >>> upstream documentation. But I'd like to hear others opinions as well. >> >>> >> >> >> >> This is my sense as well. >> >> >> >> The hypervisor drivers are documented on the wiki, sometimes they're >> >> in-tree, sometimes they're not, but the state of testing is documented >> on >> >> the wiki. I think we could take this approach for network and storage >> >> drivers as well. >> >> >> >> https://wiki.openstack.org/wiki/HypervisorSupportMatrix >> >> >> >> Anne >> >> >> >>> >> >>> Thanks, >> >>> Kyle >> >>> >> >>> On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan >> >>> wrote: >> >>> > Kyle, >> >>> > Gentle reminder... when you get a chance!.. >> >>> > >> >>> > Anne, >> >>> > In case, if i need to send it to different group or email-id to >> reach >> >>> > Kyle >> >>> > Mestery, pls. let me know. Thanks for your help. >> >>> > >> >>> > Regards, >> >>> > Vad >> >>> > -- >> >>> > >> >>> > >> >>> > On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan >> >>> > wrote: >> >>> >> >> >>> >> Hi Kyle, >> >>> >> >> >>> >> Can you pls. comment on this discussion and confirm the >> requirements >> >>> >> for >> >>> >> getting out-of-tree mechanism_driver listed in the supported >> >>> >> plugin/driver >> >>> >> list of the Openstack Neutron docs. 
>> >>> >> >> >>> >> Thanks, >> >>> >> Vad >> >>> >> -- >> >>> >> >> >>> >> On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle >> >>> >> wrote: >> >>> >>> >> >>> >>> >> >>> >>> >> >>> >>> On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan >> >>> >>> wrote: >> >>> >>>> >> >>> >>>> Hi, >> >>> >>>> >> >>> >>>> >>>> On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton >> >>> >>>> >>>> >> >>> >>>> >>>> wrote: >> >>> >>>> >>>>> >> >>> >>>> >>>>> I think you will probably have to wait until after the >> summit >> >>> >>>> >>>>> so >> >>> >>>> >>>>> we can >> >>> >>>> >>>>> see the direction that will be taken with the rest of the >> >>> >>>> >>>>> in-tree >> >>> >>>> >>>>> drivers/plugins. It seems like we are moving towards >> removing >> >>> >>>> >>>>> all >> >>> >>>> >>>>> of them so >> >>> >>>> >>>>> we would definitely need a solution to documenting >> out-of-tree >> >>> >>>> >>>>> drivers as >> >>> >>>> >>>>> you suggested. >> >>> >>>> >> >>> >>>> [Vad] while i 'm waiting for the conclusion on this subject, i 'm >> >>> >>>> trying >> >>> >>>> to setup the third-party CI/Test system and meet its >> requirements to >> >>> >>>> get my >> >>> >>>> mechanism_driver listed in the Kilo's documentation, in parallel. >> >>> >>>> >> >>> >>>> Couple of questions/confirmations before i proceed further on >> this >> >>> >>>> direction... >> >>> >>>> >> >>> >>>> 1) Is there anything more required other than the third-party >> >>> >>>> CI/Test >> >>> >>>> requirements ??.. like should I still need to go-through the >> entire >> >>> >>>> development process of submit/review/approval of the blue-print >> and >> >>> >>>> code of >> >>> >>>> my ML2 driver which was already developed and in-use?... >> >>> >>>> >> >>> >>> >> >>> >>> The neutron PTL Kyle Mestery can answer if there are any >> additional >> >>> >>> requirements. >> >>> >>> >> >>> >>>> >> >>> >>>> 2) Who is the authority to clarify and confirm the above (and >> how do >> >>> >>>> i >> >>> >>>> contact them)?... 
>> >>> >>> >> >>> >>> >> >>> >>> Elections just completed, and the newly elected PTL is Kyle >> Mestery, >> >>> >>> >> >>> >>> >> http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html >> . >> >>> >>> >> >>> >>>> >> >>> >>>> >> >>> >>>> Thanks again for your inputs... >> >>> >>>> >> >>> >>>> Regards, >> >>> >>>> Vad >> >>> >>>> -- >> >>> >>>> >> >>> >>>> On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle > > >> >>> >>>> wrote: >> >>> >>>>> >> >>> >>>>> >> >>> >>>>> >> >>> >>>>> On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan >> >>> >>>>> wrote: >> >>> >>>>>> >> >>> >>>>>> Agreed on the requirements of test results to qualify the >> vendor >> >>> >>>>>> plugin to be listed in the upstream docs. >> >>> >>>>>> Is there any procedure/infrastructure currently available for >> this >> >>> >>>>>> purpose?.. >> >>> >>>>>> Pls. fwd any link/pointers on those info. >> >>> >>>>>> >> >>> >>>>> >> >>> >>>>> Here's a link to the third-party testing setup information. >> >>> >>>>> >> >>> >>>>> http://ci.openstack.org/third_party.html >> >>> >>>>> >> >>> >>>>> Feel free to keep asking questions as you dig deeper. >> >>> >>>>> Thanks, >> >>> >>>>> Anne >> >>> >>>>> >> >>> >>>>>> >> >>> >>>>>> Thanks, >> >>> >>>>>> Vad >> >>> >>>>>> -- >> >>> >>>>>> >> >>> >>>>>> On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki >> >>> >>>>>> >> >>> >>>>>> wrote: >> >>> >>>>>>> >> >>> >>>>>>> I agree with Kevin and Kyle. Even if we decided to use >> separate >> >>> >>>>>>> tree >> >>> >>>>>>> for neutron >> >>> >>>>>>> plugins and drivers, they still will be regarded as part of >> the >> >>> >>>>>>> upstream. >> >>> >>>>>>> These plugins/drivers need to prove they are well integrated >> with >> >>> >>>>>>> Neutron master >> >>> >>>>>>> in some way and gating integration proves it is well tested >> and >> >>> >>>>>>> integrated. 
>> >>> >>>>>>> I believe it is a reasonable assumption and requirement that a >> >>> >>>>>>> vendor >> >>> >>>>>>> plugin/driver >> >>> >>>>>>> is listed in the upstream docs. This is a same kind of >> question >> >>> >>>>>>> as >> >>> >>>>>>> what vendor plugins >> >>> >>>>>>> are tested and worth documented in the upstream docs. >> >>> >>>>>>> I hope you work with the neutron team and run the third party >> >>> >>>>>>> requirements. >> >>> >>>>>>> >> >>> >>>>>>> Thanks, >> >>> >>>>>>> Akihiro >> >>> >>>>>>> >> >>> >>>>>>> On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery >> >>> >>>>>>> >> >>> >>>>>>> wrote: >> >>> >>>>>>> > On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton >> >>> >>>>>>> > >> >>> >>>>>>> > wrote: >> >>> >>>>>>> >>>The OpenStack dev and docs team dont have to worry about >> >>> >>>>>>> >>> gating/publishing/maintaining the vendor specific >> >>> >>>>>>> >>> plugins/drivers. >> >>> >>>>>>> >> >> >>> >>>>>>> >> I disagree about the gating part. If a vendor wants to >> have a >> >>> >>>>>>> >> link >> >>> >>>>>>> >> that >> >>> >>>>>>> >> shows they are compatible with openstack, they should be >> >>> >>>>>>> >> reporting >> >>> >>>>>>> >> test >> >>> >>>>>>> >> results on all patches. A link to a vendor driver in the >> docs >> >>> >>>>>>> >> should signify >> >>> >>>>>>> >> some form of testing that the community is comfortable >> with. >> >>> >>>>>>> >> >> >>> >>>>>>> > I agree with Kevin here. If you want to play upstream, in >> >>> >>>>>>> > whatever >> >>> >>>>>>> > form that takes by the end of Kilo, you have to work with >> the >> >>> >>>>>>> > existing >> >>> >>>>>>> > third-party requirements and team to take advantage of >> being a >> >>> >>>>>>> > part >> >>> >>>>>>> > of >> >>> >>>>>>> > things like upstream docs. 
>> >>> >>>>>>> > >> >>> >>>>>>> > Thanks, >> >>> >>>>>>> > Kyle >> >>> >>>>>>> > >> >>> >>>>>>> >> On Mon, Oct 13, 2014 at 11:33 AM, Vadivel Poonathan >> >>> >>>>>>> >> wrote: >> >>> >>>>>>> >>> >> >>> >>>>>>> >>> Hi, >> >>> >>>>>>> >>> >> >>> >>>>>>> >>> If the plan is to move ALL existing vendor specific >> >>> >>>>>>> >>> plugins/drivers >> >>> >>>>>>> >>> out-of-tree, then having a place-holder within the >> OpenStack >> >>> >>>>>>> >>> domain would >> >>> >>>>>>> >>> suffice, where the vendors can list their plugins/drivers >> >>> >>>>>>> >>> along >> >>> >>>>>>> >>> with their >> >>> >>>>>>> >>> documentation as how to install and use etc. >> >>> >>>>>>> >>> >> >>> >>>>>>> >>> The main Openstack Neutron documentation page can explain >> the >> >>> >>>>>>> >>> plugin >> >>> >>>>>>> >>> framework (ml2 type drivers, mechanism drivers, serviec >> >>> >>>>>>> >>> plugin >> >>> >>>>>>> >>> and so on) >> >>> >>>>>>> >>> and its purpose/usage etc, then provide a link to refer >> the >> >>> >>>>>>> >>> currently >> >>> >>>>>>> >>> supported vendor specific plugins/drivers for more >> details. >> >>> >>>>>>> >>> That >> >>> >>>>>>> >>> way the >> >>> >>>>>>> >>> documentation will be accurate to what is "in-tree" and >> limit >> >>> >>>>>>> >>> the >> >>> >>>>>>> >>> documentation of external plugins/drivers to have just a >> >>> >>>>>>> >>> reference link. So >> >>> >>>>>>> >>> its now vendor's responsibility to keep their driver's >> >>> >>>>>>> >>> up-to-date and their >> >>> >>>>>>> >>> documentation accurate. The OpenStack dev and docs team >> dont >> >>> >>>>>>> >>> have >> >>> >>>>>>> >>> to worry >> >>> >>>>>>> >>> about gating/publishing/maintaining the vendor specific >> >>> >>>>>>> >>> plugins/drivers. 
>> >>> >>>>>>> >>> >> >>> >>>>>>> >>> The built-in drivers such as LinuxBridge or OpenVSwitch >> etc >> >>> >>>>>>> >>> can >> >>> >>>>>>> >>> continue >> >>> >>>>>>> >>> to be "in-tree" and their documentation will be part of >> main >> >>> >>>>>>> >>> Neutron's docs. >> >>> >>>>>>> >>> So the Neutron is guaranteed to work with built-in >> >>> >>>>>>> >>> plugins/drivers as per >> >>> >>>>>>> >>> the documentation and the user is informed to refer the >> >>> >>>>>>> >>> "external >> >>> >>>>>>> >>> vendor >> >>> >>>>>>> >>> plug-in page" for additional/specific plugins/drivers. >> >>> >>>>>>> >>> >> >>> >>>>>>> >>> >> >>> >>>>>>> >>> Thanks, >> >>> >>>>>>> >>> Vad >> >>> >>>>>>> >>> -- >> >>> >>>>>>> >>> >> >>> >>>>>>> >>> >> >>> >>>>>>> >>> On Fri, Oct 10, 2014 at 8:10 PM, Anne Gentle >> >>> >>>>>>> >>> >> >>> >>>>>>> >>> wrote: >> >>> >>>>>>> >>>> >> >>> >>>>>>> >>>> >> >>> >>>>>>> >>>> >> >>> >>>>>>> >>>> On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton >> >>> >>>>>>> >>>> wrote: >> >>> >>>>>>> >>>>> >> >>> >>>>>>> >>>>> I think you will probably have to wait until after the >> >>> >>>>>>> >>>>> summit >> >>> >>>>>>> >>>>> so we can >> >>> >>>>>>> >>>>> see the direction that will be taken with the rest of >> the >> >>> >>>>>>> >>>>> in-tree >> >>> >>>>>>> >>>>> drivers/plugins. It seems like we are moving towards >> >>> >>>>>>> >>>>> removing >> >>> >>>>>>> >>>>> all of them so >> >>> >>>>>>> >>>>> we would definitely need a solution to documenting >> >>> >>>>>>> >>>>> out-of-tree >> >>> >>>>>>> >>>>> drivers as >> >>> >>>>>>> >>>>> you suggested. >> >>> >>>>>>> >>>>> >> >>> >>>>>>> >>>>> However, I think the minimum requirements for having a >> >>> >>>>>>> >>>>> driver >> >>> >>>>>>> >>>>> being >> >>> >>>>>>> >>>>> documented should be third-party testing of Neutron >> >>> >>>>>>> >>>>> patches. 
>> >>> >>>>>>> >>>>> Otherwise the >> >>> >>>>>>> >>>>> docs will become littered with a bunch of links to >> >>> >>>>>>> >>>>> drivers/plugins with no >> >>> >>>>>>> >>>>> indication of what actually works, which ultimately >> makes >> >>> >>>>>>> >>>>> Neutron look bad. >> >>> >>>>>>> >>>> >> >>> >>>>>>> >>>> >> >>> >>>>>>> >>>> This is my line of thinking as well, expanded to >> "ultimately >> >>> >>>>>>> >>>> makes >> >>> >>>>>>> >>>> OpenStack docs look bad" -- a perception I want to avoid. >> >>> >>>>>>> >>>> >> >>> >>>>>>> >>>> Keep the viewpoints coming. We have a crucial balancing >> act >> >>> >>>>>>> >>>> ahead: users >> >>> >>>>>>> >>>> need to trust docs and trust the drivers. Ultimately the >> >>> >>>>>>> >>>> responsibility for >> >>> >>>>>>> >>>> the docs is in the hands of the driver contributors so it >> >>> >>>>>>> >>>> seems >> >>> >>>>>>> >>>> those should >> >>> >>>>>>> >>>> be on a domain name where drivers control publishing and >> >>> >>>>>>> >>>> OpenStack docs are >> >>> >>>>>>> >>>> not a gatekeeper, quality checker, reviewer, or >> publisher. >> >>> >>>>>>> >>>> >> >>> >>>>>>> >>>> We have documented the status of hypervisor drivers on an >> >>> >>>>>>> >>>> OpenStack wiki >> >>> >>>>>>> >>>> page. [1] To me, that type of list could be maintained on >> >>> >>>>>>> >>>> the >> >>> >>>>>>> >>>> wiki page >> >>> >>>>>>> >>>> better than in the docs themselves. Thoughts? Feelings? >> More >> >>> >>>>>>> >>>> discussion, >> >>> >>>>>>> >>>> please. And thank you for the responses so far. >> >>> >>>>>>> >>>> Anne >> >>> >>>>>>> >>>> >> >>> >>>>>>> >>>> [1] >> https://wiki.openstack.org/wiki/HypervisorSupportMatrix >> >>> >>>>>>> >>>> >> >>> >>>>>>> >>>>> >> >>> >>>>>>> >>>>> >> >>> >>>>>>> >>>>> On Fri, Oct 10, 2014 at 1:28 PM, Vadivel Poonathan >> >>> >>>>>>> >>>>> wrote: >> >>> >>>>>>> >>>>>> >> >>> >>>>>>> >>>>>> Hi Anne, >> >>> >>>>>>> >>>>>> >> >>> >>>>>>> >>>>>> Thanks for your immediate response!... 
>> >>> >>>>>>> >>>>>> >> >>> >>>>>>> >>>>>> Just to clarify... I have developed and maintaining a >> >>> >>>>>>> >>>>>> Neutron >> >>> >>>>>>> >>>>>> plug-in >> >>> >>>>>>> >>>>>> (ML2 mechanism_driver) since Grizzly and now it is >> >>> >>>>>>> >>>>>> up-to-date >> >>> >>>>>>> >>>>>> with Icehouse. >> >>> >>>>>>> >>>>>> But it was never listed nor part of the main Openstack >> >>> >>>>>>> >>>>>> releases. Now i would >> >>> >>>>>>> >>>>>> like to have my plugin mentioned as "supported >> >>> >>>>>>> >>>>>> plugin/mechanism_driver for >> >>> >>>>>>> >>>>>> so and so vendor equipments" in the docs.openstack.org >> , >> >>> >>>>>>> >>>>>> but >> >>> >>>>>>> >>>>>> without having >> >>> >>>>>>> >>>>>> the actual plugin code to be posted in the main >> Openstack >> >>> >>>>>>> >>>>>> GIT >> >>> >>>>>>> >>>>>> repository. >> >>> >>>>>>> >>>>>> >> >>> >>>>>>> >>>>>> Reason is that I dont have plan/bandwidth to go thru >> the >> >>> >>>>>>> >>>>>> entire process >> >>> >>>>>>> >>>>>> of new plugin blue-print/development/review/testing >> etc as >> >>> >>>>>>> >>>>>> required by the >> >>> >>>>>>> >>>>>> Openstack development community. Bcos this is already >> >>> >>>>>>> >>>>>> developed, tested and >> >>> >>>>>>> >>>>>> released to some customers directly. Now I just want to >> >>> >>>>>>> >>>>>> get it >> >>> >>>>>>> >>>>>> to the >> >>> >>>>>>> >>>>>> official Openstack documentation, so that more people >> can >> >>> >>>>>>> >>>>>> get >> >>> >>>>>>> >>>>>> this and use. >> >>> >>>>>>> >>>>>> >> >>> >>>>>>> >>>>>> The plugin package is made available to public from >> Ubuntu >> >>> >>>>>>> >>>>>> repository >> >>> >>>>>>> >>>>>> along with necessary documentation. So people can >> directly >> >>> >>>>>>> >>>>>> get >> >>> >>>>>>> >>>>>> it from >> >>> >>>>>>> >>>>>> Ubuntu repository and use it. 
>> >>> >>>>>>> >>>>>> All I need is to get listed on docs.openstack.org so that people know
>> >>> >>>>>>> >>>>>> that it exists and can be used with any OpenStack. Please confirm
>> >>> >>>>>>> >>>>>> whether this is possible?...
>> >>> >>>>>>> >>>>>>
>> >>> >>>>>>> >>>>>> Thanks again!..
>> >>> >>>>>>> >>>>>>
>> >>> >>>>>>> >>>>>> Vad
>> >>> >>>>>>> >>>>>> --
>> >>> >>>>>>> >>>>>>
>> >>> >>>>>>> >>>>>> On Fri, Oct 10, 2014 at 12:18 PM, Anne Gentle wrote:
>> >>> >>>>>>> >>>>>>>
>> >>> >>>>>>> >>>>>>> On Fri, Oct 10, 2014 at 2:11 PM, Vadivel Poonathan wrote:
>> >>> >>>>>>> >>>>>>>>
>> >>> >>>>>>> >>>>>>>> Hi,
>> >>> >>>>>>> >>>>>>>>
>> >>> >>>>>>> >>>>>>>> How do I include a new vendor plug-in (aka mechanism_driver in the
>> >>> >>>>>>> >>>>>>>> ML2 framework) in the OpenStack documentation? In other words, is it
>> >>> >>>>>>> >>>>>>>> possible to include a new plug-in in the OpenStack documentation page
>> >>> >>>>>>> >>>>>>>> without having the actual plug-in code as part of the OpenStack
>> >>> >>>>>>> >>>>>>>> neutron repository? The actual plug-in is posted and available for
>> >>> >>>>>>> >>>>>>>> the public to download as an Ubuntu package. But I need to mention
>> >>> >>>>>>> >>>>>>>> somewhere in the OpenStack documentation that this new plugin is
>> >>> >>>>>>> >>>>>>>> available for the public to use, along with its documentation.
>> >>> >>>>>>> >>>>>>>
>> >>> >>>>>>> >>>>>>> We definitely want you to include pointers to vendor documentation in
>> >>> >>>>>>> >>>>>>> the OpenStack docs, but I'd prefer to make sure they're gate-tested
>> >>> >>>>>>> >>>>>>> before they get listed on docs.openstack.org. Drivers change enough
>> >>> >>>>>>> >>>>>>> release-to-release that it's difficult to keep up with maintenance.
>> >>> >>>>>>> >>>>>>>
>> >>> >>>>>>> >>>>>>> Lately I've been talking to driver contributors (hypervisor, storage,
>> >>> >>>>>>> >>>>>>> networking) about the out-of-tree changes possible. I'd like to
>> >>> >>>>>>> >>>>>>> encourage even out-of-tree drivers to get listed, but to store their
>> >>> >>>>>>> >>>>>>> main documents outside of docs.openstack.org, if they are gate-tested.
>> >>> >>>>>>> >>>>>>>
>> >>> >>>>>>> >>>>>>> Anyone have other ideas here?
>> >>> >>>>>>> >>>>>>>
>> >>> >>>>>>> >>>>>>> Looping in the OpenStack-docs mailing list also.
>> >>> >>>>>>> >>>>>>> Anne
>> >>> >>>>>>> >>>>>>>
>> >>> >>>>>>> >>>>>>>> Please provide some insight into whether this is possible, and any
>> >>> >>>>>>> >>>>>>>> further info on it.
>> >>> >>>>>>> >>>>>>>> Thanks,
>> >>> >>>>>>> >>>>>>>> Vad
>> >>> >>>>>>> >>>>>>>> --
>> >>> >>>>>>> >>>>> -- Kevin Benton
>> >>> >>>>>>> -- Akihiro Motoki
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jp at jamezpolley.com Thu Dec 4 17:20:42 2014
From: jp at jamezpolley.com (James Polley)
Date: Thu, 4 Dec 2014 18:20:42 +0100
Subject: [openstack-dev] [TripleO] Meeting purpose
In-Reply-To: <54807146.7070605@redhat.com>
References: <1417699460.2112.13.camel@dovetail.localdomain>
	<54807146.7070605@redhat.com>
Message-ID: 

On Thu, Dec 4, 2014 at 3:35 PM, Jay Dobies wrote:

> As an example of something that I think doesn't add much value in the
>>> meeting - DerekH has already been giving semi-regular CI/CD status
>>> reports via email. I'd like to make these weekly update emails
>>> regular, and take the update off the meeting agenda. I'm offering to
>>> share the load with him to make this easier to achieve.
>>>
>
> The Tuskar item is the same way.
> Not sure how that was added as an explicit agenda item, but I don't see
> why we'd call out one particular project within TripleO. Anything we'd
> need eyes on should be covered when we chime in about specs or reviews
> needing eyes.
>
> Are there other things on our regular agenda that you feel aren't
>>> offering much value?
>>
>> I'd propose we axe the regular agenda entirely and let people promote
>> things in open discussion if they need to. In fact the regular agenda
>> often seems like a bunch of motions we go through... to the extent that
>> while the TripleO meeting was going on, we actually discussed what were,
>> in my opinion, the most important things in the normal #tripleo IRC
>> channel. Is getting through our review stats really that important!?
>
> I think the review stats would be better handled in e-mail format like
> Derek's CI status e-mails. We don't want the reviews to get out of hand,
> but the time spent pasting in the links and everyone looking at the stats
> during the meeting itself is wasteful. I could see bringing it up if it's
> becoming a problem, but the number crunching doesn't need to be part of
> the meeting.

I agree; I think it's useful to make sure we keep on top of the stats, but
I don't think it needs to be done in the meeting.

> Are there things you'd like to see moved onto, or off, the agenda?
>>
>> Perhaps a streamlined agenda like this would work better:
>>
>> * Bugs
>
> This one is valuable and I like the idea of keeping it.
>
>> * Projects needing releases
>
> Is this even needed as well? It feels like for months now the answer is
> always "Yes, release the world".

But the next question is: "Who is going to release the world?" and that is
usually answered by a volunteer from the meeting. Perhaps if we had a
regular release team, this could be arranged outside of the meeting.
> I think our cadence on those releases can be slowed down as well (the last
> few releases I've done have had minimal churn at best), but I'm not trying
> to thread-jack into that discussion. I bring it up because we could remove
> that from the meeting and do an entirely new model where we get the release
> volunteer through other means on a (potentially) less frequent release
> basis.
>
>> * Open Discussion (including important SPECs, CI, or anything needing
>> attention). ** Leader might have to drive this **
>
> I like the idea of a specific Specs/Reviews section. It should be quick,
> but a specific point in time where people can #info a review they need
> eyes on. I think it appeals to my OCD to have this more structured than
> interspersed with other topics in open discussion.

One issue I have with these is that I don't know if they get seen by people
who aren't at the meeting. Perhaps a weekly email pointing to the minutes
and highlighting these would help?

> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sross at redhat.com Thu Dec 4 17:22:21 2014
From: sross at redhat.com (Solly Ross)
Date: Thu, 4 Dec 2014 12:22:21 -0500 (EST)
Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed
	in Horizon
In-Reply-To: <5480931D.7040302@linux.vnet.ibm.com>
References: <547F8EFB.4030302@linux.vnet.ibm.com>
	<5480931D.7040302@linux.vnet.ibm.com>
Message-ID: <981140203.10818981.1417713741747.JavaMail.zimbra@redhat.com>

Just throwing in my two cents: Multiple times, I've lost typed-in data in
Horizon because I've accidentally pressed escape without thinking (I use
Vim, so pressing escape when I'm done typing is second nature).
While clicking the 'x' button is often deliberate, pressing escape can be
accidental (either through habit or through accidentally pressing the key).

Best Regards,
Solly Ross

----- Original Message -----
> From: "Aaron Sahlin"
> To: "OpenStack Development Mailing List (not for usage questions)"
> Sent: Thursday, December 4, 2014 12:00:13 PM
> Subject: Re: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon
>
> The more I think on it, the more I agree with Rob Cresswell's comment
> "While clicking off the modal is relatively easy to do by accident,
> hitting Esc or 'X' are fairly distinct actions."
>
> While there is nothing wrong with warning the user that they will lose
> data after they clicked the 'x' / 'esc'... that was a deliberate action by
> them, so we might be over-engineering this.
>
> My vote is to just keep it simple and go with changing the default
> behavior to 'static'.
>
> On 12/4/2014 8:08 AM, Timur Sufiev wrote:
>
> Hi Aaron,
>
> The only way to combine the 2 aforementioned solutions I've been thinking
> of is to implement David's solution as a 4th option (in addition to
> true|false|static) on a per-form basis, leaving the possibility to change
> the default value in configs. I guess this sort of combining would be as
> simple as just putting both patches together (perhaps changing David's JS
> code for catching the 'click' event a bit - to work only for the modal
> forms with [data-modal-backdrop='confirm']).
>
> On Thu, Dec 4, 2014 at 1:30 AM, Aaron Sahlin < asahlin at linux.vnet.ibm.com >
> wrote:
>
> I would be happy with either of the two proposed solutions (both
> improvements over what we have now).
> Any thoughts on combining them? Only close if Esc or 'x' is clicked, but
> also warn them if data was entered.
>
> On 12/3/2014 7:21 AM, Rob Cresswell (rcresswe) wrote:
>
> +1 to changing the behaviour to 'static'.
> Modal inside a modal is potentially slightly more useful, but looks messy
> and inconsistent, which I think outweighs the functionality.
>
> Rob
>
> On 2 Dec 2014, at 12:21, Timur Sufiev < tsufiev at mirantis.com > wrote:
>
> Hello, Horizoneers and UX-ers!
>
> The default behavior of modals in Horizon (defined in turn by Bootstrap
> defaults) regarding their closing is to simply close the modal once the
> user clicks somewhere outside of it (on the backdrop element below and
> around the modal). This is not very convenient for modal forms containing
> a lot of input - when such a form is closed without a warning, all the
> data the user has already provided is lost. Keeping this in mind, I've
> made a patch [1] changing the default Bootstrap 'modal_backdrop' parameter
> to 'static', which means that forms are not closed once the user clicks on
> the backdrop, while it's still possible to close them by pressing 'Esc' or
> clicking on the 'X' link at the top right border of the form. The patch
> [1] also allows customizing this behavior (between 'true' - the current
> one / 'false' - no backdrop element / 'static') on a per-form basis.
>
> What I didn't know at the moment I was uploading my patch is that David
> Lyle had been working on a similar solution [2] some time ago. It's a bit
> more elaborate than mine: if the user has already filled in some inputs in
> the form, then a confirmation dialog is shown; otherwise the form is
> silently dismissed, as happens now.
>
> The whole point of writing about this on the ML is to gather opinions on
> which approach is better:
> * stick to the current behavior;
> * change the default behavior to 'static';
> * use David's solution with the confirmation dialog (once it's rebased to
> the current codebase).
>
> What do you think?
>
> [1] https://review.openstack.org/#/c/113206/
> [2] https://review.openstack.org/#/c/23037/
>
> P.S. I remember that I promised to write this email a week ago, but better
> late than never :).
>
> --
> Timur Sufiev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From anteaya at anteaya.info Thu Dec 4 17:24:39 2014
From: anteaya at anteaya.info (Anita Kuno)
Date: Thu, 04 Dec 2014 12:24:39 -0500
Subject: [openstack-dev] [neutron] Changes to the core team
In-Reply-To: 
References: <547FEC20.7000308@redhat.com>
Message-ID: <548098D7.4090702@anteaya.info>

On 12/04/2014 05:21 AM, Mathieu Rohon wrote:
> On Thu, Dec 4, 2014 at 8:38 AM, Sumit Naiksatam wrote:
>> On Wed, Dec 3, 2014 at 9:07 PM, Adam Young wrote:
>>> On 12/03/2014 06:24 PM, Sukhdev Kapur wrote:
>>>
>>> Congratulations Henry and Kevin. It has always been a pleasure working
>>> with you guys.....
>>>
>>> If I may express my opinion, Bob's contribution to ML2 has been quite
>>> substantial. The kind of stability ML2 has achieved makes a statement
>>> of his dedication to this work. I have worked very closely with Bob on
>>> several issues and co-chaired the ML2 subteam with him, and have
>>> developed tremendous respect for his dedication.
>>> Reading his email reply makes me believe he wants to continue to
>>> contribute as a core developer. Therefore, I would like to take this
>>> opportunity to appeal to the core team to consider granting him his
>>> wish - i.e. vote -1 on his removal.
>>>
>>> If I might venture an outside voice in support of Bob: you don't want
>>> to chase away the continuity. Yes, sometimes the day job makes us focus
>>> on things other than upstream work for a while, but I would say that
>>> you should err on the side of keeping someone who is otherwise still
>>> engaged. Especially when that core has been as fundamental on a project
>>> as I know Bob to have been on Quantum....er, Neutron.
>>
>> I would definitely echo the above sentiments; Bob has continually made
>> valuable design contributions to ML2 and Neutron that go beyond the
>> review count metric. Kindly consider keeping him as a part of the core
>> team.
>
> Working with Bob in the ML2 subteam was a real pleasure. He provides good
> technical and community leadership.
> His reviews are really valuable, since he always reviews a patch in the
> context of the overall project and other work in progress.
> This takes more time.
>
>> That said, a big +1 to both Henry and Kevin as additions to the core
>> team! Welcome!!
>>
>> Thanks,
>> ~Sumit.
>>>
>>> regards..
>>> -Sukhdev
>>>
>>> On Wed, Dec 3, 2014 at 11:48 AM, Edgar Magana wrote:
>>>>
>>>> I give +2 to Henry and Kevin. So, congratulations folks!
>>>> I have been working with both of them, and great quality reviews are
>>>> always coming from them.
>>>>
>>>> Many thanks to Nachi and Bob for their hard work!
>>>>
>>>> Edgar
>>>>
>>>> On 12/2/14, 7:59 AM, "Kyle Mestery" wrote:
>>>>
>>>>> Now that we're in the thick of working hard on Kilo deliverables, I'd
>>>>> like to make some changes to the neutron core team.
>>>>> Reviews are the most important part of being a core reviewer, so we
>>>>> need to ensure cores are doing reviews. The stats for the 180-day
>>>>> period [1] indicate some changes are needed for cores who are no
>>>>> longer reviewing.
>>>>>
>>>>> First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from
>>>>> neutron-core. Bob and Nachi have been core members for a while now.
>>>>> They have contributed to Neutron over the years in reviews, code and
>>>>> leading sub-teams. I'd like to thank them for all that they have done
>>>>> over the years. I'd also like to propose that, should they start
>>>>> reviewing more going forward, the core team looks to fast-track them
>>>>> back into neutron-core. But for now, their review stats place them
>>>>> below the rest of the team for 180 days.
>>>>>
>>>>> As part of the changes, I'd also like to propose two new members to
>>>>> neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have
>>>>> been very active in reviews, meetings, and code for a while now. Henry
>>>>> led the DB team which fixed Neutron DB migrations during Juno. Kevin
>>>>> has been actively working across all of Neutron; he's done some great
>>>>> work on security fixes and stability fixes in particular. Their
>>>>> comments in reviews are insightful and they have helped to onboard new
>>>>> reviewers and taken the time to work with people on their patches.
>>>>>
>>>>> Existing neutron cores, please vote +1/-1 for the addition of Henry
>>>>> and Kevin to the core team.
>>>>>
>>>>> Thanks!
>>>>> Kyle
>>>>>
>>>>> [1] http://stackalytics.com/report/contribution/neutron-group/180

I think we move into very dangerous territory if we are equating a core
review Gerrit permission (it is just a Gerrit permission; if it is
perceived as anything other than that, that is a perception we have created
ourselves) with value as an OpenStack contributor.
I work with and value many OpenStack contributors daily who do not have a
core review Gerrit permission. I do not advocate for any decision that even
indirectly conveys the message that I value their contribution any
differently than any other contributor's simply based on Gerrit permissions.

Who in Neutron should or should not have certain Gerrit permissions is not
for me to say, as I do not have the perspective (other than third-party
things) to have an opinion.

Thank you,
Anita.

From mbirru at gmail.com Thu Dec 4 17:35:49 2014
From: mbirru at gmail.com (Murali B)
Date: Thu, 4 Dec 2014 23:05:49 +0530
Subject: [openstack-dev] SRIOV failures error-
In-Reply-To: 
References: 
Message-ID: 

Hi Irena,

Thanks for your response. We made the change to have the same network
across all the compute nodes. After the change, we get a new error saying
the binding failed for the VIF port when we create a VM.

FYI: Please find the complete logs below.

Neutron/server.log

2014-12-04 18:35:02.498 6997 WARNING neutron.plugins.ml2.managers
[req-9fbe04eb-b31b-43a1-aba1-2c1009cf0dc4 None] Failed to bind port
f2536225-12bf-4381-adc7-e05660ea9bac on host compute4
2014-12-04 18:35:02.529 6997 WARNING neutron.plugins.ml2.plugin
[req-9fbe04eb-b31b-43a1-aba1-2c1009cf0dc4 None] In _notify_port_updated(),
no bound segment for port f2536225-12bf-4381-adc7-e05660ea9bac on network
2cb7d304-9d31-4e28-b6ea-24f9abda99c1
2014-12-04 13:27:02.669 6997 ERROR neutron.notifiers.nova [-] Failed to
notify nova on events: [{'name': 'network-changed', 'server_uuid':
u'3eff58fe-0cc1-4b15-ae0b-70af47cc8f73'}]

I have attached a document with the complete bug report; please go through
it for more information.

Thanks
-Murali

On Wed, Dec 3, 2014 at 12:10 PM, Irena Berezovsky wrote:

> Hi Murali,
>
> Seems there is a mismatch between the pci_whitelist configuration and the
> requested network.
> In the table below:
>
>     "physical_network": "physnet2"
>
> In the error you sent, there is:
>
>     [InstancePCIRequest(alias_name=None,count=1,is_new=False,
>      request_id=58584ee1-8a41-4979-9905-4d18a3df3425,
>      spec=[{physical_network='physnet1'}])],
>
> Please check the neutron and nova configuration for physical_network.
>
> Cheers,
> Irena
>
> From: Murali B [mailto:mbirru at gmail.com]
> Sent: Wednesday, December 03, 2014 5:19 AM
> To: openstack-dev at lists.openstack.org; itzikb at redhat.com
> Subject: [openstack-dev] SRIOV failures error-
>
> Hi Itzik,
>
> Thank you for your reply.
>
> Please find below the output of "echo 'use nova;select
> hypervisor_hostname,pci_stats from compute_nodes;' | mysql -u root":
>
> MariaDB [nova]> select hypervisor_hostname,pci_stats from compute_nodes;
> +---------------------+-------------------------------------------------------------------------------------------+
> | hypervisor_hostname | pci_stats                                                                                 |
> +---------------------+-------------------------------------------------------------------------------------------+
> | compute2            | []                                                                                        |
> | xilinx-r720         | [{"count": 1, "vendor_id": "8086", "physical_network": "physnet2", "product_id": "10ed"}] |
> | compute1            | []                                                                                        |
> | compute4            | []                                                                                        |
> +---------------------+-------------------------------------------------------------------------------------------+
>
> We have enabled the SR-IOV agent on compute4 as well as on xilinx-r720.
>
> Thanks
> -Murali

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
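The mismatch Irena points out can be checked mechanically: a host is only
eligible for an SR-IOV port if one of its advertised pci_stats pools
matches the requested physical_network and still has free devices. The
following is a standalone sketch of that check (our own illustration, not
Nova's scheduler code; the data is copied from the compute_nodes table
quoted above):

```python
import json

# pci_stats as reported per hypervisor in the compute_nodes table above.
# compute2/compute1/compute4 advertise no SR-IOV device pools at all.
PCI_STATS = {
    "compute2": "[]",
    "xilinx-r720": '[{"count": 1, "vendor_id": "8086", '
                   '"physical_network": "physnet2", "product_id": "10ed"}]',
    "compute1": "[]",
    "compute4": "[]",
}

def hosts_for_physnet(stats, physnet):
    """Return the hosts advertising at least one free device on physnet."""
    matches = []
    for host, raw in stats.items():
        for pool in json.loads(raw):
            if pool.get("physical_network") == physnet and pool.get("count", 0) > 0:
                matches.append(host)
    return matches

print(hosts_for_physnet(PCI_STATS, "physnet1"))  # [] -> no eligible host
print(hosts_for_physnet(PCI_STATS, "physnet2"))  # ['xilinx-r720']
```

A request for 'physnet1' matches no host at all, which is consistent with
the earlier "Failed to bind port" warnings. Note also that compute4 reports
an empty pci_stats pool even though the SR-IOV agent is enabled there, which
may indicate its whitelist configuration is not being picked up.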
Name: BUG_Juno_SRIOV.docx
Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document
Size: 8235 bytes
Desc: not available
URL: 

From paul.belanger at polybeacon.com Thu Dec 4 18:13:49 2014
From: paul.belanger at polybeacon.com (Paul Belanger)
Date: Thu, 4 Dec 2014 13:13:49 -0500
Subject: [openstack-dev] Help needed to resolve "ImportError: No module
	named urllib"
In-Reply-To: 
References: <3F78EB73E777F14187D60494F7A708F04BA7AA78@G4W3227.americas.hpqcorp.net>
Message-ID: 

On Thu, Dec 4, 2014 at 12:06 PM, Paul Belanger wrote:
> On Mon, Oct 6, 2014 at 6:58 AM, S M, Praveen Kumar wrote:
>>
>> Please let me know your inputs on this.
>>
> Did you ever resolve this? I'm also seeing this ImportError.
>
Thanks to dhellmann for the help; my issue was related to setuptools still
using d2to1 with pbr. So, upgrading pbr to the latest and dropping d2to1
was the fix.

-- 
Paul Belanger | PolyBeacon, Inc.
Jabber: paul.belanger at polybeacon.com | IRC: pabelanger (Freenode)
Github: https://github.com/pabelanger | Twitter: https://twitter.com/pabelanger

From stefano at openstack.org Thu Dec 4 18:44:37 2014
From: stefano at openstack.org (Stefano Maffulli)
Date: Thu, 04 Dec 2014 10:44:37 -0800
Subject: [openstack-dev] [neutron] Changes to the core team
In-Reply-To: <548098D7.4090702@anteaya.info>
References: <547FEC20.7000308@redhat.com> <548098D7.4090702@anteaya.info>
Message-ID: <5480AB95.1000103@openstack.org>

On 12/04/2014 09:24 AM, Anita Kuno wrote:
> I think we move into very dangerous territory if we are equating a core
> review Gerrit permission (it is just a Gerrit permission; if it is
> perceived as anything other than that, that is a perception we have
> created ourselves) with value as an OpenStack contributor.

+2 to what Anita is saying.

We're talking about a tick box in gerrit and it's mostly a burden, an
obligation.
It's also a box that has been proven multiple times to be bi-directional:
sometimes people get to go work on other things, or get called back to
active duty and reassigned as code janitors.

I haven't read anything but great comments and thank-you notes for Bob and
Nati. They're both great developers and have taken good care of Neutron's
code. Once they're ready to get dirty again, it'll take only a couple of
weeks doing reviews to be picked.

Now, if you (and, more importantly, your employer) think that being a core
reviewer is an honor and your bonus depends on it, contact me offline to
discuss this further.

Regards,
Stef

From clint at fewbar.com Thu Dec 4 19:04:45 2014
From: clint at fewbar.com (Clint Byrum)
Date: Thu, 04 Dec 2014 11:04:45 -0800
Subject: [openstack-dev] [TripleO] [Ironic] Do we want to remove Nova-bm
	support?
In-Reply-To: <547FE757.9030701@wedontsleep.org>
References: <547FE757.9030701@wedontsleep.org>
Message-ID: <1417719771-sup-5483@fewbar.com>

Excerpts from Steve Kowalik's message of 2014-12-03 20:47:19 -0800:
> Hi all,
>
> I'm becoming increasingly concerned about all of the code paths
> in tripleo-incubator that check $USE_IRONIC -eq 0 -- that is, use
> nova-baremetal rather than Ironic. We do not check nova-bm support in
> CI, haven't for at least a month, and I'm concerned that parts of it
> may be slowly bit-rotting.
>
> I think our documentation is fairly clear that nova-baremetal is
> deprecated and Ironic is the way forward, and I know it flies in the
> face of backwards-compatibility, but do we want to bite the bullet and
> remove nova-bm support?

Has Ironic settled on a migration path/tool from nova-bm? If yes, then we
should remove nova-bm support and point people at the migration
documentation. If Ironic decided not to provide one, then we should just
remove support as well. If Ironic just isn't done yet, then removing
nova-bm in TripleO is premature and we should wait for them to finish.
From openstack at nemebean.com Thu Dec 4 19:12:10 2014 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 04 Dec 2014 13:12:10 -0600 Subject: [openstack-dev] [TripleO] Do we want to remove Nova-bm support? In-Reply-To: <547FE757.9030701@wedontsleep.org> References: <547FE757.9030701@wedontsleep.org> Message-ID: <5480B20A.5080804@nemebean.com> FWIW, I think the "correct" thing to do here is to get our Juno jobs up and running and have one of them verify the nova-bm code paths for this cycle, and then remove it next cycle. That said, I have no idea how close we are to actually having Juno jobs and I agree that we have no idea if the nova-bm code actually works anymore (although that applies to backwards compat as a whole too). I guess I'm inclined to just leave it though. AFAIK the nova-bm code isn't hurting anything, and if it does happen to be working and have a user then removing it would break them for no good reason. If it's not working then it's not working and nobody's going to accidentally start using it. The only real downside of leaving it is if it is working and someone would happen to override our defaults, ignore all the deprecation warnings, and start using it anyway. I don't see that as a big concern. But I'm not super attached to nova-bm either, so just my 2 cents. -Ben On 12/03/2014 10:47 PM, Steve Kowalik wrote: > Hi all, > > I'm becoming increasingly concerned about all of the code paths > in tripleo-incubator that check $USE_IRONIC -eq 0 -- that is, use > nova-baremetal rather than Ironic. We do not check nova-bm support in > CI, haven't for at least a month, and I'm concerned that parts of it > may be slowly bit-rotting. > > I think our documentation is fairly clear that nova-baremetal is > deprecated and Ironic is the way forward, and I know it flies in the > face of backwards-compatibility, but do we want to bite the bullet and > remove nova-bm support? 
> Cheers,
>

From ayoung at redhat.com Thu Dec 4 19:18:56 2014
From: ayoung at redhat.com (Adam Young)
Date: Thu, 04 Dec 2014 14:18:56 -0500
Subject: [openstack-dev] [neutron] Changes to the core team
In-Reply-To: <5480AB95.1000103@openstack.org>
References: <547FEC20.7000308@redhat.com> <548098D7.4090702@anteaya.info>
	<5480AB95.1000103@openstack.org>
Message-ID: <5480B3A0.1020004@redhat.com>

On 12/04/2014 01:44 PM, Stefano Maffulli wrote:
> On 12/04/2014 09:24 AM, Anita Kuno wrote:
>> I think we move into very dangerous territory if we are equating a core
>> review Gerrit permission (it is just a Gerrit permission; if it is
>> perceived as anything other than that, that is a perception we have
>> created ourselves) with value as an OpenStack contributor.

Forget the +2. I would want Bob to have -2 capability on Neutron. A -1 can
be reset with just a new review. A -2 is the real power of core: the power
to say "this is wrong, and you need to address it before it moves on."

And I say that as someone who has been on the receiving end of multiple
-2s...

> +2 to what Anita is saying.
>
> We're talking about a tick box in gerrit and it's mostly a burden, an
> obligation.
>
> It's also a box that has been proven multiple times to be
> bi-directional: sometimes people get to go work on other things, or get
> called back to active duty and reassigned as code janitors.
>
> I haven't read anything but great comments and thank-you notes for Bob
> and Nati. They're both great developers and have taken good care of
> Neutron's code. Once they're ready to get dirty again, it'll take only a
> couple of weeks doing reviews to be picked.
>
> Now, if you (and, more importantly, your employer) think that being a
> core reviewer is an honor and your bonus depends on it, contact me
> offline to discuss this further.
> > Regards, > Stef > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ijw.ubuntu at cack.org.uk Thu Dec 4 19:19:14 2014 From: ijw.ubuntu at cack.org.uk (Ian Wells) Date: Thu, 4 Dec 2014 11:19:14 -0800 Subject: [openstack-dev] [Neutron] Edge-VPN and Edge-Id In-Reply-To: <119A1974-380C-461E-9937-65B9763E39E6@brocade.com> References: <119A1974-380C-461E-9937-65B9763E39E6@brocade.com> Message-ID: On 1 December 2014 at 21:26, Mohammad Hanif wrote: > I hope we all understand how edge VPN works and what interactions are > introduced as part of this spec. I see references to neutron-network > mapping to the tunnel which is not at all the case and the edge-VPN spec > doesn't propose it. At a very high level, there are two main concepts: > > 1. Creation of a per-tenant VPN "service" on a PE (physical router) > which has connectivity to other PEs using some tunnel (not known to the > tenant or tenant-facing). An attachment circuit for this VPN service is > also created which carries a "list" of tenant networks (the list is > initially empty). > 2. The tenant "updates" the list of tenant networks in the attachment > circuit, which essentially allows the VPN "service" to add or remove the > network from being part of that VPN. > > A service plugin implements what is described in (1) and provides an API > which is called by what is described in (2). The Neutron driver only > "updates" the attachment circuit using an API (the attachment circuit is also > part of the service plugin's data model). I don't see where we are > introducing large data model changes to Neutron? 
> Well, you have attachment types, tunnels, and so on - these are all objects with data models, and your spec is on Neutron so I'm assuming you plan on putting them into the Neutron database - where they are, for ever more, a Neutron maintenance overhead both on the dev side and also on the ops side, specifically at upgrade. How else does one introduce a network service in OpenStack if it is not through > a service plugin? > Again, I've missed something here, so can you define 'service plugin' for me? How similar is it to a Neutron extension - which we agreed at the summit we should take pains to avoid, per Salvatore's session? And the answer to that is to stop talking about plugins or trying to integrate this into the Neutron API or the Neutron DB, and make it an independent service with a small and well defined interaction with Neutron, which is what the edge-id proposal suggests. If we do incorporate it into Neutron then there are probably 90% of OpenStack users and developers who don't want or need it but care a great deal if it breaks the tests. If it isn't in Neutron they simply don't install it. > As we can see, the tenant needs to communicate (explicitly or otherwise) to > add/remove its networks to/from the VPN. There has to be a channel and the > APIs to achieve this. > Agreed. I'm suggesting it should be a separate service endpoint. -- Ian. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shanthakumar.k at hp.com Thu Dec 4 19:19:13 2014 From: shanthakumar.k at hp.com (K, Shanthakumar (HP Cloud)) Date: Thu, 4 Dec 2014 19:19:13 +0000 Subject: [openstack-dev] cinder netapp 7 mode nfs driver - failed to mount share Message-ID: <23261B8C079082499641DD0AE94EA2BF19510CA4@G9W0337.americas.hpqcorp.net> Hi All, I'm trying to integrate Netapp 7 mode NFS driver with openstack cinder volume provisioning using Juno 2. During the driver initialization I'm getting the following error and volume creation is failing. 
Do I need to do any changes in storage or client side ? Please refer to the log below 2014-12-04 18:40:32.642 49304 DEBUG cinder.volume.drivers.nfs [req-e14faed4-4ff7-47a0-8fcb-67d1a4cfd2e4 - - - - -] shares loaded: {u'10.XX.XX.XX:/nfsvol': None} _load_shares_config /opt/stack/venvs/openstack/local/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py:326 2014-12-04 18:40:32.655 49304 DEBUG cinder.openstack.common.processutils [req-e14faed4-4ff7-47a0-8fcb-67d1a4cfd2e4 - - - - -] Running cmd (subprocess): mkdir -p /mnt/state/var/lib/cinder/mnt/527dc3646de39dbda076e3a72dca54e5 execute /opt/stack/venvs/openstack/local/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:160 2014-12-04 18:40:32.666 49304 DEBUG cinder.openstack.common.processutils [req-e14faed4-4ff7-47a0-8fcb-67d1a4cfd2e4 - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs -o vers=4,minorversion=1 10.XX.XX.XX:/nfsvol /mnt/state/var/lib/cinder/mnt/527dc3646de39dbda076e3a72dca54e5 execute /opt/stack/venvs/openstack/local/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:160 2014-12-04 18:40:32.779 49304 DEBUG cinder.brick.remotefs.remotefs [req-e14faed4-4ff7-47a0-8fcb-67d1a4cfd2e4 - - - - -] Failed to do pnfs mount. _mount_nfs /opt/stack/venvs/openstack/local/lib/python2.7/site-packages/cinder/brick/remotefs/remotefs.py:128 2014-12-04 18:40:32.780 49304 DEBUG cinder.openstack.common.processutils [req-e14faed4-4ff7-47a0-8fcb-67d1a4cfd2e4 - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs 10.XX.XX.XX:/nfsvol /mnt/state/var/lib/cinder/mnt/527dc3646de39dbda076e3a72dca54e5 execute /opt/stack/venvs/openstack/local/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:160 2014-12-04 18:40:32.915 49304 DEBUG cinder.brick.remotefs.remotefs [req-e14faed4-4ff7-47a0-8fcb-67d1a4cfd2e4 - - - - -] Failed to do nfs mount. 
_mount_nfs /opt/stack/venvs/openstack/local/lib/python2.7/site-packages/cinder/brick/remotefs/remotefs.py:128 2014-12-04 18:40:32.916 49304 WARNING cinder.volume.drivers.nfs [req-e14faed4-4ff7-47a0-8fcb-67d1a4cfd2e4 - - - - -] Exception during mounting NFS mount failed for share 10.XX.XX.XX:/nfsvol.Error - {'pnfs': u"Unexpected error while running command.\nCommand: sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs -o vers=4,minorversion=1 10.XX.XX.XX:/nfsvol /mnt/state/var/lib/cinder/mnt/527dc3646de39dbda076e3a72dca54e5\nExit code: 32\nStdout: ''\nStderr: 'mount.nfs: an incorrect mount option was specified\\n'", 'nfs': u"Unexpected error while running command.\nCommand: sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs 10.XX.XX.XX:/nfsvol /mnt/state/var/lib/cinder/mnt/527dc3646de39dbda076e3a72dca54e5\nExit code: 32\nStdout: ''\nStderr: 'mount.nfs: Protocol not supported\\n'"} 2014-12-04 18:40:32.917 49304 DEBUG cinder.volume.drivers.nfs [req-e14faed4-4ff7-47a0-8fcb-67d1a4cfd2e4 - - - - -] Available shares [] _ensure_shares_mounted /opt/stack/venvs/openstack/local/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py:181 Thanks & Regards, Shanthakumar K -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at jvf.cc Thu Dec 4 19:22:12 2014 From: jay at jvf.cc (Jay Faulkner) Date: Thu, 4 Dec 2014 19:22:12 +0000 Subject: [openstack-dev] [TripleO] [Ironic] Do we want to remove Nova-bm support? In-Reply-To: <1417719771-sup-5483@fewbar.com> References: <547FE757.9030701@wedontsleep.org> <1417719771-sup-5483@fewbar.com> Message-ID: > On Dec 4, 2014, at 11:04 AM, Clint Byrum wrote: > > Excerpts from Steve Kowalik's message of 2014-12-03 20:47:19 -0800: >> Hi all, >> >> I'm becoming increasingly concerned about all of the code paths >> in tripleo-incubator that check $USE_IRONIC -eq 0 -- that is, use >> nova-baremetal rather than Ironic. 
We do not check nova-bm support in >> CI, haven't for at least a month, and I'm concerned that parts of it >> may be slowly bit-rotting. >> >> I think our documentation is fairly clear that nova-baremetal is >> deprecated and Ironic is the way forward, and I know it flies in the >> face of backwards-compatibility, but do we want to bite the bullet and >> remove nova-bm support? > > Has Ironic settled on a migration path/tool from nova-bm? If yes, then > we should remove nova-bm support and point people at the migration > documentation. > Clint, I believe this is the migration document: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration. As it was required for graduation, as far as I'm aware this is all the work that's going to be done by Ironic for nova-bm migration. FWIW, I'm +1 to removing this support from TripleO as other uses of nova-bm have been deprecated across the Juno release, and this, IMO, should follow the same pattern. Thanks, Jay > If Ironic decided not to provide one, then we should just remove support > as well. > > If Ironic just isn't done yet, then removing nova-bm in TripleO is > premature and we should wait for them to finish. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mestery at mestery.com Thu Dec 4 19:27:31 2014 From: mestery at mestery.com (Kyle Mestery) Date: Thu, 4 Dec 2014 13:27:31 -0600 Subject: [openstack-dev] [neutron] Cancelling next week's Neutron meeting and Neutron Drivers meeting Message-ID: Due to the mid-cycle which is happening next week, I'm cancelling the weekly Neutron meeting, as well as the Neutron Drivers meeting. The meeting page has been updated for both meetings [1] [2]. Thanks! 
Kyle [1] https://wiki.openstack.org/wiki/Network/Meetings [2] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers -------------- next part -------------- An HTML attachment was scrubbed... URL: From ijw.ubuntu at cack.org.uk Thu Dec 4 19:34:13 2014 From: ijw.ubuntu at cack.org.uk (Ian Wells) Date: Thu, 4 Dec 2014 11:34:13 -0800 Subject: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup? In-Reply-To: <87d27z8kxj.fsf@metaswitch.com> References: <87ppc090ne.fsf@metaswitch.com> <87d27z8kxj.fsf@metaswitch.com> Message-ID: On 4 December 2014 at 08:00, Neil Jerram wrote: > Kevin Benton writes: > I was actually floating a slightly more radical option than that: the > idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does > absolutely _nothing_, not even create the TAP device. > Nova always does something, and that something amounts to 'attaches the VM to where it believes the endpoint to be'. Effectively you should view the VIF type as the form that's decided on during negotiation between Neutron and Nova - Neutron says 'I will do this much and you have to take it from there'. (In fact, I would prefer that it was *more* of a negotiation, in the sense that the hypervisor driver had a say to Neutron of what VIF types it supported and preferred, and Neutron could choose from a selection, but I don't think it adds much value at the moment and I didn't want to propose a change just for the sake of it.) I think you're just proposing that the hypervisor driver should do less of the grunt work of connection. Also, libvirt is not the only hypervisor driver and I've found it interesting to nose through the others for background reading, even if you're not using them much. For example, suppose someone came along and wanted to implement a new > OVS-like networking infrastructure? In principle could they do that > without having to enhance the Nova VIF driver code? 
I think at the > moment they couldn't, but that they would be able to if VIF_TYPE_NOOP > (or possibly VIF_TYPE_TAP) was already in place. In principle I think > it would then be possible for the new implementation to specify > VIF_TYPE_NOOP to Nova, and to provide a Neutron agent that does the kind > of configuration and vSwitch plugging that you've described above. > At the moment, the rule is that *if* you create a new type of infrastructure then *at that point* you create your new VIF plugging type to support it - vhostuser being a fine example, having been rejected on the grounds that it was, at the end of Juno, speculative. I'm not sure I particularly like this approach but that's how things are at the moment - largely down to not wanting to add code that isn't used and therefore tested. None of this is criticism of your proposal, which sounds reasonable; I was just trying to provide a bit of context. -- Ian. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Thu Dec 4 19:37:55 2014 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 4 Dec 2014 13:37:55 -0600 Subject: [openstack-dev] cinder netapp 7 mode nfs driver - failed to mount share In-Reply-To: <23261B8C079082499641DD0AE94EA2BF19510CA4@G9W0337.americas.hpqcorp.net> References: <23261B8C079082499641DD0AE94EA2BF19510CA4@G9W0337.americas.hpqcorp.net> Message-ID: On Thu, Dec 4, 2014 at 1:19 PM, K, Shanthakumar (HP Cloud) < shanthakumar.k at hp.com> wrote: > I'm trying to integrate Netapp 7 mode NFS driver with openstack cinder > volume provisioning using Juno 2. > > During the driver initialization I'm getting the following error and > volume creation is failing. > > Do I need to do any changes in storage or client side ? 
> The log indicates this is a client-side issue: > 2014-12-04 18:40:32.916 49304 WARNING cinder.volume.drivers.nfs > [req-e14faed4-4ff7-47a0-8fcb-67d1a4cfd2e4 - - - - -] Exception during > mounting NFS mount failed for share 10.XX.XX.XX:/nfsvol.Error - {'pnfs': > u"Unexpected error while running command.\nCommand: sudo cinder-rootwrap > /etc/cinder/rootwrap.conf mount -t nfs -o vers=4,minorversion=1 > 10.XX.XX.XX:/nfsvol > /mnt/state/var/lib/cinder/mnt/527dc3646de39dbda076e3a72dca54e5\nExit code: > 32\nStdout: ''\nStderr: 'mount.nfs: an incorrect mount option was > specified\\n'", 'nfs': u"Unexpected error while running command.\nCommand: > sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs > 10.XX.XX.XX:/nfsvol > /mnt/state/var/lib/cinder/mnt/527dc3646de39dbda076e3a72dca54e5\nExit code: > 32\nStdout: ''\nStderr: 'mount.nfs: Protocol not supported\\n'"} > Make sure you have everything installed for NFS support on your cinder volume nodes and nova compute nodes. This varies by distro, but you should be able to do a manual mount of your NetApp shares before Cinder will be happy. dt -- Dean Troyer dtroyer at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Dec 4 19:38:31 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 4 Dec 2014 19:38:31 +0000 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: <547FEC20.7000308@redhat.com> References: <547FEC20.7000308@redhat.com> Message-ID: <20141204193831.GK2497@yuggoth.org> On 2014-12-03 23:24:41 +0000 (+0000), Sukhdev Kapur wrote: [...] > If I may express my opinion, Bob's contribution to ML2 has been > quite substantial. The kind of stability ML2 has achieved makes a > statement of his dedication to this work. I have worked very > closely with Bob on several issues and co-chaired ML2-Subteam with > him and have developed tremendous respect for his dedication. 
He is a remarkable contributor to the project--of that I have no doubt. > Reading his email reply makes me believe he wants to continue to > contribute as core developer. And here, I believe, is the source of the confusion: "core developer" isn't a specific role in OpenStack projects. We have all sorts of contributions including not only software development but also documentation, translation, infrastructure management, quality assurance, et cetera, et cetera... _and_ code review. > Therefore, I would like to take an opportunity to appeal to the > core team to consider granting him his wish - i.e. vote -1 on his > removal. The proposal was to remove him from the list of Neutron core *reviewers* (the people tasked with wading through the endless volumes of code contributed by developers day after day). This says nothing of his remarkable ongoing contributions to the project, except merely that he hasn't had a lot of time to review other people's code contributions for a while now. It's not a stigma, it's simply a reflection of which kinds of contributions he focuses on making. -- Jeremy Stanley From anteaya at anteaya.info Thu Dec 4 19:40:16 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Thu, 04 Dec 2014 14:40:16 -0500 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: <5480B3A0.1020004@redhat.com> References: <547FEC20.7000308@redhat.com> <548098D7.4090702@anteaya.info> <5480AB95.1000103@openstack.org> <5480B3A0.1020004@redhat.com> Message-ID: <5480B8A0.9040205@anteaya.info> On 12/04/2014 02:18 PM, Adam Young wrote: > On 12/04/2014 01:44 PM, Stefano Maffulli wrote: >> On 12/04/2014 09:24 AM, Anita Kuno wrote: >>> I think we move into very dangerous territory if we are equating a core >>> review Gerrit permission (it is just a Gerrit permission, if it is >>> perceived as anything other than that that is a perception we have >>> created ourselves) with value as an OpenStack contributor. > Forget the +2. 
I would want Bob to have -2 capability on Neutron. > > A -1 can be reset with just a new review. A -2 is the real power of > core. The power to say "this is wrong, and you need to address it > before it moves on." > > And I say that as someone who has been on the receiving end of multiple > -2s... That is not currently an acl setting we have in Gerrit. If you feel strongly that we should add an additional category for Gerrit acls, do add an agenda item to the infra team meeting and we would be glad to hear your rational and discuss what we are and are not able to do in Gerrit. Thanks Adam, Anita. > > > >> +2 to what Anita is saying. >> >> We're talking about a tick box in gerrit and it's mostly a burden, an >> obligation. >> >> It's also a box that has been proven multiple times to be >> bi-directional: sometimes people get to go work on other things, or get >> called back in active duty, and reassigned as code janitors. >> >> I haven't read anything but great comments and thank you notes for Bob >> and Nati. They're both great developers and have taken good care of >> Neutron's code. Once they're ready to get dirty again, it'll take only a >> couple of weeks doing reviews to be picked. >> >> Now, if you (and, more importantly, your employer) thinks that being >> core reviewer is a honor and your bonus depends on it, contact me >> offline to discuss this further. 
>> >> Regards, >> Stef >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From devananda.vdv at gmail.com Thu Dec 4 19:57:27 2014 From: devananda.vdv at gmail.com (Devananda van der Veen) Date: Thu, 04 Dec 2014 19:57:27 +0000 Subject: [openstack-dev] [TripleO] [Ironic] Do we want to remove Nova-bm support? References: <547FE757.9030701@wedontsleep.org> <1417719771-sup-5483@fewbar.com> Message-ID: On Thu Dec 04 2014 at 11:05:53 AM Clint Byrum wrote: > Excerpts from Steve Kowalik's message of 2014-12-03 20:47:19 -0800: > > Hi all, > > > > I'm becoming increasingly concerned about all of the code paths > > in tripleo-incubator that check $USE_IRONIC -eq 0 -- that is, use > > nova-baremetal rather than Ironic. We do not check nova-bm support in > > CI, haven't for at least a month, and I'm concerned that parts of it > > may be slowly bit-rotting. > > > > I think our documentation is fairly clear that nova-baremetal is > > deprecated and Ironic is the way forward, and I know it flies in the > > face of backwards-compatibility, but do we want to bite the bullet and > > remove nova-bm support? > > Has Ironic settled on a migration path/tool from nova-bm? If yes, then > we should remove nova-bm support and point people at the migration > documentation. > Such a tool was created and has been provided for the Juno release as a "sideways migration". That is, an in-place migration from Juno Nova Baremetal to Juno Ironic is supported. Such is documented here: https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration That is all that will be provided, as Baremetal has been removed from Nova at the start of the Kilo cycle. 
-Deva > If Ironic decided not to provide one, then we should just remove support > as well. > > If Ironic just isn't done yet, then removing nova-bm in TripleO is > premature and we should wait for them to finish. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From manishgmailbox-os at yahoo.com Thu Dec 4 19:58:11 2014 From: manishgmailbox-os at yahoo.com (Manish Godara) Date: Thu, 4 Dec 2014 19:58:11 +0000 (UTC) Subject: [openstack-dev] Re: [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force In-Reply-To: <5478974D.9030303@suse.com> References: <5478974D.9030303@suse.com> Message-ID: <13748657.3385832.1417723091237.JavaMail.yahoo@jws10633.mail.bf1.yahoo.com> Original message bounced and I didn't notice. thanks, manish ----- Subject: Re: [openstack-dev] Re: [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force Thanks Rossella. I'll take a look this week. My name is also on the etherpad already and can help with this. Somehow missed this thread earlier. regards, On Friday, November 28, 2014 7:42 AM, Rossella Sblendido wrote: On 11/27/2014 12:21 PM, marios wrote: > Hi, so far we have this going > https://etherpad.openstack.org/p/restructure-l2-agent I finally pushed a design spec based on the etherpad above, https://review.openstack.org/#/c/137808/ . Anybody interested please comment on the review. cheers, Rossella _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pieter.c.kruithof-jr at hp.com Thu Dec 4 19:59:30 2014 From: pieter.c.kruithof-jr at hp.com (Kruithof, Piet) Date: Thu, 4 Dec 2014 19:59:30 +0000 Subject: [openstack-dev] FW: [horizon] [ux] Changing how the modals are closed in Horizon In-Reply-To: References: Message-ID: My preference would be "change the default behavior to 'static'" for the following reasons: - There are plenty of ways to close the modal, so there's not really a need for this feature. - There are no visual cues, such as an 'X' or a Cancel button, that selecting outside of the modal closes it. - Downside is losing all of your data. My two cents... Begin forwarded message: From: "Rob Cresswell (rcresswe)" > To: "OpenStack Development Mailing List (not for usage questions)" > Date: December 3, 2014 at 5:21:51 AM PST Subject: Re: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon Reply-To: "OpenStack Development Mailing List (not for usage questions)" > +1 to changing the behaviour to 'static'. Modal inside a modal is potentially slightly more useful, but looks messy and inconsistent, which I think outweighs the functionality. Rob On 2 Dec 2014, at 12:21, Timur Sufiev > wrote: Hello, Horizoneers and UX-ers! The default behavior of modals in Horizon (defined in turn by Bootstrap defaults) regarding their closing is to simply close the modal once the user clicks somewhere outside of it (on the backdrop element below and around the modal). This is not very convenient for the modal forms containing a lot of input - when it is closed without a warning all the data the user has already provided is lost. Keeping this in mind, I've made a patch [1] changing the default Bootstrap 'modal_backdrop' parameter to 'static', which means that forms are not closed once the user clicks on a backdrop, while it's still possible to close them by pressing 'Esc' or clicking on the 'X' link at the top right border of the form. 
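For readers less familiar with the Bootstrap option under discussion: the three backdrop values ('true', 'static', false) differ only in how a click on the backdrop is handled. The sketch below models just that close-decision logic; it is illustrative pseudologic with made-up names, not the Horizon patch or Bootstrap's actual implementation.

```javascript
// Minimal sketch of how the three Bootstrap 'backdrop' modes react to
// the common ways of dismissing a modal. Names are illustrative only.
function modalReacts(backdrop, event) {
  // 'Esc' (with keyboard enabled) and the 'X' link close the modal in every mode.
  if (event === 'escape' || event === 'close-button') {
    return 'closed';
  }
  if (event === 'backdrop-click') {
    if (backdrop === true) {
      return 'closed';   // default: a click outside silently dismisses the form
    }
    // 'static': backdrop is shown but clicks on it are ignored;
    // false: no backdrop element is rendered at all.
    return 'open';
  }
  return 'open';
}

console.log(modalReacts(true, 'backdrop-click'));     // 'closed'
console.log(modalReacts('static', 'backdrop-click')); // 'open'
console.log(modalReacts('static', 'escape'));         // 'closed'
```

In Bootstrap 3 itself this mode is normally selected with a `data-backdrop="static"` attribute on the modal markup or a `$(...).modal({backdrop: 'static'})` call, while the 'Esc' behaviour is governed by the separate `keyboard` option.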
Also the patch [1] allows customizing this behavior (between 'true' - the current one/'false' - no backdrop element/'static') on a per-form basis. What I didn't know at the moment I was uploading my patch is that David Lyle had been working on a similar solution [2] some time ago. It's a bit more elaborate than mine: if the user has already filled some inputs in the form, then a confirmation dialog is shown, otherwise the form is silently dismissed as it happens now. The whole point of writing about this in the ML is to gather opinions on which approach is better: * stick to the current behavior; * change the default behavior to 'static'; * use David's solution with the confirmation dialog (once it's rebased to the current codebase). What do you think? [1] https://review.openstack.org/#/c/113206/ [2] https://review.openstack.org/#/c/23037/ P.S. I remember that I promised to write this email a week ago, but better late than never :). -- Timur Sufiev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From apilotti at cloudbasesolutions.com Thu Dec 4 20:27:37 2014 From: apilotti at cloudbasesolutions.com (Alessandro Pilotti) Date: Thu, 4 Dec 2014 20:27:37 +0000 Subject: [openstack-dev] [Nova] Tracking Kilo priorities In-Reply-To: References: <8B8F90D1-9C05-47BB-934E-6768A02ABA0C@cloudbasesolutions.com> Message-ID: <6B7B89BE-60B7-4912-9BC4-C32FA61410EE@cloudbasesolutions.com> Hi Michael, > On 25 Nov 2014, at 02:35, Michael Still wrote: > > On Tue, Nov 25, 2014 at 11:23 AM, Alessandro Pilotti > wrote: >> Hi Michael, >> >>> On 25 Nov 2014, at 01:57, Michael Still wrote: >>> >>> First off, sorry for the slow reply. I was on vacation last week. 
>>> >>> The project priority list was produced as part of the design summit, >>> and reflects nova's need to pay down technical debt in order to keep >>> meeting our users' needs. So, whilst driver changes are important, they >>> don't belong on that etherpad. >>> >>> That said, I think the best way to help us keep up with our code >>> review requirements is to be an active reviewer. I know we say this a >>> lot, but many cores optimize for patches which have already been >>> reviewed and +1'ed by a non-core. So... Code review even with a +1 >>> makes a real difference to us being able to keep up. >>> >> >> Thanks for your reply, we actually do quite a lot of cross reviews, with the >> general rule being that every patch produced by one of the Hyper-V team members >> needs to be reviewed by at least another two. >> >> The main issue is that reviews get lost at every rebase and keeping track of >> this becomes not trivial when there are a lot of open patches under review, >> mostly interdependent. It's not easy to keep people motivated to do this, but >> we do our best! > > This is good news, and I will admit that I don't track the review > throughput of sub teams in nova. > > I feel like focusing on the start of each review chain is useful here. > If you have the first two reviews in a chain with a couple of +1s on > them already, then I think that's a reasonable set of reviews to bring > up at a nova meeting. For today's Nova meeting, I'd like to propose: https://review.openstack.org/#/c/129235/ https://review.openstack.org/#/c/136484/ If possible, one of the other patches waiting for a couple of months is: https://review.openstack.org/#/c/131734/ Thanks, Alessandro > I sometimes see a +2 or two on reviews now at 
> > Michael > > -- > Rackspace Australia From mikal at stillhq.com Thu Dec 4 20:37:23 2014 From: mikal at stillhq.com (Michael Still) Date: Fri, 5 Dec 2014 07:37:23 +1100 Subject: [openstack-dev] [Nova] Tracking Kilo priorities In-Reply-To: <6B7B89BE-60B7-4912-9BC4-C32FA61410EE@cloudbasesolutions.com> References: <8B8F90D1-9C05-47BB-934E-6768A02ABA0C@cloudbasesolutions.com> <6B7B89BE-60B7-4912-9BC4-C32FA61410EE@cloudbasesolutions.com> Message-ID: Cool -- I have added these to the agenda. Michael On Fri, Dec 5, 2014 at 7:27 AM, Alessandro Pilotti wrote: > Hi Michael, > >> On 25 Nov 2014, at 02:35, Michael Still wrote: >> >> On Tue, Nov 25, 2014 at 11:23 AM, Alessandro Pilotti >> wrote: >>> Hi Michael, >>> >>>> On 25 Nov 2014, at 01:57, Michael Still wrote: >>>> >>>> First off, sorry for the slow reply. I was on vacation last week. >>>> >>>> The project priority list was produced as part of the design summit, >>>> and reflects nova's need to pay down technical debt in order to keep >>>> meeting our users needs. So, whilst driver changes are important, they >>>> doesn't belong on that etherpad. >>>> >>>> That said, I think the best way to help us keep up with our code >>>> review requirements is to be an active reviewer. I know we say this a >>>> lot, but many cores optimize for patches which have already been >>>> reviewed and +1'ed by a non-core. So... Code review even with a +1 >>>> makes a real difference to us being able to keep up. >>>> >>> >>> Thanks for your reply, we actually do quite a lot of cross reviews, with the >>> general rule being that every patch produced by one of the Hyper-V team members >>> needs to be reviewed by at least another two. >>> >>> The main issue is that reviews get lost at every rebase and keeping track of >>> this becomes not trivial when there are a lot of open patches under review, >>> mostly interdependent. It's not easy to keep people motivated to do this, but >>> we do our best! 
>> >> This is good news, and I will admit that I don't track the review >> throughput of sub teams in nova. >> >> I feel like focusing on the start of each review chain is useful here. >> If you have the first two reviews in a chain with a couple of +1s on >> them already, then I think that's a reasonable set of reviews to bring >> up at a nova meeting. > > For today's Nova meeting, I'd like to propose: > > https://review.openstack.org/#/c/129235/ > https://review.openstack.org/#/c/136484/ > > If possible, one of the other patches waiting for a couple of months is: > https://review.openstack.org/#/c/131734/ > > Thanks, > > Alessandro > >> I sometimes see a +2 or two on reviews now at >> the beginning of a chain, and that's wasted effort. >> >> Michael >> >> -- >> Rackspace Australia > -- Rackspace Australia From clay.gerrard at gmail.com Thu Dec 4 20:50:14 2014 From: clay.gerrard at gmail.com (Clay Gerrard) Date: Thu, 4 Dec 2014 12:50:14 -0800 Subject: [openstack-dev] [swift] a way of checking replicate completion on swift cluster In-Reply-To: <88AC986BE9D6C44AB9659D6A59329C6F51E3C4F3@G01JPEXMBYT01> References: <0239431E587EFB4AAC571DD74E73204B1524C64E@G01JPEXMBYT04> <0239431E587EFB4AAC571DD74E73204B1524DEF2@G01JPEXMBYT04> <0239431E587EFB4AAC571DD74E73204B1525EB02@G01JPEXMBYT04> <88AC986BE9D6C44AB9659D6A59329C6F51E3C4F3@G01JPEXMBYT01> Message-ID: more fidelity in the recon's seems fine, statsd emissions are also a popular target for telemetry radiation. On Thu, Nov 27, 2014 at 5:01 AM, Osanai, Hisashi < osanai.hisashi at jp.fujitsu.com> wrote: > > Hi, > > I think it is a good idea to have the object-replicator's failure info > in recon like the other replicators. > > I think the following info can be added in object-replicator in addition to > "object_replication_last" and "object_replication_time". > > If there is any technical reason to not add them, I can make it. What do > you think? 
> > { > "replication_last": 1416334368.60865, > "replication_stats": { > "attempted": 13346, > "empty": 0, > "failure": 870, > "failure_nodes": {"192.168.0.1": 3, > "192.168.0.2": 860, > "192.168.0.3": 7}, > "hashmatch": 0, > "remove": 0, > "start": 1416354240.9761429, > "success": 1908, > "ts_repl": 0 > }, > "replication_time": 2316.5563162644703, > "object_replication_last": 1416334368.60865, > "object_replication_time": 2316.5563162644703 > } > > Cheers, > Hisashi Osanai > > On Tuesday, November 25, 2014 4:37 PM, Matsuda, Kenichiro [mailto: matsuda_kenichi at jp.fujitsu.com] wrote: > > I understood that the logs are necessary to judge whether there was any failure on the > > object-replicator. > > And also, I thought that the recon info of object-replicator having failure > > (just like the recon info of account-replicator and container-replicator) > > is useful. > > Is there any reason to not include failure in recon? > > On Tuesday, November 25, 2014 5:53 AM, Clay Gerrard [mailto: clay.gerrard at gmail.com] wrote: > > > replication logs > > On Friday, November 21, 2014 4:22 AM, Clay Gerrard [mailto: clay.gerrard at gmail.com] wrote: > > You might check if the swift-recon tool has the data you're looking for. It can report > > the last completed replication pass time across nodes in the ring. > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clint at fewbar.com Thu Dec 4 21:24:02 2014 From: clint at fewbar.com (Clint Byrum) Date: Thu, 04 Dec 2014 13:24:02 -0800 Subject: [openstack-dev] [TripleO] Do we want to remove Nova-bm support? 
In-Reply-To: <5480B20A.5080804@nemebean.com> References: <547FE757.9030701@wedontsleep.org> <5480B20A.5080804@nemebean.com> Message-ID: <1417727947-sup-1606@fewbar.com> Excerpts from Ben Nemec's message of 2014-12-04 11:12:10 -0800: > FWIW, I think the "correct" thing to do here is to get our Juno jobs up > and running and have one of them verify the nova-bm code paths for this > cycle, and then remove it next cycle. > > That said, I have no idea how close we are to actually having Juno jobs > and I agree that we have no idea if the nova-bm code actually works > anymore (although that applies to backwards compat as a whole too). > > I guess I'm inclined to just leave it though. AFAIK the nova-bm code > isn't hurting anything, and if it does happen to be working and have a > user then removing it would break them for no good reason. If it's not > working then it's not working and nobody's going to accidentally start > using it. The only real downside of leaving it is if it is working and > someone would happen to override our defaults, ignore all the > deprecation warnings, and start using it anyway. I don't see that as a > big concern. > > But I'm not super attached to nova-bm either, so just my 2 cents. > I think this is overly cautious, but I can't think of a moderately cautious plan, so let's just land deprecation warning messages in the image builds and devtest scripts. I don't know if there's much more we can do without running the risk of yanking the rug out from some silent user out there. From mriedem at linux.vnet.ibm.com Thu Dec 4 21:31:59 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Thu, 04 Dec 2014 15:31:59 -0600 Subject: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches Message-ID: <5480D2CF.8050301@linux.vnet.ibm.com> This came up in the nova meeting today, I've opened a bug [1] for it. 
Since this isn't maintained by infra we don't have log indexing so I can't use logstash to see how pervasive it is, but multiple people are reporting the same thing in IRC. Who is maintaining the nova-docker CI and can look at this? It also looks like the log format for the nova-docker CI is a bit weird, can that be cleaned up to be more consistent with other CI log results? [1] https://bugs.launchpad.net/nova-docker/+bug/1399443 -- Thanks, Matt Riedemann From Sean_Collins2 at cable.comcast.com Thu Dec 4 21:36:16 2014 From: Sean_Collins2 at cable.comcast.com (Collins, Sean) Date: Thu, 4 Dec 2014 21:36:16 +0000 Subject: [openstack-dev] [Neutron][IPv6] No IPv6 meeting next week - Dec. 9th 2014 Message-ID: Due to the mid-cycle, and other neutron meetings being cancelled, we will not meet on the 9th -- Sean M. Collins -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne at openstack.org Thu Dec 4 21:36:54 2014 From: anne at openstack.org (Anne Gentle) Date: Thu, 4 Dec 2014 15:36:54 -0600 Subject: [openstack-dev] What's Up Doc? Dec 4 2014 [nova][glance][neutron] Message-ID: This week the Install Guide collaborators met to come to agreement on the scope and goals for the upstream Install Guides. With four distros, four guides, we discussed the goals and scope for the upstream Install guides. We will continue with Install Guide solutions at next week's Doc Team meeting since we didn't get to it during the first hour. Here are the notes and logs from that meeting: Notes: http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-12-03-19.02.html Log: http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-12-03-19.02.log.html The OpenStack PTLs are recording presentations, about fifteen minutes for each PTL, to inform everyone about the plans for Kilo. I've attached the presentation and you can also watch the recording here.
https://www.youtube.com/watch?v=ZJHhoA8dCNI&feature=youtu.be Today I uploaded a specification for the migration to the new web design. Please review it carefully. Look for where you'd like to help also. There are a lot of tasks and I'll be asking for help. https://review.openstack.org/#/c/139154/ Another API spec merged this past week, just nova, neutron, and glance to go. Please take a look: nova: https://review.openstack.org/129329, https://review.openstack.org/#/c/130550/ glance: https://review.openstack.org/#/c/129367/ neutron: https://review.openstack.org/#/c/134350/ Thanks all - Anne -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenStackDocKiloUpdate.pdf Type: application/pdf Size: 157786 bytes Desc: not available URL: From mriedem at linux.vnet.ibm.com Thu Dec 4 21:51:22 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Thu, 04 Dec 2014 15:51:22 -0600 Subject: [openstack-dev] [nova] global or per-project specific ssl config options, or both? In-Reply-To: References: <547FE9BA.1010700@linux.vnet.ibm.com> Message-ID: <5480D75A.2010607@linux.vnet.ibm.com> On 12/4/2014 6:02 AM, Davanum Srinivas wrote: > +1 to @markmc's "default is global value and override for project > specific key" suggestion. > > -- dims > > > > On Wed, Dec 3, 2014 at 11:57 PM, Matt Riedemann > wrote: >> I've posted this to the 12/4 nova meeting agenda but figured I'd socialize >> it here also. >> >> SSL options - do we make them per-project or global, or both? Neutron and >> Cinder have config-group specific SSL options in nova, Glance is using oslo >> sslutils global options since Juno which was contentious for a time in a >> separate review in Icehouse [1]. >> >> Now [2] wants to break that out for Glance, but we also have a patch [3] for >> Keystone to use the global oslo SSL options, we should be consistent, but >> does that require a blueprint now? 
>> >> In the Icehouse patch, markmc suggested using a DictOpt where the default >> value is the global value, which could be coming from the oslo [ssl] group >> and then you could override that with a project-specific key, e.g. cinder, >> neutron, glance, keystone. >> >> [1] https://review.openstack.org/#/c/84522/ >> [2] https://review.openstack.org/#/c/131066/ >> [3] https://review.openstack.org/#/c/124296/ >> >> -- >> >> Thanks, >> >> Matt Riedemann >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > The consensus in the nova meeting today, I think, was that we generally like the idea of the DictOpt with global oslo ssl as the default and then be able to configure that per-service if needed. Does anyone want to put up a POC on how that would work to see how ugly and/or usable that would be? I haven't dug into the DictOpt stuff yet and am kind of time-constrained at the moment. -- Thanks, Matt Riedemann From jsbryant at electronicjungle.net Thu Dec 4 22:00:53 2014 From: jsbryant at electronicjungle.net (Jay S. Bryant) Date: Thu, 04 Dec 2014 16:00:53 -0600 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: References: <547DE593.9090709@electronicjungle.net> Message-ID: <5480D995.2000305@electronicjungle.net> On 12/03/2014 01:17 PM, Alan Pevec wrote: > 2014-12-02 17:15 GMT+01:00 Jay S. Bryant : >>> Cinder >>> https://review.openstack.org/137537 - small change and limited to the >>> VMWare driver >> +1 I think this is fine to make an exception for. > one more Cinder exception proposal was added in StableJuno etherpad > * https://review.openstack.org/#/c/138526/ (This is currently the > master version but I will be proposing to stable/juno as soon as it is > approved in Master) The Brocade FS San Lookup facility is currently > broken and this revert is necessary to get it working again. 
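Returning to the SSL config thread above: a rough sketch of the lookup semantics markmc suggested (the global [ssl] values as the default, overridable with a project-specific key) might look like the following. This is plain Python for illustration, not actual oslo.config DictOpt code, and the file paths and key names are invented.

```python
# Hypothetical sketch of "global [ssl] defaults with per-project
# overrides" lookup semantics.  Plain Python, not oslo.config; the
# paths and key names below are invented for illustration only.

GLOBAL_SSL = {"ca_file": "/etc/ssl/ca.pem", "insecure": False}

# Roughly what a parsed DictOpt value might hold, e.g. from a config
# line like: ssl_overrides = glance:ca_file:/etc/ssl/glance-ca.pem
SERVICE_OVERRIDES = {
    "glance": {"ca_file": "/etc/ssl/glance-ca.pem"},
}

def ssl_opts(service):
    """Return the global SSL options overlaid with per-service keys."""
    opts = dict(GLOBAL_SSL)
    opts.update(SERVICE_OVERRIDES.get(service, {}))
    return opts

print(ssl_opts("glance"))   # per-service ca_file, global insecure
print(ssl_opts("neutron"))  # no override: falls back to global values
```

A real POC would hang this lookup off a registered DictOpt so unset services silently inherit the oslo [ssl] group; the sketch only shows the overlay behavior being discussed.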
> > Jay, what's the status there, I see master change failed in gate? > > Cheers, > Alan Finally was able to get the master version through the gate and cherry-picked it here: https://review.openstack.org/#/c/139194/ That one has made it through the check. So, if it isn't too late, it could use a +2/+A. Thanks! Jay From mikal at stillhq.com Thu Dec 4 22:06:29 2014 From: mikal at stillhq.com (Michael Still) Date: Fri, 5 Dec 2014 09:06:29 +1100 Subject: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches In-Reply-To: <5480D2CF.8050301@linux.vnet.ibm.com> References: <5480D2CF.8050301@linux.vnet.ibm.com> Message-ID: +Eric and Ian On Fri, Dec 5, 2014 at 8:31 AM, Matt Riedemann wrote: > This came up in the nova meeting today, I've opened a bug [1] for it. Since > this isn't maintained by infra we don't have log indexing so I can't use > logstash to see how pervasive it us, but multiple people are reporting the > same thing in IRC. > > Who is maintaining the nova-docker CI and can look at this? > > It also looks like the log format for the nova-docker CI is a bit weird, can > that be cleaned up to be more consistent with other CI log results? > > [1] https://bugs.launchpad.net/nova-docker/+bug/1399443 > > -- > > Thanks, > > Matt Riedemann > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Rackspace Australia From dougw at a10networks.com Thu Dec 4 22:17:03 2014 From: dougw at a10networks.com (Doug Wiegley) Date: Thu, 4 Dec 2014 22:17:03 +0000 Subject: [openstack-dev] [neutron][lbaas] lbaas v2 drivers/specs Message-ID: <3A46DF73-D43B-4685-B022-4A479B21CDD5@a10networks.com> Hi lbaas, Just a reminder that the spec submission deadline is Dec 8th (this Monday.) If you are working on lbaas v2 features or drivers, and had a spec in Juno, it must be re-submitted for Kilo. 
LBaaS v2 specs that are currently submitted for Kilo: LBaaS V2 API and object model definition - https://review.openstack.org/#/c/138205/ A10 Networks LBaaS v2 driver - https://review.openstack.org/#/c/138930/ Spec for the brocade lbaas driver based on v2 lbaas data model - https://review.openstack.org/#/c/134108/ Specs from Juno that have not been re-submitted yet: LBaaS Layer 7 rules - https://github.com/openstack/neutron-specs/blob/master/specs/juno-incubator/lbaas-l7-rules.rst LBaaS reference implementation TLS support - https://github.com/openstack/neutron-specs/blob/master/specs/juno-incubator/lbaas-ref-driver-impl-tls.rst LBaaS Refactor HAProxy namespace driver - https://github.com/openstack/neutron-specs/blob/master/specs/juno-incubator/lbaas-refactor-haproxy-namespace-driver-to-new-driver-interface.rst Neutron LBaaS TLS - Specification - https://github.com/openstack/neutron-specs/blob/master/specs/juno-incubator/lbaas-tls.rst Radware LBaaS Driver - https://github.com/openstack/neutron-specs/blob/master/specs/juno-incubator/radware-lbaas-driver.rst If you were working on a spec in Juno, and no longer have time, please reply here or let me know directly. Thanks, Doug From mriedem at linux.vnet.ibm.com Thu Dec 4 22:38:45 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Thu, 04 Dec 2014 16:38:45 -0600 Subject: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches In-Reply-To: References: <5480D2CF.8050301@linux.vnet.ibm.com> Message-ID: <5480E275.8020005@linux.vnet.ibm.com> On 12/4/2014 4:06 PM, Michael Still wrote: > +Eric and Ian > > On Fri, Dec 5, 2014 at 8:31 AM, Matt Riedemann > wrote: >> This came up in the nova meeting today, I've opened a bug [1] for it. Since >> this isn't maintained by infra we don't have log indexing so I can't use >> logstash to see how pervasive it us, but multiple people are reporting the >> same thing in IRC. 
>> >> Who is maintaining the nova-docker CI and can look at this? >> >> It also looks like the log format for the nova-docker CI is a bit weird, can >> that be cleaned up to be more consistent with other CI log results? >> >> [1] https://bugs.launchpad.net/nova-docker/+bug/1399443 >> >> -- >> >> Thanks, >> >> Matt Riedemann >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > Also, according to the 3rd party CI requirements [1] I should see nova-docker CI in the third party wiki page [2] so I can get details on who to contact when this fails but that's not done. [1] http://ci.openstack.org/third_party.html#requirements [2] https://wiki.openstack.org/wiki/ThirdPartySystems -- Thanks, Matt Riedemann From sean at dague.net Thu Dec 4 22:50:28 2014 From: sean at dague.net (Sean Dague) Date: Thu, 04 Dec 2014 17:50:28 -0500 Subject: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches In-Reply-To: <5480E275.8020005@linux.vnet.ibm.com> References: <5480D2CF.8050301@linux.vnet.ibm.com> <5480E275.8020005@linux.vnet.ibm.com> Message-ID: <5480E534.2050601@dague.net> On 12/04/2014 05:38 PM, Matt Riedemann wrote: > > > On 12/4/2014 4:06 PM, Michael Still wrote: >> +Eric and Ian >> >> On Fri, Dec 5, 2014 at 8:31 AM, Matt Riedemann >> wrote: >>> This came up in the nova meeting today, I've opened a bug [1] for it. >>> Since >>> this isn't maintained by infra we don't have log indexing so I can't use >>> logstash to see how pervasive it us, but multiple people are >>> reporting the >>> same thing in IRC. >>> >>> Who is maintaining the nova-docker CI and can look at this? >>> >>> It also looks like the log format for the nova-docker CI is a bit >>> weird, can >>> that be cleaned up to be more consistent with other CI log results? 
>>> >>> [1] https://bugs.launchpad.net/nova-docker/+bug/1399443 >>> >>> -- >>> >>> Thanks, >>> >>> Matt Riedemann >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> > > Also, according to the 3rd party CI requirements [1] I should see > nova-docker CI in the third party wiki page [2] so I can get details on > who to contact when this fails but that's not done. > > [1] http://ci.openstack.org/third_party.html#requirements > [2] https://wiki.openstack.org/wiki/ThirdPartySystems It's not the 3rd party CI job we are talking about, it's the one in the check queue which is run by infra. But, more importantly, jobs in those queues need shepards that will fix them. Otherwise they will get deleted. Clarkb provided the fix for the log structure right now - https://review.openstack.org/#/c/139237/1 so at least it will look vaguely sane on failures -Sean -- Sean Dague http://dague.net From apevec at gmail.com Thu Dec 4 23:01:36 2014 From: apevec at gmail.com (Alan Pevec) Date: Fri, 5 Dec 2014 00:01:36 +0100 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: <5480D995.2000305@electronicjungle.net> References: <547DE593.9090709@electronicjungle.net> <5480D995.2000305@electronicjungle.net> Message-ID: 2014-12-04 23:00 GMT+01:00 Jay S. Bryant : > Finally was able to get the master version through the gate and > cherry-picked it here: > https://review.openstack.org/#/c/139194/ > > That one has made it through the check. So, if it isn't too late, it could > use a +2/+A. Looks good and is exception worthy, approved. 
From apevec at gmail.com Thu Dec 4 23:14:35 2014 From: apevec at gmail.com (Alan Pevec) Date: Fri, 5 Dec 2014 00:14:35 +0100 Subject: [openstack-dev] [oslo] oslo.messaging config option deprecation In-Reply-To: References: <547F7656.80708@dague.net> <547F7C46.7010304@nemebean.com> Message-ID: > Filed as https://bugs.launchpad.net/oslo.messaging/+bug/1399085 > Since this is blocking stable/juno I've pushed partial revert of > Revert "Cap Oslo and client library versions" > https://review.openstack.org/138963 > as a quickfix before the 2014.2.1 release today. > We'll of course need to revisit that, once oslo.messaging is fixed. oslo.messaging 1.5.1 was released so we've revisited this on #openstack-stable and instead of this quickfix pushed https://review.openstack.org/#q,I6107b996d5da808c3222696a9549ee06c22f80b9,n,z and https://review.openstack.org/#q,I47138da02c58073b03e3fb40537cc6f0b6a94a3c,n,z From greg at greghaynes.net Thu Dec 4 23:20:53 2014 From: greg at greghaynes.net (Gregory Haynes) Date: Thu, 04 Dec 2014 15:20:53 -0800 Subject: [openstack-dev] [TripleO] Using python logging.conf for openstack services Message-ID: <1417735253.3315371.198607341.1EF60179@webmail.messagingengine.com> Hello TripleOers, I got a patch together to move us off of our upstart exec | logger -t hack [1] and this got me wondering - why aren't we using the python logging.conf supported by most OpenStack projects [2] to write out logs to files in our desired location? This is highly desirable for a couple reasons: * Less complexity / more straightforward. Basically we wouldn't have to run rsyslog or similar and have app config to talk to syslog then syslog config to put our logs where we want. We also don't have to battle with upstart + rsyslog vs systemd-journald differences and maintain two sets of configuration. * We get actual control over formatting. 
This is a double-edged sword in that AFAICT you *have* to control formatting if you're using a logging.conf with a custom log handler. This means it would be a bit of a divergence from our "use the defaults" policy, but there are some logging formats in the OpenStack docs [3] named "normal", so maybe this could be acceptable? The big win here is we can avoid issues like having duplicate timestamps [4] (this issue still exists on Ubuntu, at least) without having to do two sets of configuration, one for upstart + rsyslog, one for systemd. * This makes setting custom logging configuration a lot more feasible for operators. As-is, if an operator wants to forward logs to an existing central log server we don't really have a good way for them to do this. We also have a requirement that we come up with a way to expose the rsyslog/journald config options needed to do this to operators. If we are using logging.conf we can just use our existing passthrough-config system to let operators simply write out custom logging.conf files, which are already documented by OpenStack. Thoughts? Comments? Concerns?
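For reference, a minimal sketch of what such a pass-through configuration boils down to, expressed with the stdlib logging.config machinery that the OpenStack log-config support builds on. The handler, file path, and format string are illustrative placeholders, not TripleO or oslo defaults.

```python
import logging
import logging.config
import os
import tempfile

# Illustrative only: roughly what an operator's custom logging.conf
# would express, written here as a dictConfig.  The path and format
# string below are placeholders, not TripleO or oslo defaults.
log_path = os.path.join(tempfile.mkdtemp(), "nova.log")

logging.config.dictConfig({
    "version": 1,
    "formatters": {
        "normal": {"format": "%(asctime)s %(levelname)s %(name)s %(message)s"},
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "filename": log_path,
            "formatter": "normal",
        },
        # An operator forwarding to a central server could swap in
        # e.g. logging.handlers.SysLogHandler here instead.
    },
    "root": {"level": "INFO", "handlers": ["file"]},
})

logging.getLogger("nova.compute").info("instance spawned")

with open(log_path) as f:
    print(f.read().strip())  # one line, one timestamp, no rsyslog in the path
```

The point is that the whole file-vs-syslog-vs-central-server decision collapses into one handler entry that an operator can override.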
Cheers, Greg [1] - https://review.openstack.org/#/c/138844/ [2] - http://docs.openstack.org/admin-guide-cloud/content/section_manage-logs.html * Note that Swift does not support this [3] - http://docs.openstack.org/trunk/config-reference/content/section_keystone-logging.conf.html [4] - https://review.openstack.org/#/c/104619/ -- Gregory Haynes greg at greghaynes.net From iwienand at redhat.com Thu Dec 4 23:48:07 2014 From: iwienand at redhat.com (Ian Wienand) Date: Fri, 05 Dec 2014 10:48:07 +1100 Subject: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023) In-Reply-To: <1417631988-sup-5421@fewbar.com> References: <54755DE7.2070107@redhat.com> <547607C3.3090604@nemebean.com> <1417495283-sup-444@fewbar.com> <547E1177.4030705@redhat.com> <1417559171-sup-8883@fewbar.com> <547E98F0.8010406@redhat.com> <3C6FDAE6-9BEB-4E9B-91A8-BD5AE724C6D7@tenshu.net> <1417631988-sup-5421@fewbar.com> Message-ID: <5480F2B7.8040207@redhat.com> On 12/04/2014 05:41 AM, Clint Byrum wrote: > What if the patch is reworked to leave the current trace-all-the-time > mode in place, and we iterate on each script to make tracing conditional > as we add proper logging? I have run [1] over patchset 15 to keep whatever was originally using -x tracing itself by default. I did not do this originally because it seems to me this list of files could be approximated with rand(), but it should maintain the status quo. 
-i [1] https://gist.github.com/ianw/71bbda9e6acc74ccd0fd From clint at fewbar.com Thu Dec 4 23:56:00 2014 From: clint at fewbar.com (Clint Byrum) Date: Thu, 04 Dec 2014 15:56:00 -0800 Subject: [openstack-dev] [TripleO] Using python logging.conf for openstack services In-Reply-To: <1417735253.3315371.198607341.1EF60179@webmail.messagingengine.com> References: <1417735253.3315371.198607341.1EF60179@webmail.messagingengine.com> Message-ID: <1417737096-sup-7722@fewbar.com> Excerpts from Gregory Haynes's message of 2014-12-04 15:20:53 -0800: > Hello TripleOers, > > I got a patch together to move us off of our upstart exec | > logger -t hack [1] and this got me wondering - why aren't we > using the python logging.conf supported by most OpenStack projects [2] > to write out logs to files in our desired location? > > This is highly desirable for a couple reasons: > > * Less complexity / more straightforward. Basically we wouldn't have to > run rsyslog or similar and have app config to talk to syslog then syslog > config to put our logs where we want. We also don't have to battle with > upstart + rsyslog vs systemd-journald differences and maintain two sets > of configuration. > +1 for less complexity and for just using the normal OS logging facilities that exist and are quite efficient. > * We get actual control over formatting. This is a double edged sword in > that AFAICT you *have* to control formatting if you're using a > logging.conf with a custom log handler. This means it would be a bit of > a divergence from our "use the defaults" policy but there are some > logging formats in the OpenStack docs [3] named "normal", maybe this > could be acceptable? The big win here is we can avoid issues like having > duplicate timestamps [4] (this issue still exists on Ubuntu, at least) > without having to do two sets of configuration, one for upstart + > rsyslog, one for systemd. 
> This thread might need to get an [all] tag, and we might need to ask the question "why isn't syslog the default?". I don't have a good answer for that, so I think we might want to consider doing it, but I especially would like to hear from operators how many of them actually log things to syslog vs. on disk or something else. > * This makes setting custom logging configuration a lot more feasible > for operators. As-is, if an operator wants to forward logs to an > existing central log server we dont really have a good way for them to > do this. We also have a requirement that we can come up with a way to > expose the rsyslog/journald config options needed to do this to > operators. If we are using logging.conf we can just use our existing > passthrough-config system to let operators simply write out custom > logging.conf files which are already documented by OpenStack. As usual, the parameters we have should just be things that we want to ask users about on nearly every deployment. So I concur that passthrough is the way to go, since it will only be for those who don't want to use syslog for logging. From mikal at stillhq.com Fri Dec 5 00:05:28 2014 From: mikal at stillhq.com (Michael Still) Date: Fri, 5 Dec 2014 11:05:28 +1100 Subject: [openstack-dev] [Nova] Spring cleaning nova-core Message-ID: One of the things that happens over time is that some of our core reviewers move on to other projects. This is a normal and healthy thing, especially as nova continues to spin out projects into other parts of OpenStack. However, it is important that our core reviewers be active, as it keeps them up to date with the current ways we approach development in Nova. I am therefore removing some no longer sufficiently active cores from the nova-core group. 
I'd like to thank the following people for their contributions over the years: * cbehrens: Chris Behrens * vishvananda: Vishvananda Ishaya * dan-prince: Dan Prince * belliott: Brian Elliott * p-draigbrady: Padraig Brady I'd love to see any of these cores return if they find their available time for code reviews increases. Thanks, Michael -- Rackspace Australia From brandon.logan at RACKSPACE.COM Fri Dec 5 00:05:33 2014 From: brandon.logan at RACKSPACE.COM (Brandon Logan) Date: Fri, 5 Dec 2014 00:05:33 +0000 Subject: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. In-Reply-To: References: <1416867738.3960.19.camel@localhost> Message-ID: <1417737958.4577.25.camel@localhost> Sorry it's taken me a while to respond to this. So I wasn't thinking about this correctly. I was afraid you would have to pass in a full tree of parent-child representations to /loadbalancers to update anything a load balancer is associated with (including down to members). However, after thinking about it, a user would just make an association call on each object. For example, associate member1 with pool1, associate pool1 with listener1, then associate loadbalancer1 with listener1. Updating is just as simple as updating each entity. This does bring up another problem though. If a listener can live on many load balancers, and a pool can live on many listeners, and a member can live on many pools, there are a lot of permutations to keep track of for status. You can't just link a member's status to a load balancer because a member can exist in many pools under that load balancer, and each pool can exist under many listeners under that load balancer. For example, say I have these: lb1 lb2 listener1 listener2 pool1 pool2 member1 member2 lb1 -> [listener1, listener2] lb2 -> [listener1] listener1 -> [pool1, pool2] listener2 -> [pool1] pool1 -> [member1, member2] pool2 -> [member1] member1 can now have different statuses under pool1 and pool2.
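To make the combinatorics above concrete, a quick sketch that enumerates every distinct status slot in this example topology; plain dicts stand in for the objects here, this is illustrative and not the actual LBaaS v2 model:

```python
# Enumerate every (lb, listener, pool, member) path in the example
# topology above; each path is a separate status to track.
lbs = {"lb1": ["listener1", "listener2"], "lb2": ["listener1"]}
listeners = {"listener1": ["pool1", "pool2"], "listener2": ["pool1"]}
pools = {"pool1": ["member1", "member2"], "pool2": ["member1"]}

paths = [
    (lb, lsn, pool, member)
    for lb, lsns in sorted(lbs.items())
    for lsn in lsns
    for pool in listeners[lsn]
    for member in pools[pool]
]

print(len(paths))  # 8 member status slots even in this tiny example
print(sum(1 for p in paths if p[3] == "member1"))  # member1 fills 5 of them
```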
Since listener1 and listener2 both have pool1, this means member1 will now have a different status for the listener1 -> pool1 and listener2 -> pool1 combinations. And so forth for load balancers. Basically there are a lot of permutations and combinations to keep track of with this model for statuses. Showing these in the body of load balancer details can get quite large. I hope this makes sense because my brain is ready to explode. Thanks, Brandon On Thu, 2014-11-27 at 08:52 +0000, Samuel Bercovici wrote: > Brandon, can you please explain further (1) bellow? > > -----Original Message----- > From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM] > Sent: Tuesday, November 25, 2014 12:23 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. > > My impression is that the statuses of each entity will be shown on a detailed info request of a loadbalancer. The root level objects would not have any statuses. For example a user makes a GET request to /loadbalancers/{lb_id} and the status of every child of that load balancer is show in a "status_tree" json object. For example: > > {"name": "loadbalancer1", > "status_tree": > {"listeners": > [{"name": "listener1", "operating_status": "ACTIVE", > "default_pool": > {"name": "pool1", "status": "ACTIVE", > "members": > [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}} > > Sam, correct me if I am wrong. > > I generally like this idea. I do have a few reservations with this: > > 1) Creating and updating a load balancer requires a full tree configuration with the current extension/plugin logic in neutron. Since updates will require a full tree, it means the user would have to know the full tree configuration just to simply update a name. Solving this would require nested child resources in the URL, which the current neutron extension/plugin does not allow. Maybe the new one will.
> > 2) The status_tree can get quite large depending on the number of listeners and pools being used. This is a minor issue really as it will make horizon's (or any other UI tool's) job easier to show statuses. > > Thanks, > Brandon > > On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote: > > Hi Samuel, > > > > > > We've actually been avoiding having a deeper discussion about status > > in Neutron LBaaS since this can get pretty hairy as the back-end > > implementations get more complicated. I suspect managing that is > > probably one of the bigger reasons we have disagreements around object > > sharing. Perhaps it's time we discussed representing state "correctly" > > (whatever that means), instead of a round-a-bout discussion about > > object sharing (which, I think, is really just avoiding this issue)? > > > > > > Do you have a proposal about how status should be represented > > (possibly including a description of the state machine) if we collapse > > everything down to be logical objects except the loadbalancer object? > > (From what you're proposing, I suspect it might be too general to, for > > example, represent the UP/DOWN status of members of a given pool.) > > > > > > Also, from an haproxy perspective, sharing pools within a single > > listener actually isn't a problem. That is to say, having the same > > L7Policy pointing at the same pool is OK, so I personally don't have a > > problem allowing sharing of objects within the scope of parent > > objects. What do the rest of y'all think? > > > > > > Stephen > > > > > > > > On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici > > wrote: > > Hi Stephen, > > > > > > > > 1. The issue is that if we do 1:1 and allow status/state > > to proliferate throughout all objects we will then get an > > issue to fix it later, hence even if we do not do sharing, I > > would still like to have all objects besides LB be treated as > > logical. > > > > 2. 
The 3rd use case bellow will not be reasonable without > > pool sharing between different policies. Specifying different > > pools which are the same for each policy make it non-started > > to me. > > > > > > > > -Sam. > > > > > > > > > > > > > > > > From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] > > Sent: Friday, November 21, 2014 10:26 PM > > To: OpenStack Development Mailing List (not for usage > > questions) > > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects > > in LBaaS - Use Cases that led us to adopt this. > > > > > > > > I think the idea was to implement 1:1 initially to reduce the > > amount of code and operational complexity we'd have to deal > > with in initial revisions of LBaaS v2. Many to many can be > > simulated in this scenario, though it does shift the burden of > > maintenance to the end user. It does greatly simplify the > > initial code for v2, in any case, though. > > > > > > > > > > > > Did we ever agree to allowing listeners to be shared among > > load balancers? I think that still might be a N:1 > > relationship even in our latest models. > > > > > > > > > > There's also the difficulty introduced by supporting different > > flavors: Since flavors are essentially an association between > > a load balancer object and a driver (with parameters), once > > flavors are introduced, any sub-objects of a given load > > balancer objects must necessarily be purely logical until they > > are associated with a load balancer. I know there was talk of > > forcing these objects to be sub-objects of a load balancer > > which can't be accessed independently of the load balancer > > (which would have much the same effect as what you discuss: > > State / status only make sense once logical objects have an > > instantiation somewhere.) However, the currently proposed API > > treats most objects as root objects, which breaks this > > paradigm. 
> > > > > > > > > > > > How we handle status and updates once there's an instantiation > > of these logical objects is where we start getting into real > > complexity. > > > > > > > > > > > > It seems to me there's a lot of complexity introduced when we > > allow a lot of many to many relationships without a whole lot > > of benefit in real-world deployment scenarios. In most cases, > > objects are not going to be shared, and in those cases with > > sufficiently complicated deployments in which shared objects > > could be used, the user is likely to be sophisticated enough > > and skilled enough to manage updating what are essentially > > "copies" of objects, and would likely have an opinion about > > how individual failures should be handled which wouldn't > > necessarily coincide with what we developers of the system > > would assume. That is to say, allowing too many many to many > > relationships feels like a solution to a problem that doesn't > > really exist, and introduces a lot of unnecessary complexity. > > > > > > > > > > > > In any case, though, I feel like we should walk before we run: > > Implementing 1:1 initially is a good idea to get us rolling. > > Whether we then implement 1:N or M:N after that is another > > question entirely. But in any case, it seems like a bad idea > > to try to start with M:N. > > > > > > > > > > > > Stephen > > > > > > > > > > > > > > > > On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici > > wrote: > > > > Hi, > > > > Per discussion I had at OpenStack Summit/Paris with Brandon > > and Doug, I would like to remind everyone why we choose to > > follow a model where pools and listeners are shared (many to > > many relationships). > > > > Use Cases: > > 1. The same application is being exposed via different LB > > objects. 
> > For example: users coming from the internal "private" > > organization network, have an LB1(private_VIP) --> > > Listener1(TLS) -->Pool1 and user coming from the "internet", > > have LB2(public_vip)-->Listener1(TLS)-->Pool1. > > This may also happen to support ipv4 and ipv6: LB_v4(ipv4_VIP) > > --> Listener1(TLS) -->Pool1 and LB_v6(ipv6_VIP) --> > > Listener1(TLS) -->Pool1 > > The operator would like to be able to manage the pool > > membership in cases of updates and error in a single place. > > > > 2. The same group of servers is being used via different > > listeners optionally also connected to different LB objects. > > For example: users coming from the internal "private" > > organization network, have an LB1(private_VIP) --> > > Listener1(HTTP) -->Pool1 and user coming from the "internet", > > have LB2(public_vip)-->Listener2(TLS)-->Pool1. > > The LBs may use different flavors as LB2 needs TLS termination > > and may prefer a different "stronger" flavor. > > The operator would like to be able to manage the pool > > membership in cases of updates and error in a single place. > > > > 3. The same group of servers is being used in several > > different L7_Policies connected to a listener. Such listener > > may be reused as in use case 1. > > For example: LB1(VIP1)-->Listener_L7(TLS) > > | > > > > +-->L7_Policy1(rules..)-->Pool1 > > | > > > > +-->L7_Policy2(rules..)-->Pool2 > > | > > > > +-->L7_Policy3(rules..)-->Pool1 > > | > > > > +-->L7_Policy3(rules..)-->Reject > > > > > > I think that the "key" issue handling correctly the > > "provisioning" state and the operation state in a many to many > > model. > > This is an issue as we have attached status fields to each and > > every object in the model. > > A side effect of the above is that to understand the > > "provisioning/operation" status one needs to check many > > different objects. > > > > To remedy this, I would like to turn all objects besides the > > LB to be logical objects. 
This means that the only place to > > manage the status/state will be on the LB object. > > Such status should be hierarchical so that logical object > > attached to an LB, would have their status consumed out of the > > LB object itself (in case of an error). > > We also need to discuss how modifications of a logical object > > will be "rendered" to the concrete LB objects. > > You may want to revisit > > https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r the "Logical Model + Provisioning Status + Operation Status + Statistics" for a somewhat more detailed explanation albeit it uses the LBaaS v1 model as a reference. > > > > Regards, > > -Sam. > > > > > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > > > -- > > > > Stephen Balukoff > > Blue Box Group, LLC > > (800)613-4305 x807 > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > -- > > Stephen Balukoff > > Blue Box Group, LLC > > (800)613-4305 x807 > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tony at bakeyournoodle.com Fri Dec 5 01:03:49 2014 From: tony at bakeyournoodle.com 
(Tony Breeds) Date: Fri, 5 Dec 2014 12:03:49 +1100 Subject: [openstack-dev] Session length on wiki.openstack.org Message-ID: <20141205010348.GY84915@thor.bakeyournoodle.com> Hello Wiki masters, Is there any way to extend the session length on the wiki? In my current work flow I log in to the wiki, do work, and then get distracted by code/IRC; when I go back to the wiki I'm almost always logged out (I'm guessing due to inactivity). It feels like this is about 30 minutes but I could be wrong. Is there any way for me to tweak this session length for myself? If not, can it be increased to, say, 2 hours? Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From jsbryant at electronicjungle.net Fri Dec 5 01:16:47 2014 From: jsbryant at electronicjungle.net (Jay S. Bryant) Date: Thu, 04 Dec 2014 19:16:47 -0600 Subject: [openstack-dev] [stable] Exception proposals for 2014.2.1 In-Reply-To: References: <547DE593.9090709@electronicjungle.net> <5480D995.2000305@electronicjungle.net> Message-ID: <5481077F.9000706@electronicjungle.net> On 12/04/2014 05:01 PM, Alan Pevec wrote: > 2014-12-04 23:00 GMT+01:00 Jay S. Bryant : >> Finally was able to get the master version through the gate and >> cherry-picked it here: >> https://review.openstack.org/#/c/139194/ >> >> That one has made it through the check. So, if it isn't too late, it could >> use a +2/+A. > Looks good and is exception worthy, approved. Thanks so much Alan! From carl at ecbaldwin.net Fri Dec 5 01:37:48 2014 From: carl at ecbaldwin.net (Carl Baldwin) Date: Thu, 4 Dec 2014 18:37:48 -0700 Subject: [openstack-dev] Session length on wiki.openstack.org In-Reply-To: <20141205010348.GY84915@thor.bakeyournoodle.com> References: <20141205010348.GY84915@thor.bakeyournoodle.com> Message-ID: +1 I've been meaning to say something like this but never got around to it. Thanks for speaking up.
On Thu, Dec 4, 2014 at 6:03 PM, Tony Breeds wrote: > Hello Wiki masters, > Is there anyway to extend the session length on the wiki? In my current > work flow I login to the wiki do work and then get distracted by code/IRC when > I go back to the wiki I'm almost always logged out (I'm guessing due to > inactivity). It feels like this is about 30mins but I could be wrong. > > Is there anyway for me to tweak this session length for myself? > If not can it be increased to say 2 hours? > > Yours Tony. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From wei.d.chen at intel.com Fri Dec 5 02:52:12 2014 From: wei.d.chen at intel.com (Chen, Wei D) Date: Fri, 5 Dec 2014 02:52:12 +0000 Subject: [openstack-dev] [Cinder] Support Modifying Volume Image Metadata Message-ID: Hi all, We talked about this topic (Support Modifying Volume Image Metadata) in this week's IRC meeting. Unfortunately, I didn't prepare well and couldn't detail one concrete use case in the spec. Apologies for messing up the meeting! Now, we have updated the spec and addressed some comments we got from the meeting, so it would be great if someone could review the spec again and give me your comments there (https://review.openstack.org/#/c/136253/). Thanks in advance. Best Regards, Dave Chen -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 6648 bytes Desc: not available URL: From osanai.hisashi at jp.fujitsu.com Fri Dec 5 03:04:16 2014 From: osanai.hisashi at jp.fujitsu.com (Osanai, Hisashi) Date: Fri, 5 Dec 2014 03:04:16 +0000 Subject: [openstack-dev] [swift] a way of checking replicate completion on swift cluster In-Reply-To: References: <0239431E587EFB4AAC571DD74E73204B1524C64E@G01JPEXMBYT04> <0239431E587EFB4AAC571DD74E73204B1524DEF2@G01JPEXMBYT04> <0239431E587EFB4AAC571DD74E73204B1525EB02@G01JPEXMBYT04> <88AC986BE9D6C44AB9659D6A59329C6F51E3C4F3@G01JPEXMBYT01> Message-ID: <88AC986BE9D6C44AB9659D6A59329C6F51E49B3D@G01JPEXMBYT01> Thank you for the response. I updated the following patch with the idea. https://review.openstack.org/#/c/138342/ On Friday, December 05, 2014 5:50 AM, Clay Gerrard wrote: > more fidelity in the recon's seems fine, statsd emissions are > also a popular target >for telemetry radiation. Thanks again! Hisashi Osanai From sbalukoff at bluebox.net Fri Dec 5 05:16:44 2014 From: sbalukoff at bluebox.net (Stephen Balukoff) Date: Thu, 4 Dec 2014 21:16:44 -0800 Subject: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. In-Reply-To: <1417737958.4577.25.camel@localhost> References: <1416867738.3960.19.camel@localhost> <1417737958.4577.25.camel@localhost> Message-ID: Hi Brandon, Yeah, in your example, member1 could potentially have 8 different statuses (and this is a small example!)... If that member starts flapping, it means that every time it flaps there are 8 notifications being passed upstream. Note that this problem actually doesn't get any better if we're not sharing objects but are just duplicating them (ie. not sharing objects but the user makes references to the same back-end machine as 8 different members.) 
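To put a number on that fan-out: the topology Brandon describes in the mail quoted below (lb1/lb2 -> listener1/listener2 -> pool1/pool2 -> member1/member2) can be brute-forced in a few lines. This is a throwaway illustration, not Neutron code; it just counts the distinct (load balancer, listener, pool) contexts in which each member would need its own status:

```python
# Topology from Brandon's example, as adjacency maps (illustration only).
lbs = {"lb1": ["listener1", "listener2"], "lb2": ["listener1"]}
listeners = {"listener1": ["pool1", "pool2"], "listener2": ["pool1"]}
pools = {"pool1": ["member1", "member2"], "pool2": ["member1"]}

# Walk every lb -> listener -> pool -> member path; each distinct path is
# one context in which that member needs its own operating status.
contexts = {}
for lb, lsnr_names in lbs.items():
    for lsnr in lsnr_names:
        for pool in listeners[lsnr]:
            for member in pools[pool]:
                contexts.setdefault(member, set()).add((lb, lsnr, pool))

for member in sorted(contexts):
    print(member, len(contexts[member]))
print("total member-status entries:", sum(len(v) for v in contexts.values()))
```

Running this gives five contexts for member1 and three for member2 - eight member-status entries in total for a toy topology, which is exactly the combinatorial growth (and notification fan-out on a flap) being discussed here.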
To be honest, I don't see sharing entities at many levels like this being the rule for most of our installations-- maybe a few percentage points of installations will do an excessive sharing of members, but I doubt it. So really, even though reporting status like this is likely to generate a pretty big tree of data, I don't think this is actually a problem, eh. And I don't see sharing entities actually reducing the workload of what needs to happen behind the scenes. (It just allows us to conceal more of this work from the user.) Stephen On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan wrote: > Sorry it's taken me a while to respond to this. > > So I wasn't thinking about this correctly. I was afraid you would have > to pass in a full tree of parent child representations to /loadbalancers > to update anything a load balancer it is associated to (including down > to members). However, after thinking about it, a user would just make > an association call on each object. For Example, associate member1 with > pool1, associate pool1 with listener1, then associate loadbalancer1 with > listener1. Updating is just as simple as updating each entity. > > This does bring up another problem though. If a listener can live on > many load balancers, and a pool can live on many listeners, and a member > can live on many pools, there's lot of permutations to keep track of for > status. you can't just link a member's status to a load balancer bc a > member can exist on many pools under that load balancer, and each pool > can exist under many listeners under that load balancer. For example, > say I have these: > > lb1 > lb2 > listener1 > listener2 > pool1 > pool2 > member1 > member2 > > lb1 -> [listener1, listener2] > lb2 -> [listener1] > listener1 -> [pool1, pool2] > listener2 -> [pool1] > pool1 -> [member1, member2] > pool2 -> [member1] > > member1 can now have a different statuses under pool1 and pool2. 
since > listener1 and listener2 both have pool1, this means member1 will now > have a different status for listener1 -> pool1 and listener2 -> pool2 > combination. And so forth for load balancers. > > Basically there's a lot of permutations and combinations to keep track > of with this model for statuses. Showing these in the body of load > balancer details can get quite large. > > I hope this makes sense because my brain is ready to explode. > > Thanks, > Brandon > > On Thu, 2014-11-27 at 08:52 +0000, Samuel Bercovici wrote: > > Brandon, can you please explain further (1) bellow? > > > > -----Original Message----- > > From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM] > > Sent: Tuesday, November 25, 2014 12:23 AM > > To: openstack-dev at lists.openstack.org > > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - > Use Cases that led us to adopt this. > > > > My impression is that the statuses of each entity will be shown on a > detailed info request of a loadbalancer. The root level objects would not > have any statuses. For example a user makes a GET request to > /loadbalancers/{lb_id} and the status of every child of that load balancer > is show in a "status_tree" json object. For example: > > > > {"name": "loadbalancer1", > > "status_tree": > > {"listeners": > > [{"name": "listener1", "operating_status": "ACTIVE", > > "default_pool": > > {"name": "pool1", "status": "ACTIVE", > > "members": > > [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}} > > > > Sam, correct me if I am wrong. > > > > I generally like this idea. I do have a few reservations with this: > > > > 1) Creating and updating a load balancer requires a full tree > configuration with the current extension/plugin logic in neutron. Since > updates will require a full tree, it means the user would have to know the > full tree configuration just to simply update a name. 
Solving this would > require nested child resources in the URL, which the current neutron > extension/plugin does not allow. Maybe the new one will. > > > > 2) The status_tree can get quite large depending on the number of > listeners and pools being used. This is a minor issue really as it will > make horizon's (or any other UI tool's) job easier to show statuses. > > > > Thanks, > > Brandon > > > > On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote: > > > Hi Samuel, > > > > > > > > > We've actually been avoiding having a deeper discussion about status > > > in Neutron LBaaS since this can get pretty hairy as the back-end > > > implementations get more complicated. I suspect managing that is > > > probably one of the bigger reasons we have disagreements around object > > > sharing. Perhaps it's time we discussed representing state "correctly" > > > (whatever that means), instead of a round-a-bout discussion about > > > object sharing (which, I think, is really just avoiding this issue)? > > > > > > > > > Do you have a proposal about how status should be represented > > > (possibly including a description of the state machine) if we collapse > > > everything down to be logical objects except the loadbalancer object? > > > (From what you're proposing, I suspect it might be too general to, for > > > example, represent the UP/DOWN status of members of a given pool.) > > > > > > > > > Also, from an haproxy perspective, sharing pools within a single > > > listener actually isn't a problem. That is to say, having the same > > > L7Policy pointing at the same pool is OK, so I personally don't have a > > > problem allowing sharing of objects within the scope of parent > > > objects. What do the rest of y'all think? > > > > > > > > > Stephen > > > > > > > > > > > > On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici > > > wrote: > > > Hi Stephen, > > > > > > > > > > > > 1. 
The issue is that if we do 1:1 and allow status/state > > > to proliferate throughout all objects we will then get an > > > issue to fix it later, hence even if we do not do sharing, I > > > would still like to have all objects besides LB be treated as > > > logical. > > > > > > 2. The 3rd use case bellow will not be reasonable without > > > pool sharing between different policies. Specifying different > > > pools which are the same for each policy make it non-started > > > to me. > > > > > > > > > > > > -Sam. > > > > > > > > > > > > > > > > > > > > > > > > From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] > > > Sent: Friday, November 21, 2014 10:26 PM > > > To: OpenStack Development Mailing List (not for usage > > > questions) > > > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects > > > in LBaaS - Use Cases that led us to adopt this. > > > > > > > > > > > > I think the idea was to implement 1:1 initially to reduce the > > > amount of code and operational complexity we'd have to deal > > > with in initial revisions of LBaaS v2. Many to many can be > > > simulated in this scenario, though it does shift the burden of > > > maintenance to the end user. It does greatly simplify the > > > initial code for v2, in any case, though. > > > > > > > > > > > > > > > > > > Did we ever agree to allowing listeners to be shared among > > > load balancers? I think that still might be a N:1 > > > relationship even in our latest models. > > > > > > > > > > > > > > > There's also the difficulty introduced by supporting different > > > flavors: Since flavors are essentially an association between > > > a load balancer object and a driver (with parameters), once > > > flavors are introduced, any sub-objects of a given load > > > balancer objects must necessarily be purely logical until they > > > are associated with a load balancer. 
I know there was talk of > > > forcing these objects to be sub-objects of a load balancer > > > which can't be accessed independently of the load balancer > > > (which would have much the same effect as what you discuss: > > > State / status only make sense once logical objects have an > > > instantiation somewhere.) However, the currently proposed API > > > treats most objects as root objects, which breaks this > > > paradigm. > > > > > > > > > > > > > > > > > > How we handle status and updates once there's an instantiation > > > of these logical objects is where we start getting into real > > > complexity. > > > > > > > > > > > > > > > > > > It seems to me there's a lot of complexity introduced when we > > > allow a lot of many to many relationships without a whole lot > > > of benefit in real-world deployment scenarios. In most cases, > > > objects are not going to be shared, and in those cases with > > > sufficiently complicated deployments in which shared objects > > > could be used, the user is likely to be sophisticated enough > > > and skilled enough to manage updating what are essentially > > > "copies" of objects, and would likely have an opinion about > > > how individual failures should be handled which wouldn't > > > necessarily coincide with what we developers of the system > > > would assume. That is to say, allowing too many many to many > > > relationships feels like a solution to a problem that doesn't > > > really exist, and introduces a lot of unnecessary complexity. > > > > > > > > > > > > > > > > > > In any case, though, I feel like we should walk before we run: > > > Implementing 1:1 initially is a good idea to get us rolling. > > > Whether we then implement 1:N or M:N after that is another > > > question entirely. But in any case, it seems like a bad idea > > > to try to start with M:N. 
> > > > > > > > > > > > > > > > > > Stephen > > > > > > > > > > > > > > > > > > > > > > > > On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici > > > wrote: > > > > > > Hi, > > > > > > Per discussion I had at OpenStack Summit/Paris with Brandon > > > and Doug, I would like to remind everyone why we choose to > > > follow a model where pools and listeners are shared (many to > > > many relationships). > > > > > > Use Cases: > > > 1. The same application is being exposed via different LB > > > objects. > > > For example: users coming from the internal "private" > > > organization network, have an LB1(private_VIP) --> > > > Listener1(TLS) -->Pool1 and user coming from the "internet", > > > have LB2(public_vip)-->Listener1(TLS)-->Pool1. > > > This may also happen to support ipv4 and ipv6: LB_v4(ipv4_VIP) > > > --> Listener1(TLS) -->Pool1 and LB_v6(ipv6_VIP) --> > > > Listener1(TLS) -->Pool1 > > > The operator would like to be able to manage the pool > > > membership in cases of updates and error in a single place. > > > > > > 2. The same group of servers is being used via different > > > listeners optionally also connected to different LB objects. > > > For example: users coming from the internal "private" > > > organization network, have an LB1(private_VIP) --> > > > Listener1(HTTP) -->Pool1 and user coming from the "internet", > > > have LB2(public_vip)-->Listener2(TLS)-->Pool1. > > > The LBs may use different flavors as LB2 needs TLS termination > > > and may prefer a different "stronger" flavor. > > > The operator would like to be able to manage the pool > > > membership in cases of updates and error in a single place. > > > > > > 3. The same group of servers is being used in several > > > different L7_Policies connected to a listener. Such listener > > > may be reused as in use case 1. 
> > > For example: LB1(VIP1)-->Listener_L7(TLS) > > > | > > > > > > +-->L7_Policy1(rules..)-->Pool1 > > > | > > > > > > +-->L7_Policy2(rules..)-->Pool2 > > > | > > > > > > +-->L7_Policy3(rules..)-->Pool1 > > > | > > > > > > +-->L7_Policy3(rules..)-->Reject > > > > > > > > > I think that the "key" issue handling correctly the > > > "provisioning" state and the operation state in a many to many > > > model. > > > This is an issue as we have attached status fields to each and > > > every object in the model. > > > A side effect of the above is that to understand the > > > "provisioning/operation" status one needs to check many > > > different objects. > > > > > > To remedy this, I would like to turn all objects besides the > > > LB to be logical objects. This means that the only place to > > > manage the status/state will be on the LB object. > > > Such status should be hierarchical so that logical object > > > attached to an LB, would have their status consumed out of the > > > LB object itself (in case of an error). > > > We also need to discuss how modifications of a logical object > > > will be "rendered" to the concrete LB objects. > > > You may want to revisit > > > > https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r > the "Logical Model + Provisioning Status + Operation Status + Statistics" > for a somewhat more detailed explanation albeit it uses the LBaaS v1 model > as a reference. > > > > > > Regards, > > > -Sam. 
> > > > > > > > > > > > > > > > > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > Stephen Balukoff > > > Blue Box Group, LLC > > > (800)613-4305 x807 > > > > > > > > > > > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > > > > > -- > > > Stephen Balukoff > > > Blue Box Group, LLC > > > (800)613-4305 x807 > > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From r1chardj0n3s at gmail.com Fri Dec 5 06:26:26 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Fri, 05 Dec 2014 06:26:26 +0000 Subject: [openstack-dev] [Horizon] REST API split and new requirements.txt Message-ID: Hi all, just to let you know that on request from Thai I split the REST API change into two: https://review.openstack.org/#/c/136676 - original change which just has the base code https://review.openstack.org/#/c/139532 - keystone specifics The identity WIP should now base itself off the second change. I've also submitted a change (https://review.openstack.org/#/c/139284/) with the new angular dependencies to go in requirements.txt, based on a couple of xstatic packages I created today. Also created a change for global-requirements as well. Eventually those will be superseded by bower, but for now it's xstatic. If anyone would like to be added to PyPI to be able to update those xstatic packages, please ask. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at vmware.com Fri Dec 5 08:03:54 2014 From: gkotton at vmware.com (Gary Kotton) Date: Fri, 5 Dec 2014 08:03:54 +0000 Subject: [openstack-dev] [Nova] MS Update Message-ID: Hi, MS has been down for a few days. The following patches will help us get it up and running again: - requirements - https://review.openstack.org/139545 - oslo.vmware - https://review.openstack.org/139296 (depends on the patch above) - devstack - https://review.openstack.org/139515 Hopefully once the above are in we will be back to business as usual. Thanks Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From blak111 at gmail.com Fri Dec 5 08:23:09 2014 From: blak111 at gmail.com (Kevin Benton) Date: Fri, 5 Dec 2014 00:23:09 -0800 Subject: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup? 
In-Reply-To: <87d27z8kxj.fsf@metaswitch.com> References: <87ppc090ne.fsf@metaswitch.com> <87d27z8kxj.fsf@metaswitch.com> Message-ID: I see the difference now. The main concern I see with the NOOP type is that creating the virtual interface could require different logic for certain hypervisors. In that case Neutron would now have to know things about nova and to me it seems like that's slightly too far the other direction. On Thu, Dec 4, 2014 at 8:00 AM, Neil Jerram wrote: > Kevin Benton writes: > > > What you are proposing sounds very reasonable. If I understand > > correctly, the idea is to make Nova just create the TAP device and get > > it attached to the VM and leave it 'unplugged'. This would work well > > and might eliminate the need for some drivers. I see no reason to > > block adding a VIF type that does this. > > I was actually floating a slightly more radical option than that: the > idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does > absolutely _nothing_, not even create the TAP device. > > (My pending Nova spec at https://review.openstack.org/#/c/130732/ > proposes VIF_TYPE_TAP, for which Nova _does_ creates the TAP device, but > then does nothing else - i.e. exactly what you've described just above. > But in this email thread I was musing about going even further, towards > providing a platform for future networking experimentation where Nova > isn't involved at all in the networking setup logic.) > > > However, there is a good reason that the VIF type for some OVS-based > > deployments require this type of setup. The vSwitches are connected to > > a central controller using openflow (or ovsdb) which configures > > forwarding rules/etc. Therefore they don't have any agents running on > > the compute nodes from the Neutron side to perform the step of getting > > the interface plugged into the vSwitch in the first place. For this > > reason, we will still need both types of VIFs. > > Thanks. 
I'm not advocating that existing VIF types should be removed, > though - rather wondering if similar function could in principle be > implemented without Nova VIF plugging - or what that would take. > > For example, suppose someone came along and wanted to implement a new > OVS-like networking infrastructure? In principle could they do that > without having to enhance the Nova VIF driver code? I think at the > moment they couldn't, but that they would be able to if VIF_TYPE_NOOP > (or possibly VIF_TYPE_TAP) was already in place. In principle I think > it would then be possible for the new implementation to specify > VIF_TYPE_NOOP to Nova, and to provide a Neutron agent that does the kind > of configuration and vSwitch plugging that you've described above. > > Does that sound correct, or am I missing something else? > > >> 1 .When the port is created in the Neutron DB, and handled (bound > > etc.) > > by the plugin and/or mechanism driver, the TAP device name is already > > present at that time. > > > > This is backwards. The tap device name is derived from the port ID, so > > the port has already been created in Neutron at that point. It is just > > unbound. The steps are roughly as follows: Nova calls neutron for a > > port, Nova creates/plugs VIF based on port, Nova updates port on > > Neutron, Neutron binds the port and notifies agent/plugin/whatever to > > finish the plumbing, Neutron notifies Nova that port is active, Nova > > unfreezes the VM. > > > > None of that should be affected by what you are proposing. The only > > difference is that your Neutron agent would also perform the > > 'plugging' operation. > > Agreed - but thanks for clarifying the exact sequence of events. > > I wonder if what I'm describing (either VIF_TYPE_NOOP or VIF_TYPE_TAP) > might fit as part of the "Nova-network/Neutron Migration" priority > that's just been announced for Kilo. 
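On the "derived from the port ID" point above: the Linux kernel caps interface names at 15 usable characters (IFNAMSIZ minus the trailing NUL), so the convention is to prefix "tap" and keep only enough of the port UUID to fit; Nova's network model uses a 14-character budget. A minimal sketch of that derivation (illustrative only; the function name and example UUID are made up, not the actual Nova/Neutron helper):

```python
NIC_NAME_LEN = 14  # stays under the kernel's 15-usable-character limit

def tap_device_name(port_id: str) -> str:
    """Derive a deterministic, length-safe tap name from a port UUID (sketch)."""
    return ("tap" + port_id)[:NIC_NAME_LEN]

# Hypothetical port UUID, for illustration only.
port_id = "3a1b4c5d-6e7f-4a0b-9c8d-0123456789ab"
print(tap_device_name(port_id))  # tap3a1b4c5d-6e
```

Because the name is a pure function of the port ID, a Neutron agent can later find the device Nova created (or, under a NOOP-style VIF type, create the identically named device itself) without any extra signalling between the two services.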
I'm aware that a part of that > priority is concerned with live migration, but perhaps it could also > include the goal of future networking work not having to touch Nova > code? > > Regards, > Neil > -- Kevin Benton -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.moe at ericsson.com Fri Dec 5 08:33:00 2014 From: erik.moe at ericsson.com (Erik Moe) Date: Fri, 5 Dec 2014 08:33:00 +0000 Subject: [openstack-dev] [Neutron] Edge-VPN and Edge-Id In-Reply-To: References: <119A1974-380C-461E-9937-65B9763E39E6@brocade.com> Message-ID: One reason for trying to get a more complete API into Neutron is to have a standardized API, so users know what to expect and providers have something to comply with. Do you suggest we bring this standardization work to some other forum, OPNFV for example? Neutron provides low level hooks and the rest is defined elsewhere. Maybe this could work, but there would probably be other issues if the actual implementation is not on the edge or outside Neutron. /Erik From: Ian Wells [mailto:ijw.ubuntu at cack.org.uk] Sent: den 4 december 2014 20:19 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id On 1 December 2014 at 21:26, Mohammad Hanif > wrote: I hope we all understand how edge VPN works and what interactions are introduced as part of this spec. I see references to neutron-network mapping to the tunnel which is not at all the case and the edge-VPN spec doesn't propose it. At a very high level, there are two main concepts: 1. Creation of a per tenant VPN "service" on a PE (physical router) which has connectivity to other PEs using some tunnel (not known to tenant or tenant-facing). An attachment circuit for this VPN service is also created which carries a "list" of tenant networks (the list is initially empty). 2. Tenant "updates" the list of tenant networks in the attachment circuit which essentially allows the VPN "service"
to add or remove the network from being part of that VPN. A service plugin implements what is described in (1) and provides an API which is called by what is described in (2). The Neutron driver only "updates" the attachment circuit using an API (attachment circuit is also part of the service plugin's data model). I don't see where we are introducing large data model changes to Neutron? Well, you have attachment types, tunnels, and so on - these are all objects with data models, and your spec is on Neutron so I'm assuming you plan on putting them into the Neutron database - where they are, for ever more, a Neutron maintenance overhead both on the dev side and also on the ops side, specifically at upgrade. How else one introduces a network service in OpenStack if it is not through a service plugin? Again, I've missed something here, so can you define 'service plugin' for me? How similar is it to a Neutron extension - which we agreed at the summit we should take pains to avoid, per Salvatore's session? And the answer to that is to stop talking about plugins or trying to integrate this into the Neutron API or the Neutron DB, and make it an independent service with a small and well defined interaction with Neutron, which is what the edge-id proposal suggests. If we do incorporate it into Neutron then there are probably 90% of Openstack users and developers who don't want or need it but care a great deal if it breaks the tests. If it isn't in Neutron they simply don't install it. As we can see, tenant needs to communicate (explicit or otherwise) to add/remove its networks to/from the VPN. There has to be a channel and the APIs to achieve this. Agreed. I'm suggesting it should be a separate service endpoint. -- Ian. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bbobrov at mirantis.com Fri Dec 5 08:37:10 2014 From: bbobrov at mirantis.com (Boris Bobrov) Date: Fri, 5 Dec 2014 11:37:10 +0300 Subject: [openstack-dev] [all] OpenStack Bootstrapping Hour - Keystone - Friday Dec 5th 20:00 UTC (15:00 Americas/New_York) In-Reply-To: <548086E6.4070407@dague.net> References: <548086E6.4070407@dague.net> Message-ID: <201412051137.10718.bbobrov@mirantis.com> On Thursday 04 December 2014 19:08:06 Sean Dague wrote: > Sorry for the late announce, too much turkey and pie.... > > This Friday, Dec 5th, we'll be talking with Steve Martinelli and David > Stanek about Keystone Authentication in OpenStack. Wiki page says that the event will be Friday Dec 5th - 19:00 UTC (15:00 Americas/New_York), while the subject in your mail has 20:00 UTC. Could you please clarify that? From thierry at openstack.org Fri Dec 5 08:53:17 2014 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 05 Dec 2014 09:53:17 +0100 Subject: [openstack-dev] Session length on wiki.openstack.org In-Reply-To: References: <20141205010348.GY84915@thor.bakeyournoodle.com> Message-ID: <5481727D.5000305@openstack.org> I agree, and I cross-posted that question to openstack-infra to make sure the infra team sees it: http://lists.openstack.org/pipermail/openstack-infra/2014-December/002215.html Carl Baldwin wrote: > +1 I've been meaning to say something like this but never got around > to it. Thanks for speaking up. > > On Thu, Dec 4, 2014 at 6:03 PM, Tony Breeds wrote: >> Hello Wiki masters, >> Is there any way to extend the session length on the wiki? In my current >> workflow I log in to the wiki, do work, and then get distracted by code/IRC; when >> I go back to the wiki I'm almost always logged out (I'm guessing due to >> inactivity). It feels like this is about 30 mins but I could be wrong. >> >> Is there any way for me to tweak this session length for myself? >> If not, can it be increased to say 2 hours? >> >> Yours Tony.
>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Thierry Carrez (ttx) From JonPaul.Sullivan at hp.com Fri Dec 5 10:02:47 2014 From: JonPaul.Sullivan at hp.com (Sullivan, Jon Paul) Date: Fri, 5 Dec 2014 10:02:47 +0000 Subject: [openstack-dev] [TripleO] Alternate meeting time In-Reply-To: References: <547DD0A2.3030102@redhat.com> <547DD6E1.8070103@redhat.com> <547DD9F5.6010108@redhat.com> <54803A17.3040403@redhat.com> Message-ID: From: James Polley [mailto:jp at jamezpolley.com] Sent: 04 December 2014 17:17 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [TripleO] Alternate meeting time On Thu, Dec 4, 2014 at 11:40 AM, marios > wrote: On 04/12/14 11:40, James Polley wrote: > Just taking a look at http://doodle.com/27ffgkdm5gxzr654 again - we've > had 10 people respond so far. The winning time so far is Monday 2100UTC > - 7 "yes" and one "If I have to". for me it currently shows 1200 UTC as the preferred time. You're the 11th responder :) And yes, 1200/1400/1500 are now all leading with 8/0/3. So to be clear, we are voting here for the alternate meeting. The 'original' meeting is at 1900UTC. If in fact 2100UTC ends up being the most popular, what would be the point of an alternating meeting that is only 2 hours apart in time? To me the point would be to get more people able to come along to the meeting. But if the difference *was* that small, I'd be wanting to ask if changing the format or content of the meeting could convince more people to join the 1900UTC meeting - I think that having just one meeting for the whole team would be preferable, if we could manage it. But at present, it looks like if we want to maximise attendance, we should be focusing on European early afternoon. 
That unfortunately means that it's going to be very hard for those of us in Australia/New Zealand/China/Japan to make it - 1400UTC is 1am Sydney, 10pm Beijing. It's 7:30pm New Delhi, which might be doable, but I don't know of anyone working there who would regularly attend. [JP] - how about we rethink both meeting times then? 19:00 UTC seems like a time that is convenient for only one timezone, and ideally each meeting time should at least be convenient for 2 major geographies. If 15:00 UTC were one meeting, that should be a time convenient for all of Europe, and also OK as far back as the USA West coast. Then a meeting up at 21:00 UTC should cover most of Australasia and also provide a good alternate time through to the US East coast. Even if those aren't the 2 times chosen, maybe that is the thinking we need here? Thanks, Jon-Paul Sullivan - Cloud Services - @hpcloud Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway. Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's Quay, Dublin 2. Registered Number: 361933 The contents of this message and any attachments to it are confidential and may be legally privileged. If you have received this message in error you should delete it from your system immediately and advise the sender. To any recipient of this message within HP, unless otherwise stated, you should consider this message and attachments as "HP CONFIDENTIAL". -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.gilliard at gmail.com Fri Dec 5 10:07:08 2014 From: matthew.gilliard at gmail.com (Matthew Gilliard) Date: Fri, 5 Dec 2014 10:07:08 +0000 Subject: [openstack-dev] [nova] global or per-project specific ssl config options, or both? In-Reply-To: <5480D75A.2010607@linux.vnet.ibm.com> References: <547FE9BA.1010700@linux.vnet.ibm.com> <5480D75A.2010607@linux.vnet.ibm.com> Message-ID: Hi Matt, Nova, I'll look into this.
Gilliard On Thu, Dec 4, 2014 at 9:51 PM, Matt Riedemann wrote: > > > On 12/4/2014 6:02 AM, Davanum Srinivas wrote: >> >> +1 to @markmc's "default is global value and override for project >> specific key" suggestion. >> >> -- dims >> >> >> >> On Wed, Dec 3, 2014 at 11:57 PM, Matt Riedemann >> wrote: >>> >>> I've posted this to the 12/4 nova meeting agenda but figured I'd >>> socialize >>> it here also. >>> >>> SSL options - do we make them per-project or global, or both? Neutron and >>> Cinder have config-group specific SSL options in nova, Glance is using >>> oslo >>> sslutils global options since Juno which was contentious for a time in a >>> separate review in Icehouse [1]. >>> >>> Now [2] wants to break that out for Glance, but we also have a patch [3] >>> for >>> Keystone to use the global oslo SSL options, we should be consistent, but >>> does that require a blueprint now? >>> >>> In the Icehouse patch, markmc suggested using a DictOpt where the default >>> value is the global value, which could be coming from the oslo [ssl] >>> group >>> and then you could override that with a project-specific key, e.g. >>> cinder, >>> neutron, glance, keystone. >>> >>> [1] https://review.openstack.org/#/c/84522/ >>> [2] https://review.openstack.org/#/c/131066/ >>> [3] https://review.openstack.org/#/c/124296/ >>> >>> -- >>> >>> Thanks, >>> >>> Matt Riedemann >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> > > The consensus in the nova meeting today, I think, was that we generally like > the idea of the DictOpt with global oslo ssl as the default and then be able > to configure that per-service if needed. > > Does anyone want to put up a POC on how that would work to see how ugly > and/or usable that would be? I haven't dug into the DictOpt stuff yet and > am kind of time-constrained at the moment. 
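As a rough illustration of the behaviour the thread is converging on (a global default with optional per-service overrides), here is a minimal sketch using plain dicts in place of the real oslo.config DictOpt machinery. It is not a POC against the actual nova code; the option values and the "glance" override are made up for illustration.

```python
# Illustrative sketch only -- not nova/oslo.config code. Per-service SSL
# options fall back to a single global [ssl]-style default, mirroring the
# DictOpt idea where the default holds the global values and a
# project-specific key (cinder, neutron, glance, keystone) overrides them.

GLOBAL_SSL = {"ca_file": "/etc/ssl/ca.pem", "cert_file": None, "key_file": None}

# Hypothetical per-service overrides; services absent here use the global.
SERVICE_SSL_OVERRIDES = {
    "glance": {"ca_file": "/etc/glance/ca.pem"},
}

def ssl_options(service):
    """Return the effective SSL options for one service."""
    opts = dict(GLOBAL_SSL)                        # start from the global default
    opts.update(SERVICE_SSL_OVERRIDES.get(service, {}))
    return opts

print(ssl_options("glance")["ca_file"])   # overridden: /etc/glance/ca.pem
print(ssl_options("cinder")["ca_file"])   # falls back: /etc/ssl/ca.pem
```

The appeal of this shape is that unset per-service keys never shadow the global values, so operators only write the exceptions.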
> > > -- > > Thanks, > > Matt Riedemann > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dhanesh1212 at gmail.com Fri Dec 5 10:59:50 2014 From: dhanesh1212 at gmail.com (dhanesh1212121212) Date: Fri, 5 Dec 2014 16:29:50 +0530 Subject: [openstack-dev] Openstack setup in Data Center with two Dell Blade Server Message-ID: Hi All, We have a requirement to configure an OpenStack Juno setup in a data center with two Dell blade servers. We are planning to use one blade server to install CentOS 7, which will be our hypervisor. On the second machine we will install XenServer. Inside XenServer we will create two CentOS 7 machines: one for management, object storage, and block storage, and a second one (CentOS 7) for the network node. Please guide me on this. Thanks and regards, Dhanesh M. -------------- next part -------------- An HTML attachment was scrubbed... URL: From berrange at redhat.com Fri Dec 5 12:03:49 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Fri, 5 Dec 2014 12:03:49 +0000 Subject: [openstack-dev] [nova] policy on old / virtually abandoned patches In-Reply-To: <546B3663.4040502@dague.net> References: <546B3663.4040502@dague.net> Message-ID: <20141205120349.GH2383@redhat.com> On Tue, Nov 18, 2014 at 07:06:59AM -0500, Sean Dague wrote: > Nova currently has 197 patches that have seen no activity in the last 4 > weeks (project:openstack/nova age:4weeks status:open).
So any objection to killing everything in the list below:

+-------------------------------------+-------------------------------------------------------+----------+-------+---------+----------+
| URL                                 | Subject                                               | Created  | Tests | Reviews | Workflow |
+-------------------------------------+-------------------------------------------------------+----------+-------+---------+----------+
| https://review.openstack.org/86938  | Add tasks to the v3 API                               | 237 days | 1     | -2      |          |
| https://review.openstack.org/88334  | Add support for USB controller                        | 231 days | 1     | -2      |          |
| https://review.openstack.org/89766  | Add useful metrics into utilization based scheduli... | 226 days | 1     | -2      |          |
| https://review.openstack.org/90239  | Blueprint for Cinder Multi attach volumes             | 224 days | 1     | -2      |          |
| https://review.openstack.org/90647  | Add utilization based weighers on top of MetricsWe... | 221 days | 1     | -2      |          |
| https://review.openstack.org/96543  | Smart Scheduler (Solver Scheduler) - Constraint ba... | 189 days | 1     | -2      |          |
| https://review.openstack.org/97441  | Add nova spec for bp/isnot-operator                   | 185 days | 1     | -2      |          |
| https://review.openstack.org/99476  | Dedicate aggregates for specific tenants              | 176 days | 1     | -2      |          |
| https://review.openstack.org/99576  | Add client token to CreateServer                      | 176 days | 1     | -2      |          |
| https://review.openstack.org/101921 | Spec for Neutron migration feature                    | 164 days | 1     | -2      |          |
| https://review.openstack.org/103617 | Support Identity V3 API                               | 157 days | 1     | -1      |          |
| https://review.openstack.org/105385 | Leverage the features of IBM GPFS to store cached ... | 150 days | 1     | -2      |          |
| https://review.openstack.org/108582 | Add ironic boot mode filters                          | 136 days | 1     | -2      |          |
| https://review.openstack.org/110639 | Blueprint for the implementation of Nested Quota D... | 127 days | 1     | -2      |          |
| https://review.openstack.org/111308 | Added VirtProperties object blueprint                 | 125 days | 1     | -2      |          |
| https://review.openstack.org/111745 | Improve instance boot time                            | 122 days | 1     | -2      |          |
| https://review.openstack.org/116280 | Add a new filter to implement project isolation fe... | 104 days | 1     | -2      |          |
+-------------------------------------+-------------------------------------------------------+----------+-------+---------+----------+

Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From matt at oliver.net.au Fri Dec 5 12:16:28 2014 From: matt at oliver.net.au (Matthew Oliver) Date: Fri, 5 Dec 2014 23:16:28 +1100 Subject: [openstack-dev] [nova] policy on old / virtually abandoned patches In-Reply-To: <20141205120349.GH2383@redhat.com> References: <546B3663.4040502@dague.net> <20141205120349.GH2383@redhat.com> Message-ID: I have a script that does 95% of what you want: https://github.com/matthewoliver/swift_abandon_notifier We are using it for swift reviews. At the moment the only thing it doesn't do is actually abandon, it instead sends a warning email and waits n days (2 weeks by default) for action, if it still turns up it adds it to a list of abandoned changes. Eg: http://abandoner.oliver.net.au So anything that appears in that list can be abandoned by a core. Feel free to use it (just uses a yaml file for configuration) and we can all benefit from enhancements made ;) Matt On Dec 5, 2014 11:06 PM, "Daniel P. Berrange" wrote: > On Tue, Nov 18, 2014 at 07:06:59AM -0500, Sean Dague wrote: > > Nova currently has 197 patches that have seen no activity in the last 4 > > weeks (project:openstack/nova age:4weeks status:open).
> > On a somewhat related note, nova-specs currently has 17 specs > open against specs/juno, most with -2 votes. I think we should > just mass-abandon anything still touching the specs/juno directory. > If people cared about them still they would have submitted for > specs/kilo. > > So any objection to killing everything in the list below:
>
> +-------------------------------------+-------------------------------------------------------+----------+-------+---------+----------+
> | URL                                 | Subject                                               | Created  | Tests | Reviews | Workflow |
> +-------------------------------------+-------------------------------------------------------+----------+-------+---------+----------+
> | https://review.openstack.org/86938  | Add tasks to the v3 API                               | 237 days | 1     | -2      |          |
> | https://review.openstack.org/88334  | Add support for USB controller                        | 231 days | 1     | -2      |          |
> | https://review.openstack.org/89766  | Add useful metrics into utilization based scheduli... | 226 days | 1     | -2      |          |
> | https://review.openstack.org/90239  | Blueprint for Cinder Multi attach volumes             | 224 days | 1     | -2      |          |
> | https://review.openstack.org/90647  | Add utilization based weighers on top of MetricsWe... | 221 days | 1     | -2      |          |
> | https://review.openstack.org/96543  | Smart Scheduler (Solver Scheduler) - Constraint ba... | 189 days | 1     | -2      |          |
> | https://review.openstack.org/97441  | Add nova spec for bp/isnot-operator                   | 185 days | 1     | -2      |          |
> | https://review.openstack.org/99476  | Dedicate aggregates for specific tenants              | 176 days | 1     | -2      |          |
> | https://review.openstack.org/99576  | Add client token to CreateServer                      | 176 days | 1     | -2      |          |
> | https://review.openstack.org/101921 | Spec for Neutron migration feature                    | 164 days | 1     | -2      |          |
> | https://review.openstack.org/103617 | Support Identity V3 API                               | 157 days | 1     | -1      |          |
> | https://review.openstack.org/105385 | Leverage the features of IBM GPFS to store cached ... | 150 days | 1     | -2      |          |
> | https://review.openstack.org/108582 | Add ironic boot mode filters                          | 136 days | 1     | -2      |          |
> | https://review.openstack.org/110639 | Blueprint for the implementation of Nested Quota D... | 127 days | 1     | -2      |          |
> | https://review.openstack.org/111308 | Added VirtProperties object blueprint                 | 125 days | 1     | -2      |          |
> | https://review.openstack.org/111745 | Improve instance boot time                            | 122 days | 1     | -2      |          |
> | https://review.openstack.org/116280 | Add a new filter to implement project isolation fe... | 104 days | 1     | -2      |          |
> +-------------------------------------+-------------------------------------------------------+----------+-------+---------+----------+
>
> Regards, > Daniel > -- > |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ > :| > |: http://libvirt.org -o- http://virt-manager.org > :| > |: http://autobuild.org -o- http://search.cpan.org/~danberr/ > :| > |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc > :| > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean at dague.net Fri Dec 5 12:22:09 2014 From: sean at dague.net (Sean Dague) Date: Fri, 05 Dec 2014 07:22:09 -0500 Subject: Re: [openstack-dev] [all] OpenStack Bootstrapping Hour - Keystone - Friday Dec 5th 20:00 UTC (15:00 Americas/New_York) In-Reply-To: <201412051137.10718.bbobrov@mirantis.com> References: <548086E6.4070407@dague.net> <201412051137.10718.bbobrov@mirantis.com> Message-ID: <5481A371.9030408@dague.net> On 12/05/2014 03:37 AM, Boris Bobrov wrote: > On Thursday 04 December 2014 19:08:06 Sean Dague wrote: >> Sorry for the late announce, too much turkey and pie.... >> >> This Friday, Dec 5th, we'll be talking with Steve Martinelli and David >> Stanek about Keystone Authentication in OpenStack.
> > Wiki page says that the event will be Friday Dec 5th - 19:00 UTC (15:00 > Americas/New_York), while the subject in your mail has 20:00 UTC. Could you > please clarify that? It's 20:00 UTC, sorry about that. With the DST switch it looks like there was a bad copy/paste somewhere. Also the youtube link should give you a real time countdown - https://www.youtube.com/watch?v=Th61TgUVnzU to when it is. -Sean -- Sean Dague http://dague.net From berrange at redhat.com Fri Dec 5 12:27:16 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Fri, 5 Dec 2014 12:27:16 +0000 Subject: [openstack-dev] [nova] policy on old / virtually abandoned patches In-Reply-To: References: <546B3663.4040502@dague.net> <20141205120349.GH2383@redhat.com> Message-ID: <20141205122716.GI2383@redhat.com> On Fri, Dec 05, 2014 at 11:16:28PM +1100, Matthew Oliver wrote: > I have a script that does 95% of what you want: > > https://github.com/matthewoliver/swift_abandon_notifier > > We are using it for swift reviews. At the moment the only thing it doesn't > do is actually abandon, it instead sends a warning email and waits n days > (2 weeks by default) for action, if it still turns up it adds it to a list > of abandoned changes. Nova already has a similar script that does the abandon too, but both yours & novas are based on activity / review feedback. I'm explicitly considering abanadoning based on the file path for the spec. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From ihrachys at redhat.com Fri Dec 5 12:27:29 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Fri, 05 Dec 2014 13:27:29 +0100 Subject: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository? 
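For illustration, the warn-then-abandon flow Matt describes (warn once a change has been idle past the review threshold, list it for a core to abandon if it is still idle after the grace period) might look roughly like the sketch below. This is not code from swift_abandon_notifier or the nova script; the field names and dates are invented, and real tooling would pull this data from Gerrit queries such as age:4weeks status:open.

```python
# Illustrative sketch only -- not the actual swift_abandon_notifier code.
# Two-stage triage: idle changes get a warning first; changes still idle
# `grace_days` after being warned become candidates for manual abandonment.

from datetime import datetime, timedelta

def triage(changes, now, idle_weeks=4, grace_days=14):
    """Split open changes into (to_warn, to_abandon) by last activity."""
    idle_cutoff = now - timedelta(weeks=idle_weeks)
    to_warn, to_abandon = [], []
    for change in changes:
        if change["status"] != "open":
            continue
        if change["last_activity"] > idle_cutoff:
            continue                      # still active, leave it alone
        if change.get("warned_on") and \
                now - change["warned_on"] >= timedelta(days=grace_days):
            to_abandon.append(change["id"])
        elif not change.get("warned_on"):
            to_warn.append(change["id"])
    return to_warn, to_abandon

now = datetime(2014, 12, 5)
changes = [
    {"id": 86938, "status": "open", "last_activity": datetime(2014, 4, 1),
     "warned_on": datetime(2014, 11, 1)},
    {"id": 99576, "status": "open", "last_activity": datetime(2014, 10, 1)},
]
print(triage(changes, now))  # ([99576], [86938])
```

The path-based variant Daniel raises next would simply replace the activity test with a check on which files the change touches.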
In-Reply-To: References: Message-ID: <5481A4B1.3010405@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 04/12/14 16:59, Vadivel Poonathan wrote: > Hi Kyle and all, > > Was there any conclusion in the design summit or the meetings > afterward about splitting the vendor plugins/drivers from the > mainstream neutron and documentation of out-of-tree > plugins/drivers?... It's expected that the following spec that covers the plugins split to be approved and implemented during Kilo: https://review.openstack.org/134680 > > Thanks, Vad -- > > > On Thu, Oct 23, 2014 at 11:27 AM, Kyle Mestery > > wrote: > > On Thu, Oct 23, 2014 at 12:35 PM, Vadivel Poonathan > > > wrote: >> Hi Kyle and Anne, >> >> Thanks for the clarifications... understood and it makes sense. >> >> However, per my understanding, the drivers (aka plugins) are >> meant to be developed and supported by third-party vendors, >> outside of the OpenStack community, and they are supposed to work >> as plug-n-play... they are not part of the core OpenStack >> development, nor any of its components. If that is the case, then >> why should OpenStack community include and maintain them as part >> of it, for every release?... Wouldnt it be enough to limit the >> scope with the plugin framework and built-in drivers such as >> LinuxBridge or OVS etc?... not extending to commercial >> vendors?... (It is just a curious question, forgive me if i >> missed something and correct me!). >> > You haven't misunderstood anything, we're in the process of > splitting these things out, and this will be a prime focus of the > Neutron design summit track at the upcoming summit. 
> > Thanks, Kyle > >> At the same time, IMHO, there must be some reference or a page > within the >> scope of OpenStack documentation (not necessarily the core docs, > but some >> wiki page or reference link or so - as Anne suggested) to >> mention > the list >> of the drivers/plugins supported as of given release and may be >> an > external >> link to know more details about the driver, if the link is >> provided by respective vendor. >> >> >> Anyway, besides my opinion, the wiki page similar to hypervisor > driver would >> be good for now atleast, until the direction/policy level >> decision > is made >> to maintain out-of-tree plugins/drivers. >> >> >> Thanks, Vad -- >> >> >> >> >> On Thu, Oct 23, 2014 at 9:46 AM, Edgar Magana > > >> wrote: >>> >>> I second Anne?s and Kyle comments. Actually, I like very much >>> the > wiki >>> part to provide some visibility for out-of-tree >>> plugins/drivers > but not into >>> the official documentation. >>> >>> Thanks, >>> >>> Edgar >>> >>> From: Anne Gentle >> > Reply-To: "OpenStack Development >>> Mailing List (not for usage > questions)" >>> > >>> Date: Thursday, October 23, 2014 at 8:51 AM To: Kyle Mestery >>> > Cc: >>> "OpenStack Development Mailing List (not for usage questions)" >>> > >>> Subject: Re: [openstack-dev] [Neutron] Neutron documentation >>> to > update >>> about new vendor plugin, but without code in repository? >>> >>> >>> >>> On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery > > >>> wrote: >>>> >>>> Vad: >>>> >>>> The third-party CI is required for your upstream driver. I >>>> think what's different from my reading of this thread is the >>>> question of what is the requirement to have a driver listed >>>> in the upstream documentation which is not in the upstream >>>> codebase. To my > knowledge, >>>> we haven't done this. Thus, IMHO, we should NOT be utilizing > upstream >>>> documentation to document drivers which are themselves not >>>> upstream. 
When we split out the drivers which are currently >>>> upstream in > neutron >>>> into a separate repo, they will still be upstream. So my >>>> opinion > here >>>> is that if your driver is not upstream, it shouldn't be in >>>> the upstream documentation. But I'd like to hear others >>>> opinions as > well. >>>> >>> >>> This is my sense as well. >>> >>> The hypervisor drivers are documented on the wiki, sometimes >>> they're in-tree, sometimes they're not, but the state of >>> testing is > documented on >>> the wiki. I think we could take this approach for network and >>> storage drivers as well. >>> >>> https://wiki.openstack.org/wiki/HypervisorSupportMatrix >>> >>> Anne >>> >>>> >>>> Thanks, Kyle >>>> >>>> On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan >>>> > wrote: >>>>> Kyle, Gentle reminder... when you get a chance!.. >>>>> >>>>> Anne, In case, if i need to send it to different group or >>>>> email-id > to reach >>>>> Kyle Mestery, pls. let me know. Thanks for your help. >>>>> >>>>> Regards, Vad -- >>>>> >>>>> >>>>> On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan >>>>> > wrote: >>>>>> >>>>>> Hi Kyle, >>>>>> >>>>>> Can you pls. comment on this discussion and confirm the > requirements >>>>>> for getting out-of-tree mechanism_driver listed in the >>>>>> supported plugin/driver list of the Openstack Neutron >>>>>> docs. >>>>>> >>>>>> Thanks, Vad -- >>>>>> >>>>>> On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle > > >>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan >>>>>>> > wrote: >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>>>>>> On Fri, Oct 10, 2014 at 7:36 PM, Kevin >>>>>>>>>>>> Benton >>>>>>>>>>> > wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> I think you will probably have to wait >>>>>>>>>>>>> until after > the summit >>>>>>>>>>>>> so we can see the direction that will be >>>>>>>>>>>>> taken with the rest of the in-tree >>>>>>>>>>>>> drivers/plugins. 
It seems like we are >>>>>>>>>>>>> moving towards > removing >>>>>>>>>>>>> all of them so we would definitely need a >>>>>>>>>>>>> solution to documenting > out-of-tree >>>>>>>>>>>>> drivers as you suggested. >>>>>>>> >>>>>>>> [Vad] while i 'm waiting for the conclusion on this > subject, i 'm >>>>>>>> trying to setup the third-party CI/Test system and >>>>>>>> meet its > requirements to >>>>>>>> get my mechanism_driver listed in the Kilo's >>>>>>>> documentation, in > parallel. >>>>>>>> >>>>>>>> Couple of questions/confirmations before i proceed >>>>>>>> further > on this >>>>>>>> direction... >>>>>>>> >>>>>>>> 1) Is there anything more required other than the >>>>>>>> third-party CI/Test requirements ??.. like should I >>>>>>>> still need to go-through > the entire >>>>>>>> development process of submit/review/approval of the > blue-print and >>>>>>>> code of my ML2 driver which was already developed and >>>>>>>> in-use?... >>>>>>>> >>>>>>> >>>>>>> The neutron PTL Kyle Mestery can answer if there are >>>>>>> any > additional >>>>>>> requirements. >>>>>>> >>>>>>>> >>>>>>>> 2) Who is the authority to clarify and confirm the >>>>>>>> above > (and how do >>>>>>>> i contact them)?... >>>>>>> >>>>>>> >>>>>>> Elections just completed, and the newly elected PTL is >>>>>>> Kyle > Mestery, >>>>>>> >>>>>>> > http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html. > > >>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Thanks again for your inputs... >>>>>>>> >>>>>>>> Regards, Vad -- >>>>>>>> >>>>>>>> On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle > > >>>>>>>> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan >>>>>>>>> > wrote: >>>>>>>>>> >>>>>>>>>> Agreed on the requirements of test results to >>>>>>>>>> qualify the > vendor >>>>>>>>>> plugin to be listed in the upstream docs. Is >>>>>>>>>> there any procedure/infrastructure currently >>>>>>>>>> available > for this >>>>>>>>>> purpose?.. Pls. 
fwd any link/pointers on those >>>>>>>>>> info. >>>>>>>>>> >>>>>>>>> >>>>>>>>> Here's a link to the third-party testing setup >>>>>>>>> information. >>>>>>>>> >>>>>>>>> http://ci.openstack.org/third_party.html >>>>>>>>> >>>>>>>>> Feel free to keep asking questions as you dig >>>>>>>>> deeper. Thanks, Anne >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Thanks, Vad -- >>>>>>>>>> >>>>>>>>>> On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki >>>>>>>>>> > >>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>> I agree with Kevin and Kyle. Even if we decided >>>>>>>>>>> to use > separate >>>>>>>>>>> tree for neutron plugins and drivers, they >>>>>>>>>>> still will be regarded as part > of the >>>>>>>>>>> upstream. These plugins/drivers need to prove >>>>>>>>>>> they are well > integrated with >>>>>>>>>>> Neutron master in some way and gating >>>>>>>>>>> integration proves it is well > tested and >>>>>>>>>>> integrated. I believe it is a reasonable >>>>>>>>>>> assumption and requirement > that a >>>>>>>>>>> vendor plugin/driver is listed in the upstream >>>>>>>>>>> docs. This is a same kind of > question >>>>>>>>>>> as what vendor plugins are tested and worth >>>>>>>>>>> documented in the upstream docs. I hope you >>>>>>>>>>> work with the neutron team and run the third > party >>>>>>>>>>> requirements. >>>>>>>>>>> >>>>>>>>>>> Thanks, Akihiro >>>>>>>>>>> >>>>>>>>>>> On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery >>>>>>>>>>> >>>>>>>>>> > wrote: >>>>>>>>>>>> On Mon, Oct 13, 2014 at 6:44 PM, Kevin >>>>>>>>>>>> Benton >>>>>>>>>>> > wrote: >>>>>>>>>>>>>> The OpenStack dev and docs team dont have >>>>>>>>>>>>>> to worry about >>>>>>>>>>>>>> gating/publishing/maintaining the vendor >>>>>>>>>>>>>> specific plugins/drivers. >>>>>>>>>>>>> >>>>>>>>>>>>> I disagree about the gating part. If a >>>>>>>>>>>>> vendor wants > to have a >>>>>>>>>>>>> link that shows they are compatible with >>>>>>>>>>>>> openstack, they should be reporting test >>>>>>>>>>>>> results on all patches. 
A link to a vendor >>>>>>>>>>>>> driver in > the docs >>>>>>>>>>>>> should signify some form of testing that >>>>>>>>>>>>> the community is > comfortable with. >>>>>>>>>>>>> >>>>>>>>>>>> I agree with Kevin here. If you want to play >>>>>>>>>>>> upstream, in whatever form that takes by the >>>>>>>>>>>> end of Kilo, you have to work > with the >>>>>>>>>>>> existing third-party requirements and team to >>>>>>>>>>>> take advantage of > being a >>>>>>>>>>>> part of things like upstream docs. >>>>>>>>>>>> >>>>>>>>>>>> Thanks, Kyle >>>>>>>>>>>> >>>>>>>>>>>>> On Mon, Oct 13, 2014 at 11:33 AM, Vadivel >>>>>>>>>>>>> Poonathan > wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>> >>>>>>>>>>>>>> If the plan is to move ALL existing >>>>>>>>>>>>>> vendor specific plugins/drivers >>>>>>>>>>>>>> out-of-tree, then having a place-holder >>>>>>>>>>>>>> within the > OpenStack >>>>>>>>>>>>>> domain would suffice, where the vendors >>>>>>>>>>>>>> can list their > plugins/drivers >>>>>>>>>>>>>> along with their documentation as how to >>>>>>>>>>>>>> install and use etc. >>>>>>>>>>>>>> >>>>>>>>>>>>>> The main Openstack Neutron documentation >>>>>>>>>>>>>> page can > explain the >>>>>>>>>>>>>> plugin framework (ml2 type drivers, >>>>>>>>>>>>>> mechanism drivers, serviec plugin and so >>>>>>>>>>>>>> on) and its purpose/usage etc, then >>>>>>>>>>>>>> provide a link to > refer the >>>>>>>>>>>>>> currently supported vendor specific >>>>>>>>>>>>>> plugins/drivers for more > details. >>>>>>>>>>>>>> That way the documentation will be >>>>>>>>>>>>>> accurate to what is "in-tree" > and limit >>>>>>>>>>>>>> the documentation of external >>>>>>>>>>>>>> plugins/drivers to have just a reference >>>>>>>>>>>>>> link. So its now vendor's responsibility >>>>>>>>>>>>>> to keep their driver's up-to-date and >>>>>>>>>>>>>> their documentation accurate. 
The >>>>>>>>>>>>>> OpenStack dev and docs > team dont >>>>>>>>>>>>>> have to worry about >>>>>>>>>>>>>> gating/publishing/maintaining the vendor >>>>>>>>>>>>>> specific plugins/drivers. >>>>>>>>>>>>>> >>>>>>>>>>>>>> The built-in drivers such as LinuxBridge >>>>>>>>>>>>>> or > OpenVSwitch etc >>>>>>>>>>>>>> can continue to be "in-tree" and their >>>>>>>>>>>>>> documentation will be part > of main >>>>>>>>>>>>>> Neutron's docs. So the Neutron is >>>>>>>>>>>>>> guaranteed to work with built-in >>>>>>>>>>>>>> plugins/drivers as per the documentation >>>>>>>>>>>>>> and the user is informed to refer the >>>>>>>>>>>>>> "external vendor plug-in page" for >>>>>>>>>>>>>> additional/specific plugins/drivers. >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks, Vad -- >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Fri, Oct 10, 2014 at 8:10 PM, Anne >>>>>>>>>>>>>> Gentle >>>>>>>>>>>>> > wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Fri, Oct 10, 2014 at 7:36 PM, Kevin >>>>>>>>>>>>>>> Benton >>>>>>>>>>>>>> > wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I think you will probably have to >>>>>>>>>>>>>>>> wait until after the summit so we >>>>>>>>>>>>>>>> can see the direction that will be >>>>>>>>>>>>>>>> taken with the rest > of the >>>>>>>>>>>>>>>> in-tree drivers/plugins. It seems >>>>>>>>>>>>>>>> like we are moving towards removing >>>>>>>>>>>>>>>> all of them so we would definitely >>>>>>>>>>>>>>>> need a solution to documenting >>>>>>>>>>>>>>>> out-of-tree drivers as you >>>>>>>>>>>>>>>> suggested. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> However, I think the minimum >>>>>>>>>>>>>>>> requirements for having a driver >>>>>>>>>>>>>>>> being documented should be >>>>>>>>>>>>>>>> third-party testing of Neutron >>>>>>>>>>>>>>>> patches. 
Otherwise the docs will >>>>>>>>>>>>>>>> become littered with a bunch of links >>>>>>>>>>>>>>>> to drivers/plugins with no indication >>>>>>>>>>>>>>>> of what actually works, which > ultimately makes >>>>>>>>>>>>>>>> Neutron look bad. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> This is my line of thinking as well, >>>>>>>>>>>>>>> expanded to > "ultimately >>>>>>>>>>>>>>> makes OpenStack docs look bad" -- a >>>>>>>>>>>>>>> perception I want to > avoid. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Keep the viewpoints coming. We have a >>>>>>>>>>>>>>> crucial > balancing act >>>>>>>>>>>>>>> ahead: users need to trust docs and >>>>>>>>>>>>>>> trust the drivers. > Ultimately the >>>>>>>>>>>>>>> responsibility for the docs is in the >>>>>>>>>>>>>>> hands of the driver contributors > so it >>>>>>>>>>>>>>> seems those should be on a domain name >>>>>>>>>>>>>>> where drivers control > publishing and >>>>>>>>>>>>>>> OpenStack docs are not a gatekeeper, >>>>>>>>>>>>>>> quality checker, reviewer, or > publisher. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> We have documented the status of >>>>>>>>>>>>>>> hypervisor drivers > on an >>>>>>>>>>>>>>> OpenStack wiki page. [1] To me, that >>>>>>>>>>>>>>> type of list could be > maintained on >>>>>>>>>>>>>>> the wiki page better than in the docs >>>>>>>>>>>>>>> themselves. Thoughts? > Feelings? More >>>>>>>>>>>>>>> discussion, please. And thank you for >>>>>>>>>>>>>>> the responses so far. Anne >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> [1] > https://wiki.openstack.org/wiki/HypervisorSupportMatrix >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Fri, Oct 10, 2014 at 1:28 PM, >>>>>>>>>>>>>>>> Vadivel Poonathan >>>>>>>>>>>>>>>> > wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Hi Anne, >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Thanks for your immediate >>>>>>>>>>>>>>>>> response!... >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Just to clarify... 
I have developed >>>>>>>>>>>>>>>>> and maintaining a Neutron plug-in >>>>>>>>>>>>>>>>> (ML2 mechanism_driver) since >>>>>>>>>>>>>>>>> Grizzly and now it is up-to-date >>>>>>>>>>>>>>>>> with Icehouse. But it was never >>>>>>>>>>>>>>>>> listed nor part of the main > Openstack >>>>>>>>>>>>>>>>> releases. Now i would like to have >>>>>>>>>>>>>>>>> my plugin mentioned as "supported >>>>>>>>>>>>>>>>> plugin/mechanism_driver for so and >>>>>>>>>>>>>>>>> so vendor equipments" in the > docs.openstack.org , >>>>>>>>>>>>>>>>> but without having the actual >>>>>>>>>>>>>>>>> plugin code to be posted in the >>>>>>>>>>>>>>>>> main > Openstack >>>>>>>>>>>>>>>>> GIT repository. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Reason is that I dont have >>>>>>>>>>>>>>>>> plan/bandwidth to go > thru the >>>>>>>>>>>>>>>>> entire process of new plugin > blue-print/development/review/testing etc as >>>>>>>>>>>>>>>>> required by the Openstack >>>>>>>>>>>>>>>>> development community. Bcos this is >>>>>>>>>>>>>>>>> already developed, tested and >>>>>>>>>>>>>>>>> released to some customers >>>>>>>>>>>>>>>>> directly. Now I just > want to >>>>>>>>>>>>>>>>> get it to the official Openstack >>>>>>>>>>>>>>>>> documentation, so that more > people can >>>>>>>>>>>>>>>>> get this and use. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> The plugin package is made >>>>>>>>>>>>>>>>> available to public > from Ubuntu >>>>>>>>>>>>>>>>> repository along with necessary >>>>>>>>>>>>>>>>> documentation. So people can > directly >>>>>>>>>>>>>>>>> get it from Ubuntu repository and >>>>>>>>>>>>>>>>> use it. All i need is to > get listed >>>>>>>>>>>>>>>>> in the docs.openstack.org >>>>>>>>>>>>>>>>> so > that people knows that it exists and >>>>>>>>>>>>>>>>> can be used with any Openstack. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Pls. confrim whether this is >>>>>>>>>>>>>>>>> something possible?... >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Thanks again!.. 
>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Vad -- >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Fri, Oct 10, 2014 at 12:18 PM, >>>>>>>>>>>>>>>>> Anne Gentle >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Fri, Oct 10, 2014 at 2:11 PM, >>>>>>>>>>>>>>>>>> Vadivel Poonathan >>>>>>>>>>>>>>>>>> > wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> How to include a new vendor >>>>>>>>>>>>>>>>>>> plug-in (aka mechanism_driver >>>>>>>>>>>>>>>>>>> in ML2 framework) into the >>>>>>>>>>>>>>>>>>> Openstack documentation?.. > In other >>>>>>>>>>>>>>>>>>> words, is it possible to >>>>>>>>>>>>>>>>>>> include a new plug-in in the >>>>>>>>>>>>>>>>>>> Openstack documentation page >>>>>>>>>>>>>>>>>>> without having the actual >>>>>>>>>>>>>>>>>>> plug-in code as part > of the >>>>>>>>>>>>>>>>>>> Openstack neutron >>>>>>>>>>>>>>>>>>> repository?... The actual >>>>>>>>>>>>>>>>>>> plug-in is posted and >>>>>>>>>>>>>>>>>>> available for the public to >>>>>>>>>>>>>>>>>>> download as Ubuntu package. But >>>>>>>>>>>>>>>>>>> i need to mention somewhere in >>>>>>>>>>>>>>>>>>> the Openstack documentation >>>>>>>>>>>>>>>>>>> that this new plugin is >>>>>>>>>>>>>>>>>>> available > for the >>>>>>>>>>>>>>>>>>> public to use along with its >>>>>>>>>>>>>>>>>>> documentation. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> We definitely want you to include >>>>>>>>>>>>>>>>>> pointers to vendor documentation >>>>>>>>>>>>>>>>>> in the OpenStack docs, but I'd >>>>>>>>>>>>>>>>>> prefer make sure > they're gate >>>>>>>>>>>>>>>>>> tested before they get listed on >>>>>>>>>>>>>>>>>> docs.openstack.org > . Drivers change enough >>>>>>>>>>>>>>>>>> release-to-release that it's >>>>>>>>>>>>>>>>>> difficult to keep up >>>>>>>>>>>>>>>>>> maintenance. 
>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Lately I've been talking to >>>>>>>>>>>>>>>>>> driver contributors (hypervisor, >>>>>>>>>>>>>>>>>> storage, networking) about the >>>>>>>>>>>>>>>>>> out-of-tree changes > possible. I'd >>>>>>>>>>>>>>>>>> like to encourage even >>>>>>>>>>>>>>>>>> out-of-tree drivers to get >>>>>>>>>>>>>>>>>> listed, but to store their main >>>>>>>>>>>>>>>>>> documents outside of >>>>>>>>>>>>>>>>>> docs.openstack.org > , if they are gate-tested. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Anyone have other ideas here? >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Looping in the OpenStack-docs >>>>>>>>>>>>>>>>>> mailing list also. Anne >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Pls. provide some insights into >>>>>>>>>>>>>>>>>>> whether it is possible?.. and >>>>>>>>>>>>>>>>>>> any further info on this?.. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Vad >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> _______________________________________________ > >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> OpenStack-dev mailing list >>>>>>>>>>>>>>>>>>> OpenStack-dev at lists.openstack.org > >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> _______________________________________________ > >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> OpenStack-dev mailing list >>>>>>>>>>>>>>>>>> OpenStack-dev at lists.openstack.org > >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> _______________________________________________ > >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> OpenStack-dev mailing list >>>>>>>>>>>>>>>>> OpenStack-dev at 
lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- Kevin Benton

-- Akihiro Motoki
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From sean at dague.net  Fri Dec 5 12:45:44 2014
From: sean at dague.net (Sean Dague)
Date: Fri, 05 Dec 2014 07:45:44 -0500
Subject: [openstack-dev] [devstack] set -o nounset in devstack?
Message-ID: <5481A8F8.5080104@dague.net>

I got bit by another bug yesterday where there was a typo between
variables in the source tree. So I started down the process of
`set -o nounset` to see how bad it would be to prevent that in the
future.

There are 2 major classes of issues where the code is functioning fine,
but is caught by nounset:

    FOO=$(trueorfalse True $FOO)

    if [[ -n "$FOO" ]]; ...

The trueorfalse issue can be fixed if we change the function to be:

    function trueorfalse {
        local xtrace=$(set +o | grep xtrace)
        set +o xtrace

        local default=$1
        local testval="${!2+x}"

        [[ -z "$testval" ]] && { echo "$default"; return; }
        [[ "0 no No NO false False FALSE" =~ "$testval" ]] && { echo "False"; return; }
        [[ "1 yes Yes YES true True TRUE" =~ "$testval" ]] && { echo "True"; return; }
        echo "$default"

        $xtrace
    }

    FOO=$(trueorfalse True FOO)

... then works.

The -z and -n bits can be addressed with either FOO=${FOO:-} or an
isset function that interpolates. FOO=${FOO:-} actually feels better to
me because it's part of the spirit of things.

I've found a few bugs already even though I'm probably only about 20%
of the way to a complete run working.

So... the question is, is this worth it? It's going to have fallout in
lesser used parts of the code where we don't catch things (like -o
errexit did). However it should help flush out a class of bugs in the
process.

Opinions from devstack contributors / users welcomed.
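As a minimal illustration of the failure mode and of the ${FOO:-} idiom (a sketch with made-up variable names, not DevStack code):

```shell
#!/bin/sh
# Illustration only (made-up variable names): under `set -u` (nounset),
# reading an unset variable aborts the script, so guard-style checks
# need an explicit default.
set -u

# Without a default this would die with "MADE_UP_VAR: unbound variable":
#   if [ -n "$MADE_UP_VAR" ]; then ...
# The ${VAR:-} expansion substitutes an empty string, keeping nounset happy:
MADE_UP_VAR=${MADE_UP_VAR:-}

if [ -z "$MADE_UP_VAR" ]; then
    echo "unset variable handled safely"
fi
```

The `${!2+x}` indirection in the revised trueorfalse works for the same reason: the `+` form of parameter expansion never triggers an unbound-variable error, so the function can test whether the named variable is set without tripping nounset.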
-Sean

-- 
Sean Dague
http://dague.net

From joe.gordon0 at gmail.com  Fri Dec 5 13:05:56 2014
From: joe.gordon0 at gmail.com (Joe Gordon)
Date: Fri, 5 Dec 2014 15:05:56 +0200
Subject: [openstack-dev] [nova] policy on old / virtually abandoned patches
In-Reply-To: <20141205120349.GH2383@redhat.com>
References: <546B3663.4040502@dague.net> <20141205120349.GH2383@redhat.com>
Message-ID:

On Dec 5, 2014 7:07 AM, "Daniel P. Berrange" wrote:
>
> On Tue, Nov 18, 2014 at 07:06:59AM -0500, Sean Dague wrote:
> > Nova currently has 197 patches that have seen no activity in the last 4
> > weeks (project:openstack/nova age:4weeks status:open).
>
> On a somewhat related note, nova-specs currently has 17 specs
> open against specs/juno, most with -2 votes. I think we should
> just mass-abandon anything still touching the specs/juno directory.
> If people cared about them still they would have submitted for
> specs/kilo.
>
> So any objection to killing everything in the list below:

+1, makes sense to me.

> +-------------------------------------+-------------------------------------------------------+----------+-------+---------+----------+
> | URL                                 | Subject                                               | Created  | Tests | Reviews | Workflow |
> +-------------------------------------+-------------------------------------------------------+----------+-------+---------+----------+
> | https://review.openstack.org/86938  | Add tasks to the v3 API                               | 237 days | 1     | -2      |          |
> | https://review.openstack.org/88334  | Add support for USB controller                        | 231 days | 1     | -2      |          |
> | https://review.openstack.org/89766  | Add useful metrics into utilization based scheduli... | 226 days | 1     | -2      |          |
> | https://review.openstack.org/90239  | Blueprint for Cinder Multi attach volumes             | 224 days | 1     | -2      |          |
> | https://review.openstack.org/90647  | Add utilization based weighers on top of MetricsWe... | 221 days | 1     | -2      |          |
> | https://review.openstack.org/96543  | Smart Scheduler (Solver Scheduler) - Constraint ba... | 189 days | 1     | -2      |          |
> | https://review.openstack.org/97441  | Add nova spec for bp/isnot-operator                   | 185 days | 1     | -2      |          |
> | https://review.openstack.org/99476  | Dedicate aggregates for specific tenants              | 176 days | 1     | -2      |          |
> | https://review.openstack.org/99576  | Add client token to CreateServer                      | 176 days | 1     | -2      |          |
> | https://review.openstack.org/101921 | Spec for Neutron migration feature                    | 164 days | 1     | -2      |          |
> | https://review.openstack.org/103617 | Support Identity V3 API                               | 157 days | 1     | -1      |          |
> | https://review.openstack.org/105385 | Leverage the features of IBM GPFS to store cached ... | 150 days | 1     | -2      |          |
> | https://review.openstack.org/108582 | Add ironic boot mode filters                          | 136 days | 1     | -2      |          |
> | https://review.openstack.org/110639 | Blueprint for the implementation of Nested Quota D... | 127 days | 1     | -2      |          |
> | https://review.openstack.org/111308 | Added VirtProperties object blueprint                 | 125 days | 1     | -2      |          |
> | https://review.openstack.org/111745 | Improve instance boot time                            | 122 days | 1     | -2      |          |
> | https://review.openstack.org/116280 | Add a new filter to implement project isolation fe... | 104 days | 1     | -2      |          |
> +-------------------------------------+-------------------------------------------------------+----------+-------+---------+----------+
>
> Regards,
> Daniel
> -- 
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
From sean at dague.net  Fri Dec 5 13:12:19 2014
From: sean at dague.net (Sean Dague)
Date: Fri, 05 Dec 2014 08:12:19 -0500
Subject: [openstack-dev] [nova] policy on old / virtually abandoned patches
In-Reply-To:
References: <546B3663.4040502@dague.net> <20141205120349.GH2383@redhat.com>
Message-ID: <5481AF33.2070509@dague.net>

On 12/05/2014 08:05 AM, Joe Gordon wrote:
>
> On Dec 5, 2014 7:07 AM, "Daniel P. Berrange" wrote:
>>
>> On Tue, Nov 18, 2014 at 07:06:59AM -0500, Sean Dague wrote:
>> > Nova currently has 197 patches that have seen no activity in the last 4
>> > weeks (project:openstack/nova age:4weeks status:open).
>>
>> On a somewhat related note, nova-specs currently has 17 specs
>> open against specs/juno, most with -2 votes. I think we should
>> just mass-abandon anything still touching the specs/juno directory.
>> If people cared about them still they would have submitted for
>> specs/kilo.
>>
>> So any objection to killing everything in the list below:
>
> +1, makes sense to me.

Agreed. +1.

-- 
Sean Dague
http://dague.net

From joehuang at huawei.com  Fri Dec 5 13:23:59 2014
From: joehuang at huawei.com (joehuang)
Date: Fri, 5 Dec 2014 13:23:59 +0000
Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward
In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com>
References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com>,
 <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net>
 <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com>
 <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net>,
 <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com>
Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>

Dear all & TC & PTL,

In the 40 minutes cross-project summit session "Approaches for scaling
out"[1], almost 100 people attended the meeting, and the conclusion is
that cells cannot cover the use cases and requirements which the
OpenStack cascading solution[2] aims to address,
the background, including use cases and requirements, is also described
in this mail. After the summit, we ported the PoC[3] source code from an
IceHouse base to a Juno base.

Now, let's move forward. The major task is to introduce a new
driver/agent into each existing core project, for the core idea of
cascading is to add Nova as the hypervisor backend of Nova, Cinder as
the block storage backend of Cinder, Neutron as the backend of Neutron,
Glance as one image location of Glance, and Ceilometer as the store of
Ceilometer.

a). We need a cross-program decision on whether to run cascading as an
incubated project or to register blueprints separately in each involved
project. CI for cascading is quite different from a traditional test
environment: at least 3 OpenStack instances are required for the
cross-OpenStack networking test cases.
b). We need a volunteer as the cross-project coordinator.
c). We need volunteers for implementation and CI.

Background of OpenStack cascading vs. cells:

1. Use cases
a). Vodafone use case[4] (OpenStack summit speech video from 9'02" to
12'30"): establishing globally addressable tenants, which results in
efficient service deployment.
b). Telefonica use case[5]: create a virtual DC (data center) across
multiple physical DCs with a seamless experience.
c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6 and
#8. For an NFV cloud, it is in its nature that the cloud will be
distributed but inter-connected across many data centers.

2. Requirements
a). The operator has a multi-site cloud; each site can use one or
multiple vendors' OpenStack distributions.
b). Each site has its own requirements and upgrade schedule while
maintaining the standard OpenStack API.
c). The multi-site cloud must provide unified resource management with a
global open API exposed, for example to create a virtual DC across
multiple physical DCs with a seamless experience. Although a proprietary
orchestration layer could be developed for the multi-site cloud, that
would put a proprietary API in the north-bound interface.
The cloud operators want an ecosystem-friendly global open API for the
multi-site cloud, for global access.

3. What problems does cascading solve that cells don't cover?
The OpenStack cascading solution is "OpenStack orchestrates OpenStacks".
The core architecture idea of OpenStack cascading is to add Nova as the
hypervisor backend of Nova, Cinder as the block storage backend of
Cinder, Neutron as the backend of Neutron, Glance as one image location
of Glance, and Ceilometer as the store of Ceilometer. Thus OpenStack is
able to orchestrate OpenStacks (from different vendors' distributions,
or different versions) which may be located in different sites (or data
centers) through the OpenStack API, while the cloud still exposes the
OpenStack API as the north-bound API at the cloud level.

4. Why can't cells do that?
Cells provide the scale-out capability to Nova, but from the point of
view of OpenStack as a whole, it still works like one OpenStack
instance.
a). If cells are deployed with shared Cinder, Neutron, Glance and
Ceilometer, this approach provides the multi-site cloud with one unified
API endpoint and unified resource management, but consolidation of
multi-vendor/multi-version OpenStack instances across one or more data
centers cannot be fulfilled.
b). If each site installs one child cell and accompanying standalone
Cinder, Neutron (or Nova-network), Glance and Ceilometer, this approach
makes multi-vendor/multi-version OpenStack distribution co-existence
across multiple sites seem feasible, but the requirement for a unified
API endpoint and unified resource management cannot be fulfilled.
Cross-Neutron networking automation is also missing, and would otherwise
have to be done manually or through a proprietary orchestration layer.

For more information about cascading and cells, please refer to the
discussion thread before the Paris Summit [7].
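To make the core idea above concrete, here is a toy sketch (invented names, not the tricircle PoC code): the parent OpenStack exposes one API call and routes each request to a child OpenStack site, for example by availability zone:

```shell
#!/bin/sh
# Toy sketch of cascading routing (not tricircle code): child_boot stands
# in for a call to a child OpenStack's own Nova API.
child_boot() {
    echo "child site $1: booted $2"
}

# Parent-level boot: the availability zone selects the child site, so a
# single API endpoint can span multiple data centers.
parent_boot() {
    az=$1
    instance=$2
    case "$az" in
        az-east) child_boot "dc-east" "$instance" ;;
        az-west) child_boot "dc-west" "$instance" ;;
        *) echo "unknown availability zone: $az" >&2; return 1 ;;
    esac
}

parent_boot az-east demo-vm
```

In the real design the child call is a full OpenStack API request, which is what lets each site run a different vendor's distribution or version behind the same parent API.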
[1] Approaches for scaling out:
    https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack
[2] OpenStack cascading solution:
    https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[3] Cascading PoC: https://github.com/stackforge/tricircle
[4] Vodafone use case (9'02" to 12'30"):
    https://www.youtube.com/watch?v=-KOJYvhmxQI
[5] Telefonica use case:
    http://www.telefonica.com/en/descargas/mwc/present_20140224.pdf
[6] ETSI NFV use cases:
    http://www.etsi.org/deliver/etsi_gs/nfv/001_099/001/01.01.01_60/gs_nfv001v010101p.pdf
[7] Cascading thread before the design summit:
    http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-td54115.html

Best Regards
Chaoyi Huang (joehuang)

From Aleksei_Chuprin at symantec.com  Fri Dec 5 13:39:36 2014
From: Aleksei_Chuprin at symantec.com (Aleksei Chuprin (CS))
Date: Fri, 5 Dec 2014 05:39:36 -0800
Subject: [openstack-dev] [MagnetoDB] Intercycle release package versioning
Message-ID: <3E6DF46E67210942B7919419E636D029165310258F@TUS1XCHEVSPIN34.SYMC.SYMANTEC.COM>

Hello everyone,

Because the MagnetoDB project uses more frequent releases than other
OpenStack projects, I propose using the following versioning strategy
for MagnetoDB packages:

1:2014.2-0ubuntu1
1:2014.2~rc2-0ubuntu1
1:2014.2~rc1-0ubuntu1
1:2014.2~b2-0ubuntu1
1:2014.2~b2.dev{YYYYMMDD}_{GIT_SHA1}-0ubuntu1
1:2014.2~b2.dev{YYYYMMDD}_{GIT_SHA1}-0ubuntu1
1:2014.2~b1-0ubuntu1

What do you think about this?

From berrange at redhat.com  Fri Dec 5 13:41:59 2014
From: berrange at redhat.com (Daniel P. Berrange)
Date: Fri, 5 Dec 2014 13:41:59 +0000
Subject: [openstack-dev] [Nova] Spring cleaning nova-core
In-Reply-To:
References:
Message-ID: <20141205134159.GK2383@redhat.com>

On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote:
> One of the things that happens over time is that some of our core
> reviewers move on to other projects. This is a normal and healthy
> thing, especially as nova continues to spin out projects into other
> parts of OpenStack.
>
> However, it is important that our core reviewers be active, as it
> keeps them up to date with the current ways we approach development in
> Nova. I am therefore removing some no longer sufficiently active cores
> from the nova-core group.
>
> I'd like to thank the following people for their contributions over the years:
>
> * cbehrens: Chris Behrens
> * vishvananda: Vishvananda Ishaya
> * dan-prince: Dan Prince
> * belliott: Brian Elliott
> * p-draigbrady: Padraig Brady
>
> I'd love to see any of these cores return if they find their available
> time for code reviews increases.

What stats did you use to decide whether to cull these reviewers? Looking
at the stats over a 6 month period, I think Padraig Brady is still having
a significant positive impact on Nova - on a par with both cerberus and
alaski, who you're not proposing to cut. I think we should keep Padraig
on the team, but probably suggest cutting markmc instead.

http://russellbryant.net/openstack-stats/nova-reviewers-180.txt

+-----------------------------+----------------------------------------+----------------+
| Reviewer                    | Reviews   -2  -1  +1   +2  +A   +/- %  | Disagreements* |
+-----------------------------+----------------------------------------+----------------+
| berrange **                 | 1766      26 435  12 1293 357   73.9%  | 157 (  8.9%)   |
| jaypipes **                 | 1359      11 378 436  534 133   71.4%  | 109 (  8.0%)   |
| jogo **                     | 1053     131 326   7  589 353   56.6%  |  47 (  4.5%)   |
| danms **                    |  921      67 381  23  450 167   51.4%  |  32 (  3.5%)   |
| oomichi **                  |  889       4 306  55  524 182   65.1%  |  40 (  4.5%)   |
| johngarbutt **              |  808     319 227  10  252 145   32.4%  |  37 (  4.6%)   |
| mriedem **                  |  642      27 279  25  311 136   52.3%  |  17 (  2.6%)   |
| klmitch **                  |  606       1  90   2  513  70   85.0%  |  67 ( 11.1%)   |
| ndipanov **                 |  588      19 179  10  380 113   66.3%  |  62 ( 10.5%)   |
| mikalstill **               |  564      31  34   3  496 207   88.5%  |  20 (  3.5%)   |
| cyeoh-0 **                  |  546      12 207  30  297 103   59.9%  |  35 (  6.4%)   |
| sdague **                   |  511      23  89   6  393 229   78.1%  |  25 (  4.9%)   |
| russellb **                 |  465       6  83   0  376 158   80.9%  |  23 (  4.9%)   |
| alaski **                   |  415       1  65  21  328 149   84.1%  |  24 (  5.8%)   |
| cerberus **                 |  405       6  25  48  326 102   92.3%  |  33 (  8.1%)   |
| p-draigbrady **             |  376       2  40   9  325  64   88.8%  |  49 ( 13.0%)   |
| markmc **                   |  243       2  54   3  184  69   77.0%  |  14 (  5.8%)   |
| belliott **                 |  231       1  68   5  157  35   70.1%  |  19 (  8.2%)   |
| dan-prince **               |  178       2  48   9  119  29   71.9%  |  11 (  6.2%)   |
| cbehrens **                 |  132       2  49   2   79  19   61.4%  |   6 (  4.5%)   |
| vishvananda **              |   54       0   5   3   46  15   90.7%  |   5 (  9.3%)   |
+-----------------------------+----------------------------------------+----------------+

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

From mriedem at linux.vnet.ibm.com  Fri Dec 5 13:44:24 2014
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Fri, 05 Dec 2014 07:44:24 -0600
Subject: [openstack-dev] [Nova] Spring cleaning nova-core
In-Reply-To: <20141205134159.GK2383@redhat.com>
References: <20141205134159.GK2383@redhat.com>
Message-ID: <5481B6B8.90807@linux.vnet.ibm.com>

On 12/5/2014 7:41 AM, Daniel P. Berrange wrote:
> On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote:
>> One of the things that happens over time is that some of our core
>> reviewers move on to other projects. This is a normal and healthy
>> thing, especially as nova continues to spin out projects into other
>> parts of OpenStack.
>> >> I?d like to thank the following people for their contributions over the years: >> >> * cbehrens: Chris Behrens >> * vishvananda: Vishvananda Ishaya >> * dan-prince: Dan Prince >> * belliott: Brian Elliott >> * p-draigbrady: Padraig Brady >> >> I?d love to see any of these cores return if they find their available >> time for code reviews increases. > > What stats did you use to decide whether to cull these reviewers ? Looking > at the stats over a 6 month period, I think Padraig Brady is still having > a significant positive impact on Nova - on a par with both cerberus and > alaski who you've not proposing for cut. I think we should keep Padraig > on the team, but probably suggest cutting Markmc instead > > http://russellbryant.net/openstack-stats/nova-reviewers-180.txt > > +-----------------------------+----------------------------------------+----------------+ > | Reviewer | Reviews -2 -1 +1 +2 +A +/- % | Disagreements* | > +-----------------------------+----------------------------------------+----------------+ > | berrange ** | 1766 26 435 12 1293 357 73.9% | 157 ( 8.9%) | > | jaypipes ** | 1359 11 378 436 534 133 71.4% | 109 ( 8.0%) | > | jogo ** | 1053 131 326 7 589 353 56.6% | 47 ( 4.5%) | > | danms ** | 921 67 381 23 450 167 51.4% | 32 ( 3.5%) | > | oomichi ** | 889 4 306 55 524 182 65.1% | 40 ( 4.5%) | > | johngarbutt ** | 808 319 227 10 252 145 32.4% | 37 ( 4.6%) | > | mriedem ** | 642 27 279 25 311 136 52.3% | 17 ( 2.6%) | > | klmitch ** | 606 1 90 2 513 70 85.0% | 67 ( 11.1%) | > | ndipanov ** | 588 19 179 10 380 113 66.3% | 62 ( 10.5%) | > | mikalstill ** | 564 31 34 3 496 207 88.5% | 20 ( 3.5%) | > | cyeoh-0 ** | 546 12 207 30 297 103 59.9% | 35 ( 6.4%) | > | sdague ** | 511 23 89 6 393 229 78.1% | 25 ( 4.9%) | > | russellb ** | 465 6 83 0 376 158 80.9% | 23 ( 4.9%) | > | alaski ** | 415 1 65 21 328 149 84.1% | 24 ( 5.8%) | > | cerberus ** | 405 6 25 48 326 102 92.3% | 33 ( 8.1%) | > | p-draigbrady ** | 376 2 40 9 325 64 88.8% | 49 ( 13.0%) | > 
| markmc ** | 243 2 54 3 184 69 77.0% | 14 ( 5.8%) |
> | belliott ** | 231 1 68 5 157 35 70.1% | 19 ( 8.2%) |
> | dan-prince ** | 178 2 48 9 119 29 71.9% | 11 ( 6.2%) |
> | cbehrens ** | 132 2 49 2 79 19 61.4% | 6 ( 4.5%) |
> | vishvananda ** | 54 0 5 3 46 15 90.7% | 5 ( 9.3%) |
>
> Regards,
> Daniel

FWIW, markmc is already off the list [1].

[1] https://review.openstack.org/#/admin/groups/25,members

--
Thanks,

Matt Riedemann

From ndipanov at redhat.com Fri Dec 5 13:47:08 2014
From: ndipanov at redhat.com (=?UTF-8?B?Tmlrb2xhIMSQaXBhbm92?=)
Date: Fri, 05 Dec 2014 14:47:08 +0100
Subject: [openstack-dev] [Nova] Spring cleaning nova-core
In-Reply-To:
References:
Message-ID: <5481B75C.20208@redhat.com>

On 12/05/2014 01:05 AM, Michael Still wrote:
> One of the things that happens over time is that some of our core
> reviewers move on to other projects. This is a normal and healthy
> thing, especially as nova continues to spin out projects into other
> parts of OpenStack.
>
> However, it is important that our core reviewers be active, as it
> keeps them up to date with the current ways we approach development in
> Nova. I am therefore removing some no longer sufficiently active cores
> from the nova-core group.
>
> I'd like to thank the following people for their contributions over the years:
>
> * cbehrens: Chris Behrens
> * vishvananda: Vishvananda Ishaya
> * dan-prince: Dan Prince
> * belliott: Brian Elliott
> * p-draigbrady: Padraig Brady
>

I am personally -1 on Padraig and Vish, especially Padraig. As one of the
coreutils maintainers, his contribution to Nova is invaluable, regardless of
whatever metric made him appear on this list (hint - quality should really
be the only one). Removing him from core will probably not affect that, but
I personally definitely trust him not to vote +2 on stuff he is not in touch
with, and I view his +2s, when I see them, as a sign of thorough reviews.
Also, he has not exactly been inactive lately by any measure.

Vish has not been active for some time now, but he is still on IRC and in
the community (as opposed to Chris, for example), so I'm not sure why we
should do this now.

N.

> I'd love to see any of these cores return if they find their available
> time for code reviews increases.
>
> Thanks,
> Michael
>

From sahid.ferdjaoui at redhat.com Fri Dec 5 13:49:27 2014
From: sahid.ferdjaoui at redhat.com (Sahid Orentino Ferdjaoui)
Date: Fri, 5 Dec 2014 14:49:27 +0100
Subject: [openstack-dev] [Nova] Spring cleaning nova-core
In-Reply-To: <20141205134159.GK2383@redhat.com>
References: <20141205134159.GK2383@redhat.com>
Message-ID: <20141205134927.GA11645@redhat.redhat.com>

On Fri, Dec 05, 2014 at 01:41:59PM +0000, Daniel P. Berrange wrote:
> On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote:
> > One of the things that happens over time is that some of our core
> > reviewers move on to other projects. This is a normal and healthy
> > thing, especially as nova continues to spin out projects into other
> > parts of OpenStack.
I think we should keep Padraig
> on the team, but probably suggest cutting markmc instead
>
> http://russellbryant.net/openstack-stats/nova-reviewers-180.txt
>
> +-----------------------------+----------------------------------------+----------------+
> | Reviewer | Reviews -2 -1 +1 +2 +A +/- % | Disagreements* |
> +-----------------------------+----------------------------------------+----------------+
> | berrange ** | 1766 26 435 12 1293 357 73.9% | 157 ( 8.9%) |
> | jaypipes ** | 1359 11 378 436 534 133 71.4% | 109 ( 8.0%) |
> | jogo ** | 1053 131 326 7 589 353 56.6% | 47 ( 4.5%) |
> | danms ** | 921 67 381 23 450 167 51.4% | 32 ( 3.5%) |
> | oomichi ** | 889 4 306 55 524 182 65.1% | 40 ( 4.5%) |
> | johngarbutt ** | 808 319 227 10 252 145 32.4% | 37 ( 4.6%) |
> | mriedem ** | 642 27 279 25 311 136 52.3% | 17 ( 2.6%) |
> | klmitch ** | 606 1 90 2 513 70 85.0% | 67 ( 11.1%) |
> | ndipanov ** | 588 19 179 10 380 113 66.3% | 62 ( 10.5%) |
> | mikalstill ** | 564 31 34 3 496 207 88.5% | 20 ( 3.5%) |
> | cyeoh-0 ** | 546 12 207 30 297 103 59.9% | 35 ( 6.4%) |
> | sdague ** | 511 23 89 6 393 229 78.1% | 25 ( 4.9%) |
> | russellb ** | 465 6 83 0 376 158 80.9% | 23 ( 4.9%) |
> | alaski ** | 415 1 65 21 328 149 84.1% | 24 ( 5.8%) |
> | cerberus ** | 405 6 25 48 326 102 92.3% | 33 ( 8.1%) |
> | p-draigbrady ** | 376 2 40 9 325 64 88.8% | 49 ( 13.0%) |
> | markmc ** | 243 2 54 3 184 69 77.0% | 14 ( 5.8%) |
> | belliott ** | 231 1 68 5 157 35 70.1% | 19 ( 8.2%) |
> | dan-prince ** | 178 2 48 9 119 29 71.9% | 11 ( 6.2%) |
> | cbehrens ** | 132 2 49 2 79 19 61.4% | 6 ( 4.5%) |
> | vishvananda ** | 54 0 5 3 46 15 90.7% | 5 ( 9.3%) |
>

+1. Padraig has given us several robust reviews on important topics. Losing
him will make the work on Nova more difficult.

s.

From berrange at redhat.com Fri Dec 5 13:56:21 2014
From: berrange at redhat.com (Daniel P.
Berrange) Date: Fri, 5 Dec 2014 13:56:21 +0000 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <5481B6B8.90807@linux.vnet.ibm.com> References: <20141205134159.GK2383@redhat.com> <5481B6B8.90807@linux.vnet.ibm.com> Message-ID: <20141205135621.GL2383@redhat.com> On Fri, Dec 05, 2014 at 07:44:24AM -0600, Matt Riedemann wrote: > > > On 12/5/2014 7:41 AM, Daniel P. Berrange wrote: > >On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote: > >>One of the things that happens over time is that some of our core > >>reviewers move on to other projects. This is a normal and healthy > >>thing, especially as nova continues to spin out projects into other > >>parts of OpenStack. > >> > >>However, it is important that our core reviewers be active, as it > >>keeps them up to date with the current ways we approach development in > >>Nova. I am therefore removing some no longer sufficiently active cores > >>from the nova-core group. > >> > >>I?d like to thank the following people for their contributions over the years: > >> > >>* cbehrens: Chris Behrens > >>* vishvananda: Vishvananda Ishaya > >>* dan-prince: Dan Prince > >>* belliott: Brian Elliott > >>* p-draigbrady: Padraig Brady > >> > >>I?d love to see any of these cores return if they find their available > >>time for code reviews increases. > > > >What stats did you use to decide whether to cull these reviewers ? Looking > >at the stats over a 6 month period, I think Padraig Brady is still having > >a significant positive impact on Nova - on a par with both cerberus and > >alaski who you've not proposing for cut. 
I think we should keep Padraig > >on the team, but probably suggest cutting Markmc instead > > > > http://russellbryant.net/openstack-stats/nova-reviewers-180.txt > > > >+-----------------------------+----------------------------------------+----------------+ > >| Reviewer | Reviews -2 -1 +1 +2 +A +/- % | Disagreements* | > >+-----------------------------+----------------------------------------+----------------+ > >| berrange ** | 1766 26 435 12 1293 357 73.9% | 157 ( 8.9%) | > >| jaypipes ** | 1359 11 378 436 534 133 71.4% | 109 ( 8.0%) | > >| jogo ** | 1053 131 326 7 589 353 56.6% | 47 ( 4.5%) | > >| danms ** | 921 67 381 23 450 167 51.4% | 32 ( 3.5%) | > >| oomichi ** | 889 4 306 55 524 182 65.1% | 40 ( 4.5%) | > >| johngarbutt ** | 808 319 227 10 252 145 32.4% | 37 ( 4.6%) | > >| mriedem ** | 642 27 279 25 311 136 52.3% | 17 ( 2.6%) | > >| klmitch ** | 606 1 90 2 513 70 85.0% | 67 ( 11.1%) | > >| ndipanov ** | 588 19 179 10 380 113 66.3% | 62 ( 10.5%) | > >| mikalstill ** | 564 31 34 3 496 207 88.5% | 20 ( 3.5%) | > >| cyeoh-0 ** | 546 12 207 30 297 103 59.9% | 35 ( 6.4%) | > >| sdague ** | 511 23 89 6 393 229 78.1% | 25 ( 4.9%) | > >| russellb ** | 465 6 83 0 376 158 80.9% | 23 ( 4.9%) | > >| alaski ** | 415 1 65 21 328 149 84.1% | 24 ( 5.8%) | > >| cerberus ** | 405 6 25 48 326 102 92.3% | 33 ( 8.1%) | > >| p-draigbrady ** | 376 2 40 9 325 64 88.8% | 49 ( 13.0%) | > >| markmc ** | 243 2 54 3 184 69 77.0% | 14 ( 5.8%) | > >| belliott ** | 231 1 68 5 157 35 70.1% | 19 ( 8.2%) | > >| dan-prince ** | 178 2 48 9 119 29 71.9% | 11 ( 6.2%) | > >| cbehrens ** | 132 2 49 2 79 19 61.4% | 6 ( 4.5%) | > >| vishvananda ** | 54 0 5 3 46 15 90.7% | 5 ( 9.3%) | > > > > > >Regards, > >Daniel > > > > FWIW, markmc is already off the list [1]. Ah yes, must be that russell needs to update the config file for his script to stop marking markmc as core. 
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

From davanum at gmail.com Fri Dec 5 13:56:46 2014
From: davanum at gmail.com (Davanum Srinivas)
Date: Fri, 5 Dec 2014 08:56:46 -0500
Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward
In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>
References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com>
 <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net>
 <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com>
 <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net>
 <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com>
 <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>
Message-ID:

Joe,

Related to this topic: at the summit there was a session on Cells v2, and
following up on that, there have been BP(s) filed in Nova, championed by
Andrew - https://review.openstack.org/#/q/owner:%22Andrew+Laski%22+status:open,n,z

thanks,
dims

On Fri, Dec 5, 2014 at 8:23 AM, joehuang wrote:
> Dear all & TC & PTL,
>
> In the 40-minute cross-project summit session "Approaches for scaling out"[1], almost 100 people attended the meeting, and the conclusion was that cells cannot cover the use cases and requirements which the OpenStack cascading solution[2] aims to address. The background, including use cases and requirements, is also described in this mail.
>
> After the summit, we just ported the PoC[3] source code from an Icehouse base to a Juno base.
>
> Now, let's move forward:
>
> The major task is to introduce a new driver/agent into the existing core projects, for the core idea of cascading is to add Nova as the hypervisor backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, and Ceilometer as the store of Ceilometer.
> a). We need a cross-program decision on whether to run cascading as an incubated project or to register blueprints separately in each involved project. CI for cascading is quite different from a traditional test environment: at least 3 OpenStack instances are required for the cross-OpenStack networking test cases.
> b). A volunteer as the cross-project coordinator.
> c). Volunteers for implementation and CI.
>
> Background of OpenStack cascading vs cells:
>
> 1. Use cases
> a). Vodafone use case[4] (OpenStack summit speech video from 9'02" to 12'30"), establishing globally addressable tenants, which results in efficient service deployment.
> b). Telefonica use case[5], creating a virtual DC (data center) across multiple physical DCs with a seamless experience.
> c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6 and #8. For an NFV cloud, it is in its nature that the cloud will be distributed but inter-connected across many data centers.
>
> 2. Requirements
> a). The operator has a multi-site cloud; each site can use one or multiple vendors' OpenStack distributions.
> b). Each site has its own requirements and upgrade schedule while maintaining the standard OpenStack API.
> c). The multi-site cloud must provide unified resource management with a global open API exposed, for example to create a virtual DC across multiple physical DCs with a seamless experience.
> Although a proprietary orchestration layer could be developed for the multi-site cloud, it would expose a proprietary API at the north-bound interface. The cloud operators want an ecosystem-friendly, global open API for the multi-site cloud for global access.
>
> 3.
What problems does cascading solve that cells don't cover:
> The OpenStack cascading solution is "OpenStack orchestrating OpenStacks". The core architectural idea of OpenStack cascading is to add Nova as the hypervisor backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, and Ceilometer as the store of Ceilometer. Thus OpenStack is able to orchestrate OpenStacks (from different vendors' distributions, or different versions) which may be located in different sites (or data centers) through the OpenStack API, while the cloud still exposes the OpenStack API as the north-bound API at the cloud level.
>
> 4. Why cells can't do that:
> Cells provide scale-out capability to Nova, but from the point of view of OpenStack as a whole, it still works like one OpenStack instance.
> a). If cells are deployed with shared Cinder, Neutron, Glance, and Ceilometer, this approach provides the multi-site cloud with one unified API endpoint and unified resource management, but consolidation of multi-vendor/multi-version OpenStack instances across one or more data centers cannot be fulfilled.
> b). If each site installs one child cell and an accompanying standalone Cinder, Neutron (or nova-network), Glance, and Ceilometer, this approach makes multi-vendor/multi-version OpenStack distribution co-existence across multiple sites seem feasible, but the requirement for a unified API endpoint and unified resource management cannot be fulfilled. Cross-Neutron networking automation is also missing, and would otherwise have to be done manually or through a proprietary orchestration layer.
>
> For more information about cascading and cells, please refer to the discussion thread before the Paris summit [7].
> > [1]Approaches for scaling out: https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack > [2]OpenStack cascading solution: https://wiki.openstack.org/wiki/OpenStack_cascading_solution > [3]Cascading PoC: https://github.com/stackforge/tricircle > [4]Vodafone use case (9'02" to 12'30"): https://www.youtube.com/watch?v=-KOJYvhmxQI > [5]Telefonica use case: http://www.telefonica.com/en/descargas/mwc/present_20140224.pdf > [6]ETSI NFV use cases: http://www.etsi.org/deliver/etsi_gs/nfv/001_099/001/01.01.01_60/gs_nfv001v010101p.pdf > [7]Cascading thread before design summit: http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-td54115.html > > Best Regards > Chaoyi Huang (joehuang) > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From fungi at yuggoth.org Fri Dec 5 14:26:46 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 5 Dec 2014 14:26:46 +0000 Subject: [openstack-dev] Session length on wiki.openstack.org In-Reply-To: References: <20141205010348.GY84915@thor.bakeyournoodle.com> Message-ID: <20141205142645.GL2497@yuggoth.org> On 2014-12-04 18:37:48 -0700 (-0700), Carl Baldwin wrote: > +1 I've been meaning to say something like this but never got > around to it. Thanks for speaking up. https://storyboard.openstack.org/#!/story/1172753 I think Ryan said it might be a bug in the OpenID plug-in, but if so he didn't put that comment in the bug. 
-- Jeremy Stanley From blk at acm.org Fri Dec 5 14:27:29 2014 From: blk at acm.org (Brant Knudson) Date: Fri, 5 Dec 2014 08:27:29 -0600 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <20141205135621.GL2383@redhat.com> References: <20141205134159.GK2383@redhat.com> <5481B6B8.90807@linux.vnet.ibm.com> <20141205135621.GL2383@redhat.com> Message-ID: On Fri, Dec 5, 2014 at 7:56 AM, Daniel P. Berrange wrote: > > > FWIW, markmc is already off the list [1]. > > Ah yes, must be that russell needs to update the config file for his script > to stop marking markmc as core. > > > Regards, > Daniel > -- > Anyone can do it: https://review.openstack.org/#/c/139637/ - Brant > |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ > :| > |: http://libvirt.org -o- http://virt-manager.org > :| > |: http://autobuild.org -o- http://search.cpan.org/~danberr/ > :| > |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc > :| > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbryant at redhat.com Fri Dec 5 14:39:29 2014 From: rbryant at redhat.com (Russell Bryant) Date: Fri, 05 Dec 2014 09:39:29 -0500 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <20141205134159.GK2383@redhat.com> References: <20141205134159.GK2383@redhat.com> Message-ID: <5481C3A1.5030900@redhat.com> On 12/05/2014 08:41 AM, Daniel P. Berrange wrote: > On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote: >> One of the things that happens over time is that some of our core >> reviewers move on to other projects. This is a normal and healthy >> thing, especially as nova continues to spin out projects into other >> parts of OpenStack. 
>> >> However, it is important that our core reviewers be active, as it >> keeps them up to date with the current ways we approach development in >> Nova. I am therefore removing some no longer sufficiently active cores >> from the nova-core group. >> >> I?d like to thank the following people for their contributions over the years: >> >> * cbehrens: Chris Behrens >> * vishvananda: Vishvananda Ishaya >> * dan-prince: Dan Prince >> * belliott: Brian Elliott >> * p-draigbrady: Padraig Brady >> >> I?d love to see any of these cores return if they find their available >> time for code reviews increases. > > What stats did you use to decide whether to cull these reviewers ? Looking > at the stats over a 6 month period, I think Padraig Brady is still having > a significant positive impact on Nova - on a par with both cerberus and > alaski who you've not proposing for cut. I think we should keep Padraig > on the team, but probably suggest cutting Markmc instead > > http://russellbryant.net/openstack-stats/nova-reviewers-180.txt > > +-----------------------------+----------------------------------------+----------------+ > | Reviewer | Reviews -2 -1 +1 +2 +A +/- % | Disagreements* | > +-----------------------------+----------------------------------------+----------------+ > | berrange ** | 1766 26 435 12 1293 357 73.9% | 157 ( 8.9%) | > | jaypipes ** | 1359 11 378 436 534 133 71.4% | 109 ( 8.0%) | > | jogo ** | 1053 131 326 7 589 353 56.6% | 47 ( 4.5%) | > | danms ** | 921 67 381 23 450 167 51.4% | 32 ( 3.5%) | > | oomichi ** | 889 4 306 55 524 182 65.1% | 40 ( 4.5%) | > | johngarbutt ** | 808 319 227 10 252 145 32.4% | 37 ( 4.6%) | > | mriedem ** | 642 27 279 25 311 136 52.3% | 17 ( 2.6%) | > | klmitch ** | 606 1 90 2 513 70 85.0% | 67 ( 11.1%) | > | ndipanov ** | 588 19 179 10 380 113 66.3% | 62 ( 10.5%) | > | mikalstill ** | 564 31 34 3 496 207 88.5% | 20 ( 3.5%) | > | cyeoh-0 ** | 546 12 207 30 297 103 59.9% | 35 ( 6.4%) | > | sdague ** | 511 23 89 6 393 229 78.1% | 25 
( 4.9%) | > | russellb ** | 465 6 83 0 376 158 80.9% | 23 ( 4.9%) | > | alaski ** | 415 1 65 21 328 149 84.1% | 24 ( 5.8%) | > | cerberus ** | 405 6 25 48 326 102 92.3% | 33 ( 8.1%) | > | p-draigbrady ** | 376 2 40 9 325 64 88.8% | 49 ( 13.0%) | > | markmc ** | 243 2 54 3 184 69 77.0% | 14 ( 5.8%) | > | belliott ** | 231 1 68 5 157 35 70.1% | 19 ( 8.2%) | > | dan-prince ** | 178 2 48 9 119 29 71.9% | 11 ( 6.2%) | > | cbehrens ** | 132 2 49 2 79 19 61.4% | 6 ( 4.5%) | > | vishvananda ** | 54 0 5 3 46 15 90.7% | 5 ( 9.3%) | > Yeah, I was pretty surprised to see pbrady on this list, as well. The above was 6 months, but even if you drop it to the most recent 3 months, he's still active ... > Reviews for the last 90 days in nova > ** -- nova-core team member > +-----------------------------+---------------------------------------+----------------+ > | Reviewer | Reviews -2 -1 +1 +2 +A +/- % | Disagreements* | > +-----------------------------+---------------------------------------+----------------+ > | berrange ** | 708 13 145 1 549 200 77.7% | 47 ( 6.6%) | > | jogo ** | 594 40 218 4 332 174 56.6% | 27 ( 4.5%) | > | jaypipes ** | 509 10 180 17 302 77 62.7% | 33 ( 6.5%) | > | oomichi ** | 392 1 136 10 245 74 65.1% | 6 ( 1.5%) | > | danms ** | 386 38 155 16 177 77 50.0% | 16 ( 4.1%) | > | ndipanov ** | 345 17 118 7 203 61 60.9% | 32 ( 9.3%) | > | mriedem ** | 304 12 136 12 144 56 51.3% | 12 ( 3.9%) | > | klmitch ** | 281 1 42 0 238 19 84.7% | 32 ( 11.4%) | > | cyeoh-0 ** | 270 11 112 12 135 47 54.4% | 13 ( 4.8%) | > | mikalstill ** | 261 7 8 3 243 106 94.3% | 7 ( 2.7%) | > | sdague ** | 246 19 41 2 184 104 75.6% | 10 ( 4.1%) | > | johngarbutt ** | 216 25 92 7 92 43 45.8% | 8 ( 3.7%) | > | alaski ** | 161 0 17 8 136 81 89.4% | 6 ( 3.7%) | > | cerberus ** | 157 0 9 41 107 41 94.3% | 8 ( 5.1%) | > | p-draigbrady ** | 143 0 21 3 119 26 85.3% | 9 ( 6.3%) | > | russellb ** | 123 1 15 0 107 41 87.0% | 8 ( 6.5%) | > | belliott ** | 66 0 17 2 47 24 74.2% | 5 ( 7.6%) | > | cbehrens 
** | 20 0 4 0 16 2 80.0% | 1 ( 5.0%) | > | vishvananda ** | 18 0 3 0 15 6 83.3% | 2 ( 11.1%) | > | dan-prince ** | 16 0 1 0 15 6 93.8% | 5 ( 31.2%) | -- Russell Bryant From dpyzhov at mirantis.com Fri Dec 5 14:58:21 2014 From: dpyzhov at mirantis.com (Dmitry Pyzhov) Date: Fri, 5 Dec 2014 18:58:21 +0400 Subject: [openstack-dev] [Fuel] Change diagnostic snapshot compression algoritm In-Reply-To: References: <7A5052E0-C658-493B-BCE0-4158F7BF9BB6@mirantis.com> <50CAAE06-BBD9-4296-A49C-8C2F8AB1025E@mirantis.com> Message-ID: I've moved the bug to 6.1. And I'm going to add it to our roadmap as a separate item. On Wed, Nov 26, 2014 at 1:31 PM, Mike Scherbakov wrote: > Can we put it as a work item for diagnostic snapshot improvements, so we > won't forget about this in 6.1? > > > On Tuesday, November 25, 2014, Dmitry Pyzhov wrote: > >> Thank you all for your feedback. Request postponed to the next release. >> We will compare available solutions. >> >> On Mon, Nov 24, 2014 at 2:36 PM, Vladimir Kuklin >> wrote: >> >>> guys, there is already pxz utility in ubuntu repos. let's test it >>> >>> On Mon, Nov 24, 2014 at 2:32 PM, Bart?omiej Piotrowski < >>> bpiotrowski at mirantis.com> wrote: >>> >>>> On 24 Nov 2014, at 12:25, Matthew Mosesohn >>>> wrote: >>>> > I did this exercise over many iterations during Docker container >>>> > packing and found that as long as the data is under 1gb, it's going to >>>> > compress really well with xz. Over 1gb and lrzip looks more attractive >>>> > (but only on high memory systems). In reality, we're looking at log >>>> > footprints from OpenStack environments on the order of 500mb to 2gb. >>>> > >>>> > xz is very slow on single-core systems with 1.5gb of memory, but it's >>>> > quite a bit faster if you run it on a more powerful system. I've found >>>> > level 4 compression to be the best compromise that works well enough >>>> > that it's still far better than gzip. 
If increasing compression time >>>> > by 3-5x is too much for you guys, why not just go to bzip? You'll >>>> > still improve compression but be able to cut back on time. >>>> > >>>> > Best Regards, >>>> > Matthew Mosesohn >>>> >>>> Alpha release of xz supports multithreading via -T (or ?threads) >>>> parameter. >>>> We could also use pbzip2 instead of regular bzip to cut some time on >>>> multi-core >>>> systems. >>>> >>>> Regards, >>>> Bart?omiej Piotrowski >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> >>> >>> -- >>> Yours Faithfully, >>> Vladimir Kuklin, >>> Fuel Library Tech Lead, >>> Mirantis, Inc. >>> +7 (495) 640-49-04 >>> +7 (926) 702-39-68 >>> Skype kuklinvv >>> 45bk3, Vorontsovskaya Str. >>> Moscow, Russia, >>> www.mirantis.com >>> www.mirantis.ru >>> vkuklin at mirantis.com >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> > > -- > Mike Scherbakov > #mihgen > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Dec 5 15:04:51 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 5 Dec 2014 10:04:51 -0500 Subject: [openstack-dev] [oslo] bug triage and review backlog sprint retrospective Message-ID: The Oslo team spent yesterday working on our backlog. We triaged all of our ?New? bugs and re-triaged many of the other existing open bugs, closing some as obsolete or fixed. We reviewed many patches in the backlog as well, and landed quite a few. 
We did not clear as much of the backlog for our "big" libraries as I had
hoped, but we still made good progress on the other libraries. The oslo.db,
oslo.messaging, and taskflow libraries all have quite large backlogs of
reviews to be done. These libraries require some specialized knowledge, and
so it was a little more difficult to include them in the bulk review sprint.
I would like to schedule separate review days for each of these libraries in
turn. We will talk about this at the next team meeting and see if we can
schedule the first for a few weeks from now.

We used https://etherpad.openstack.org/p/oslo-kilo-sprint for coordinating,
and you'll find more detailed notes there if you're interested.

Thanks to everyone who participated!

Doug

From kurt.r.taylor at gmail.com Fri Dec 5 15:08:59 2014
From: kurt.r.taylor at gmail.com (Kurt Taylor)
Date: Fri, 5 Dec 2014 09:08:59 -0600
Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party
In-Reply-To:
References: <547CCA69.4000909@anteaya.info>
 <837B116B6E5B934DA06D9AD0FD79C6A3018E9C60@SHSMSX104.ccr.corp.intel.com>
 <547EB5C9.8020008@rackspace.com>
 <547F8DC1.5000202@anteaya.info>
Message-ID:

In my opinion, further discussion is needed.

The proposal on the table is to have 2 weekly meetings: one at the existing
time of 1800 UTC on Monday and, in the same week, another meeting at 0800
UTC on Tuesday. Here are some of the problems that I see with this approach:

1. Meeting content: Having 2 meetings per week is more than is needed at
this stage of the working group. There just isn't enough meeting content to
justify having two meetings every week.

2. Decisions: Any decision made at one meeting will potentially be undone at
the next, or at least not fully explained. It will be difficult to keep a
consistent direction for the overall work group.

3. Meeting chair(s): Currently we do not have a commitment for a long-term
chair of this new second weekly meeting.
I will not be able to attend this new meeting at the proposed time. 4. Current meeting time: I am not aware of anyone that likes the current time of 1800 UTC on Monday. The current time is the main reason it is hard for EU and APAC CI Operators to attend. My proposal was to have only 1 meeting per week at alternating times, just as other work groups have done to solve this problem. (See examples at: https://wiki.openstack.org/wiki/Meetings) I volunteered to chair, then ask other CI Operators to chair as the meetings evolved. The meeting times could be any between 1300-0300 UTC. That way, one week we are good for US and Europe, the next week for APAC. Kurt Taylor (krtaylor) On Wed, Dec 3, 2014 at 11:10 PM, trinath.somanchi at freescale.com < trinath.somanchi at freescale.com> wrote: > +1. > > -- > Trinath Somanchi - B39208 > trinath.somanchi at freescale.com | extn: 4048 > > -----Original Message----- > From: Anita Kuno [mailto:anteaya at anteaya.info] > Sent: Thursday, December 04, 2014 3:55 AM > To: openstack-infra at lists.openstack.org > Subject: Re: [OpenStack-Infra] [openstack-dev] [third-party]Time for > Additional Meeting for third-party > > On 12/03/2014 03:15 AM, Omri Marcovitch wrote: > > Hello Anteaya, > > > > A meeting between 8:00 - 16:00 UTC time will be great (Israel). > > > > > > Thanks > > Omri > > > > -----Original Message----- > > From: Joshua Hesketh [mailto:joshua.hesketh at rackspace.com] > > Sent: Wednesday, December 03, 2014 9:04 AM > > To: He, Yongli; OpenStack Development Mailing List (not for usage > > questions); openstack-infra at lists.openstack.org > > Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for > > Additional Meeting for third-party > > > > Hey, > > > > 0700 -> 1000 UTC would work for me most weeks fwiw. > > > > Cheers, > > Josh > > > > Rackspace Australia > > > > On 12/3/14 11:17 AM, He, Yongli wrote: > >> anteaya, > >> > >> UTC 7:00 AM to UTC9:00, or UTC11:30 to UTC13:00 is ideal time for china. 
> >> > >> if there is no time slot there, just pick up any time between UTC > >> 7:00 AM to UCT 13:00. ( UTC9:00 to UTC 11:30 is on road to home and > >> dinner.) > >> > >> Yongi He > >> -----Original Message----- > >> From: Anita Kuno [mailto:anteaya at anteaya.info] > >> Sent: Tuesday, December 02, 2014 4:07 AM > >> To: openstack Development Mailing List; > >> openstack-infra at lists.openstack.org > >> Subject: [openstack-dev] [third-party]Time for Additional Meeting for > >> third-party > >> > >> One of the actions from the Kilo Third-Party CI summit session was to > start up an additional meeting for CI operators to participate from > non-North American time zones. > >> > >> Please reply to this email with times/days that would work for you. The > current third party meeting is on Mondays at 1800 utc which works well > since Infra meetings are on Tuesdays. If we could find a time that works > for Europe and APAC that is also on Monday that would be ideal. > >> > >> Josh Hesketh has said he will try to be available for these meetings, > he is in Australia. > >> > >> Let's get a sense of what days and timeframes work for those interested > and then we can narrow it down and pick a channel. > >> > >> Thanks everyone, > >> Anita. > >> > > Okay first of all thanks to everyone who replied. > > Again, to clarify, the purpose of this thread has been to find a suitable > additional third-party meeting time geared towards folks in EU and APAC. We > live on a sphere, there is no time that will suit everyone. > > It looks like we are converging on 0800 UTC as a time and I am going to > suggest Tuesdays. We have very little competition for space at that date > + time combination so we can use #openstack-meeting (I have already > booked the space on the wikipage). > > So barring further discussion, see you then! > > Thanks everyone, > Anita. 
> > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amit.gandhi at RACKSPACE.COM Fri Dec 5 15:40:45 2014 From: amit.gandhi at RACKSPACE.COM (Amit Gandhi) Date: Fri, 5 Dec 2014 15:40:45 +0000 Subject: [openstack-dev] [poppy] Kilo-2 Priorities Message-ID: <946B2D4D-A2B5-48C4-A822-2A0E9D2DEF40@rackspace.com> Thanks to everyone who attended the weekly meeting yesterday. Great job delivering on the kilo-1 deliverables for Poppy CDN. https://launchpad.net/poppy/+milestone/kilo-1 We were able to get the following items dev-complete: - The ability to configure a service containing domains, origins, caching rules, and restrictions with a CDN provider - The ability to purge content from a CDN provider - The ability to define flavors - The ability to check the health of the system - The following Transport drivers - Pecan - The following Storage drivers - Cassandra - The following DNS drivers - Rackspace Cloud DNS - The following CDN providers - Akamai, Fastly, MaxCDN, Amazon CloudFront As we move our focus to the Kilo-2 milestone, let's put the emphasis on testing, bug fixing, refactoring, and making the work done so far reliable and well tested, before we move on to the next set of major features. I have updated the kilo-2 milestone deliverables to reflect these goals. https://launchpad.net/poppy/+milestone/kilo-2 Thanks Amit Gandhi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dprince at redhat.com Fri Dec 5 16:19:38 2014 From: dprince at redhat.com (Dan Prince) Date: Fri, 05 Dec 2014 11:19:38 -0500 Subject: [openstack-dev] [tripleo] Managing no-mergepy template duplication In-Reply-To: <547F135D.6060805@redhat.com> References: <20141203101059.GA23017@t430slt.redhat.com> <547F135D.6060805@redhat.com> Message-ID: <1417796378.28080.2.camel@dovetail.localdomain> On Wed, 2014-12-03 at 14:42 +0100, Tomas Sedovic wrote: > On 12/03/2014 11:11 AM, Steven Hardy wrote: > > Hi all, > > > > Lately I've been spending more time looking at tripleo and doing some > > reviews. I'm particularly interested in helping the no-mergepy and > > subsequent puppet-software-config implementations mature (as well as > > improving overcloud updates via heat). > > > > Since Tomas's patch landed[1] to enable --no-mergepy in > > tripleo-heat-templates, it's become apparent that frequently patches are > > submitted which only update overcloud-source.yaml, so I've been trying to > > catch these and ask for a corresponding change to e.g controller.yaml. > > > > You beat me to this. Thanks for writing it up! > > > This raises the following questions: > > > > 1. Is it reasonable to -1 a patch and ask folks to update in both places? > > I'm in favour. > > > 2. How are we going to handle this duplication and divergence? To follow this up we are getting in really bad shape with divergence already. I found 3 missing sets of Rabbit, Keystone, and Neutron DVR parameters which due to the merge window were properly ported into overcloud-without-mergepy.yaml yet. https://review.openstack.org/#/c/139649/ (missing Rabbit parameters) https://review.openstack.org/#/c/139656/ (missing Keystone parameters) https://review.openstack.org/#/c/139671/ (missing Neutron DVR parameters) We need to be very careful at this point not to continue merging things into overcloud-source.yaml which don't have the equivalent bits for overcloud-without-mergepy.yaml. Dan > > I'm not sure we can. 
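The parameter divergence Dan flags above is mechanical enough to detect automatically: compare the parameter names defined in the merged template with those in the --no-mergepy one and report anything missing. A toy sketch of that check (editor's illustration only; the parameter names are invented, and a real check would parse the 'parameters:' sections of both Heat templates rather than use hard-coded sets):

```python
# Flag parameters present in the merged template but absent from the
# --no-mergepy template. Names below are invented for illustration.
merged_params = {"RabbitUserName", "RabbitPassword",
                 "KeystoneSSLCertificate", "controllerImage"}
no_mergepy_params = {"controllerImage"}

missing = sorted(merged_params - no_mergepy_params)
if missing:
    print("missing from overcloud-without-mergepy.yaml:", missing)
```

Run as part of CI, a check like this would have caught the three missing parameter sets before they merged.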
get_file doesn't handle structured data and I don't > know what else we can do. Maybe we could split out all SoftwareConfig > resources to separate files (like Dan did in [nova config])? But the > SoftwareDeployments, nova servers, etc. have a different structure. > > [nova config] https://review.openstack.org/#/c/130303/ > > > 3. What's the status of getting gating CI on the --no-mergepy templates? > > Derek, can we add a job that's identical to > "check-tripleo-ironic-overcloud-{f20,precise}-nonha" except it passes > "--no-mergepy" to devtest.sh? > > > 4. What barriers exist (now that I've implemented[2] the eliding functionality > > requested[3] for ResourceGroup) to moving to the --no-mergepy > > implementation by default? > > I'm about to post a patch that moves us from ResourceGroup to > AutoScalingGroup (for rolling updates), which is going to complicate > this a bit. > > Barring that, I think you've identified all the requirements: CI job, > parity between the merge/non-merge templates and a process that > maintains it going forward (or puts the old ones in a maintenance-only > mode). > > Anyone have anything else that's missing? > > > > Thanks for any clarification you can provide! 
:) > > Steve > > [1] https://review.openstack.org/#/c/123100/ > > [2] https://review.openstack.org/#/c/128365/ > > [3] https://review.openstack.org/#/c/123713/ > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From joe.gordon0 at gmail.com Fri Dec 5 16:23:32 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Fri, 5 Dec 2014 18:23:32 +0200 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <5481C3A1.5030900@redhat.com> References: <20141205134159.GK2383@redhat.com> <5481C3A1.5030900@redhat.com> Message-ID: On Fri, Dec 5, 2014 at 4:39 PM, Russell Bryant wrote: > On 12/05/2014 08:41 AM, Daniel P. Berrange wrote: > > On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote: > >> One of the things that happens over time is that some of our core > >> reviewers move on to other projects. This is a normal and healthy > >> thing, especially as nova continues to spin out projects into other > >> parts of OpenStack. > >> > >> However, it is important that our core reviewers be active, as it > >> keeps them up to date with the current ways we approach development in > >> Nova. I am therefore removing some no longer sufficiently active cores > >> from the nova-core group. > >> > >> I'd like to thank the following people for their contributions over the years: > >> > >> * cbehrens: Chris Behrens > >> * vishvananda: Vishvananda Ishaya > >> * dan-prince: Dan Prince > >> * belliott: Brian Elliott > >> * p-draigbrady: Padraig Brady > >> > >> I'd love to see any of these cores return if they find their available >> time for code reviews increases. 
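For reference when reading the review-stats tables quoted below: the "+/- %" column appears to be the share of positive (+1/+2) votes among all votes a reviewer cast, and can be recomputed from a row's counts. A small sketch (the column's definition is inferred from the rows, not stated in the thread):

```python
# Recompute the "+/- %" column from a row's -2/-1/+1/+2 vote counts,
# assuming it is the share of positive votes among all votes cast.
def positive_pct(minus2, minus1, plus1, plus2):
    total = minus2 + minus1 + plus1 + plus2
    return round(100.0 * (plus1 + plus2) / total, 1)

print(positive_pct(26, 435, 12, 1293))  # berrange, 180-day table -> 73.9
print(positive_pct(2, 40, 9, 325))      # p-draigbrady, 180-day table -> 88.8
```

Both values match the corresponding rows in the 180-day table, which supports that reading of the column.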
> > > > What stats did you use to decide whether to cull these reviewers ? > Looking > > at the stats over a 6 month period, I think Padraig Brady is still having > > a significant positive impact on Nova - on a par with both cerberus and > > alaski who you've not proposing for cut. I think we should keep Padraig > > on the team, but probably suggest cutting Markmc instead > > > > http://russellbryant.net/openstack-stats/nova-reviewers-180.txt > > > > > +-----------------------------+----------------------------------------+----------------+ > > | Reviewer | Reviews -2 -1 +1 +2 +A +/- % | > Disagreements* | > > > +-----------------------------+----------------------------------------+----------------+ > > | berrange ** | 1766 26 435 12 1293 357 73.9% > | 157 ( 8.9%) | > > | jaypipes ** | 1359 11 378 436 534 133 71.4% > | 109 ( 8.0%) | > > | jogo ** | 1053 131 326 7 589 353 56.6% > | 47 ( 4.5%) | > > | danms ** | 921 67 381 23 450 167 51.4% > | 32 ( 3.5%) | > > | oomichi ** | 889 4 306 55 524 182 65.1% > | 40 ( 4.5%) | > > | johngarbutt ** | 808 319 227 10 252 145 32.4% > | 37 ( 4.6%) | > > | mriedem ** | 642 27 279 25 311 136 52.3% > | 17 ( 2.6%) | > > | klmitch ** | 606 1 90 2 513 70 85.0% > | 67 ( 11.1%) | > > | ndipanov ** | 588 19 179 10 380 113 66.3% > | 62 ( 10.5%) | > > | mikalstill ** | 564 31 34 3 496 207 88.5% > | 20 ( 3.5%) | > > | cyeoh-0 ** | 546 12 207 30 297 103 59.9% > | 35 ( 6.4%) | > > | sdague ** | 511 23 89 6 393 229 78.1% > | 25 ( 4.9%) | > > | russellb ** | 465 6 83 0 376 158 80.9% > | 23 ( 4.9%) | > > | alaski ** | 415 1 65 21 328 149 84.1% > | 24 ( 5.8%) | > > | cerberus ** | 405 6 25 48 326 102 92.3% > | 33 ( 8.1%) | > > | p-draigbrady ** | 376 2 40 9 325 64 88.8% > | 49 ( 13.0%) | > > | markmc ** | 243 2 54 3 184 69 77.0% > | 14 ( 5.8%) | > > | belliott ** | 231 1 68 5 157 35 70.1% > | 19 ( 8.2%) | > > | dan-prince ** | 178 2 48 9 119 29 71.9% > | 11 ( 6.2%) | > > | cbehrens ** | 132 2 49 2 79 19 61.4% > | 6 ( 4.5%) | > > | vishvananda ** 
| 54 0 5 3 46 15 90.7% > | 5 ( 9.3%) | > > > > Yeah, I was pretty surprised to see pbrady on this list, as well. The > above was 6 months, but even if you drop it to the most recent 3 months, > he's still active ... > As you are more then aware of, our policy for removing people from core is to leave that up the the PTL (I believe you wrote that) [0]. And I don't think numbers alone are a good metric for sorting out who to remove. That being said no matter what happens, with our fast track policy, if pbrady is dropped it shouldn't be hard to re-add him. [0] https://wiki.openstack.org/wiki/Nova/CoreTeam#Adding_or_Removing_Members > > > > Reviews for the last 90 days in nova > > ** -- nova-core team member > > > +-----------------------------+---------------------------------------+----------------+ > > | Reviewer | Reviews -2 -1 +1 +2 +A +/- % | > Disagreements* | > > > +-----------------------------+---------------------------------------+----------------+ > > | berrange ** | 708 13 145 1 549 200 77.7% | > 47 ( 6.6%) | > > | jogo ** | 594 40 218 4 332 174 56.6% | > 27 ( 4.5%) | > > | jaypipes ** | 509 10 180 17 302 77 62.7% | > 33 ( 6.5%) | > > | oomichi ** | 392 1 136 10 245 74 65.1% | > 6 ( 1.5%) | > > | danms ** | 386 38 155 16 177 77 50.0% | > 16 ( 4.1%) | > > | ndipanov ** | 345 17 118 7 203 61 60.9% | > 32 ( 9.3%) | > > | mriedem ** | 304 12 136 12 144 56 51.3% | > 12 ( 3.9%) | > > | klmitch ** | 281 1 42 0 238 19 84.7% | > 32 ( 11.4%) | > > | cyeoh-0 ** | 270 11 112 12 135 47 54.4% | > 13 ( 4.8%) | > > | mikalstill ** | 261 7 8 3 243 106 94.3% | > 7 ( 2.7%) | > > | sdague ** | 246 19 41 2 184 104 75.6% | > 10 ( 4.1%) | > > | johngarbutt ** | 216 25 92 7 92 43 45.8% | > 8 ( 3.7%) | > > | alaski ** | 161 0 17 8 136 81 89.4% | > 6 ( 3.7%) | > > | cerberus ** | 157 0 9 41 107 41 94.3% | > 8 ( 5.1%) | > > | p-draigbrady ** | 143 0 21 3 119 26 85.3% | > 9 ( 6.3%) | > > | russellb ** | 123 1 15 0 107 41 87.0% | > 8 ( 6.5%) | > > | belliott ** | 66 0 17 2 47 24 
74.2% | > 5 ( 7.6%) | > > | cbehrens ** | 20 0 4 0 16 2 80.0% | > 1 ( 5.0%) | > > | vishvananda ** | 18 0 3 0 15 6 83.3% | > 2 ( 11.1%) | > > | dan-prince ** | 16 0 1 0 15 6 93.8% | > 5 ( 31.2%) | > > > -- > Russell Bryant > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbryant at redhat.com Fri Dec 5 16:36:37 2014 From: rbryant at redhat.com (Russell Bryant) Date: Fri, 05 Dec 2014 11:36:37 -0500 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: References: <20141205134159.GK2383@redhat.com> <5481C3A1.5030900@redhat.com> Message-ID: <5481DF15.4070301@redhat.com> On 12/05/2014 11:23 AM, Joe Gordon wrote: > As you are more then aware of, our policy for removing people from core > is to leave that up the the PTL (I believe you wrote that) [0]. And I > don't think numbers alone are a good metric for sorting out who to > remove. That being said no matter what happens, with our fast track > policy, if pbrady is dropped it shouldn't be hard to re-add him. Yes, I'm aware of and not questioning the policy. Usually drops are pretty obvious. This one wasn't. It seems reasonable to discuss. Maybe we don't have a common set of expectations. Anyway, I'll follow up in private. -- Russell Bryant From matthew.gilliard at gmail.com Fri Dec 5 16:37:17 2014 From: matthew.gilliard at gmail.com (Matthew Gilliard) Date: Fri, 5 Dec 2014 16:37:17 +0000 Subject: [openstack-dev] [nova] global or per-project specific ssl config options, or both? In-Reply-To: References: <547FE9BA.1010700@linux.vnet.ibm.com> <5480D75A.2010607@linux.vnet.ibm.com> Message-ID: I just put up a quick pre-weekend POC at https://review.openstack.org/#/c/139672/ - comments welcome on that patch. 
Thanks :) On Fri, Dec 5, 2014 at 10:07 AM, Matthew Gilliard wrote: > Hi Matt, Nova, > > I'll look into this. > > Gilliard > > On Thu, Dec 4, 2014 at 9:51 PM, Matt Riedemann > wrote: >> >> >> On 12/4/2014 6:02 AM, Davanum Srinivas wrote: >>> >>> +1 to @markmc's "default is global value and override for project >>> specific key" suggestion. >>> >>> -- dims >>> >>> >>> >>> On Wed, Dec 3, 2014 at 11:57 PM, Matt Riedemann >>> wrote: >>>> >>>> I've posted this to the 12/4 nova meeting agenda but figured I'd >>>> socialize >>>> it here also. >>>> >>>> SSL options - do we make them per-project or global, or both? Neutron and >>>> Cinder have config-group specific SSL options in nova, Glance is using >>>> oslo >>>> sslutils global options since Juno which was contentious for a time in a >>>> separate review in Icehouse [1]. >>>> >>>> Now [2] wants to break that out for Glance, but we also have a patch [3] >>>> for >>>> Keystone to use the global oslo SSL options, we should be consistent, but >>>> does that require a blueprint now? >>>> >>>> In the Icehouse patch, markmc suggested using a DictOpt where the default >>>> value is the global value, which could be coming from the oslo [ssl] >>>> group >>>> and then you could override that with a project-specific key, e.g. >>>> cinder, >>>> neutron, glance, keystone. >>>> >>>> [1] https://review.openstack.org/#/c/84522/ >>>> [2] https://review.openstack.org/#/c/131066/ >>>> [3] https://review.openstack.org/#/c/124296/ >>>> >>>> -- >>>> >>>> Thanks, >>>> >>>> Matt Riedemann >>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> >> >> The consensus in the nova meeting today, I think, was that we generally like >> the idea of the DictOpt with global oslo ssl as the default and then be able >> to configure that per-service if needed. 
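The consensus described above (a global oslo [ssl] value as the default, overridable per service via a dict-style option) can be illustrated with a small lookup sketch. This is plain Python rather than actual oslo.config code, and the option keys, paths, and service names are examples only:

```python
# Toy illustration of markmc's DictOpt precedence: a per-service override
# wins if present, otherwise the global [ssl] value applies.
GLOBAL_SSL = {"ca_file": "/etc/ssl/ca.pem"}                   # global [ssl]
PER_SERVICE_SSL = {"glance": {"ca_file": "/etc/glance/ca.pem"}}  # overrides

def ssl_option(service, key):
    """Per-service override if present, else the global [ssl] default."""
    return PER_SERVICE_SSL.get(service, {}).get(key, GLOBAL_SSL.get(key))

print(ssl_option("glance", "ca_file"))   # override -> /etc/glance/ca.pem
print(ssl_option("neutron", "ca_file"))  # fallback -> /etc/ssl/ca.pem
```

The real POC would express the same precedence with cfg.DictOpt so operators configure only the services that genuinely differ from the global setting.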
>> >> Does anyone want to put up a POC on how that would work to see how ugly >> and/or usable that would be? I haven't dug into the DictOpt stuff yet and >> am kind of time-constrained at the moment. >> >> >> -- >> >> Thanks, >> >> Matt Riedemann >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From joe.gordon0 at gmail.com Fri Dec 5 17:18:42 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Fri, 5 Dec 2014 19:18:42 +0200 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <5481DF15.4070301@redhat.com> References: <20141205134159.GK2383@redhat.com> <5481C3A1.5030900@redhat.com> <5481DF15.4070301@redhat.com> Message-ID: On Dec 5, 2014 11:39 AM, "Russell Bryant" wrote: > > On 12/05/2014 11:23 AM, Joe Gordon wrote: > > As you are more then aware of, our policy for removing people from core > > is to leave that up the the PTL (I believe you wrote that) [0]. And I > > don't think numbers alone are a good metric for sorting out who to > > remove. That being said no matter what happens, with our fast track > > policy, if pbrady is dropped it shouldn't be hard to re-add him. > > Yes, I'm aware of and not questioning the policy. Usually drops are > pretty obvious. This one wasn't. It seems reasonable to discuss. > Maybe we don't have a common set of expectations. Anyway, I'll follow > up in private. > Agreed > -- > Russell Bryant > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chuckjcarlino at gmail.com Fri Dec 5 17:23:50 2014 From: chuckjcarlino at gmail.com (Chuck Carlino) Date: Fri, 05 Dec 2014 09:23:50 -0800 Subject: [openstack-dev] [neutron] - the setup of a DHCP sub-group In-Reply-To: References: Message-ID: <5481EA26.7040003@gmail.com> On 11/26/2014 08:55 PM, Don Kehn wrote: > Sure, will try and gen to it over the holiday, do you have a link to > the spec repo? > Hi Don, Has there been any progress on a DHCP sub-group? Regards, Chuck > On Mon, Nov 24, 2014 at 3:27 PM, Carl Baldwin > wrote: > > Don, > > Could the spec linked to your BP be moved to the specs repository? > I'm hesitant to start reading it as a google doc when I know I'm going > to want to make comments and ask questions. > > Carl > > On Thu, Nov 13, 2014 at 9:19 AM, Don Kehn > wrote: > > If this shows up twice sorry for the repeat: > > > > Armando, Carl: > > During the Summit, Armando and I had a very quick conversation > concern a > > blue print that I submitted, > > > https://blueprints.launchpad.net/neutron/+spec/dhcp-cpnr-integration > and > > Armando had mention the possibility of getting together a > sub-group tasked > > with DHCP Neutron concerns. I have talk with Infoblox folks (see > > https://blueprints.launchpad.net/neutron/+spec/neutron-ipam), > and everyone > > seems to be in agreement that there is synergy especially > concerning the > > development of a relay and potentially looking into how DHCP is > handled. In > > addition during the Fridays meetup session on DHCP that I gave > there seems > > to be some general interest by some of the operators as well. > > > > So what would be the formality in going forth to start a > sub-group and > > getting this underway? 
> > > > DeKehn > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > Don Kehn > 303-442-0060 > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From german.eichberger at hp.com Fri Dec 5 17:36:19 2014 From: german.eichberger at hp.com (Eichberger, German) Date: Fri, 5 Dec 2014 17:36:19 +0000 Subject: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. In-Reply-To: References: <1416867738.3960.19.camel@localhost> <1417737958.4577.25.camel@localhost> Message-ID: Hi Brandon + Stephen, Having all those permutations (and potentially testing them) made us lean against the sharing case in the first place. It?s just a lot of extra work for only a small number of our customers. German From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] Sent: Thursday, December 04, 2014 9:17 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. Hi Brandon, Yeah, in your example, member1 could potentially have 8 different statuses (and this is a small example!)... If that member starts flapping, it means that every time it flaps there are 8 notifications being passed upstream. Note that this problem actually doesn't get any better if we're not sharing objects but are just duplicating them (ie. 
not sharing objects but the user makes references to the same back-end machine as 8 different members.) To be honest, I don't see sharing entities at many levels like this being the rule for most of our installations-- maybe a few percentage points of installations will do an excessive sharing of members, but I doubt it. So really, even though reporting status like this is likely to generate a pretty big tree of data, I don't think this is actually a problem, eh. And I don't see sharing entities actually reducing the workload of what needs to happen behind the scenes. (It just allows us to conceal more of this work from the user.) Stephen On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan > wrote: Sorry it's taken me a while to respond to this. So I wasn't thinking about this correctly. I was afraid you would have to pass in a full tree of parent-child representations to /loadbalancers to update anything a load balancer is associated with (including down to members). However, after thinking about it, a user would just make an association call on each object. For example, associate member1 with pool1, associate pool1 with listener1, then associate loadbalancer1 with listener1. Updating is just as simple as updating each entity. This does bring up another problem though. If a listener can live on many load balancers, and a pool can live on many listeners, and a member can live on many pools, there are a lot of permutations to keep track of for status. You can't just link a member's status to a load balancer because a member can exist on many pools under that load balancer, and each pool can exist under many listeners under that load balancer. For example, say I have these: lb1 lb2 listener1 listener2 pool1 pool2 member1 member2 lb1 -> [listener1, listener2] lb2 -> [listener1] listener1 -> [pool1, pool2] listener2 -> [pool1] pool1 -> [member1, member2] pool2 -> [member1] member1 can now have different statuses under pool1 and pool2. 
since listener1 and listener2 both have pool1, this means member1 will now have a different status for the listener1 -> pool1 and listener2 -> pool1 combinations. And so forth for load balancers. Basically there's a lot of permutations and combinations to keep track of with this model for statuses. Showing these in the body of load balancer details can get quite large. I hope this makes sense because my brain is ready to explode. Thanks, Brandon On Thu, 2014-11-27 at 08:52 +0000, Samuel Bercovici wrote: > Brandon, can you please explain further (1) below? > > -----Original Message----- > From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM] > Sent: Tuesday, November 25, 2014 12:23 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. > > My impression is that the statuses of each entity will be shown on a detailed info request of a loadbalancer. The root level objects would not have any statuses. For example a user makes a GET request to /loadbalancers/{lb_id} and the status of every child of that load balancer is shown in a "status_tree" json object. For example: > > {"name": "loadbalancer1", > "status_tree": > {"listeners": > [{"name": "listener1", "operating_status": "ACTIVE", > "default_pool": > {"name": "pool1", "status": "ACTIVE", > "members": > [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}]}} > > Sam, correct me if I am wrong. > > I generally like this idea. I do have a few reservations with this: > > 1) Creating and updating a load balancer requires a full tree configuration with the current extension/plugin logic in neutron. Since updates will require a full tree, it means the user would have to know the full tree configuration just to simply update a name. Solving this would require nested child resources in the URL, which the current neutron extension/plugin does not allow. Maybe the new one will. 
> > 2) The status_tree can get quite large depending on the number of listeners and pools being used. This is a minor issue really as it will make horizon's (or any other UI tool's) job easier to show statuses. > > Thanks, > Brandon > > On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote: > > Hi Samuel, > > > > > > We've actually been avoiding having a deeper discussion about status > > in Neutron LBaaS since this can get pretty hairy as the back-end > > implementations get more complicated. I suspect managing that is > > probably one of the bigger reasons we have disagreements around object > > sharing. Perhaps it's time we discussed representing state "correctly" > > (whatever that means), instead of a round-a-bout discussion about > > object sharing (which, I think, is really just avoiding this issue)? > > > > > > Do you have a proposal about how status should be represented > > (possibly including a description of the state machine) if we collapse > > everything down to be logical objects except the loadbalancer object? > > (From what you're proposing, I suspect it might be too general to, for > > example, represent the UP/DOWN status of members of a given pool.) > > > > > > Also, from an haproxy perspective, sharing pools within a single > > listener actually isn't a problem. That is to say, having the same > > L7Policy pointing at the same pool is OK, so I personally don't have a > > problem allowing sharing of objects within the scope of parent > > objects. What do the rest of y'all think? > > > > > > Stephen > > > > > > > > On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici > > > wrote: > > Hi Stephen, > > > > > > > > 1. The issue is that if we do 1:1 and allow status/state > > to proliferate throughout all objects we will then get an > > issue to fix it later, hence even if we do not do sharing, I > > would still like to have all objects besides LB be treated as > > logical. > > > > 2. 
The 3rd use case below will not be reasonable without > > pool sharing between different policies. Specifying different > > pools which are the same for each policy makes it a > > non-starter to me. > > > > > > -Sam. > > > > > > > > > > > > > > From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] > > Sent: Friday, November 21, 2014 10:26 PM > > To: OpenStack Development Mailing List (not for usage > > questions) > > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects > > in LBaaS - Use Cases that led us to adopt this. > > > > > > > > I think the idea was to implement 1:1 initially to reduce the > > amount of code and operational complexity we'd have to deal > > with in initial revisions of LBaaS v2. Many to many can be > > simulated in this scenario, though it does shift the burden of > > maintenance to the end user. It does greatly simplify the > > initial code for v2, in any case, though. > > > > > > > > > > > > Did we ever agree to allowing listeners to be shared among > > load balancers? I think that still might be an N:1 > > relationship even in our latest models. > > > > > > > > > > There's also the difficulty introduced by supporting different > > flavors: Since flavors are essentially an association between > > a load balancer object and a driver (with parameters), once > > flavors are introduced, any sub-objects of a given load > > balancer object must necessarily be purely logical until they > > are associated with a load balancer. I know there was talk of > > forcing these objects to be sub-objects of a load balancer > > which can't be accessed independently of the load balancer > > (which would have much the same effect as what you discuss: > > State / status only make sense once logical objects have an > > instantiation somewhere.) However, the currently proposed API > > treats most objects as root objects, which breaks this > > paradigm. 
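Brandon's permutation concern from earlier in the thread can be made concrete by enumerating the distinct load-balancer/listener/pool paths that would each carry a status for member1 under his example topology. This is an editor's sketch, not Neutron code, and the exact count depends on what one chooses to tally (hence the differing figures quoted in the thread):

```python
# Enumerate the (load balancer, listener, pool) paths reaching member1 in
# the topology Brandon lists; each path could report a distinct status.
lbs = {"lb1": ["listener1", "listener2"], "lb2": ["listener1"]}
listeners = {"listener1": ["pool1", "pool2"], "listener2": ["pool1"]}
pools = {"pool1": ["member1", "member2"], "pool2": ["member1"]}

paths = [(lb, lsn, pool)
         for lb, lsns in sorted(lbs.items())
         for lsn in lsns
         for pool in listeners[lsn]
         if "member1" in pools[pool]]
print(len(paths), paths)
```

Even this toy topology yields five status-bearing paths for a single member, which is the bookkeeping burden both sides of the sharing debate are weighing.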
> > > > > > > > > > > > How we handle status and updates once there's an instantiation > > of these logical objects is where we start getting into real > > complexity. > > > > > > > > > > > > It seems to me there's a lot of complexity introduced when we > > allow a lot of many to many relationships without a whole lot > > of benefit in real-world deployment scenarios. In most cases, > > objects are not going to be shared, and in those cases with > > sufficiently complicated deployments in which shared objects > > could be used, the user is likely to be sophisticated enough > > and skilled enough to manage updating what are essentially > > "copies" of objects, and would likely have an opinion about > > how individual failures should be handled which wouldn't > > necessarily coincide with what we developers of the system > > would assume. That is to say, allowing too many many to many > > relationships feels like a solution to a problem that doesn't > > really exist, and introduces a lot of unnecessary complexity. > > > > > > > > > > > > In any case, though, I feel like we should walk before we run: > > Implementing 1:1 initially is a good idea to get us rolling. > > Whether we then implement 1:N or M:N after that is another > > question entirely. But in any case, it seems like a bad idea > > to try to start with M:N. > > > > > > > > > > > > Stephen > > > > > > > > > > > > > > > > On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici > > > wrote: > > > > Hi, > > > > Per discussion I had at OpenStack Summit/Paris with Brandon > > and Doug, I would like to remind everyone why we choose to > > follow a model where pools and listeners are shared (many to > > many relationships). > > > > Use Cases: > > 1. The same application is being exposed via different LB > > objects. 
> > For example: users coming from the internal "private" > > organization network, have an LB1(private_VIP) --> > > Listener1(TLS) -->Pool1 and user coming from the "internet", > > have LB2(public_vip)-->Listener1(TLS)-->Pool1. > > This may also happen to support ipv4 and ipv6: LB_v4(ipv4_VIP) > > --> Listener1(TLS) -->Pool1 and LB_v6(ipv6_VIP) --> > > Listener1(TLS) -->Pool1 > > The operator would like to be able to manage the pool > > membership in cases of updates and error in a single place. > > > > 2. The same group of servers is being used via different > > listeners optionally also connected to different LB objects. > > For example: users coming from the internal "private" > > organization network, have an LB1(private_VIP) --> > > Listener1(HTTP) -->Pool1 and user coming from the "internet", > > have LB2(public_vip)-->Listener2(TLS)-->Pool1. > > The LBs may use different flavors as LB2 needs TLS termination > > and may prefer a different "stronger" flavor. > > The operator would like to be able to manage the pool > > membership in cases of updates and error in a single place. > > > > 3. The same group of servers is being used in several > > different L7_Policies connected to a listener. Such listener > > may be reused as in use case 1. > > For example: LB1(VIP1)-->Listener_L7(TLS) > > | > > > > +-->L7_Policy1(rules..)-->Pool1 > > | > > > > +-->L7_Policy2(rules..)-->Pool2 > > | > > > > +-->L7_Policy3(rules..)-->Pool1 > > | > > > > +-->L7_Policy3(rules..)-->Reject > > > > > > I think that the "key" issue handling correctly the > > "provisioning" state and the operation state in a many to many > > model. > > This is an issue as we have attached status fields to each and > > every object in the model. > > A side effect of the above is that to understand the > > "provisioning/operation" status one needs to check many > > different objects. > > > > To remedy this, I would like to turn all objects besides the > > LB to be logical objects. 
This means that the only place to > > manage the status/state will be on the LB object. > > Such status should be hierarchical so that logical object > > attached to an LB, would have their status consumed out of the > > LB object itself (in case of an error). > > We also need to discuss how modifications of a logical object > > will be "rendered" to the concrete LB objects. > > You may want to revisit > > https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r the "Logical Model + Provisioning Status + Operation Status + Statistics" for a somewhat more detailed explanation albeit it uses the LBaaS v1 model as a reference. > > > > Regards, > > -Sam. > > > > > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > > > -- > > > > Stephen Balukoff > > Blue Box Group, LLC > > (800)613-4305 x807 > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > -- > > Stephen Balukoff > > Blue Box Group, LLC > > (800)613-4305 x807 > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev 
at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Dec 5 17:43:20 2014 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 5 Dec 2014 11:43:20 -0600 Subject: [openstack-dev] [all] bugs with paste pipelines and multiple projects and upgrading In-Reply-To: <547F39A6.6050204@dague.net> References: <547F29B8.9050309@dague.net> <547F39A6.6050204@dague.net> Message-ID: A review has been posted allowing proper upgrades to the Keystone paste file in grenade, and the XML references have been removed for the upgrade case [1]. There is also documentation in the Kilo Release Notes detailing the upgrade process for XML removal from Juno to Kilo [2]. [1] https://review.openstack.org/#/c/139051/ [2] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Upgrade_Notes On Wed, Dec 3, 2014 at 10:26 AM, Sean Dague wrote: > On 12/03/2014 10:57 AM, Lance Bragstad wrote: > > > > > > On Wed, Dec 3, 2014 at 9:18 AM, Sean Dague > > wrote: > > > > We've hit two interesting issues this week around multiple projects > > installing into the paste pipeline of a server. > > > > 1) the pkg_resources explosion in grenade. Basically ceilometer modified > swift paste.ini to add its own code into swift (that's part of normal > ceilometer install in devstack - > https://github.com/openstack-dev/devstack/blob/master/lib/swift#L376-L381 > > This meant when we upgraded and started swift, it turns out that we're > actually running old ceilometer code. A requirements mismatch caused an > explosion (which we've since worked around), but it demonstrates a > clear problem with installing code in another project's pipeline. > > 2) keystone is having issues dropping XML api support. 
It turns out > that > > parts of its paste pipeline are actually provided by keystone > > middleware, which means that keystone can't provide a sane "this is not > > supported" message in a proxy class for older paste config files. > > > > > > I made an attempt to capture some of the information on the specific > > grenade case we were hitting for XML removal in a bug report [1]. We can > > keep the classes in keystone/middleware/core.py as a workaround for now > > with essentially another deprecation message, but at some point we > > should pull the plug on defining XmlBodyMiddleware in our > > keystone-paste.ini [2] as it won't do anything anyway and shouldn't be > > in the configuration. Since this deals with a configuration change, this > > could "always" break a customer. What criteria should we follow for > > cases like this? > > > > From visiting with Sean in -qa, typically service configurations don't > > change for the grenade target on upgrade, but if we have to make a > > change on upgrade (to clean out old cruft for example), how do we go > > about that? > > Add content here - > https://github.com/openstack-dev/grenade/tree/master/from-juno > > Note: you'll get a -2 unless you provide a link to Release Notes > somewhere that highlights this as an Upgrade Impact for users for the > next release. > > -Sean > > -- > Sean Dague > http://dague.net > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cboylan at sapwetik.org Fri Dec 5 17:50:39 2014 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 05 Dec 2014 09:50:39 -0800 Subject: [openstack-dev] Session length on wiki.openstack.org In-Reply-To: <20141205142645.GL2497@yuggoth.org> References: <20141205010348.GY84915@thor.bakeyournoodle.com> <20141205142645.GL2497@yuggoth.org> Message-ID: <1417801839.4046797.199334109.686351C1@webmail.messagingengine.com> On Fri, Dec 5, 2014, at 06:26 AM, Jeremy Stanley wrote: > On 2014-12-04 18:37:48 -0700 (-0700), Carl Baldwin wrote: > > +1 I've been meaning to say something like this but never got > > around to it. Thanks for speaking up. > > https://storyboard.openstack.org/#!/story/1172753 > > I think Ryan said it might be a bug in the OpenID plug-in, but if so > he didn't put that comment in the bug. > -- > Jeremy Stanley > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev I dug up an upstream bug for this [0]; unfortunately, it is quite terse and not very helpful. Anyone with existing mediawiki account credentials want to ping that thread and see if we can get more info? [0] http://www.mediawiki.org/w/index.php?title=Thread:Extension_talk:OpenID/Is_it_possible_to_change_the_expiration_of_the_session/cookie_for_logged-in_users%3F&lqt_method=thread_history Clark From dtroyer at gmail.com Fri Dec 5 17:58:31 2014 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 5 Dec 2014 11:58:31 -0600 Subject: [openstack-dev] [devstack] set -o nounset in devstack? In-Reply-To: <5481A8F8.5080104@dague.net> References: <5481A8F8.5080104@dague.net> Message-ID: On Fri, Dec 5, 2014 at 6:45 AM, Sean Dague wrote: > I got bit by another bug yesterday where there was a typo between > variables in the source tree. So I started down the process of set -o > nounset to see how bad it would be to prevent that in the future. > [...] 
> The trueorfalse issue can be fixed if we change the function to be: > > function trueorfalse { > local xtrace=$(set +o | grep xtrace) > set +o xtrace > local default=$1 > local testval="${!2+x}" > > [[ -z "$testval" ]] && { echo "$default"; return; } > There should be an $xtrace in that return path > [[ "0 no No NO false False FALSE" =~ "$testval" ]] && { echo > "False"; return; } > [[ "1 yes Yes YES true True TRUE" =~ "$testval" ]] && { echo "True"; > return; } > echo "$default" > $xtrace > } > > > FOO=$(trueorfalse True FOO) > > ... then works. > I'm good with this. > the -z and -n bits can be addressed with either FOO=${FOO:-} or an isset > function that interpolates. FOO=${FOO:-} actually feels better to me > because it's part of the spirit of things. > I think I agree, but we have a lot of is_*() functions, so that wouldn't be too far of a departure; I could be convinced either way, I suppose. This is going to be the hard part of the cleanup and ongoing enforcement. > So... the question is, is this worth it? It's going to have fallout in > lesser-used parts of the code where we don't catch things (like -o > errexit did). However it should help flush out a class of bugs in the > process. > This is going to be a long process to do the change; I think we will need to bracket parts of the code as they get cleaned up to avoid regressions slipping in. dt -- Dean Troyer dtroyer at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbalukoff at bluebox.net Fri Dec 5 17:59:15 2014 From: sbalukoff at bluebox.net (Stephen Balukoff) Date: Fri, 5 Dec 2014 09:59:15 -0800 Subject: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. In-Reply-To: References: <1416867738.3960.19.camel@localhost> <1417737958.4577.25.camel@localhost> Message-ID: German-- but the point is that sharing apparently has no effect on the number of permutations for status information. 
The only difference here is that without sharing it's more work for the user to maintain and modify trees of objects. On Fri, Dec 5, 2014 at 9:36 AM, Eichberger, German wrote: > Hi Brandon + Stephen, > > > > Having all those permutations (and potentially testing them) made us lean > against the sharing case in the first place. It's just a lot of extra work > for only a small number of our customers. > > > > German > > > > *From:* Stephen Balukoff [mailto:sbalukoff at bluebox.net] > *Sent:* Thursday, December 04, 2014 9:17 PM > > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - > Use Cases that led us to adopt this. > > > > Hi Brandon, > > > > Yeah, in your example, member1 could potentially have 8 different statuses > (and this is a small example!)... If that member starts flapping, it means > that every time it flaps there are 8 notifications being passed upstream. > > > > Note that this problem actually doesn't get any better if we're not > sharing objects but are just duplicating them (ie. not sharing objects but > the user makes references to the same back-end machine as 8 different > members.) > > > > To be honest, I don't see sharing entities at many levels like this being > the rule for most of our installations-- maybe a few percentage points of > installations will do an excessive sharing of members, but I doubt it. So > really, even though reporting status like this is likely to generate a > pretty big tree of data, I don't think this is actually a problem, eh. And > I don't see sharing entities actually reducing the workload of what needs > to happen behind the scenes. (It just allows us to conceal more of this > work from the user.) > > > > Stephen > > > > > > > > On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan > wrote: > > Sorry it's taken me a while to respond to this. > > So I wasn't thinking about this correctly. I was afraid you would have
I was afraid you would have > to pass in a full tree of parent child representations to /loadbalancers > to update anything a load balancer it is associated to (including down > to members). However, after thinking about it, a user would just make > an association call on each object. For Example, associate member1 with > pool1, associate pool1 with listener1, then associate loadbalancer1 with > listener1. Updating is just as simple as updating each entity. > > This does bring up another problem though. If a listener can live on > many load balancers, and a pool can live on many listeners, and a member > can live on many pools, there's lot of permutations to keep track of for > status. you can't just link a member's status to a load balancer bc a > member can exist on many pools under that load balancer, and each pool > can exist under many listeners under that load balancer. For example, > say I have these: > > lb1 > lb2 > listener1 > listener2 > pool1 > pool2 > member1 > member2 > > lb1 -> [listener1, listener2] > lb2 -> [listener1] > listener1 -> [pool1, pool2] > listener2 -> [pool1] > pool1 -> [member1, member2] > pool2 -> [member1] > > member1 can now have a different statuses under pool1 and pool2. since > listener1 and listener2 both have pool1, this means member1 will now > have a different status for listener1 -> pool1 and listener2 -> pool2 > combination. And so forth for load balancers. > > Basically there's a lot of permutations and combinations to keep track > of with this model for statuses. Showing these in the body of load > balancer details can get quite large. > > I hope this makes sense because my brain is ready to explode. > > Thanks, > Brandon > > > On Thu, 2014-11-27 at 08:52 +0000, Samuel Bercovici wrote: > > Brandon, can you please explain further (1) bellow? 
> > > > -----Original Message----- > > From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM] > > Sent: Tuesday, November 25, 2014 12:23 AM > > To: openstack-dev at lists.openstack.org > > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - > Use Cases that led us to adopt this. > > > > My impression is that the statuses of each entity will be shown on a > detailed info request of a loadbalancer. The root level objects would not > have any statuses. For example a user makes a GET request to > /loadbalancers/{lb_id} and the status of every child of that load balancer > is shown in a "status_tree" json object. For example:
> >
> > {"name": "loadbalancer1",
> >  "status_tree":
> >    {"listeners":
> >      [{"name": "listener1", "operating_status": "ACTIVE",
> >        "default_pool":
> >          {"name": "pool1", "status": "ACTIVE",
> >           "members":
> >             [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}]}}
> >
> > Sam, correct me if I am wrong. > > > > I generally like this idea. I do have a few reservations with this: > > > > 1) Creating and updating a load balancer requires a full tree > configuration with the current extension/plugin logic in neutron. Since > updates will require a full tree, it means the user would have to know the > full tree configuration just to simply update a name. Solving this would > require nested child resources in the URL, which the current neutron > extension/plugin does not allow. Maybe the new one will. > > > > 2) The status_tree can get quite large depending on the number of > listeners and pools being used. This is a minor issue really as it will > make horizon's (or any other UI tool's) job easier to show statuses. > > > > Thanks, > > Brandon > > > > On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote: > > > Hi Samuel, > > > > > > > > > We've actually been avoiding having a deeper discussion about status > > > in Neutron LBaaS since this can get pretty hairy as the back-end > > > implementations get more complicated. 
I suspect managing that is > > > probably one of the bigger reasons we have disagreements around object > > > sharing. Perhaps it's time we discussed representing state "correctly" > > > (whatever that means), instead of a roundabout discussion about > > > object sharing (which, I think, is really just avoiding this issue)? > > > > > > > > > Do you have a proposal about how status should be represented > > > (possibly including a description of the state machine) if we collapse > > > everything down to be logical objects except the loadbalancer object? > > > (From what you're proposing, I suspect it might be too general to, for > > > example, represent the UP/DOWN status of members of a given pool.) > > > > > > > > > Also, from an haproxy perspective, sharing pools within a single > > > listener actually isn't a problem. That is to say, having the same > > > L7Policy pointing at the same pool is OK, so I personally don't have a > > > problem allowing sharing of objects within the scope of parent > > > objects. What do the rest of y'all think? > > > > > > > > > Stephen > > > > > > > > > > > > On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici > > > wrote: > > > Hi Stephen, > > > > > > > > > > > > 1. The issue is that if we do 1:1 and allow status/state > > > to proliferate throughout all objects we will then get an > > > issue to fix it later, hence even if we do not do sharing, I > > > would still like to have all objects besides LB be treated as > > > logical. > > > > > > 2. The 3rd use case below will not be reasonable without > > > pool sharing between different policies. Specifying different > > > pools which are the same for each policy makes it a non-starter > > > to me. > > > > > > > > > > > > -Sam. 
> > > > > > > > > > > > > > > > > > > > > > > > From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] > > > Sent: Friday, November 21, 2014 10:26 PM > > > To: OpenStack Development Mailing List (not for usage > > > questions) > > > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects > > > in LBaaS - Use Cases that led us to adopt this. > > > > > > > > > > > > I think the idea was to implement 1:1 initially to reduce the > > > amount of code and operational complexity we'd have to deal > > > with in initial revisions of LBaaS v2. Many to many can be > > > simulated in this scenario, though it does shift the burden of > > > maintenance to the end user. It does greatly simplify the > > > initial code for v2, in any case, though. > > > > > > > > > > > > > > > > > > Did we ever agree to allowing listeners to be shared among > > > load balancers? I think that still might be a N:1 > > > relationship even in our latest models. > > > > > > > > > > > > > > > There's also the difficulty introduced by supporting different > > > flavors: Since flavors are essentially an association between > > > a load balancer object and a driver (with parameters), once > > > flavors are introduced, any sub-objects of a given load > > > balancer objects must necessarily be purely logical until they > > > are associated with a load balancer. I know there was talk of > > > forcing these objects to be sub-objects of a load balancer > > > which can't be accessed independently of the load balancer > > > (which would have much the same effect as what you discuss: > > > State / status only make sense once logical objects have an > > > instantiation somewhere.) However, the currently proposed API > > > treats most objects as root objects, which breaks this > > > paradigm. > > > > > > > > > > > > > > > > > > How we handle status and updates once there's an instantiation > > > of these logical objects is where we start getting into real > > > complexity. 
> > > > > > > > > > > > > > > > > > It seems to me there's a lot of complexity introduced when we > > > allow a lot of many to many relationships without a whole lot > > > of benefit in real-world deployment scenarios. In most cases, > > > objects are not going to be shared, and in those cases with > > > sufficiently complicated deployments in which shared objects > > > could be used, the user is likely to be sophisticated enough > > > and skilled enough to manage updating what are essentially > > > "copies" of objects, and would likely have an opinion about > > > how individual failures should be handled which wouldn't > > > necessarily coincide with what we developers of the system > > > would assume. That is to say, allowing too many many to many > > > relationships feels like a solution to a problem that doesn't > > > really exist, and introduces a lot of unnecessary complexity. > > > > > > > > > > > > > > > > > > In any case, though, I feel like we should walk before we run: > > > Implementing 1:1 initially is a good idea to get us rolling. > > > Whether we then implement 1:N or M:N after that is another > > > question entirely. But in any case, it seems like a bad idea > > > to try to start with M:N. > > > > > > > > > > > > > > > > > > Stephen > > > > > > > > > > > > > > > > > > > > > > > > On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici > > > wrote: > > > > > > Hi, > > > > > > Per discussion I had at OpenStack Summit/Paris with Brandon > > > and Doug, I would like to remind everyone why we choose to > > > follow a model where pools and listeners are shared (many to > > > many relationships). > > > > > > Use Cases: > > > 1. The same application is being exposed via different LB > > > objects. > > > For example: users coming from the internal "private" > > > organization network, have an LB1(private_VIP) --> > > > Listener1(TLS) -->Pool1 and user coming from the "internet", > > > have LB2(public_vip)-->Listener1(TLS)-->Pool1. 
> > > This may also happen to support ipv4 and ipv6: LB_v4(ipv4_VIP) > > > --> Listener1(TLS) -->Pool1 and LB_v6(ipv6_VIP) --> > > > Listener1(TLS) -->Pool1 > > > The operator would like to be able to manage the pool > > > membership in cases of updates and error in a single place. > > > > > > 2. The same group of servers is being used via different > > > listeners optionally also connected to different LB objects. > > > For example: users coming from the internal "private" > > > organization network, have an LB1(private_VIP) --> > > > Listener1(HTTP) -->Pool1 and users coming from the "internet", > > > have LB2(public_vip)-->Listener2(TLS)-->Pool1. > > > The LBs may use different flavors as LB2 needs TLS termination > > > and may prefer a different "stronger" flavor. > > > The operator would like to be able to manage the pool > > > membership in cases of updates and error in a single place. > > > > > > 3. The same group of servers is being used in several > > > different L7_Policies connected to a listener. Such listener > > > may be reused as in use case 1. For example:
> > >
> > > LB1(VIP1)-->Listener_L7(TLS)
> > >               |
> > >               +-->L7_Policy1(rules..)-->Pool1
> > >               |
> > >               +-->L7_Policy2(rules..)-->Pool2
> > >               |
> > >               +-->L7_Policy3(rules..)-->Pool1
> > >               |
> > >               +-->L7_Policy4(rules..)-->Reject
> > >
> > > I think that the "key" issue is handling correctly the > > > "provisioning" state and the operation state in a many to many > > > model. > > > This is an issue as we have attached status fields to each and > > > every object in the model. > > > A side effect of the above is that to understand the > > > "provisioning/operation" status one needs to check many > > > different objects. > > > > > > To remedy this, I would like to turn all objects besides the > > > LB to be logical objects. This means that the only place to > > > manage the status/state will be on the LB object. 
> > > Such status should be hierarchical so that logical object > > > attached to an LB, would have their status consumed out of the > > > LB object itself (in case of an error). > > > We also need to discuss how modifications of a logical object > > > will be "rendered" to the concrete LB objects. > > > You may want to revisit > > > > https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r > the "Logical Model + Provisioning Status + Operation Status + Statistics" > for a somewhat more detailed explanation albeit it uses the LBaaS v1 model > as a reference. > > > > > > Regards, > > > -Sam. > > > > > > > > > > > > > > > > > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > Stephen Balukoff > > > Blue Box Group, LLC > > > (800)613-4305 x807 > > > > > > > > > > > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > > > > > -- > > > Stephen Balukoff > > > Blue Box Group, LLC > > > (800)613-4305 x807 > > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > 
_______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Stephen Balukoff > Blue Box Group, LLC > (800)613-4305 x807 -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 -------------- next part -------------- An HTML attachment was scrubbed... URL: From clint at fewbar.com Fri Dec 5 18:00:56 2014 From: clint at fewbar.com (Clint Byrum) Date: Fri, 05 Dec 2014 10:00:56 -0800 Subject: [openstack-dev] [TripleO] Alternate meeting time In-Reply-To: <54803A17.3040403@redhat.com> References: <547DD0A2.3030102@redhat.com> <547DD6E1.8070103@redhat.com> <547DD9F5.6010108@redhat.com> <54803A17.3040403@redhat.com> Message-ID: <1417802400-sup-2353@fewbar.com> Excerpts from marios's message of 2014-12-04 02:40:23 -0800: > On 04/12/14 11:40, James Polley wrote: > > Just taking a look at http://doodle.com/27ffgkdm5gxzr654 again - we've > > had 10 people respond so far. The winning time so far is Monday 2100UTC > > - 7 "yes" and one "If I have to". > > for me it currently shows 1200 UTC as the preferred time. > > So to be clear, we are voting here for the alternate meeting. The > 'original' meeting is at 1900UTC. If in fact 2100UTC ends up being the > most popular, what would be the point of an alternating meeting that is > only 2 hours apart in time? > Actually that's a good point. I didn't really think about it before I voted, but the regular time is perfect for me, so perhaps I should remove my vote, and anyone else who does not need the alternate time should consider doing so as well. 
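To make the status fan-out from the LBaaS thread above concrete, here is a small pure-Python sketch that enumerates the (lb, listener, pool, member) status paths for Brandon's example graph. The association data is taken straight from his mail; the enumeration code and counts are an editor's illustration only (Stephen arrives at eight statuses for member1 by a slightly different accounting; either way, the bookkeeping grows with sharing):

```python
# Associations from Brandon's example (lb -> listeners, listener -> pools,
# pool -> members).  The enumeration below illustrates why a shared
# member's status fans out across many (lb, listener, pool) combinations.
lbs = {"lb1": ["listener1", "listener2"], "lb2": ["listener1"]}
listeners = {"listener1": ["pool1", "pool2"], "listener2": ["pool1"]}
pools = {"pool1": ["member1", "member2"], "pool2": ["member1"]}

def status_paths():
    """Yield every (lb, listener, pool, member) path that needs a status."""
    for lb, lb_listeners in sorted(lbs.items()):
        for listener in lb_listeners:
            for pool in listeners[listener]:
                for member in pools[pool]:
                    yield (lb, listener, pool, member)

paths = list(status_paths())
per_member = {}
for lb, listener, pool, member in paths:
    per_member.setdefault(member, []).append((lb, listener, pool))

print(len(paths))                  # -> 8 status entries in total
print(len(per_member["member1"]))  # -> 5 places member1's status appears
```

Even in this tiny graph, a single health change on member1 has to be reflected in five distinct status entries, which is exactly the bookkeeping cost both sides of the thread are weighing.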
From clint at fewbar.com Fri Dec 5 18:49:20 2014 From: clint at fewbar.com (Clint Byrum) Date: Fri, 05 Dec 2014 10:49:20 -0800 Subject: [openstack-dev] [tripleo] Managing no-mergepy template duplication In-Reply-To: <20141204090918.GA8949@t430slt.redhat.com> References: <20141203101059.GA23017@t430slt.redhat.com> <1417660515.836.3.camel@dovetail.localdomain> <1417661497-sup-1964@fewbar.com> <20141204090918.GA8949@t430slt.redhat.com> Message-ID: <1417804955-sup-403@fewbar.com> Excerpts from Steven Hardy's message of 2014-12-04 01:09:18 -0800: > On Wed, Dec 03, 2014 at 06:54:48PM -0800, Clint Byrum wrote: > > Excerpts from Dan Prince's message of 2014-12-03 18:35:15 -0800: > > > On Wed, 2014-12-03 at 10:11 +0000, Steven Hardy wrote: > > > > Hi all, > > > > > > > > Lately I've been spending more time looking at tripleo and doing some > > > > reviews. I'm particularly interested in helping the no-mergepy and > > > > subsequent puppet-software-config implementations mature (as well as > > > > improving overcloud updates via heat). > > > > > > > > Since Tomas's patch landed[1] to enable --no-mergepy in > > > > tripleo-heat-templates, it's become apparent that frequently patches are > > > > submitted which only update overcloud-source.yaml, so I've been trying to > > > > catch these and ask for a corresponding change to e.g controller.yaml. > > > > > > > > This raises the following questions: > > > > > > > > 1. Is it reasonable to -1 a patch and ask folks to update in both places? > > > > > > Yes! In fact until we abandon merge.py we shouldn't land anything that > > > doesn't make the change in both places. Probably more important to make > > > sure things go into the new (no-mergepy) templates though. > > > > > > > 2. How are we going to handle this duplication and divergence? > > > > > > Move as quickly as possible to the new without-mergepy variants? That is > > > my vote anyways. > > > > > > > 3. What's the status of getting gating CI on the --no-mergepy templates? 
> > > > > > Devtest already supports it by simply setting an option (which sets an > > > ENV variable). Just need to update tripleo-ci to do that and then make > > > the switch. > > > > > > > 4. What barriers exist (now that I've implemented[2] the eliding functionality > > > > requested[3] for ResourceGroup) to moving to the --no-mergepy > > > > implementation by default? > > > > > > None that I know of. > > > > > > > I concur with Dan. Elide was the last reason not to use this. > > That's great news! :) > > > One thing to consider is that there is no actual upgrade path from > > non-autoscaling-group based clouds, to auto-scaling-group based > > templates. We should consider how we'll do that before making it the > > default. So, I suggest we discuss possible upgrade paths and then move > > forward with switching one of the CI jobs to using the new templates. > > This is probably going to be really hard :( > > The sort of pattern which might work is: > > 1. Abandon mergepy based stack > 2. Have helper script to reformat abandon data into nomergepy based adopt > data > 3. Adopt stack > > Unforunately there are several abandon/adopt bugs we'll have to fix if we > decide this is the way to go (original author hasn't maintained it, but we > can pick up the slack if it's on the critical path for TripleO). > > An alternative could be the external resource feature Angus is looking at: > > https://review.openstack.org/#/c/134848/ > > This would be more limited (we just reference rather than manage the > existing resources), but potentially safer. > > The main risk here is import (or subsequent update) operations becoming > destructive and replacing things, but I guess to some extent this is a risk > with any change to tripleo-heat-templates. > So you and I talked on IRC, but I want to socialize what we talked about more. The abandon/adopt pipeline is a bit broken in Heat and hasn't proven to be as useful as I'd hoped when it was first specced out. 
It seems too broad, and relies on any tools understanding how to morph a whole new format (the abandon json). With external_reference, the external upgrade process just needs to know how to morph the template. So if we're combining 8 existing servers into an autoscaling group, we just need to know how to make an autoscaling group with 8 servers as the external reference ids. This is, I think, the shortest path to a working solution, as I feel the external reference work in Heat is relatively straightforward and the spec has widescale agreement. There was another approach I mentioned, which is that we can teach Heat how to morph resources. So we could teach Heat that servers can be made into autoscaling groups, and vice-versa. This is a whole new feature though, and IMO, something that should be tackled _after_ we make it work with the external_reference feature, as this is basically a superset of what we'll do externally. > Has any thought been given to upgrade CI testing? I'm thinking grenade or > grenade-style testing here where we test maintaining a deployed overcloud > over an upgrade of (some subset of) changes. > > I know the upgrade testing thing will be hard, but to me it's a key > requirement to mature heat-driven updates vs those driven by external > tooling. Upgrade testing is vital to the future of the project IMO. We really haven't validated the image based update method upstream yet. In Helion, we're using tripleo-ansible for updates, and that works great, but we need to get that or something similar into the pipeline for the gate, or every user who adopts will be left with a ton of work if they want to do upgrades. The approach we've used for testing in Helion is to deploy the new commit, then generate a new image with a new file, and upgrade (thanks JP for your amazing work on this btw, hopefully we can realize this upstream soon. :) That is not ideal though. 
What we need to do is test upgrading from the last commit to the new one, and arguably, also from the last stable release to the new commit (ala grenade). From mriedem at linux.vnet.ibm.com Fri Dec 5 18:50:37 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Fri, 05 Dec 2014 12:50:37 -0600 Subject: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support Message-ID: <5481FE7D.7020804@linux.vnet.ibm.com> In Juno we effectively disabled live snapshots with libvirt due to bug 1334398 [1] failing the gate about 25% of the time. I was going through the Juno release notes today and saw this as a known issue, which reminded me of it and was wondering if there is anything being done about it? As I recall, it *works* but it wasn't working under the stress our check/gate system puts on that code path. One thing I'm thinking is, couldn't we make this an experimental config option and by default it's disabled but we could run it in the experimental queue, or people could use it without having to patch the code to remove the artificial minimum version constraint put in the code. Something like: if CONF.libvirt.live_snapshot_supported: # do your thing [1] https://bugs.launchpad.net/nova/+bug/1334398 -- Thanks, Matt Riedemann From sean at dague.net Fri Dec 5 19:32:17 2014 From: sean at dague.net (Sean Dague) Date: Fri, 05 Dec 2014 14:32:17 -0500 Subject: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support In-Reply-To: <5481FE7D.7020804@linux.vnet.ibm.com> References: <5481FE7D.7020804@linux.vnet.ibm.com> Message-ID: <54820841.6080200@dague.net> On 12/05/2014 01:50 PM, Matt Riedemann wrote: > In Juno we effectively disabled live snapshots with libvirt due to bug > 1334398 [1] failing the gate about 25% of the time. > > I was going through the Juno release notes today and saw this as a known > issue, which reminded me of it and was wondering if there is anything > being done about it? 
>
> As I recall, it *works* but it wasn't working under the stress our
> check/gate system puts on that code path.
>
> One thing I'm thinking is, couldn't we make this an experimental config
> option and by default it's disabled but we could run it in the
> experimental queue, or people could use it without having to patch the
> code to remove the artificial minimum version constraint put in the code.
>
> Something like:
>
> if CONF.libvirt.live_snapshot_supported:
>     # do your thing
>
> [1] https://bugs.launchpad.net/nova/+bug/1334398

So, it works, if you aren't booting / shutting down guests at exactly
the same time as snapshotting. I believe cburgess said in IRC yesterday
he was going to take another look at it next week.

I'm happy to put this into dansmith's patented [workarounds] config
group (coming soon to fix the qemu-convert bug). But I don't think this
should be a normal libvirt option.

	-Sean

-- 
Sean Dague
http://dague.net

From imain at redhat.com  Fri Dec  5 19:43:16 2014
From: imain at redhat.com (Ian Main)
Date: Fri, 05 Dec 2014 11:43:16 -0800
Subject: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches
In-Reply-To: <5480E534.2050601@dague.net>
References: <5480D2CF.8050301@linux.vnet.ibm.com> <5480E275.8020005@linux.vnet.ibm.com> <5480E534.2050601@dague.net>
Message-ID: <54820ad4d5c3d_2da2df3e8cb9@ovo.mains.priv.notmuch>

Sean Dague wrote:
> On 12/04/2014 05:38 PM, Matt Riedemann wrote:
> >
> > On 12/4/2014 4:06 PM, Michael Still wrote:
> >> +Eric and Ian
> >>
> >> On Fri, Dec 5, 2014 at 8:31 AM, Matt Riedemann
> >> wrote:
> >>> This came up in the nova meeting today, I've opened a bug [1] for it.
> >>> Since this isn't maintained by infra we don't have log indexing so I
> >>> can't use logstash to see how pervasive it is, but multiple people
> >>> are reporting the same thing in IRC.
> >>>
> >>> Who is maintaining the nova-docker CI and can look at this?
> >>>
> >>> It also looks like the log format for the nova-docker CI is a bit
> >>> weird, can that be cleaned up to be more consistent with other CI
> >>> log results?
> >>>
> >>> [1] https://bugs.launchpad.net/nova-docker/+bug/1399443
> >>>
> >>> --
> >>>
> >>> Thanks,
> >>>
> >>> Matt Riedemann
> >>>
> >>> _______________________________________________
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev at lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > Also, according to the 3rd party CI requirements [1] I should see
> > nova-docker CI in the third party wiki page [2] so I can get details on
> > who to contact when this fails but that's not done.
> >
> > [1] http://ci.openstack.org/third_party.html#requirements
> > [2] https://wiki.openstack.org/wiki/ThirdPartySystems
>
> It's not the 3rd party CI job we are talking about, it's the one in the
> check queue which is run by infra.
>
> But, more importantly, jobs in those queues need shepherds that will fix
> them. Otherwise they will get deleted.
>
> Clarkb provided the fix for the log structure right now -
> https://review.openstack.org/#/c/139237/1 so at least it will look
> vaguely sane on failures
>
> 	-Sean

This is one of the reasons we might like to have this in nova core.
Otherwise we will just keep addressing issues as they come up. We would
likely be involved doing this if it were part of nova core anyway.
Ian > -- > Sean Dague > http://dague.net > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From andrew.laski at rackspace.com Fri Dec 5 19:45:48 2014 From: andrew.laski at rackspace.com (Andrew Laski) Date: Fri, 5 Dec 2014 14:45:48 -0500 Subject: [openstack-dev] [Nova] sqlalchemy-migrate vs alembic for new database Message-ID: <54820B6C.4060901@rackspace.com> The cells v2 effort is going to be introducing a new database into Nova. This has been an opportunity to rethink and approach a few things differently, including how we should handle migrations. There have been discussions for a long time now about switching over to alembic for migrations so I want to ask, should we start using alembic from the start for this new database? The question was first raised by Dan Smith on https://review.openstack.org/#/c/135424/ I do have some concern about having two databases managed in two different ways, but if the details are well hidden behind a nova-manage command I'm not sure it will actually matter in practice. From mriedem at linux.vnet.ibm.com Fri Dec 5 20:12:44 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Fri, 05 Dec 2014 14:12:44 -0600 Subject: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support In-Reply-To: <54820841.6080200@dague.net> References: <5481FE7D.7020804@linux.vnet.ibm.com> <54820841.6080200@dague.net> Message-ID: <548211BC.6070900@linux.vnet.ibm.com> On 12/5/2014 1:32 PM, Sean Dague wrote: > On 12/05/2014 01:50 PM, Matt Riedemann wrote: >> In Juno we effectively disabled live snapshots with libvirt due to bug >> 1334398 [1] failing the gate about 25% of the time. >> >> I was going through the Juno release notes today and saw this as a known >> issue, which reminded me of it and was wondering if there is anything >> being done about it? 
>> >> As I recall, it *works* but it wasn't working under the stress our >> check/gate system puts on that code path. >> >> One thing I'm thinking is, couldn't we make this an experimental config >> option and by default it's disabled but we could run it in the >> experimental queue, or people could use it without having to patch the >> code to remove the artificial minimum version constraint put in the code. >> >> Something like: >> >> if CONF.libvirt.live_snapshot_supported: >> # do your thing >> >> [1] https://bugs.launchpad.net/nova/+bug/1334398 > > So, it works. If you aren't booting / shutting down guests at exactly > the same time as snapshotting. I believe cburgess said in IRC yesterday > he was going to take another look at it next week. > > I'm happy to put this into dansmith's pattented [workarounds] config > group (coming soon to fix the qemu-convert bug). But I don't think this > should be a normal libvirt option. > > -Sean > Yeah the [workarounds] group is what got me thinking about it too as a config option, otherwise I think the idea of an [experimental] config group has come up before as a place to put 'not tested, here be dragons' type stuff. -- Thanks, Matt Riedemann From mriedem at linux.vnet.ibm.com Fri Dec 5 20:14:37 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Fri, 05 Dec 2014 14:14:37 -0600 Subject: [openstack-dev] [Nova] sqlalchemy-migrate vs alembic for new database In-Reply-To: <54820B6C.4060901@rackspace.com> References: <54820B6C.4060901@rackspace.com> Message-ID: <5482122D.3060308@linux.vnet.ibm.com> On 12/5/2014 1:45 PM, Andrew Laski wrote: > The cells v2 effort is going to be introducing a new database into > Nova. This has been an opportunity to rethink and approach a few things > differently, including how we should handle migrations. 
There have been > discussions for a long time now about switching over to alembic for > migrations so I want to ask, should we start using alembic from the > start for this new database? > > The question was first raised by Dan Smith on > https://review.openstack.org/#/c/135424/ > > I do have some concern about having two databases managed in two > different ways, but if the details are well hidden behind a nova-manage > command I'm not sure it will actually matter in practice. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > I don't have experience with Alembic but I'd think we should use Alembic for the new database unless there is a compelling reason not to. Maybe we need Mike Bayer (or other oslo.db people) to give us an idea of what kinds of problems we might have with managing two databases with two different migration schemes. But the last part you said is key for me, if we can abstract it well then hopefully it's not very painful. -- Thanks, Matt Riedemann From johannes at erdfelt.com Fri Dec 5 20:42:11 2014 From: johannes at erdfelt.com (Johannes Erdfelt) Date: Fri, 5 Dec 2014 12:42:11 -0800 Subject: [openstack-dev] [Nova] sqlalchemy-migrate vs alembic for new database In-Reply-To: <54820B6C.4060901@rackspace.com> References: <54820B6C.4060901@rackspace.com> Message-ID: <20141205204211.GY26706@sventech.com> On Fri, Dec 05, 2014, Andrew Laski wrote: > The cells v2 effort is going to be introducing a new database into > Nova. This has been an opportunity to rethink and approach a few > things differently, including how we should handle migrations. There > have been discussions for a long time now about switching over to > alembic for migrations so I want to ask, should we start using > alembic from the start for this new database? 
> > The question was first raised by Dan Smith on > https://review.openstack.org/#/c/135424/ > > I do have some concern about having two databases managed in two > different ways, but if the details are well hidden behind a > nova-manage command I'm not sure it will actually matter in > practice. This would be a good time for people to review my proposed spec: https://review.openstack.org/#/c/102545/ Not only does it help operators but it also helps developers since all they would need to do in the future is update the model and DDL statements are generated based on comparing the running schema with the model. BTW, it uses Alembic under the hood for most of the heavy lifting. JE From sbauza at redhat.com Fri Dec 5 20:52:23 2014 From: sbauza at redhat.com (Sylvain Bauza) Date: Fri, 05 Dec 2014 21:52:23 +0100 Subject: [openstack-dev] [Nova] sqlalchemy-migrate vs alembic for new database In-Reply-To: <5482122D.3060308@linux.vnet.ibm.com> References: <54820B6C.4060901@rackspace.com> <5482122D.3060308@linux.vnet.ibm.com> Message-ID: <54821B07.9000104@redhat.com> Le 05/12/2014 21:14, Matt Riedemann a ?crit : > > > On 12/5/2014 1:45 PM, Andrew Laski wrote: >> The cells v2 effort is going to be introducing a new database into >> Nova. This has been an opportunity to rethink and approach a few things >> differently, including how we should handle migrations. There have been >> discussions for a long time now about switching over to alembic for >> migrations so I want to ask, should we start using alembic from the >> start for this new database? >> >> The question was first raised by Dan Smith on >> https://review.openstack.org/#/c/135424/ >> >> I do have some concern about having two databases managed in two >> different ways, but if the details are well hidden behind a nova-manage >> command I'm not sure it will actually matter in practice. 
>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > I don't have experience with Alembic but I'd think we should use > Alembic for the new database unless there is a compelling reason not > to. Maybe we need Mike Bayer (or other oslo.db people) to give us an > idea of what kinds of problems we might have with managing two > databases with two different migration schemes. > > But the last part you said is key for me, if we can abstract it well > then hopefully it's not very painful. > I had some experience with Alembic in a previous Stackforge project and I'm definitely +1 on using it for the Cells V2 database. We can just provide a nova-manage cell-db service that would facade the migration backend, whatever it is. From mikal at stillhq.com Fri Dec 5 20:56:21 2014 From: mikal at stillhq.com (Michael Still) Date: Sat, 6 Dec 2014 07:56:21 +1100 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <5481C3A1.5030900@redhat.com> References: <20141205134159.GK2383@redhat.com> <5481C3A1.5030900@redhat.com> Message-ID: I used Russell's 60 day stats in making this decision. I can't find a documented historical precedent on what period the stats should be generated over, however 60 days seems entirely reasonable to me. 
2014-12-05 15:41:11.212927

Reviews for the last 60 days in nova
** -- nova-core team member
+-----------------------------+---------------------------------------+----------------+
| Reviewer | Reviews -2 -1 +1 +2 +A +/- % | Disagreements* |
+-----------------------------+---------------------------------------+----------------+
| berrange ** | 669 13 134 1 521 194 78.0% | 47 ( 7.0%) |
| jogo ** | 431 38 161 2 230 117 53.8% | 19 ( 4.4%) |
| oomichi ** | 309 1 106 4 198 58 65.4% | 3 ( 1.0%) |
| danms ** | 293 34 133 15 111 43 43.0% | 12 ( 4.1%) |
| jaypipes ** | 290 10 108 14 158 42 59.3% | 15 ( 5.2%) |
| ndipanov ** | 192 10 78 6 98 24 54.2% | 24 ( 12.5%) |
| klmitch ** | 190 1 22 0 167 12 87.9% | 21 ( 11.1%) |
| cyeoh-0 ** | 184 0 70 10 104 41 62.0% | 9 ( 4.9%) |
| mriedem ** | 173 3 86 8 76 31 48.6% | 8 ( 4.6%) |
| johngarbutt ** | 164 19 79 6 60 24 40.2% | 7 ( 4.3%) |
| cerberus ** | 151 0 9 40 102 38 94.0% | 7 ( 4.6%) |
| mikalstill ** | 145 2 8 1 134 48 93.1% | 3 ( 2.1%) |
| alaski ** | 104 0 7 6 91 54 93.3% | 5 ( 4.8%) |
| sdague ** | 98 6 21 2 69 40 72.4% | 4 ( 4.1%) |
| russellb ** | 86 1 10 0 75 29 87.2% | 5 ( 5.8%) |
| p-draigbrady ** | 60 0 12 1 47 10 80.0% | 4 ( 6.7%) |
| belliott ** | 32 0 8 1 23 15 75.0% | 4 ( 12.5%) |
| vishvananda ** | 8 0 2 0 6 1 75.0% | 2 ( 25.0%) |
| dan-prince ** | 7 0 0 0 7 3 100.0% | 4 ( 57.1%) |
| cbehrens ** | 4 0 2 0 2 0 50.0% | 1 ( 25.0%) |

The previously held standard for core reviewer activity has been an
_average_ of two reviews per day, which is why I used the 60-day stats (to
eliminate vacations and so forth). It should be noted that the top ten or
so reviewers are doing a lot more than that.

All of the reviewers I dropped are valued members of the team, and I am sad
to see all of them go. However, it is important that reviewers remain
active.
It should also be noted that with the exception of one person (who hasn't
been under discussion in this thread) I discussed doing this with all of
these people on 12 June 2014. This was not a sudden move, and shouldn't be a
surprise to the reviewers involved.

One final point to reiterate -- we have always said as a project that former
cores can be re-added if their review rate picks up again. This isn't a
punishment, it's a recognition that those people have gone off to work on
other things and that nova is no longer their focus. I'd welcome an
increased review rate from all involved.

Michael

On Sat, Dec 6, 2014 at 1:39 AM, Russell Bryant wrote:
> On 12/05/2014 08:41 AM, Daniel P. Berrange wrote:
>> On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote:
>>> One of the things that happens over time is that some of our core
>>> reviewers move on to other projects. This is a normal and healthy
>>> thing, especially as nova continues to spin out projects into other
>>> parts of OpenStack.
>>>
>>> However, it is important that our core reviewers be active, as it
>>> keeps them up to date with the current ways we approach development in
>>> Nova. I am therefore removing some no longer sufficiently active cores
>>> from the nova-core group.
>>>
>>> I'd like to thank the following people for their contributions over the years:
>>>
>>> * cbehrens: Chris Behrens
>>> * vishvananda: Vishvananda Ishaya
>>> * dan-prince: Dan Prince
>>> * belliott: Brian Elliott
>>> * p-draigbrady: Padraig Brady
>>>
>>> I'd love to see any of these cores return if they find their available
>>> time for code reviews increases.
>>
>> What stats did you use to decide whether to cull these reviewers ? Looking
>> at the stats over a 6 month period, I think Padraig Brady is still having
>> a significant positive impact on Nova - on a par with both cerberus and
>> alaski, whom you're not proposing to cut.
I think we should keep Padraig >> on the team, but probably suggest cutting Markmc instead >> >> http://russellbryant.net/openstack-stats/nova-reviewers-180.txt >> >> +-----------------------------+----------------------------------------+----------------+ >> | Reviewer | Reviews -2 -1 +1 +2 +A +/- % | Disagreements* | >> +-----------------------------+----------------------------------------+----------------+ >> | berrange ** | 1766 26 435 12 1293 357 73.9% | 157 ( 8.9%) | >> | jaypipes ** | 1359 11 378 436 534 133 71.4% | 109 ( 8.0%) | >> | jogo ** | 1053 131 326 7 589 353 56.6% | 47 ( 4.5%) | >> | danms ** | 921 67 381 23 450 167 51.4% | 32 ( 3.5%) | >> | oomichi ** | 889 4 306 55 524 182 65.1% | 40 ( 4.5%) | >> | johngarbutt ** | 808 319 227 10 252 145 32.4% | 37 ( 4.6%) | >> | mriedem ** | 642 27 279 25 311 136 52.3% | 17 ( 2.6%) | >> | klmitch ** | 606 1 90 2 513 70 85.0% | 67 ( 11.1%) | >> | ndipanov ** | 588 19 179 10 380 113 66.3% | 62 ( 10.5%) | >> | mikalstill ** | 564 31 34 3 496 207 88.5% | 20 ( 3.5%) | >> | cyeoh-0 ** | 546 12 207 30 297 103 59.9% | 35 ( 6.4%) | >> | sdague ** | 511 23 89 6 393 229 78.1% | 25 ( 4.9%) | >> | russellb ** | 465 6 83 0 376 158 80.9% | 23 ( 4.9%) | >> | alaski ** | 415 1 65 21 328 149 84.1% | 24 ( 5.8%) | >> | cerberus ** | 405 6 25 48 326 102 92.3% | 33 ( 8.1%) | >> | p-draigbrady ** | 376 2 40 9 325 64 88.8% | 49 ( 13.0%) | >> | markmc ** | 243 2 54 3 184 69 77.0% | 14 ( 5.8%) | >> | belliott ** | 231 1 68 5 157 35 70.1% | 19 ( 8.2%) | >> | dan-prince ** | 178 2 48 9 119 29 71.9% | 11 ( 6.2%) | >> | cbehrens ** | 132 2 49 2 79 19 61.4% | 6 ( 4.5%) | >> | vishvananda ** | 54 0 5 3 46 15 90.7% | 5 ( 9.3%) | >> > > Yeah, I was pretty surprised to see pbrady on this list, as well. The > above was 6 months, but even if you drop it to the most recent 3 months, > he's still active ... 
> > >> Reviews for the last 90 days in nova >> ** -- nova-core team member >> +-----------------------------+---------------------------------------+----------------+ >> | Reviewer | Reviews -2 -1 +1 +2 +A +/- % | Disagreements* | >> +-----------------------------+---------------------------------------+----------------+ >> | berrange ** | 708 13 145 1 549 200 77.7% | 47 ( 6.6%) | >> | jogo ** | 594 40 218 4 332 174 56.6% | 27 ( 4.5%) | >> | jaypipes ** | 509 10 180 17 302 77 62.7% | 33 ( 6.5%) | >> | oomichi ** | 392 1 136 10 245 74 65.1% | 6 ( 1.5%) | >> | danms ** | 386 38 155 16 177 77 50.0% | 16 ( 4.1%) | >> | ndipanov ** | 345 17 118 7 203 61 60.9% | 32 ( 9.3%) | >> | mriedem ** | 304 12 136 12 144 56 51.3% | 12 ( 3.9%) | >> | klmitch ** | 281 1 42 0 238 19 84.7% | 32 ( 11.4%) | >> | cyeoh-0 ** | 270 11 112 12 135 47 54.4% | 13 ( 4.8%) | >> | mikalstill ** | 261 7 8 3 243 106 94.3% | 7 ( 2.7%) | >> | sdague ** | 246 19 41 2 184 104 75.6% | 10 ( 4.1%) | >> | johngarbutt ** | 216 25 92 7 92 43 45.8% | 8 ( 3.7%) | >> | alaski ** | 161 0 17 8 136 81 89.4% | 6 ( 3.7%) | >> | cerberus ** | 157 0 9 41 107 41 94.3% | 8 ( 5.1%) | >> | p-draigbrady ** | 143 0 21 3 119 26 85.3% | 9 ( 6.3%) | >> | russellb ** | 123 1 15 0 107 41 87.0% | 8 ( 6.5%) | >> | belliott ** | 66 0 17 2 47 24 74.2% | 5 ( 7.6%) | >> | cbehrens ** | 20 0 4 0 16 2 80.0% | 1 ( 5.0%) | >> | vishvananda ** | 18 0 3 0 15 6 83.3% | 2 ( 11.1%) | >> | dan-prince ** | 16 0 1 0 15 6 93.8% | 5 ( 31.2%) | > > > -- > Russell Bryant > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Rackspace Australia From lohitv at gwmail.gwu.edu Fri Dec 5 21:12:50 2014 From: lohitv at gwmail.gwu.edu (Lohit Valleru) Date: Fri, 5 Dec 2014 16:12:50 -0500 Subject: [openstack-dev] [[Openstack-dev] [Ironic] Ironic-conductor fails to start - "AttributeError '_keepalive_evt'" Message-ID: Hello 
All, I am trying to deploy bare-metal nodes using openstack-ironic. It is a 2 - node architecture with controller/keystone/mysql on a virtual machine, and cinder/compute/nova network on a physical machine on a CentOS 7 environment. openstack-ironic-common-2014.2-2.el7.centos.noarch openstack-ironic-api-2014.2-2.el7.centos.noarch openstack-ironic-conductor-2014.2-2.el7.centos.noarch I have followed this document, http://docs.openstack.org/developer/ironic/deploy/install-guide.html#ipmi-support and installed ironic. But when i start ironic-conductor, i get the below error : ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE ironic.common.service ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 ERROR ironic.common.service [-] Service error occurred when cleaning up the RPC manager. Error: 'ConductorManager' object has no attribute '_keepalive_evt' ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE ironic.common.service Traceback (most recent call last): ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE ironic.common.service File "/usr/lib/python2.7/site-packages/ironic/common/service.py", line 91, in stop ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE ironic.common.service self.manager.del_host() ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE ironic.common.service File "/usr/lib/python2.7/site-packages/ironic/conductor/manager.py", line 235, in del_host ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE ironic.common.service self._keepalive_evt.set() hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE ironic.common.service AttributeError: 'ConductorManager' object has no attribute '_keepalive_evt' hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE ironic.common.service hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 INFO ironic.common.service [-] Stopped RPC server for service ironic.conductor_manager on host hc004. 
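The traceback above can be reproduced in miniature: `del_host()` touches
`self._keepalive_evt`, an attribute that only a *successful* start path
creates, so any failure before `init_host()` completes turns the cleanup
itself into an `AttributeError`. The following is a stdlib-only sketch of
that pattern -- not the actual Ironic code, just the shape of the bug (the
class and method names here are invented for illustration):

```python
import threading


class KeepaliveService:
    """Toy model of a service whose stop path assumes its start path ran."""

    def start(self):
        # Only a successful start creates the keepalive event.
        self._keepalive_evt = threading.Event()

    def stop(self):
        # Mirrors the ConductorManager.del_host() line in the traceback:
        # blows up if start() never ran (e.g. startup failed early).
        self._keepalive_evt.set()

    def stop_defensively(self):
        # A tolerant variant: cleanup works even before/without start().
        evt = getattr(self, "_keepalive_evt", None)
        if evt is not None:
            evt.set()


svc = KeepaliveService()
try:
    svc.stop()            # cleanup after a failed startup
except AttributeError as exc:
    print("reproduced:", exc)

svc.stop_defensively()    # no error, even though start() never ran
```

The practical hint is that the `AttributeError` is a *secondary* symptom of
the conductor failing during startup (before `init_host()` finished), so the
root cause is whatever aborted the start -- not the keepalive event itself.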
A look at the source code tells me that it is something related to the RPC
service being started/stopped.

Also, I cannot debug this more, as I do not see any logs being created with
respect to ironic. Do I have to explicitly enable the logging properties in
ironic.conf, or are they expected to be working by default?

Here is the configuration from ironic.conf:

#############################

[DEFAULT]
verbose=true
rabbit_host=172.18.246.104
auth_strategy=keystone
debug=true

[keystone_authtoken]
auth_host=172.18.246.104
auth_uri=http://172.18.246.104:5000/v2.0
admin_user=ironic
admin_password=xxxx
admin_tenant_name=service

[database]
connection = mysql://ironic:xxxxx at 172.18.246.104/ironic?charset=utf8

[glance]
glance_host=172.18.246.104

#############################

I understand that I did not give the neutron URL as required by the
documentation. The reason: I have architecture limitations to installing
neutron networking, and would like to experiment with whether nova-network
and a DHCP PXE server will serve the purpose, although I highly doubt that.

However, I wish to know if the above issue is in any way related to the
non-existent neutron network, or if it is related to something else.

Please do let me know.

Thank you,

Lohit
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From devananda.vdv at gmail.com  Fri Dec 5 21:28:09 2014
From: devananda.vdv at gmail.com (Devananda van der Veen)
Date: Fri, 05 Dec 2014 21:28:09 +0000
Subject: [openstack-dev] [[Openstack-dev] [Ironic] Ironic-conductor fails to start - "AttributeError '_keepalive_evt'"
References:
Message-ID:

Hi Lohit,

In the future, please do not cross-post or copy-and-paste usage questions
on the development list. Since you posted this question on the general list
(*) -- which is exactly where you should post it -- I will respond there.
Regards, Devananda (*) http://lists.openstack.org/pipermail/openstack/2014-December/010698.html On Fri Dec 05 2014 at 1:15:44 PM Lohit Valleru wrote: > Hello All, > > I am trying to deploy bare-metal nodes using openstack-ironic. It is a 2 - > node architecture with controller/keystone/mysql on a virtual machine, and > cinder/compute/nova network on a physical machine on a CentOS 7 environment. > > openstack-ironic-common-2014.2-2.el7.centos.noarch > openstack-ironic-api-2014.2-2.el7.centos.noarch > openstack-ironic-conductor-2014.2-2.el7.centos.noarch > > I have followed this document, > > http://docs.openstack.org/developer/ironic/deploy/install-guide.html#ipmi-support > > and installed ironic. But when i start ironic-conductor, i get the below > error : > > ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE > ironic.common.service > ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 ERROR > ironic.common.service [-] Service error occurred when cleaning up the RPC > manager. 
Error: 'ConductorManager' object has no attribute '_keepalive_evt' > ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE > ironic.common.service Traceback (most recent call last): > ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE > ironic.common.service File > "/usr/lib/python2.7/site-packages/ironic/common/service.py", line 91, in > stop > ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE > ironic.common.service self.manager.del_host() > ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE > ironic.common.service File > "/usr/lib/python2.7/site-packages/ironic/conductor/manager.py", line 235, > in del_host > ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE > ironic.common.service self._keepalive_evt.set() > hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE > ironic.common.service AttributeError: 'ConductorManager' object has no > attribute '_keepalive_evt' > hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE > ironic.common.service > hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 INFO > ironic.common.service [-] Stopped RPC server for service > ironic.conductor_manager on host hc004. > > A look at the source code, tells me that it is something related to RPC > service being started/stopped. > > Also, I cannot debug this more as - I do not see any logs being created > with respect to ironic. > Do i have to explicitly enable the logging properties in ironic.conf, or > are they expected to be working by default? 
> > Here is the configuration from ironic.conf > > ############################# > > [DEFAULT] > verbose=true > rabbit_host=172.18.246.104 > auth_strategy=keystone > debug=true > > [keystone_authtoken] > auth_host=172.18.246.104 > auth_uri=http://172.18.246.104:5000/v2.0 > admin_user=ironic > admin_password=xxxx > admin_tenant_name=service > > [database] > connection = mysql://ironic:xxxxx at 172.18.246.104/ironic?charset=utf8 > > [glance] > glance_host=172.18.246.104 > > ############################# > > I understand that i did not give neutron URL as required by the > documentation. The reason : that i have architecture limitations to install > neutron networking and would like to experiment if nova-network and dhcp > pxe server will server the purpose although i highly doubt that. > > However, i wish to know if the above issue is anyway related to > non-existent neutron network, or if it is related to something else. > > Please do let me know. > > Thank you, > > Lohit > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbayer at redhat.com Fri Dec 5 22:17:41 2014 From: mbayer at redhat.com (Mike Bayer) Date: Fri, 5 Dec 2014 17:17:41 -0500 Subject: [openstack-dev] [Nova] sqlalchemy-migrate vs alembic for new database In-Reply-To: <5482122D.3060308@linux.vnet.ibm.com> References: <54820B6C.4060901@rackspace.com> <5482122D.3060308@linux.vnet.ibm.com> Message-ID: <3DD54A65-EA3A-44EA-B0FF-A5121B9796AA@redhat.com> > On Dec 5, 2014, at 3:14 PM, Matt Riedemann wrote: > > > > On 12/5/2014 1:45 PM, Andrew Laski wrote: >> The cells v2 effort is going to be introducing a new database into >> Nova. This has been an opportunity to rethink and approach a few things >> differently, including how we should handle migrations. 
There have been
>> discussions for a long time now about switching over to alembic for
>> migrations so I want to ask, should we start using alembic from the
>> start for this new database?
>>
>> The question was first raised by Dan Smith on
>> https://review.openstack.org/#/c/135424/
>>
>> I do have some concern about having two databases managed in two
>> different ways, but if the details are well hidden behind a nova-manage
>> command I'm not sure it will actually matter in practice.
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> I don't have experience with Alembic but I'd think we should use Alembic
> for the new database unless there is a compelling reason not to. Maybe we
> need Mike Bayer (or other oslo.db people) to give us an idea of what
> kinds of problems we might have with managing two databases with two
> different migration schemes.
>
> But the last part you said is key for me, if we can abstract it well
> then hopefully it's not very painful.

sqlalchemy-migrate doesn't really have a dedicated maintainer anymore,
AFAICT. It's pretty much on stackforge life support. So while the issue of
merging together a project with migrate and alembic at the same time seems
to involve some complexity and some competing ideas (I have one that's
pretty fancy, but I haven't spec'ed or implemented it yet, so for now there
are "wrappers" that run both), it sort of has to happen regardless.
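Andrew's point upthread -- that two migration tools may not matter in
practice "if the details are well hidden behind a nova-manage command" --
can be sketched as a thin facade. Everything below is hypothetical (the
`db_sync` name, the backend registry, and the stand-in backend functions are
illustrative only, not Nova's actual nova-manage code); the point is just
that callers never learn which tool manages which database:

```python
# Hypothetical facade: each database registers its own migration backend,
# so "nova-manage db sync"-style callers stay backend-agnostic.

def _migrate_sync(version=None):
    # stand-in for a sqlalchemy-migrate driven upgrade of the main DB
    return "sqlalchemy-migrate: upgraded 'main' to %s" % (version or "latest")


def _alembic_sync(version=None):
    # stand-in for an `alembic upgrade <rev>` of the cells v2 DB
    return "alembic: upgraded 'cells_v2' to %s" % (version or "head")


MIGRATION_BACKENDS = {
    "main": _migrate_sync,
    "cells_v2": _alembic_sync,
}


def db_sync(database="main", version=None):
    """Single entry point; the backend choice is an internal detail."""
    try:
        backend = MIGRATION_BACKENDS[database]
    except KeyError:
        raise ValueError("unknown database: %r" % (database,))
    return backend(version)


print(db_sync("cells_v2"))   # alembic: upgraded 'cells_v2' to head
```

The design point is that the wrapper, not the operator, owns the knowledge
of which tool backs which database -- which is what makes a later
migrate-to-alembic conversion of the main database invisible to callers.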
From mbayer at redhat.com  Fri Dec 5 22:43:20 2014
From: mbayer at redhat.com (Mike Bayer)
Date: Fri, 5 Dec 2014 17:43:20 -0500
Subject: [openstack-dev] [all] [ha] potential issue with implicit async-compatible mysql drivers
Message-ID: <783B3168-BD43-42CB-90EC-B55D10EAB5EA@redhat.com>

Hey list -

I'm posting this here just to get some ideas on what might be happening
here, as it may or may not have some impact on Openstack if and when we
move to MySQL drivers that are async-patchable, like MySQL-connector or
PyMySQL. I had a user post this issue a few days ago, which I've since
distilled into test cases for PyMySQL and MySQL-connector separately. It
uses gevent, not eventlet, so I'm not really sure if this applies. But
there are plenty of very smart people here, so if anyone can shed some
light on what is actually happening here, that would help.

The program essentially illustrates code that performs several steps upon a
connection; however, if the greenlet is suddenly killed, the state from the
connection, while damaged, is still being allowed to continue on in some
way, and what's super-catastrophic here is that you see a transaction
actually being committed *without* all the statements proceeding on it.

In my work with MySQL drivers, I've noted for years that they are all very,
very bad at dealing with concurrency-related issues. The whole "MySQL has
gone away" and "commands out of sync" errors are ones that we've all just
drowned in, and so often these are due to the driver getting mixed up due
to concurrent use of a connection. However this one seems more insidious.
Though at the same time, the script has some complexity happening (like a
simplistic connection pool) and I'm not really sure where the core of the
issue lies.

The script is at https://gist.github.com/zzzeek/d196fa91c40cb515365e and
also below.
If you run it for a few seconds, go over to your MySQL command line and run
this query:

    SELECT * FROM table_b WHERE a_id not in
    (SELECT id FROM table_a) ORDER BY a_id DESC;

and what you'll see is tons of rows in table_b where the "a_id" is zero
(because cursor.lastrowid fails), but the *rows are committed*.

If you read the segment of code that does this, it should be impossible:

    connection = pool.get()
    rowid = execute_sql(
        connection,
        "INSERT INTO table_a (data) VALUES (%s)", ("a",)
    )

    gevent.sleep(random.random() * 0.2)

    try:
        execute_sql(
            connection,
            "INSERT INTO table_b (a_id, data) VALUES (%s, %s)",
            (rowid, "b",)
        )
        connection.commit()
        pool.return_conn(connection)
    except Exception:
        connection.rollback()
        pool.return_conn(connection)

so if the gevent.sleep() throws a timeout error, somehow we are getting
thrown back in there, with the connection in an invalid state, but not
invalid enough to commit.

If a simple check for "SELECT connection_id()" is added, this query fails
and the whole issue is prevented. Additionally, if you put a foreign key
constraint on that b_table.a_id, then the issue is prevented, and you see
that the constraint violation is happening all over the place within the
commit() call. The connection is being used such that its state just
started after the gevent.sleep() call.

Now, there's also a very rudimentary connection pool here. That is also
part of what's going on. If I try to run without the pool, the whole script
just runs out of connections, fast, which suggests that this gevent timeout
cleans itself up very, very badly. However, SQLAlchemy's pool works a lot
like this one, so if folks here can tell me if the connection pool is doing
something bad, then that's key, because I need to make a comparable change
in SQLAlchemy's pool. Otherwise I worry our eventlet use could have big
problems under high load.
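Stripped of gevent and the MySQL drivers, the class of hazard in the full
script below is shared per-transaction state outliving the worker that
dirtied it. This driver-free sketch models only that: `FakeConnection` is
invented for illustration (a real DBAPI connection's "pending" state lives
in the wire protocol, not in a Python list), and it deliberately omits the
rollback-on-return that the real pool performs:

```python
import collections


class FakeConnection:
    """Stands in for a DBAPI connection; 'pending' models protocol state."""

    def __init__(self):
        self.pending = []

    def execute(self, sql):
        self.pending.append(sql)

    def commit(self):
        committed, self.pending = list(self.pending), []
        return committed


pool = collections.deque([FakeConnection()])

# worker A checks out a connection and starts a transaction
conn = pool.pop()
conn.execute("INSERT INTO table_a (data) VALUES ('a')")
# ...worker A is killed by a timeout here; nothing rolls back, and the
# connection goes straight back into the pool still mid-transaction
pool.append(conn)

# worker B inherits the half-finished transaction without knowing it
conn = pool.pop()
conn.execute("INSERT INTO table_b (a_id, data) VALUES (1, 'b')")
print(conn.commit())   # commits BOTH rows -- worker A's work rides along
```

Note the real `SimplePool.return_conn()` in the script below does call
`rollback()`, which is why the actual failure additionally requires the
driver's own protocol-state tracking to be desynced by the interrupted
greenlet; the sketch only shows why any leak of per-transaction state
through a pool is catastrophic.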
# -*- coding: utf-8 -*-
import gevent.monkey
gevent.monkey.patch_all()

import collections
import threading
import time
import random
import sys
import logging

logging.basicConfig()
log = logging.getLogger('foo')
log.setLevel(logging.DEBUG)

#import pymysql as dbapi
from mysql import connector as dbapi


class SimplePool(object):
    def __init__(self):
        self.checkedin = collections.deque([
            self._connect() for i in range(50)
        ])
        self.checkout_lock = threading.Lock()
        self.checkin_lock = threading.Lock()

    def _connect(self):
        return dbapi.connect(
            user="scott", passwd="tiger",
            host="localhost", db="test")

    def get(self):
        with self.checkout_lock:
            while not self.checkedin:
                time.sleep(.1)
            return self.checkedin.pop()

    def return_conn(self, conn):
        try:
            conn.rollback()
        except:
            log.error("Exception during rollback", exc_info=True)
        try:
            conn.close()
        except:
            log.error("Exception during close", exc_info=True)

        # recycle to a new connection
        conn = self._connect()
        with self.checkin_lock:
            self.checkedin.append(conn)


def verify_connection_id(conn):
    cursor = conn.cursor()
    try:
        cursor.execute("select connection_id()")
        row = cursor.fetchone()
        return row[0]
    except:
        return None
    finally:
        cursor.close()


def execute_sql(conn, sql, params=()):
    cursor = conn.cursor()
    cursor.execute(sql, params)
    lastrowid = cursor.lastrowid
    cursor.close()
    return lastrowid


pool = SimplePool()

# SELECT * FROM table_b WHERE a_id not in
# (SELECT id FROM table_a) ORDER BY a_id DESC;
PREPARE_SQL = [
    "DROP TABLE IF EXISTS table_b",
    "DROP TABLE IF EXISTS table_a",
    """CREATE TABLE table_a (
        id INT NOT NULL AUTO_INCREMENT,
        data VARCHAR (256) NOT NULL,
        PRIMARY KEY (id)
    ) engine='InnoDB'""",
    """CREATE TABLE table_b (
        id INT NOT NULL AUTO_INCREMENT,
        a_id INT NOT NULL,
        data VARCHAR (256) NOT NULL,
        -- uncomment this to illustrate where the driver is attempting
        -- to INSERT the row during ROLLBACK
        -- FOREIGN KEY (a_id) REFERENCES table_a(id),
        PRIMARY KEY (id)
    ) engine='InnoDB'
    """]

connection = pool.get()
for sql in PREPARE_SQL:
    execute_sql(connection, sql)
connection.commit()
pool.return_conn(connection)

print("Table prepared...")


def transaction_kill_worker():
    while True:
        try:
            connection = None
            with gevent.Timeout(0.1):
                connection = pool.get()
                rowid = execute_sql(
                    connection,
                    "INSERT INTO table_a (data) VALUES (%s)", ("a",))

                gevent.sleep(random.random() * 0.2)
                try:
                    execute_sql(
                        connection,
                        "INSERT INTO table_b (a_id, data) VALUES (%s, %s)",
                        (rowid, "b",))

                    # this version prevents the commit from
                    # proceeding on a bad connection
                    # if verify_connection_id(connection):
                    #     connection.commit()

                    # this version does not.  It will commit the
                    # row for table_b without the table_a being present.
                    connection.commit()

                    pool.return_conn(connection)
                except Exception:
                    connection.rollback()
                    pool.return_conn(connection)

            sys.stdout.write("$")
        except gevent.Timeout:
            # try to return the connection anyway
            if connection is not None:
                pool.return_conn(connection)
            sys.stdout.write("#")
        except Exception:
            # logger.exception(e)
            sys.stdout.write("@")
        else:
            sys.stdout.write(".")
        finally:
            if connection is not None:
                pool.return_conn(connection)


def main():
    for i in range(50):
        gevent.spawn(transaction_kill_worker)
    gevent.sleep(3)
    while True:
        gevent.sleep(5)

if __name__ == "__main__":
    main()

From lohitv at gwmail.gwu.edu Fri Dec 5 22:53:23 2014 From: lohitv at gwmail.gwu.edu (Lohit Valleru) Date: Fri, 5 Dec 2014 17:53:23 -0500 Subject: [openstack-dev] [[Openstack-dev] [Ironic] Ironic-conductor fails to start - "AttributeError '_keepalive_evt'" In-Reply-To: References: Message-ID: I apologize. I was not sure about where to post the errors. I will post to the general list from next time. Thank you, Lohit On Friday, December 5, 2014, Devananda van der Veen wrote: > Hi Lohit, > > In the future, please do not cross-post or copy-and-paste usage questions > on the development list. Since you posted this question on the general list > (*) -- which is exactly where you should post it -- I will respond there.
> > Regards,
> Devananda
>
> (*)
> http://lists.openstack.org/pipermail/openstack/2014-December/010698.html
>
> On Fri Dec 05 2014 at 1:15:44 PM Lohit Valleru > wrote:
>> Hello All,
>>
>> I am trying to deploy bare-metal nodes using openstack-ironic. It is a
>> 2-node architecture with controller/keystone/mysql on a virtual machine,
>> and cinder/compute/nova network on a physical machine on a CentOS 7
>> environment.
>>
>> openstack-ironic-common-2014.2-2.el7.centos.noarch
>> openstack-ironic-api-2014.2-2.el7.centos.noarch
>> openstack-ironic-conductor-2014.2-2.el7.centos.noarch
>>
>> I have followed this document,
>> http://docs.openstack.org/developer/ironic/deploy/install-guide.html#ipmi-support
>> and installed ironic. But when I start ironic-conductor, I get the below
>> error:
>>
>> ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service
>> ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 ERROR
>> ironic.common.service [-] Service error occurred when cleaning up the RPC
>> manager.
Error: 'ConductorManager' object has no attribute '_keepalive_evt'
>> ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service Traceback (most recent call last):
>> ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service File
>> "/usr/lib/python2.7/site-packages/ironic/common/service.py", line 91, in
>> stop
>> ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service self.manager.del_host()
>> ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service File
>> "/usr/lib/python2.7/site-packages/ironic/conductor/manager.py", line 235,
>> in del_host
>> ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service self._keepalive_evt.set()
>> hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service AttributeError: 'ConductorManager' object has no
>> attribute '_keepalive_evt'
>> hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service
>> hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 INFO
>> ironic.common.service [-] Stopped RPC server for service
>> ironic.conductor_manager on host hc004.
>>
>> A look at the source code tells me that it is something related to the RPC
>> service being started/stopped.
>>
>> Also, I cannot debug this more as I do not see any logs being created
>> with respect to ironic.
>> Do I have to explicitly enable the logging properties in ironic.conf, or
>> are they expected to be working by default?
>>
>> Here is the configuration from ironic.conf
>>
>> #############################
>>
>> [DEFAULT]
>> verbose=true
>> rabbit_host=172.18.246.104
>> auth_strategy=keystone
>> debug=true
>>
>> [keystone_authtoken]
>> auth_host=172.18.246.104
>> auth_uri=http://172.18.246.104:5000/v2.0
>> admin_user=ironic
>> admin_password=xxxx
>> admin_tenant_name=service
>>
>> [database]
>> connection = mysql://ironic:xxxxx at 172.18.246.104/ironic?charset=utf8
>>
>> [glance]
>> glance_host=172.18.246.104
>>
>> #############################
>>
>> I understand that I did not give the neutron URL as required by the
>> documentation. The reason: I have architecture limitations that prevent
>> installing neutron networking, and would like to experiment whether
>> nova-network and a dhcp pxe server will serve the purpose, although I
>> highly doubt that.
>>
>> However, I wish to know if the above issue is in any way related to the
>> non-existent neutron network, or if it is related to something else.
>>
>> Please do let me know.
>>
>> Thank you,
>>
>> Lohit
>> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From armamig at gmail.com Fri Dec 5 23:04:15 2014 From: armamig at gmail.com (Armando M.) Date: Fri, 5 Dec 2014 15:04:15 -0800 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition Message-ID: Hi folks, For a few weeks now the Neutron team has worked tirelessly on [1]. This initiative stems from the fact that as the project matures, evolution of processes and contribution guidelines need to evolve with it. This is to ensure that the project can keep on thriving in order to meet the needs of an ever growing community.
The effort of documenting intentions, and fleshing out the various details of the proposal, is about to reach an end, and we'll soon kick the tires to put the proposal into practice. Since the spec has grown pretty big, I'll try to capture the tl;dr below. If you have any comments, please do not hesitate to raise them here and/or reach out to us.

tl;dr >>>

From the Kilo release, we'll initiate a set of steps to change the following areas:

- Code structure: every plugin or driver that exists or wants to exist as part of the Neutron project is decomposed into a slim vendor integration (which lives in the Neutron repo), plus a bulkier vendor library (which lives in an independent publicly available repo);
- Contribution process: this extends to the following aspects:
  - Design and Development: the process is largely unchanged for the part that pertains to the vendor integration; the maintainer team is fully self-governed for the design and development of the vendor library;
  - Testing and Continuous Integration: maintainers will be required to support their vendor integration with 3rd party CI testing; the requirements for 3rd party CI testing are largely unchanged;
  - Defect management: the process is largely unchanged; issues affecting the vendor library can be tracked with whichever tool/process the maintainer sees fit. In cases where vendor library fixes need to be reflected in the vendor integration, the usual OpenStack defect management applies.
  - Documentation: there will be some changes to the way plugins and drivers are documented, with the intention of promoting discoverability of the integrated solutions.
- Adoption and transition plan: we strongly advise maintainers to stay abreast of the developments of this effort, as their code, their CI, etc. will be affected. The core team will provide guidelines and support throughout this cycle to ensure a smooth transition.

To learn more, please refer to [1].
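As a purely illustrative sketch of the intended shape of the split (every name here is invented, not taken from the spec): the in-tree vendor integration reduces to a thin shim that delegates to the out-of-tree vendor library.

```python
# Hypothetical sketch of the "slim vendor integration + vendor library"
# split.  AcmeBackendClient plays the role of the out-of-tree vendor
# library (it would live in its own repo/package); AcmeMechanismDriver
# plays the role of the slim in-tree vendor integration.  Neither class
# exists in Neutron; both are invented for illustration.

# --- stands in for the out-of-tree vendor library ---
class AcmeBackendClient(object):
    def create_network(self, net_id):
        # A real library would talk to the vendor's controller here.
        return {"id": net_id, "status": "ACTIVE"}


# --- the slim in-tree vendor integration ---
class AcmeMechanismDriver(object):
    """Thin shim: translate driver calls into vendor-library calls,
    keeping vendor-specific logic out of the Neutron repo."""

    def __init__(self, client=None):
        self.client = client or AcmeBackendClient()

    def create_network_postcommit(self, context):
        return self.client.create_network(context["id"])


driver = AcmeMechanismDriver()
result = driver.create_network_postcommit({"id": "net-1"})
assert result == {"id": "net-1", "status": "ACTIVE"}
```

The point of the shape is that the shim carries no backend logic of its own, so fixes to vendor behavior land in the vendor library on the maintainer's own schedule, while the in-tree surface stays small and stable.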
Many thanks, Armando [1] https://review.openstack.org/#/c/134680 -------------- next part -------------- An HTML attachment was scrubbed... URL: From armamig at gmail.com Fri Dec 5 23:07:20 2014 From: armamig at gmail.com (Armando M.) Date: Fri, 5 Dec 2014 15:07:20 -0800 Subject: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository? In-Reply-To: <5481A4B1.3010405@redhat.com> References: <5481A4B1.3010405@redhat.com> Message-ID: For anyone who had an interest in following this thread, they might want to have a look at [1], and [2] (which is the tl;dr version of [1]). HTH Armando [1] https://review.openstack.org/#/c/134680 [2] http://lists.openstack.org/pipermail/openstack-dev/2014-December/052346.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From devananda.vdv at gmail.com Fri Dec 5 23:10:33 2014 From: devananda.vdv at gmail.com (Devananda van der Veen) Date: Fri, 05 Dec 2014 23:10:33 +0000 Subject: [openstack-dev] [Ironic] reminder: alternate meeting time Message-ID: This is a friendly reminder that our weekly IRC meetings have begun alternating times every week to try to accommodate more of our contributors. Next week's meeting will be at 0500 UTC Tuesday (9pm PST Monday) in the #openstack-meeting-3 channel. Details, as always, are on the wiki [0]. Regards, Devananda [0] https://wiki.openstack.org/wiki/Meetings/Ironic -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbayer at redhat.com Fri Dec 5 23:13:19 2014 From: mbayer at redhat.com (Mike Bayer) Date: Fri, 5 Dec 2014 18:13:19 -0500 Subject: [openstack-dev] Mike Bayer 20141205 Message-ID: <01D0284A-D81C-402E-893D-CB13DED53AAA@redhat.com> 1. Alembic release - I worked through some regressions introduced by Alembic 0.7.0 and the subsequent 0.7.1 with the Neutron folks.
This started on Monday with https://review.openstack.org/#/c/137989/, and by Wednesday I had identified enough small regressions in 0.7.0 that I had to put 0.7.1 out, so that review got expedited with https://review.openstack.org/#/c/138998/ following from Neutron devs to continue fixing. Version 0.7.1 includes the foreign key autogenerate support first proposed by Ann Kamyshnikova. Changelog at http://alembic.readthedocs.org/en/latest/changelog.html#change-0.7.1. 2. MySQL driver stuff. I have a SQLAlchemy user who is running some kind of heavy load with gevent and PyMySQL. While this user is not openstack-specific, the thing he is doing is a lot like what we might be doing if and when we move our MySQL drivers to MySQL-connector-Python, which is compatible with eventlet in that it is pure Python and can be monkeypatched. The issue observed by this user applies to both PyMySQL and MySQL-connector, and I can reproduce it *without* using SQLAlchemy, though it does use a very makeshift connection pool designed to approximate what SQLAlchemy's does. The issue is scary because it illustrates Python code that should have been killed being invoked on a database connection that should have been dead, calling commit(), and then actually *succeeding* in committing only *part* of the data. This is not an issue that impacts Openstack right now but if the same thing applies to eventlet, then this would definitely be something we'd need to worry about if we start using MySQL-connector in a high load scenario (which has been the plan) so I've forwarded my findings onto openstack-dev to see if anyone can help me understand it. The intro + test case for this issue starts at http://lists.openstack.org/pipermail/openstack-dev/2014-December/052344.html.
I spent monday and tuesday on the buildout for this, and that can be seen and reviewed here: https://review.openstack.org/#/c/138215/ As of today I'm still nursing it through CI, as even with projects using the "legacy" APIs, they are still finding lots of little silly things that I keep having to fix (people calling the old EngineFacade with arguments I didn't expect, people importing from oslo.db in an order I did not expect, etc). While these consuming projects could be fixed to not have these little issues, for now I am trying to push everything to work as identically as possible to how it was earlier, when the new API is not explicitly invoked. I'll be continuing to get this to pass all tempest runs through next week. For enginefacade I'd like the folks from the call to take a look, and in particular if Matthew Booth wants to look into it, this is ready to start being used for prototyping Nova with it. 4. Connectivity stuff - today I worked a bunch with Viktor Sergeyev who has been trying to fix an issue with MySQL OperationalErrors that are raised when the database is shut off entirely; in oslo.db we have logic that wraps all exceptions unconditionally, including that it identifies disconnect exceptions. In the case where the DB throws a disconnect, and we loop around to "retry" this query in order to get it to reconnect, then that reconnect continues to fail, the second run doesn't get wrapped. So today I've fixed both the upstream issue for SQLAlchemy 1.0, and also made a series of adjustments to oslo.db to accommodate SQLAlchemy 1.0's system correctly as well as to work around the issue when SQLAlchemy < 1.0 is present. That's a three-series of patches that are unsurprisingly going to take some nursing to get through the gate, so I'll be continuing with that next week. This series starts at https://review.openstack.org/139725 https://review.openstack.org/139733 https://review.openstack.org/139738 . 5. SQLA 1.0 stuff.
- getting SQLAlchemy 1.0 close to release is becoming critical so I've been moving around issues and priorities to expedite this. There's many stability enhancements oslo.db would benefit from as well as some major performance-related features that I've been planning all along to introduce to projects. 1.0 is very full of lots of changes that aren't really being tested outside of my own CI, so getting something out the door on it is key, otherwise it will just be too different from 0.9 in order for people to have smooth upgrades. I do run SQLA 1.0 in CI against a subset of Neutron, Nova, Keystone and Oslo tests so we should be in OK shape, but there is still a lot to go. Work completed so far can be seen at http://docs.sqlalchemy.org/en/latest/changelog/migration_10.html. From mbayer at redhat.com Sat Dec 6 00:13:48 2014 From: mbayer at redhat.com (Mike Bayer) Date: Fri, 5 Dec 2014 19:13:48 -0500 Subject: [openstack-dev] Mike Bayer 20141205 In-Reply-To: <01D0284A-D81C-402E-893D-CB13DED53AAA@redhat.com> References: <01D0284A-D81C-402E-893D-CB13DED53AAA@redhat.com> Message-ID: <2B8597E2-CE82-4512-BEA8-FF38B5847CAB@redhat.com> this was sent to the wrong list! please ignore. (or if you find it interesting, then great!) > On Dec 5, 2014, at 6:13 PM, Mike Bayer wrote: > > 1. Alembic release - I worked through some regressions introduced by Alembic 0.7.0 and the subsequent 0.7.1 with the Neutron folks. This started on Monday with https://review.openstack.org/#/c/137989/, and by Wednesday I had identified enough small regressions in 0.7.0 that I had to put 0.7.1 out, so that review got expedited with https://review.openstack.org/#/c/138998/ following from Neutron devs to continue fixing. Version 0.7.1 includes the foreign key autogenerate support first proposed by Ann Kamyshnikova. Changelog at http://alembic.readthedocs.org/en/latest/changelog.html#change-0.7.1. > > 2. MySQL driver stuff.
I have a SQLAlchemy user who is running some kind of heavy load with gevent and PyMySQL. While this user is not openstack-specific, the thing he is doing is a lot like what we might be doing if and when we move our MySQL drivers to MySQL-connector-Python, which is compatible with eventlet in that it is pure Python and can be monkeypatched. The issue observed by this user applies to both PyMySQL and MySQL-connector, and I can reproduce it *without* using SQLAlchemy, though it does use a very makeshift connection pool designed to approximate what SQLAlchemy's does. The issue is scary because it illustrates Python code that should have been killed being invoked on a database connection that should have been dead, calling commit(), and then actually *succeeding* in committing only *part* of the data. This is not an issue that impacts Openstack right now but if the same thing applies to eventlet, then this would definitely be something we'd need to worry about if we start using MySQL-connector in a high load scenario (which has been the plan) so I've forwarded my findings onto openstack-dev to see if anyone can help me understand it. The intro + test case for this issue starts at http://lists.openstack.org/pipermail/openstack-dev/2014-December/052344.html. > > 3. enginefacade - The engine facade as I described in https://review.openstack.org/#/c/125181/, which we also talked about on the Nova compute call this week, is now built! I spent monday and tuesday on the buildout for this, and that can be seen and reviewed here: https://review.openstack.org/#/c/138215/ As of today I'm still nursing it through CI, as even with projects using the "legacy" APIs, they are still finding lots of little silly things that I keep having to fix (people calling the old EngineFacade with arguments I didn't expect, people importing from oslo.db in an order I did not expect, etc).
While these consuming projects could be fixed to not have these little issues, for now I am trying to push everything to work as identically as possible to how it was earlier, when the new API is not explicitly invoked. I'll be continuing to get this to pass all tempest runs through next week. > > For enginefacade I'd like the folks from the call to take a look, and in particular if Matthew Booth wants to look into it, this is ready to start being used for prototyping Nova with it. > > 4. Connectivity stuff - today I worked a bunch with Viktor Sergeyev who has been trying to fix an issue with MySQL OperationalErrors that are raised when the database is shut off entirely; in oslo.db we have logic that wraps all exceptions unconditionally, including that it identifies disconnect exceptions. In the case where the DB throws a disconnect, and we loop around to "retry" this query in order to get it to reconnect, then that reconnect continues to fail, the second run doesn't get wrapped. So today I've fixed both the upstream issue for SQLAlchemy 1.0, and also made a series of adjustments to oslo.db to accommodate SQLAlchemy 1.0's system correctly as well as to work around the issue when SQLAlchemy < 1.0 is present. That's a three-series of patches that are unsurprisingly going to take some nursing to get through the gate, so I'll be continuing with that next week. This series starts at https://review.openstack.org/139725 https://review.openstack.org/139733 https://review.openstack.org/139738 . > > 5. SQLA 1.0 stuff. - getting SQLAlchemy 1.0 close to release is becoming critical so I've been moving around issues and priorities to expedite this. There's many stability enhancements oslo.db would benefit from as well as some major performance-related features that I've been planning all along to introduce to projects.
1.0 is very full of lots of changes that aren't really being tested outside of my own CI, so getting something out the door on it is key, otherwise it will just be too different from 0.9 in order for people to have smooth upgrades. I do run SQLA 1.0 in CI against a subset of Neutron, Nova, Keystone and Oslo tests so we should be in OK shape, but there is still a lot to go. Work completed so far can be seen at http://docs.sqlalchemy.org/en/latest/changelog/migration_10.html. > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From joehuang at huawei.com Sat Dec 6 01:47:13 2014 From: joehuang at huawei.com (joehuang) Date: Sat, 6 Dec 2014 01:47:13 +0000 Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward In-Reply-To: References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> Hello, Davanum, Thanks for your reply. Cells can't meet the demand for the use cases and requirements described in the mail.
For NFV cloud, it's in nature the cloud will be distributed but inter-connected in many data centers. > > 2. Requirements > a). The operator has a multi-site cloud; each site can use one or multiple vendors' OpenStack distributions. > b). Each site with its own requirements and upgrade schedule while maintaining standard OpenStack API > c). The multi-site cloud must provide unified resource management with a global open API exposed, for example create a virtual DC across multiple physical DCs with seamless experience. > Although a proprietary orchestration layer could be developed for the multi-site cloud, it would be a proprietary API on the northbound interface. The cloud operators want an ecosystem-friendly global open API for the multi-site cloud for global access. Best Regards Chaoyi Huang ( joehuang ) ________________________________________ From: Davanum Srinivas [davanum at gmail.com] Sent: 05 December 2014 21:56 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward Joe, Related to this topic, At the summit, there was a session on Cells v2 and following up on that there have been BP(s) filed in Nova championed by Andrew - https://review.openstack.org/#/q/owner:%22Andrew+Laski%22+status:open,n,z thanks, dims On Fri, Dec 5, 2014 at 8:23 AM, joehuang wrote: > Dear all & TC & PTL, > > In the 40 minutes cross-project summit session "Approaches for scaling out"[1], almost 100 people attended the meeting, and the conclusion is that cells cannot cover the use cases and requirements which the OpenStack cascading solution[2] aims to address; the background, including use cases and requirements, is also described in the mail. > > After the summit, we just ported the PoC[3] source code from IceHouse-based to Juno-based.
> > Now, let's move forward: > > The major task is to introduce new driver/agent to existing core projects, for the core idea of cascading is to add Nova as the hypervisor backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, Ceilometer as the store of Ceilometer. > a). Need cross-program decision to run cascading as an incubated project mode or register BP separately in each involved project. CI for cascading is quite different from the traditional test environment; at least 3 OpenStack instances are required for cross-OpenStack networking test cases. > b). Volunteer as the cross project coordinator. > c). Volunteers for implementation and CI. > > Background of OpenStack cascading vs cells: > > 1. Use cases > a). Vodafone use case[4](OpenStack summit speech video from 9'02" to 12'30" ), establishing globally addressable tenants which result in efficient services deployment. > b). Telefonica use case[5], create a virtual DC( data center) across multiple physical DCs with seamless experience. > c). ETSI NFV use cases[6], especially use case #1, #2, #3, #5, #6, 8#. For NFV cloud, it's in nature the cloud will be distributed but inter-connected in many data centers. > > 2. Requirements > a). The operator has a multi-site cloud; each site can use one or multiple vendors' OpenStack distributions. > b). Each site with its own requirements and upgrade schedule while maintaining standard OpenStack API > c). The multi-site cloud must provide unified resource management with a global open API exposed, for example create a virtual DC across multiple physical DCs with seamless experience. > Although a proprietary orchestration layer could be developed for the multi-site cloud, it would be a proprietary API on the northbound interface. The cloud operators want an ecosystem-friendly global open API for the multi-site cloud for global access. > > 3.
What problems does cascading solve that cells doesn't cover: > OpenStack cascading solution is "OpenStack orchestrate OpenStacks". The core architecture idea of OpenStack cascading is to add Nova as the hypervisor backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, Ceilometer as the store of Ceilometer. Thus OpenStack is able to orchestrate OpenStacks (from different vendor's distribution, or different version ) which may be located in different sites (or data centers ) through the OpenStack API, meanwhile the cloud still exposes the OpenStack API as the north-bound API in the cloud level. > > 4. Why cells can't do that: > Cells provide the scale out capability to Nova, but from the OpenStack as a whole point of view, it's still working like one OpenStack instance. > a). If Cells is deployed with shared Cinder, Neutron, Glance, Ceilometer: this approach provides the multi-site cloud with one unified API endpoint and unified resource management, but consolidation of multi-vendor/multi-version OpenStack instances across one or more data centers cannot be fulfilled. > b). Each site installed one child cell and accompanied standalone Cinder, Neutron(or Nova-network), Glance, Ceilometer. This approach makes multi-vendor/multi-version OpenStack distribution co-existence in multi-site seem to be feasible, but the requirement for unified API endpoint and unified resource management cannot be fulfilled. Cross Neutron networking automation is also missing, which should otherwise be done manually or use a proprietary orchestration layer. > > For more information about cascading and cells, please refer to the discussion thread before Paris Summit [7].
> > [1]Approaches for scaling out: https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack > [2]OpenStack cascading solution: https://wiki.openstack.org/wiki/OpenStack_cascading_solution > [3]Cascading PoC: https://github.com/stackforge/tricircle > [4]Vodafone use case (9'02" to 12'30"): https://www.youtube.com/watch?v=-KOJYvhmxQI > [5]Telefonica use case: http://www.telefonica.com/en/descargas/mwc/present_20140224.pdf > [6]ETSI NFV use cases: http://www.etsi.org/deliver/etsi_gs/nfv/001_099/001/01.01.01_60/gs_nfv001v010101p.pdf > [7]Cascading thread before design summit: http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-td54115.html > > Best Regards > Chaoyi Huang (joehuang) > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Neil.Jerram at metaswitch.com Sat Dec 6 02:48:17 2014 From: Neil.Jerram at metaswitch.com (Neil Jerram) Date: Sat, 6 Dec 2014 02:48:17 +0000 Subject: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup? In-Reply-To: (Ian Wells's message of "Thu, 4 Dec 2014 11:34:13 -0800") References: <87ppc090ne.fsf@metaswitch.com> <87d27z8kxj.fsf@metaswitch.com> Message-ID: <87a931trwu.fsf@metaswitch.com> Ian Wells writes: > On 4 December 2014 at 08:00, Neil Jerram > wrote: > > Kevin Benton writes: > I was actually floating a slightly more radical option than that: > the > idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does > absolutely _nothing_, not even create the TAP device. 
> > Nova always does something, and that something amounts to 'attaches > the VM to where it believes the endpoint to be'. Effectively you > should view the VIF type as the form that's decided on during > negotiation between Neutron and Nova - Neutron says 'I will do this > much and you have to take it from there'. (In fact, I would prefer > that it was *more* of a negotiation, in the sense that the hypervisor > driver had a say to Neutron of what VIF types it supported and > preferred, and Neutron could choose from a selection, but I don't > think it adds much value at the moment and I didn't want to propose a > change just for the sake of it.) I think you're just proposing that > the hypervisor driver should do less of the grunt work of connection. > > Also, libvirt is not the only hypervisor driver and I've found it > interesting to nose through the others for background reading, even if > you're not using them much. > > For example, suppose someone came along and wanted to implement a > new > OVS-like networking infrastructure? In principle could they do > that > without having to enhance the Nova VIF driver code? I think at the > moment they couldn't, but that they would be able to if > VIF_TYPE_NOOP > (or possibly VIF_TYPE_TAP) was already in place. In principle I > think > it would then be possible for the new implementation to specify > VIF_TYPE_NOOP to Nova, and to provide a Neutron agent that does > the kind > of configuration and vSwitch plugging that you've described above. > > > At the moment, the rule is that *if* you create a new type of > infrastructure then *at that point* you create your new VIF plugging > type to support it - vhostuser being a fine example, having been > rejected on the grounds that it was, at the end of Juno, speculative. > I'm not sure I particularly like this approach but that's how things > are at the moment - largely down to not wanting to add code that isn't > used and therefore tested.
> > None of this is criticism of your proposal, which sounds reasonable; I > was just trying to provide a bit of context. Many thanks for your explanations; I think I'm understanding this more fully now. For example, I now see that, when using libvirt, Nova has to generate config that describes all aspects of the VM to launch, including how the VNIC is implemented and how it's bound to networking on the host. Also different hypervisors, or layers like libvirt, may go to different lengths as regards how far they connect the VNIC to some form of networking on the host, and I can see that Nova would want to normalize that, i.e. to ensure that a predictable level of connectivity has always been achieved, regardless of hypervisor, by the time that Nova hands over to someone else such as Neutron. Therefore I see now that Nova _must_ be involved to some extent in VIF plugging, and hence that VIF_TYPE_NOOP doesn't fly. For a minimal, generic implementation of an unbridged TAP interface, then, we're back to VIF_TYPE_TAP as I've proposed in https://review.openstack.org/#/c/130732/. I've just revised and reuploaded this, based on the insight provided by this ML thread, and hope people will take a look. Many thanks, Neil From Neil.Jerram at metaswitch.com Sat Dec 6 02:51:16 2014 From: Neil.Jerram at metaswitch.com (Neil Jerram) Date: Sat, 6 Dec 2014 02:51:16 +0000 Subject: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup? In-Reply-To: (Kevin Benton's message of "Fri, 5 Dec 2014 00:23:09 -0800") References: <87ppc090ne.fsf@metaswitch.com> <87d27z8kxj.fsf@metaswitch.com> Message-ID: <8761dptrrv.fsf@metaswitch.com> Kevin Benton writes: > I see the difference now. > The main concern I see with the NOOP type is that creating the virtual > interface could require different logic for certain hypervisors. In > that case Neutron would now have to know things about nova and to me > it seems like that's slightly too far the other direction. 
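For concreteness, the piece of "grunt work" being debated — Nova itself creating the TAP device, which under a VIF_TYPE_TAP-style approach would be the point where it hands over to a Neutron agent — amounts to roughly the sketch below. This is illustrative only, not Nova's actual VIF driver code; the helper names are invented, and actually creating the device needs `/dev/net/tun` and CAP_NET_ADMIN. The ioctl constants are the usual Linux tun/tap ones.

```python
import fcntl
import os
import struct

# Constants from <linux/if_tun.h>
TUNSETIFF = 0x400454ca
IFF_TAP = 0x0002    # TAP (layer 2) rather than TUN (layer 3)
IFF_NO_PI = 0x1000  # don't prepend packet-information headers


def build_ifreq(name, flags=IFF_TAP | IFF_NO_PI):
    """Pack the struct ifreq that TUNSETIFF expects (name is max 15 chars)."""
    if len(name) > 15:
        raise ValueError('interface name too long: %r' % name)
    return struct.pack('16sH', name.encode('ascii'), flags)


def create_tap(name):
    """Create a TAP device; needs CAP_NET_ADMIN and /dev/net/tun.

    The device persists only while the returned fd stays open (a real
    implementation would mark it persistent with TUNSETPERSIST or hand
    the fd to the hypervisor).
    """
    fd = os.open('/dev/net/tun', os.O_RDWR)
    fcntl.ioctl(fd, TUNSETIFF, build_ifreq(name))
    return fd
```

Everything on the far side of that device — bridging, vSwitch plugging, flow configuration — would then be a Neutron agent's responsibility.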
Many thanks, Kevin. I see this now too, as I've just written more fully in my response to Ian. Based on your and others' insight, I've revised and reuploaded my VIF_TYPE_TAP spec, and hope it's a lot clearer now. Regards, Neil From pradip.interra at gmail.com Sat Dec 6 04:06:24 2014 From: pradip.interra at gmail.com (Pradip Mukhopadhyay) Date: Sat, 6 Dec 2014 09:36:24 +0530 Subject: [openstack-dev] [cinder] Code pointer for processing cinder backend config Message-ID: Hello, Suppose I have a backend specification in cinder.conf as follows:

[nfs_pradipm]
volume_backend_name=nfs_pradipm
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname=IP
netapp_server_port=80
netapp_storage_protocol=nfs
netapp_storage_family=ontap_cluster
netapp_login=admin
netapp_password=password
netapp_vserver=my_vs1
nfs_shares_config=/home/ubuntu/nfs.shares

Where is this config info parsed out in the cinder code? Thanks, Pradip -------------- next part -------------- An HTML attachment was scrubbed... URL: From ijw.ubuntu at cack.org.uk Sat Dec 6 05:03:05 2014 From: ijw.ubuntu at cack.org.uk (Ian Wells) Date: Fri, 5 Dec 2014 21:03:05 -0800 Subject: [openstack-dev] [Neutron] Edge-VPN and Edge-Id In-Reply-To: References: <119A1974-380C-461E-9937-65B9763E39E6@brocade.com> Message-ID: I have no problem with standardising the API, and I would suggest that a service that provided nothing but endpoints could be begun as the next phase of 'advanced services' broken out projects to standardise that API. I just don't want it in Neutron itself. On 5 December 2014 at 00:33, Erik Moe wrote: > > > One reason for trying to get a more complete API into Neutron is to have > a standardized API. So users know what to expect and for providers to have > something to comply to. Do you suggest we bring this standardization work > to some other forum, OPNFV for example? Neutron provides low level hooks > and the rest is defined elsewhere.
Maybe this could work, but there would > probably be other issues if the actual implementation is not on the edge or > outside Neutron. > > > > /Erik > > > > > > *From:* Ian Wells [mailto:ijw.ubuntu at cack.org.uk] > *Sent:* den 4 december 2014 20:19 > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id > > > > On 1 December 2014 at 21:26, Mohammad Hanif wrote: > I hope we all understand how edge VPN works and what interactions are > introduced as part of this spec. I see references to neutron-network > mapping to the tunnel which is not at all the case and the edge-VPN spec > doesn't propose it. At a very high level, there are two main concepts: > > 1. Creation of a per tenant VPN "service" on a PE (physical router) > which has connectivity to other PEs using some tunnel (not known to > tenant or tenant-facing). An attachment circuit for this VPN service is > also created which carries a "list" of tenant networks (the list is > initially empty). > 2. Tenant "updates" the list of tenant networks in the attachment > circuit which essentially allows the VPN "service" to add or remove the > network from being part of that VPN. > > A service plugin implements what is described in (1) and provides an API > which is called by what is described in (2). The Neutron driver only > "updates" the attachment circuit using an API (attachment circuit is also > part of the service plugin's data model). I don't see where we are > introducing large data model changes to Neutron? > > > > Well, you have attachment types, tunnels, and so on - these are all > objects with data models, and your spec is on Neutron so I'm assuming you > plan on putting them into the Neutron database - where they are, for ever > more, a Neutron maintenance overhead both on the dev side and also on the > ops side, specifically at upgrade.
> > > > How else one introduces a network service in OpenStack if it is not > through a service plugin? > > > > Again, I've missed something here, so can you define 'service plugin' for > me? How similar is it to a Neutron extension - which we agreed at the > summit we should take pains to avoid, per Salvatore's session? > > And the answer to that is to stop talking about plugins or trying to > integrate this into the Neutron API or the Neutron DB, and make it an > independent service with a small and well defined interaction with Neutron, > which is what the edge-id proposal suggests. If we do incorporate it into > Neutron then there are probably 90% of Openstack users and developers who > don't want or need it but care a great deal if it breaks the tests. If it > isn't in Neutron they simply don't install it. > > > > As we can see, tenant needs to communicate (explicit or otherwise) to > add/remove its networks to/from the VPN. There has to be a channel and the > APIs to achieve this. > > > > Agreed. I'm suggesting it should be a separate service endpoint. > -- > > Ian. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Sat Dec 6 05:38:52 2014 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 6 Dec 2014 16:38:52 +1100 Subject: [openstack-dev] [nova] Fixing the console.log grows forever bug. Message-ID: <20141206053852.GB11931@thor.bakeyournoodle.com> Hi All, In the most recent team meeting we briefly discussed: [1] where the console.log grows indefinitely, eventually causing guest stalls. I mentioned that I was working on a spec to fix this issue. 
My original plan was fairly similar to [2], in that we'd switch libvirt/qemu to using a unix domain socket and write a simple helper to read from that socket and write to disk. That helper would close and reopen the on-disk file upon receiving a HUP (so logrotate just works). Life would be good, and we could all move on. However I was encouraged to investigate fixing this in qemu, such that qemu could process the HUP and make life better for all. This is certainly doable and I'm happy[3] to do this work. I've floated the idea past qemu-devel and they seem okay with the idea. My main concern is the lag, and supporting qemu/libvirt versions that can't handle this option. For the sake of discussion I'll lay out my best guess right now on fixing this in qemu. qemu 2.2.0 /should/ release this year (the ETA is 2014-12-09[4]), so the fix I'm proposing would be available in qemu 2.3.0, which I think will be available in June/July 2015. So we'd be into 'L' development before this fix is available, possibly 'M' before the community distros (Fedora and Ubuntu)[5] include it, and almost certainly longer for Enterprise distros. Along with the qemu development I expect there to be some libvirt development as well, but right now I don't think that's critical to the feature or this discussion. So if that timeline is approximately correct:

- Can we wait this long to fix the bug, as opposed to having it squashed in Kilo?
- What do we do in nova for the next ~12 months while we know there isn't a qemu to fix this?
- Then once there is a qemu that fixes the issue, do we just say 'thou must use qemu 2.3.0' or would nova still need to support old and new qemu's?

[1] https://bugs.launchpad.net/nova/+bug/832507
[2] https://review.openstack.org/#/c/80865/
[3] For some value of happy ;P
[4] From http://wiki.qemu.org/Planning/2.2
[5] Debian and Gentoo are a little harder to quantify in this scenario but no less important.

Yours Tony.
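The "simple helper" sketched above — read from a unix domain socket, write to disk, and reopen the on-disk file on HUP so logrotate just works — could look something like the following. This is an illustrative sketch only, not the proposed implementation; the paths and the qemu `-chardev` wiring mentioned in the comments are assumptions.

```python
import os
import signal
import socket


class ConsoleLogWriter(object):
    """Copy console output from a stream socket to a log file.

    On SIGHUP the on-disk file is closed and reopened, so a plain
    logrotate rename-and-HUP cycle works without losing output.
    """

    def __init__(self, log_path):
        self.log_path = log_path
        self._log = open(log_path, 'ab')
        self._reopen_requested = False

    def request_reopen(self, signum=None, frame=None):
        # Safe to call from a signal handler: it only sets a flag.
        self._reopen_requested = True

    def _maybe_reopen(self):
        if self._reopen_requested:
            self._log.close()
            self._log = open(self.log_path, 'ab')
            self._reopen_requested = False

    def pump(self, conn):
        """Read from the connected socket until EOF, flushing each chunk."""
        while True:
            self._maybe_reopen()
            data = conn.recv(4096)
            if not data:
                break
            self._log.write(data)
            self._log.flush()
        self._log.close()


def main():
    # Hypothetical paths -- qemu would be pointed at the socket with
    # something like: -chardev socket,path=.../console.sock,server
    sock_path = '/var/lib/nova/instances/GUEST/console.sock'
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)
    writer = ConsoleLogWriter('/var/lib/nova/instances/GUEST/console.log')
    signal.signal(signal.SIGHUP, writer.request_reopen)
    conn, _addr = server.accept()
    writer.pump(conn)


if __name__ == '__main__':
    main()
```

Note the deliberate simplification: the reopen check runs at the top of the read loop, so a chunk already in flight when the HUP arrives still lands in the old (renamed) file — which is exactly what logrotate expects.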
PS: If any of you have a secret laundry list of things qemu should do to make life easier for nova. Put them on a wiki page so we can discuss them. PPS: If this is going to be a thing we do (write features and fixes in qemu) we're going to need a consistent plan on how we cope with that. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From tony at bakeyournoodle.com Sat Dec 6 05:45:34 2014 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 6 Dec 2014 16:45:34 +1100 Subject: [openstack-dev] Session length on wiki.openstack.org In-Reply-To: <20141205142645.GL2497@yuggoth.org> References: <20141205010348.GY84915@thor.bakeyournoodle.com> <20141205142645.GL2497@yuggoth.org> Message-ID: <20141206054534.GC11931@thor.bakeyournoodle.com> On Fri, Dec 05, 2014 at 02:26:46PM +0000, Jeremy Stanley wrote: > On 2014-12-04 18:37:48 -0700 (-0700), Carl Baldwin wrote: > > +1 I've been meaning to say something like this but never got > > around to it. Thanks for speaking up. > > https://storyboard.openstack.org/#!/story/1172753 > > I think Ryan said it might be a bug in the OpenID plug-in, but if so > he didn't put that comment in the bug. Thanks. I'll try to track that. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From jaypipes at gmail.com Sat Dec 6 17:30:05 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Sat, 06 Dec 2014 09:30:05 -0800 Subject: [openstack-dev] [nova] NUMA Cells In-Reply-To: <20141204124600.GH16269@redhat.com> References: <54801A5C.5070006@redhat.com> <20141204124600.GH16269@redhat.com> Message-ID: <54833D1D.2020304@gmail.com> On 12/04/2014 04:46 AM, Daniel P. 
Berrange wrote: > On Thu, Dec 04, 2014 at 09:25:00AM +0100, Nikola Đipanov wrote: >> On 12/04/2014 05:30 AM, Michael Still wrote: >>> Hi, >>> >>> so just having read a bunch of the libvirt driver numa code, I have a >>> concern. At first I thought it was a little thing, but I am starting >>> to think it's more of a big deal... >>> >>> We use the term "cells" to describe numa cells. However, that term has >>> a specific meaning in nova, and I worry that overloading the term is >>> confusing. >>> >>> (Yes, I know the numa people had it first, but hey). >>> >>> So, what do people think about trying to move the numa code to use >>> something like "numa cell" or "numacell" based on context? >>> >> >> Seeing that "node" is also not exactly unambiguous in this space - I am >> fine with either "numanode" or "numacell", with a slight >> preference for "numacell". >> >> A small issue will be renaming it in objects though - as this will >> require adding a new field for use in Kilo while still remaining >> backwards compatible with Juno, resulting in even more compatibility >> code (we already added some for the slightly different data format). The >> whole name is quite in context there, but we would use it like: >> >> for cell in numa_topology.cells: >> # awesome algo here with cell :( >> >> but if we were to rename it just in places where it's used to: >> >> for numacell in numa_topology.cells: >> # awesome algo here with numacell :) > > I think renaming local variables like this is really a solution > in search of a problem. It is pretty blindingly obvious the 'cell' > variable refers to a NUMA cell here, without having to spell it > out as 'numacell'. Likewise I think the object property name is > just fine as 'cell' because the context again makes it obvious > what it is referring to +1 I don't think this is a problem in the current code.
-jay From jaypipes at gmail.com Sat Dec 6 17:42:31 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Sat, 06 Dec 2014 09:42:31 -0800 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: References: Message-ID: <54834007.5070007@gmail.com> On 12/04/2014 04:05 PM, Michael Still wrote: > One of the things that happens over time is that some of our core > reviewers move on to other projects. This is a normal and healthy > thing, especially as nova continues to spin out projects into other > parts of OpenStack. > > However, it is important that our core reviewers be active, as it > keeps them up to date with the current ways we approach development in > Nova. I am therefore removing some no longer sufficiently active cores > from the nova-core group. > > I'd like to thank the following people for their contributions over the years: > > * cbehrens: Chris Behrens > * vishvananda: Vishvananda Ishaya > * dan-prince: Dan Prince > * belliott: Brian Elliott > * p-draigbrady: Padraig Brady +1 on Chris, Dan, Vish and Brian, who I believe have all moved on to new adventures. -1 on pixelbeat, since he's been active in reviews on various things AFAICT in the last 60-90 days and seems to be still a considerate reviewer in various areas. Best, -jay From apevec at gmail.com Sat Dec 6 21:38:21 2014 From: apevec at gmail.com (Alan Pevec) Date: Sat, 6 Dec 2014 22:38:21 +0100 Subject: [openstack-dev] [stable] Re: Stable check of openstack/cinder failed Message-ID: 2014-12-06 7:14 GMT+01:00 A mailing list for the OpenStack Stable Branch test reports. : > Build failed. > > - periodic-cinder-python26-icehouse http://logs.openstack.org/periodic-stable/periodic-cinder-python26-icehouse/d19db38 : FAILURE in 6m 50s > - periodic-cinder-python27-icehouse http://logs.openstack.org/periodic-stable/periodic-cinder-python27-icehouse/eeb5864 : FAILURE in 5m 48s Fixed by https://review.openstack.org/139791 - please review/approve.
Cheers, Alan From dannchoi at cisco.com Sun Dec 7 01:08:03 2014 From: dannchoi at cisco.com (Danny Choi (dannchoi)) Date: Sun, 7 Dec 2014 01:08:03 +0000 Subject: [openstack-dev] [qa] How to delete a VM which is in ERROR state? Message-ID: Hi, I have a VM which is in ERROR state.

+--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks           |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
| 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR  | -          | NOSTATE     |                    |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+

I tried in both CLI 'nova delete' and Horizon 'terminate instance'. Both accepted the delete command without any error. However, the VM never got deleted. Is there a way to remove the VM? Thanks, Danny -------------- next part -------------- An HTML attachment was scrubbed... URL: From xchenum at gmail.com Sun Dec 7 02:25:30 2014 From: xchenum at gmail.com (Xu (Simon) Chen) Date: Sat, 6 Dec 2014 21:25:30 -0500 Subject: Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state? In-Reply-To: References: Message-ID: Try "nova reset-state --active uuid". On Saturday, December 6, 2014, Danny Choi (dannchoi) wrote: > Hi, > > I have a VM which is in ERROR state. > > > +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ > > | ID | Name > | Status | Task State | Power State | Networks | > > > +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ > > | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | > cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR | - | > NOSTATE | | > > I tried in both CLI 'nova delete' and Horizon 'terminate instance'.
> Both accepted the delete command without any error. > However, the VM never got deleted. > > Is there a way to remove the VM? > > Thanks, > Danny > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at vmware.com Sun Dec 7 08:03:10 2014 From: gkotton at vmware.com (Gary Kotton) Date: Sun, 7 Dec 2014 08:03:10 +0000 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <54834007.5070007@gmail.com> References: <54834007.5070007@gmail.com> Message-ID: I agree -1 for Padraig On 12/6/14, 7:42 PM, "Jay Pipes" wrote: >On 12/04/2014 04:05 PM, Michael Still wrote: >> One of the things that happens over time is that some of our core >> reviewers move on to other projects. This is a normal and healthy >> thing, especially as nova continues to spin out projects into other >> parts of OpenStack. >> >> However, it is important that our core reviewers be active, as it >> keeps them up to date with the current ways we approach development in >> Nova. I am therefore removing some no longer sufficiently active cores >> from the nova-core group. >> >> I'd like to thank the following people for their contributions over the >>years: >> >> * cbehrens: Chris Behrens >> * vishvananda: Vishvananda Ishaya >> * dan-prince: Dan Prince >> * belliott: Brian Elliott >> * p-draigbrady: Padraig Brady > >+1 on Chris, Dan, Vish and Brian, who I believe have all moved on to new >adventures. -1 on pixelbeat, since he's been active in reviews on >various things AFAICT in the last 60-90 days and seems to be still a >considerate reviewer in various areas.
> >Best, >-jay > > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gkotton at vmware.com Sun Dec 7 08:08:14 2014 From: gkotton at vmware.com (Gary Kotton) Date: Sun, 7 Dec 2014 08:08:14 +0000 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: Hi, I have raised my concerns on the proposal. I think that all plugins should be treated on an equal footing. My main concern is that having the ML2 plugin in tree whilst the others are moved out of tree will be problematic. I think that the model will be complete if the ML2 plugin were also out of tree. This will help crystallize the idea and make sure that the model works correctly. Thanks Gary From: "Armando M." > Reply-To: OpenStack List > Date: Saturday, December 6, 2014 at 1:04 AM To: OpenStack List >, "openstack at lists.openstack.org" > Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition Hi folks, For a few weeks now the Neutron team has worked tirelessly on [1]. This initiative stems from the fact that as the project matures, its processes and contribution guidelines need to evolve with it. This is to ensure that the project can keep on thriving in order to meet the needs of an ever growing community. The effort of documenting intentions, and fleshing out the various details of the proposal, is about to reach an end, and we'll soon kick the tires to put the proposal into practice. Since the spec has grown pretty big, I'll try to capture the tl;dr below. If you have any comment please do not hesitate to raise them here and/or reach out to us.
tl;dr >>>

From the Kilo release, we'll initiate a set of steps to change the following areas:

* Code structure: every plugin or driver that exists or wants to exist as part of the Neutron project is decomposed into a slim vendor integration (which lives in the Neutron repo), plus a bulkier vendor library (which lives in an independent publicly available repo);
* Contribution process: this extends to the following aspects:
  * Design and Development: the process is largely unchanged for the part that pertains to the vendor integration; the maintainer team is fully self-governed for the design and development of the vendor library;
  * Testing and Continuous Integration: maintainers will be required to support their vendor integration with 3rd party CI testing; the requirements for 3rd party CI testing are largely unchanged;
  * Defect management: the process is largely unchanged; issues affecting the vendor library can be tracked with whichever tool/process the maintainer sees fit. In cases where vendor library fixes need to be reflected in the vendor integration, the usual OpenStack defect management applies.
  * Documentation: there will be some changes to the way plugins and drivers are documented, with the intention of promoting discoverability of the integrated solutions.
* Adoption and transition plan: we strongly advise maintainers to stay abreast of the developments of this effort, as their code, their CI, etc. will be affected. The core team will provide guidelines and support throughout this cycle to ensure a smooth transition.

To learn more, please refer to [1]. Many thanks, Armando [1] https://review.openstack.org/#/c/134680 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Sun Dec 7 08:47:28 2014 From: Tim.Bell at cern.ch (Tim Bell) Date: Sun, 7 Dec 2014 08:47:28 +0000 Subject: [openstack-dev] [nova] Fixing the console.log grows forever bug.
In-Reply-To: <20141206053852.GB11931@thor.bakeyournoodle.com> References: <20141206053852.GB11931@thor.bakeyournoodle.com> Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E50102585779@CERNXCHG43.cern.ch> > -----Original Message----- > From: Tony Breeds [mailto:tony at bakeyournoodle.com] > Sent: 06 December 2014 06:39 > To: openstack-dev at lists.openstack.org > Subject: [openstack-dev] [nova] Fixing the console.log grows forever bug. > ... > > However I was encouraged to investigate fixing this in qemu, such that qemu > could process the HUP and make life better for all. This is certainly doable and > I'm happy[3] to do this work. I've floated the idea past qemu-devel and they > seem okay with the idea. My main concern is in lag and supporting qemu/libvirt > that can't handle this option. > Would the nova view console be able to see the older versions also ? Ideally, we'd also improve on the current situation where the console contents are limited to the current file which causes problems around hard reboot operations such as watchdog restarts. Thus, if qemu is logrotating the log files, the view console OpenStack operations would ideally be able to count all the rotated files as part of the console output. > For the sake of discussion I'll lay out my best guess right now on fixing this in > qemu. > > qemu 2.2.0 /should/ release this year the ETA is 2014-12-09[4] so the fix I'm > proposing would be available in qemu 2.3.0 which I think will be available in > June/July 2015. So we'd be into 'L' development before this fix is available and > possibly 'M' before the community distros (Fedora and Ubuntu)[5] include and > almost certainly longer for Enterprise distros. Along with the qemu > development I expect there to be some libvirt development as well but right now > I don't think that's critical to the feature or this discussion. > > So if that timeline is approximately correct: > > - Can we wait this long to fix the bug? As opposed to having it squashed in Kilo. 
> - What do we do in nova for the next ~12 months while know there isn't a qemu > to fix this? > - Then once there is a qemu that fixes the issue, do we just say 'thou must use > qemu 2.3.0' or would nova still need to support old and new qemu's ? > Can we just say that the console for qemu 2.2 would remain as currently and for the new functionality, you need qemu 2.3 ? > [1] https://bugs.launchpad.net/nova/+bug/832507 > [2] https://review.openstack.org/#/c/80865/ > [3] For some value of happy ;P > [4] From http://wiki.qemu.org/Planning/2.2 [5] Debian and Gentoo are a little > harder to quantify in this scenario but no > less important. > > Yours Tony. > > PS: If any of you have a secret laundry list of things qemu should do to make > life easier for nova. Put them on a wiki page so we can discuss them. > PPS: If this is going to be a thing we do (write features and fixes in qemu) > we're going to need a consistent plan on how we cope with that. From mikal at stillhq.com Sun Dec 7 09:19:54 2014 From: mikal at stillhq.com (Michael Still) Date: Sun, 7 Dec 2014 20:19:54 +1100 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: References: <54834007.5070007@gmail.com> Message-ID: On Sun, Dec 7, 2014 at 7:03 PM, Gary Kotton wrote: > On 12/6/14, 7:42 PM, "Jay Pipes" wrote: [snip] >>-1 on pixelbeat, since he's been active in reviews on >>various things AFAICT in the last 60-90 days and seems to be still a >>considerate reviewer in various areas. > > I agree -1 for Padraig I'm going to be honest and say I'm confused here. We've always said we expect cores to maintain an average of two reviews per day. That's not new, nor a rule created by me. Padraig is a great guy, but has been working on other things -- he's done 60 reviews in the last 60 days -- which is about half of what we expect from a core. Are we talking about removing the two reviews a day requirement? 
If so, how do we balance that with the widespread complaints that core isn't keeping up with its workload? We could add more people to core, but there is also a maximum practical size to the group if we're going to keep everyone on the same page, especially when the less active cores don't generally turn up to our IRC meetings and are therefore more "expensive" to keep up to date. How can we say we are doing our best to keep up with the incoming review workload if all reviewers aren't doing at least the minimum level of reviews? Michael -- Rackspace Australia From pradip.interra at gmail.com Sun Dec 7 09:52:46 2014 From: pradip.interra at gmail.com (Pradip Mukhopadhyay) Date: Sun, 7 Dec 2014 15:22:46 +0530 Subject: [openstack-dev] [Cinder] Message-ID: Hi, Is there a way to find out/list down the backends discovered for Cinder? There is, I guess, no API to get the list of backends. Thanks, Pradip -------------- next part -------------- An HTML attachment was scrubbed...
URL: From EvgenyF at Radware.com Sun Dec 7 10:37:45 2014 From: EvgenyF at Radware.com (Evgeny Fedoruk) Date: Sun, 7 Dec 2014 10:37:45 +0000 Subject: [openstack-dev] [neutron][lbaas] lbaas v2 drivers/specs In-Reply-To: <3A46DF73-D43B-4685-B022-4A479B21CDD5@a10networks.com> References: <3A46DF73-D43B-4685-B022-4A479B21CDD5@a10networks.com> Message-ID: Hi Doug, Thanks for a reminder, Res-submitted following three for kilo: LBaaS Layer 7 rules - https://review.openstack.org/#/c/139853/ Neutron LBaaS TLS - https://review.openstack.org/#/c/139852 Radware LBaaS Driver - https://review.openstack.org/#/c/139854/ Thanks, Evg -----Original Message----- From: Doug Wiegley [mailto:dougw at a10networks.com] Sent: Friday, December 05, 2014 12:17 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [neutron][lbaas] lbaas v2 drivers/specs Hi lbaas, Just a reminder that the spec submission deadline is Dec 8th (this Monday.) If you are working on lbaas v2 features or drivers, and had a spec in Juno, it must be re-submitted for Kilo. 
LBaaS v2 specs that are currently submitted for Kilo: LBaaS V2 API and object model definition - https://review.openstack.org/#/c/138205/ A10 Networks LBaaS v2 driver - https://review.openstack.org/#/c/138930/ Spec for the brocade lbaas driver based on v2 lbaas data model - https://review.openstack.org/#/c/134108/ Specs from Juno that have not been re-submitted yet: LBaaS Layer 7 rules - https://github.com/openstack/neutron-specs/blob/master/specs/juno-incubator/lbaas-l7-rules.rst LBaaS reference implementation TLS support - https://github.com/openstack/neutron-specs/blob/master/specs/juno-incubator/lbaas-ref-driver-impl-tls.rst LBaaS Refactor HAProxy namespace driver - https://github.com/openstack/neutron-specs/blob/master/specs/juno-incubator/lbaas-refactor-haproxy-namespace-driver-to-new-driver-interface.rst Neutron LBaaS TLS - Specification - https://github.com/openstack/neutron-specs/blob/master/specs/juno-incubator/lbaas-tls.rst Radware LBaaS Driver - https://github.com/openstack/neutron-specs/blob/master/specs/juno-incubator/radware-lbaas-driver.rst If you were working on a spec in Juno, and no longer have time, please reply here or let me know directly. Thanks, Doug _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gkotton at vmware.com Sun Dec 7 10:48:47 2014 From: gkotton at vmware.com (Gary Kotton) Date: Sun, 7 Dec 2014 10:48:47 +0000 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: References: <54834007.5070007@gmail.com> Message-ID: Please see http://stackalytics.com/report/contribution/nova-group/90. If we are following the average of 2 reviews per day then proposed list should be updated. 
On 12/7/14, 11:19 AM, "Michael Still" wrote: >On Sun, Dec 7, 2014 at 7:03 PM, Gary Kotton wrote: >> On 12/6/14, 7:42 PM, "Jay Pipes" wrote: > >[snip] > >>>-1 on pixelbeat, since he's been active in reviews on >>>various things AFAICT in the last 60-90 days and seems to be still a >>>considerate reviewer in various areas. >> >> I agree -1 for Padraig > >I'm going to be honest and say I'm confused here. > >We've always said we expect cores to maintain an average of two >reviews per day. That's not new, nor a rule created by me. Padraig is >a great guy, but has been working on other things -- he's done 60 >reviews in the last 60 days -- which is about half of what we expect >from a core. > >Are we talking about removing the two reviews a day requirement? If >so, how do we balance that with the widespread complaints that core >isn't keeping up with its workload? We could add more people to core, >but there is also a maximum practical size to the group if we're going >to keep everyone on the same page, especially when the less active >cores don't generally turn up to our IRC meetings and are therefore >more "expensive" to keep up to date. > >How can we say we are doing our best to keep up with the incoming >review workload if all reviewers aren't doing at least the minimum >level of reviews? > >Michael > >-- >Rackspace Australia > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From duncan.thomas at gmail.com Sun Dec 7 10:55:30 2014 From: duncan.thomas at gmail.com (Duncan Thomas) Date: Sun, 7 Dec 2014 10:55:30 +0000 Subject: [openstack-dev] [Cinder] Listing of backends In-Reply-To: References: Message-ID: See https://review.openstack.org/#/c/119938/ - now merged. I don't believe the python-cinderclient side work has been done yet, nor anything in Horizon, but the API itself is now there. 
On 7 December 2014 at 09:53, Pradip Mukhopadhyay wrote:
> Hi,
>
> Is there a way to find out/list down the backends discovered for Cinder?
>
> There is, I guess, no API to get the list of backends.
>
> Thanks,
> Pradip
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Duncan Thomas

From pradip.interra at gmail.com Sun Dec 7 12:35:33 2014
From: pradip.interra at gmail.com (Pradip Mukhopadhyay)
Date: Sun, 7 Dec 2014 18:05:33 +0530
Subject: [openstack-dev] [Cinder] Listing of backends
In-Reply-To: References: Message-ID:

Thanks! One more question. Is there any equivalent API to add keys to a volume type? I understand we have APIs for creating a volume type, but how about adding a key-value pair (say I want to add the key backend-name="my_iscsi_backend" to the volume type)?

Thanks,
Pradip

On Sun, Dec 7, 2014 at 4:25 PM, Duncan Thomas wrote:
> See https://review.openstack.org/#/c/119938/ - now merged. I don't
> believe the python-cinderclient side work has been done yet, nor anything
> in Horizon, but the API itself is now there.
>
> On 7 December 2014 at 09:53, Pradip Mukhopadhyay wrote:
>
>> Hi,
>>
>> Is there a way to find out/list down the backends discovered for Cinder?
>>
>> There is, I guess, no API to get the list of backends.
>>
>> Thanks,
>> Pradip
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Duncan Thomas
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From dannchoi at cisco.com Sun Dec 7 13:47:26 2014
From: dannchoi at cisco.com (Danny Choi (dannchoi))
Date: Sun, 7 Dec 2014 13:47:26 +0000
Subject: [openstack-dev] [qa] How to delete a VM which is in ERROR state?
Message-ID:

That does not work.

It put the VM in ACTIVE Status, but in NOSTATE Power State.

Subsequent delete still won't remove the VM.

+--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks           |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
| 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ACTIVE | -          | NOSTATE     |                    |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+

Regards,
Danny

Date: Sat, 6 Dec 2014 21:25:30 -0500
From: "Xu (Simon) Chen"
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state?
Message-ID:
Content-Type: text/plain; charset="utf-8"

Try "nova reset-state --active uuid".

On Saturday, December 6, 2014, Danny Choi (dannchoi) wrote:

Hi,

I have a VM which is in ERROR state.
+--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks           |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
| 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR  | -          | NOSTATE     |                    |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+

I tried both the CLI 'nova delete' and Horizon 'terminate instance'.
Both accepted the delete command without any error.
However, the VM never got deleted.

Is there a way to remove the VM?

Thanks,
Danny

From thefossgeek at gmail.com Sun Dec 7 15:40:57 2014
From: thefossgeek at gmail.com (foss geek)
Date: Sun, 7 Dec 2014 21:10:57 +0530
Subject: [openstack-dev] [qa] How to delete a VM which is in ERROR state?
In-Reply-To: References: Message-ID:

Have you tried to delete after reset?

# nova reset-state --active
# nova delete

It works well for me if the VM is in the error state.

--
Thanks & Regards
E-Mail: thefossgeek at gmail.com
IRC: neophy
Blog : http://lmohanphy.livejournal.com/

On Sun, Dec 7, 2014 at 7:17 PM, Danny Choi (dannchoi) wrote:
> That does not work.
>
> It put the VM in ACTIVE Status, but in NOSTATE Power State.
>
> Subsequent delete still won't remove the VM.
> > > +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ > > | ID | Name > | Status | Task State | Power State | Networks | > > > +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ > | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | > cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ACTIVE | - | > NOSTATE | | > > > Regards, > Danny > > > Date: Sat, 6 Dec 2014 21:25:30 -0500 > From: "Xu (Simon) Chen" > To: "OpenStack Development Mailing List (not for usage questions)" > > Subject: Re: [openstack-dev] [qa] How to delete a VM which is in ERROR > state? > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Try "nova reset-state --active uuid". > > On Saturday, December 6, 2014, Danny Choi (dannchoi) > wrote: > > Hi, > > I have a VM which is in ERROR state. > > > > +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ > > | ID | Name > | Status | Task State | Power State | Networks | > > > > +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ > > | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | > cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR | - | > NOSTATE | | > > I tried in both CLI ?nova delete? and Horizon ?terminate instance?. > Both accepted the delete command without any error. > However, the VM never got deleted. > > Is there a way to remove the VM? > > Thanks, > Danny > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From thefossgeek at gmail.com Sun Dec 7 15:47:30 2014
From: thefossgeek at gmail.com (foss geek)
Date: Sun, 7 Dec 2014 21:17:30 +0530
Subject: [openstack-dev] [qa] How to delete a VM which is in ERROR state?
In-Reply-To: References: Message-ID:

Also try nova force-delete after the reset:

$ nova help force-delete
usage: nova force-delete <server>

Force delete a server.

Positional arguments:
  <server>  Name or ID of server.

--
Thanks & Regards
E-Mail: thefossgeek at gmail.com
IRC: neophy
Blog : http://lmohanphy.livejournal.com/

On Sun, Dec 7, 2014 at 9:10 PM, foss geek wrote:
> Have you tried to delete after reset?
>
> # nova reset-state --active
> # nova delete
>
> It works well for me if the VM is in the error state.
>
> --
> Thanks & Regards
> E-Mail: thefossgeek at gmail.com
> IRC: neophy
> Blog : http://lmohanphy.livejournal.com/
>
> On Sun, Dec 7, 2014 at 7:17 PM, Danny Choi (dannchoi) wrote:
>
>> That does not work.
>>
>> It put the VM in ACTIVE Status, but in NOSTATE Power State.
>>
>> Subsequent delete still won't remove the VM.
>>
>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
>> | ID                                   | Name                                         | Status | Task State | Power State | Networks           |
>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
>> | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ACTIVE | -          | NOSTATE     |                    |
>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
>>
>> Regards,
>> Danny
>>
>> Date: Sat, 6 Dec 2014 21:25:30 -0500
>> From: "Xu (Simon) Chen"
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> Subject: Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state?
>> Message-ID:
>> Content-Type: text/plain; charset="utf-8"
>>
>> Try "nova reset-state --active uuid".
>>
>> On Saturday, December 6, 2014, Danny Choi (dannchoi) wrote:
>>
>> Hi,
>>
>> I have a VM which is in ERROR state.
>>
>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
>> | ID                                   | Name                                         | Status | Task State | Power State | Networks           |
>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
>> | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR  | -          | NOSTATE     |                    |
>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
>>
>> I tried both the CLI 'nova delete' and Horizon 'terminate instance'.
>> Both accepted the delete command without any error.
>> However, the VM never got deleted.
>>
>> Is there a way to remove the VM?
>>
>> Thanks,
>> Danny
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From dms at danplanet.com Sun Dec 7 16:41:25 2014
From: dms at danplanet.com (Dan Smith)
Date: Sun, 07 Dec 2014 08:41:25 -0800
Subject: [openstack-dev] [Nova] Spring cleaning nova-core
In-Reply-To: References: <54834007.5070007@gmail.com>
Message-ID: <54848335.70803@danplanet.com>

> I'm going to be honest and say I'm confused here.
>
> We've always said we expect cores to maintain an average of two
> reviews per day. That's not new, nor a rule created by me. Padraig is
> a great guy, but has been working on other things -- he's done 60
> reviews in the last 60 days -- which is about half of what we expect
> from a core.
>
> Are we talking about removing the two reviews a day requirement?

Please, no. A small set of nova-core does most of the reviews right now anyway.
Honestly, I feel like two a day is a *really* low bar, especially given how much time I put into it. We need to be doing more reviews all around, which to me means a *higher* expectation at the low end, if anything.

--Dan

From jaypipes at gmail.com Sun Dec 7 17:02:24 2014
From: jaypipes at gmail.com (Jay Pipes)
Date: Sun, 07 Dec 2014 12:02:24 -0500
Subject: [openstack-dev] [Nova] Spring cleaning nova-core
In-Reply-To: References: <54834007.5070007@gmail.com>
Message-ID: <54848820.7060802@gmail.com>

On 12/07/2014 04:19 AM, Michael Still wrote:
> On Sun, Dec 7, 2014 at 7:03 PM, Gary Kotton wrote:
>> On 12/6/14, 7:42 PM, "Jay Pipes" wrote:
>
> [snip]
>
>>> -1 on pixelbeat, since he's been active in reviews on
>>> various things AFAICT in the last 60-90 days and seems to be still a
>>> considerate reviewer in various areas.
>>
>> I agree -1 for Padraig
>
> I'm going to be honest and say I'm confused here.
>
> We've always said we expect cores to maintain an average of two
> reviews per day. That's not new, nor a rule created by me. Padraig is
> a great guy, but has been working on other things -- he's done 60
> reviews in the last 60 days -- which is about half of what we expect
> from a core.
>
> Are we talking about removing the two reviews a day requirement? If
> so, how do we balance that with the widespread complaints that core
> isn't keeping up with its workload? We could add more people to core,
> but there is also a maximum practical size to the group if we're going
> to keep everyone on the same page, especially when the less active
> cores don't generally turn up to our IRC meetings and are therefore
> more "expensive" to keep up to date.
>
> How can we say we are doing our best to keep up with the incoming
> review workload if all reviewers aren't doing at least the minimum
> level of reviews?

Personally, I care more about the quality of reviews than the quantity. That said, I understand that we have a small number of core reviewers relative to the number of open reviews in Nova (~650-700 open reviews most days) and agree with Dan Smith that 2 reviews per day doesn't sound like too much of a hurdle for core reviewers.

The reason I think it's important to keep Padraig as a core is that he has done considerate, thoughtful code reviews, albeit in a smaller quantity. By saying we only look at the number of reviews in our estimation of keeping contributors on the core team, we are incentivizing the wrong behaviour, IMO. We should be pushing the idea that the thought that goes into reviews is more important than the sheer number of reviews.

Is it critical that we get more eyeballs reviewing code? Yes, absolutely it is. Is it critical that we get more reviews from core reviewers as well as non-core reviewers? Yes, absolutely.

Bottom line, we need to balance between quality and quantity, and kicking out a core reviewer who has quality code reviews because they don't have that many of them sends the wrong message, IMO.

Best,
-jay

From mestery at mestery.com Sun Dec 7 17:19:05 2014
From: mestery at mestery.com (Kyle Mestery)
Date: Sun, 7 Dec 2014 11:19:05 -0600
Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition
In-Reply-To: References: Message-ID:

Gary, you are still missing the point of this proposal. Please see my comments in review. We are not forcing things out of tree, we are thinning them. The text you quoted in the review makes that clear. We will look at further decomposing ML2 post Kilo, but we have to be realistic with what we can accomplish during Kilo. Find me on IRC Monday morning and we can discuss further if you still have questions and concerns. Thanks!
Kyle On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton wrote: > Hi, > I have raised my concerns on the proposal. I think that all plugins should > be treated on an equal footing. My main concern is having the ML2 plugin in > tree whilst the others will be moved out of tree will be problematic. I > think that the model will be complete if the ML2 was also out of tree. This > will help crystalize the idea and make sure that the model works correctly. > Thanks > Gary > > From: "Armando M." > Reply-To: OpenStack List > Date: Saturday, December 6, 2014 at 1:04 AM > To: OpenStack List , " > openstack at lists.openstack.org" > Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition > > Hi folks, > > For a few weeks now the Neutron team has worked tirelessly on [1]. > > This initiative stems from the fact that as the project matures, > evolution of processes and contribution guidelines need to evolve with it. > This is to ensure that the project can keep on thriving in order to meet > the needs of an ever growing community. > > The effort of documenting intentions, and fleshing out the various > details of the proposal is about to reach an end, and we'll soon kick the > tires to put the proposal into practice. Since the spec has grown pretty > big, I'll try to capture the tl;dr below. > > If you have any comment please do not hesitate to raise them here and/or > reach out to us. 
> tl;dr >>>
>
> From the Kilo release, we'll initiate a set of steps to change the
> following areas:
>
> - Code structure: every plugin or driver that exists or wants to exist
> as part of the Neutron project is decomposed into a slim vendor integration
> (which lives in the Neutron repo), plus a bulkier vendor library (which
> lives in an independent publicly available repo);
> - Contribution process: this extends to the following aspects:
>   - Design and Development: the process is largely unchanged for the
> part that pertains to the vendor integration; the maintainer team is fully
> auto governed for the design and development of the vendor library;
>   - Testing and Continuous Integration: maintainers will be required
> to support their vendor integration with 3rd party CI testing; the requirements
> for 3rd party CI testing are largely unchanged;
>   - Defect management: the process is largely unchanged; issues
> affecting the vendor library can be tracked with whichever tool/process the
> maintainers see fit. In cases where vendor library fixes need to be
> reflected in the vendor integration, the usual OpenStack defect management
> process applies.
> - Documentation: there will be some changes to the way plugins and
> drivers are documented with the intention of promoting discoverability of
> the integrated solutions.
> - Adoption and transition plan: we strongly advise maintainers to stay
> abreast of the developments of this effort, as their code, their CI, etc.
> will be affected. The core team will provide guidelines and support
> throughout this cycle to ensure a smooth transition.
>
> To learn more, please refer to [1].
>
> Many thanks,
> Armando
>
> [1] https://review.openstack.org/#/c/134680
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ndipanov at redhat.com Sun Dec 7 17:33:30 2014
From: ndipanov at redhat.com (Nikola Đipanov)
Date: Sun, 07 Dec 2014 18:33:30 +0100
Subject: [openstack-dev] [Nova] Spring cleaning nova-core
In-Reply-To: <54848820.7060802@gmail.com>
References: <54834007.5070007@gmail.com> <54848820.7060802@gmail.com>
Message-ID: <54848F6A.2050000@redhat.com>

On 12/07/2014 06:02 PM, Jay Pipes wrote:
> On 12/07/2014 04:19 AM, Michael Still wrote:
>> On Sun, Dec 7, 2014 at 7:03 PM, Gary Kotton wrote:
>>> On 12/6/14, 7:42 PM, "Jay Pipes" wrote:
>>
>> [snip]
>>
>>>> -1 on pixelbeat, since he's been active in reviews on
>>>> various things AFAICT in the last 60-90 days and seems to be still a
>>>> considerate reviewer in various areas.
>>>
>>> I agree -1 for Padraig
>>
>> I'm going to be honest and say I'm confused here.
>>
>> We've always said we expect cores to maintain an average of two
>> reviews per day. That's not new, nor a rule created by me. Padraig is
>> a great guy, but has been working on other things -- he's done 60
>> reviews in the last 60 days -- which is about half of what we expect
>> from a core.
>>
>> Are we talking about removing the two reviews a day requirement? If
>> so, how do we balance that with the widespread complaints that core
>> isn't keeping up with its workload? We could add more people to core,
>> but there is also a maximum practical size to the group if we're going
>> to keep everyone on the same page, especially when the less active
>> cores don't generally turn up to our IRC meetings and are therefore
>> more "expensive" to keep up to date.
>>
>> How can we say we are doing our best to keep up with the incoming
>> review workload if all reviewers aren't doing at least the minimum
>> level of reviews?
>
> Personally, I care more about the quality of reviews than the quantity.
> That said, I understand that we have a small number of core reviewers > relative to the number of open reviews in Nova (~650-700 open reviews > most days) and agree with Dan Smith that 2 reviews per day doesn't sound > like too much of a hurdle for core reviewers. > > The reason I think it's important to keep Padraig as a core is that he > has done considerate, thoughtful code reviews, albeit in a smaller > quantity. By saying we only look at the number of reviews in our > estimation of keeping contributors on the core team, we are > incentivizing the wrong behaviour, IMO. We should be pushing that the > thought that goes into reviews is more important than the sheer number > of reviews. > > Is it critical that we get more eyeballs reviewing code? Yes, absolutely > it is. Is it critical that we get more reviews from core reviewers as > well as non-core reviewers. Yes, absolutely. > > Bottom line, we need to balance between quality and quantity, and > kicking out a core reviewer who has quality code reviews because they > don't have that many of them sends the wrong message, IMO. > I could not *possibly* agree more with everything Jay wrote above! Quality should always win! And 2 reviews a day is a nice approximation of what is expected but we should not have any number as a hard requirement. It's lazy (in addition to sending the wrong message) and we _need_ to be better than that! Slightly off-topic - since we're so into numbers - Russell's statistics were at one point showing the ratio between reviews given and reviews received. I tend to be wary of people reviewing without writing any code themselves, as they tend to lose touch with the actual constraints the code is written under in different parts of Nova. This is especially important when reviewing larger feature branches or more complicated refactoring (a big part of what we want to prioritize in Kilo). 
As any number - that one is also never going to tell the whole story, and should not ever become a hard rule - but I for one would be interested to see it. N. From gkotton at vmware.com Sun Dec 7 17:41:05 2014 From: gkotton at vmware.com (Gary Kotton) Date: Sun, 7 Dec 2014 17:41:05 +0000 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <54848335.70803@danplanet.com> References: <54834007.5070007@gmail.com> <54848335.70803@danplanet.com> Message-ID: On 12/7/14, 6:41 PM, "Dan Smith" wrote: >> I'm going to be honest and say I'm confused here. >> >> We've always said we expect cores to maintain an average of two >> reviews per day. That's not new, nor a rule created by me. Padraig is >> a great guy, but has been working on other things -- he's done 60 >> reviews in the last 60 days -- which is about half of what we expect >> from a core. >> >> Are we talking about removing the two reviews a day requirement? > >Please, no. A small set of nova-core does most of the reviews right now >anyway. Honestly, I feel like two a day is a *really* low bar, >especially given how much time I put into it. We need to be doing more >reviews all around, which to me means a *higher* expectation at the low >end, if anything. +1 - well said! > >--Dan > From gkotton at vmware.com Sun Dec 7 17:51:16 2014 From: gkotton at vmware.com (Gary Kotton) Date: Sun, 7 Dec 2014 17:51:16 +0000 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: Hi Kyle, I am not missing the point. I understand the proposal. I just think that it has some shortcomings (unless I misunderstand, which will certainly not be the first time and most definitely not the last). The thinning out is to have a shim in place. I understand this and this will be the entry point for the plugin. I do not have a concern for this. My concern is that we are not doing this with the ML2 off the bat. That should lead by example as it is our reference architecture. 
Lets not kid anyone, but we are going to hit some problems with the decomposition. I would prefer that it be done with the default implementation. Why? 1. Cause we will fix them quicker as it is something that prevent Neutron from moving forwards 2. We will just need to fix in one place first and not in N (where N is the vendor plugins) 3. This is a community effort ? so we will have a lot more eyes on it 4. It will provide a reference architecture for all new plugins that want to be added to the tree 5. It will provide a working example for plugin that are already in tree and are to be replaced by the shim If we really want to do this, we can say freeze all development (which is just approvals for patches) for a few days so that we will can just focus on this. I stated what I think should be the process on the review. For those who do not feel like finding the link: 1. Create a stack forge project for ML2 2. Create the shim in Neutron 3. Update devstack for the to use the two repos and the shim When #3 is up and running we switch for that to be the gate. Then we start a stopwatch on all other plugins. Sure, I?ll catch you on IRC tomorrow. I guess that you guys will bash out the details at the meetup. Sadly I will not be able to attend ? so you will have to delay on the tar and feathers. Thanks Gary From: "mestery at mestery.com" > Reply-To: OpenStack List > Date: Sunday, December 7, 2014 at 7:19 PM To: OpenStack List > Cc: "openstack at lists.openstack.org" > Subject: Re: [openstack-dev] [Neutron] Core/Vendor code decomposition Gary, you are still miss the point of this proposal. Please see my comments in review. We are not forcing things out of tree, we are thinning them. The text you quoted in the review makes that clear. We will look at further decomposing ML2 post Kilo, but we have to be realistic with what we can accomplish during Kilo. Find me on IRC Monday morning and we can discuss further if you still have questions and concerns. Thanks! 
Kyle On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton > wrote: Hi, I have raised my concerns on the proposal. I think that all plugins should be treated on an equal footing. My main concern is having the ML2 plugin in tree whilst the others will be moved out of tree will be problematic. I think that the model will be complete if the ML2 was also out of tree. This will help crystalize the idea and make sure that the model works correctly. Thanks Gary From: "Armando M." > Reply-To: OpenStack List > Date: Saturday, December 6, 2014 at 1:04 AM To: OpenStack List >, "openstack at lists.openstack.org" > Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition Hi folks, For a few weeks now the Neutron team has worked tirelessly on [1]. This initiative stems from the fact that as the project matures, evolution of processes and contribution guidelines need to evolve with it. This is to ensure that the project can keep on thriving in order to meet the needs of an ever growing community. The effort of documenting intentions, and fleshing out the various details of the proposal is about to reach an end, and we'll soon kick the tires to put the proposal into practice. Since the spec has grown pretty big, I'll try to capture the tl;dr below. If you have any comment please do not hesitate to raise them here and/or reach out to us. 
tl;dr >>>

From the Kilo release, we'll initiate a set of steps to change the following areas:

* Code structure: every plugin or driver that exists or wants to exist as part of the Neutron project is decomposed into a slim vendor integration (which lives in the Neutron repo), plus a bulkier vendor library (which lives in an independent publicly available repo);
* Contribution process: this extends to the following aspects:
  * Design and Development: the process is largely unchanged for the part that pertains to the vendor integration; the maintainer team is fully auto governed for the design and development of the vendor library;
  * Testing and Continuous Integration: maintainers will be required to support their vendor integration with 3rd party CI testing; the requirements for 3rd party CI testing are largely unchanged;
  * Defect management: the process is largely unchanged; issues affecting the vendor library can be tracked with whichever tool/process the maintainers see fit. In cases where vendor library fixes need to be reflected in the vendor integration, the usual OpenStack defect management process applies.
* Documentation: there will be some changes to the way plugins and drivers are documented with the intention of promoting discoverability of the integrated solutions.
* Adoption and transition plan: we strongly advise maintainers to stay abreast of the developments of this effort, as their code, their CI, etc. will be affected. The core team will provide guidelines and support throughout this cycle to ensure a smooth transition.

To learn more, please refer to [1].

Many thanks,
Armando

[1] https://review.openstack.org/#/c/134680

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mikal at stillhq.com Sun Dec 7 19:52:24 2014 From: mikal at stillhq.com (Michael Still) Date: Mon, 8 Dec 2014 06:52:24 +1100 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <54848820.7060802@gmail.com> References: <54834007.5070007@gmail.com> <54848820.7060802@gmail.com> Message-ID: On Mon, Dec 8, 2014 at 4:02 AM, Jay Pipes wrote: > On 12/07/2014 04:19 AM, Michael Still wrote: [snip] >> We've always said we expect cores to maintain an average of two >> reviews per day. That's not new, nor a rule created by me. Padraig is >> a great guy, but has been working on other things -- he's done 60 >> reviews in the last 60 days -- which is about half of what we expect >> from a core. >> >> Are we talking about removing the two reviews a day requirement? If >> so, how do we balance that with the widespread complaints that core >> isn't keeping up with its workload? We could add more people to core, >> but there is also a maximum practical size to the group if we're going >> to keep everyone on the same page, especially when the less active >> cores don't generally turn up to our IRC meetings and are therefore >> more "expensive" to keep up to date. >> >> How can we say we are doing our best to keep up with the incoming >> review workload if all reviewers aren't doing at least the minimum >> level of reviews? > > Personally, I care more about the quality of reviews than the quantity. That > said, I understand that we have a small number of core reviewers relative to > the number of open reviews in Nova (~650-700 open reviews most days) and > agree with Dan Smith that 2 reviews per day doesn't sound like too much of a > hurdle for core reviewers. > > The reason I think it's important to keep Padraig as a core is that he has > done considerate, thoughtful code reviews, albeit in a smaller quantity. 
By
> saying we only look at the number of reviews in our estimation of keeping
> contributors on the core team, we are incentivizing the wrong behaviour,
> IMO. We should be pushing that the thought that goes into reviews is more
> important than the sheer number of reviews.
>
> Is it critical that we get more eyeballs reviewing code? Yes, absolutely it
> is. Is it critical that we get more reviews from core reviewers as well as
> non-core reviewers? Yes, absolutely.
>
> Bottom line, we need to balance between quality and quantity, and kicking
> out a core reviewer who has quality code reviews because they don't have
> that many of them sends the wrong message, IMO.

I agree that we need to maintain the quality of reviews. What I am instead expecting is for core reviewers to spend enough of their work day to get two high quality reviews done. That's a very low bar.

I am trying to balance the following constraints:

- we aren't keeping up with reviews
- a small number of cores are doing the majority of the work
- there have been threats of forking drivers out of the nova code base if we don't solve this, and I really don't want to see that happen

There are other things happening behind the scenes as well -- we have a veto process for current cores when we propose a new core. It has been made clear to me that several current core members believe we have reached "the maximum effective size" for core, and that they will therefore veto new additions. Therefore, we need to make room in core for people who are able to keep up with our review workload.

You know what makes me really sad? No one has suggested that perhaps Padraig could just pick up his review rate a little. I've repeatedly said we can re-add reviewers if that happens.
Michael -- Rackspace Australia From johannes at erdfelt.com Sun Dec 7 21:33:44 2014 From: johannes at erdfelt.com (Johannes Erdfelt) Date: Sun, 7 Dec 2014 13:33:44 -0800 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: References: <54834007.5070007@gmail.com> <54848820.7060802@gmail.com> Message-ID: <20141207213344.GE26706@sventech.com> On Mon, Dec 08, 2014, Michael Still wrote: > There are other things happening behind the scenes as well -- we have > a veto process for current cores when we propose a new core. It has > been made clear to me that several current core members believe we > have reached "the maximum effective size" for core, and that they will > therefore veto new additions. Therefore, we need to make room in core > for people who are able to keep up with our review workload. I've heard this before, but I've never understood this. Can you (or someone else) elaborate on why they believe that there is an upper limit on the size of nova-core and why that is the current size? JE From carl at ecbaldwin.net Sun Dec 7 22:04:41 2014 From: carl at ecbaldwin.net (Carl Baldwin) Date: Sun, 7 Dec 2014 15:04:41 -0700 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration In-Reply-To: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> References: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> Message-ID: Ryan, I have been working with the L3 sub team in this direction. Progress has been slow because of other priorities but we have made some. I have written a blueprint detailing some changes needed to the code to enable the flexibility to one day run floating IPs on an l3 routed network [1]. Jaime has been working on one that integrates Ryu (or other speakers) with neutron [2]. DVR was also a step in this direction. I'd like to invite you to the l3 weekly meeting [3] to discuss further. I'm very happy to see interest in this area and to have someone new to collaborate with.
Carl [1] https://review.openstack.org/#/c/88619/ [2] https://review.openstack.org/#/c/125401/ [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam On Dec 3, 2014 4:04 PM, "Ryan Clevenger" wrote: > Hi, > > At Rackspace, we have a need to create a higher level networking service > primarily for the purpose of creating a Floating IP solution in our > environment. The current solutions for Floating IPs, being tied to plugin > implementations, does not meet our needs at scale for the following reasons: > > 1. Limited endpoint H/A mainly targeting failover only and not > multi-active endpoints, > 2. Lack of noisy neighbor and DDOS mitigation, > 3. IP fragmentation (with cells, public connectivity is terminated inside > each cell leading to fragmentation and IP stranding when cell CPU/Memory > use doesn't line up with allocated IP blocks. Abstracting public > connectivity away from nova installations allows for much more efficient > use of those precious IPv4 blocks). > 4. Diversity in transit (multiple encapsulation and transit types on a per > floating ip basis). > > We realize that network infrastructures are often unique and such a > solution would likely diverge from provider to provider. However, we would > love to collaborate with the community to see if such a project could be > built that would meet the needs of providers at scale. We believe that, at > its core, this solution would boil down to terminating north<->south > traffic temporarily at a massively horizontally scalable centralized core > and then encapsulating traffic east<->west to a specific host based on the > association setup via the current L3 router's extension's 'floatingips' > resource. 
> > Our current idea, involves using Open vSwitch for header rewriting and > tunnel encapsulation combined with a set of Ryu applications for management: > > https://i.imgur.com/bivSdcC.png > > The Ryu application uses Ryu's BGP support to announce up to the Public > Routing layer individual floating ips (/32's or /128's) which are then > summarized and announced to the rest of the datacenter. If a particular > floating ip is experiencing unusually large traffic (DDOS, slashdot effect, > etc.), the Ryu application could change the announcements up to the Public > layer to shift that traffic to dedicated hosts setup for that purpose. It > also announces a single /32 "Tunnel Endpoint" ip downstream to the > TunnelNet Routing system which provides transit to and from the cells and > their hypervisors. Since traffic from either direction can then end up on > any of the FLIP hosts, a simple flow table to modify the MAC and IP in > either the SRC or DST fields (depending on traffic direction) allows the > system to be completely stateless. We have proven this out (with static > routing and flows) to work reliably in a small lab setup. > > On the hypervisor side, we currently plumb networks into separate OVS > bridges. Another Ryu application would control the bridge that handles > overlay networking to selectively divert traffic destined for the default > gateway up to the FLIP NAT systems, taking into account any configured > logical routing and local L2 traffic to pass out into the existing overlay > fabric undisturbed. > > Adding in support for L2VPN EVPN ( > https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN > Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to > the Ryu BGP speaker will allow the hypervisor side Ryu application to > advertise up to the FLIP system reachability information to take into > account VM failover, live-migrate, and supported encapsulation types. 
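The stateless MAC/IP rewrite described in the quoted proposal can be sketched roughly as follows. This is an illustrative sketch only, not Rackspace's actual configuration: the priorities, port names and addresses below are invented, and a production system would program the flows through the Ryu applications rather than via ovs-ofctl strings.

```python
# Illustrative sketch (not from the original thread): building the two
# OpenFlow rules described above -- rewrite DST headers on ingress
# (north->south) and SRC headers on egress (south->north) so the FLIP
# hosts stay completely stateless.  All addresses, MACs and port names
# are made-up examples.

def flip_flows(floating_ip, fixed_ip, vm_mac, gw_mac):
    """Return ovs-ofctl flow specs for one floating IP association."""
    ingress = (  # traffic from the Internet toward the floating IP
        "priority=100,ip,nw_dst={flip},"
        "actions=mod_dl_dst:{vm_mac},mod_nw_dst:{fixed},output:tun"
    ).format(flip=floating_ip, vm_mac=vm_mac, fixed=fixed_ip)
    egress = (   # return traffic from the VM's fixed IP
        "priority=100,ip,nw_src={fixed},"
        "actions=mod_dl_src:{gw_mac},mod_nw_src:{flip},output:uplink"
    ).format(fixed=fixed_ip, gw_mac=gw_mac, flip=floating_ip)
    return [ingress, egress]

for flow in flip_flows("203.0.113.10", "10.0.0.5",
                       "fa:16:3e:00:00:01", "fa:16:3e:ff:ff:01"):
    print(flow)
```

Each pair of flows would be installed with something like `ovs-ofctl add-flow <bridge> "<flow>"`; because only headers are rewritten and no connection state is kept, either direction of a given floating IP's traffic can land on any FLIP host.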
We > believe that decoupling the tunnel endpoint discovery from the control > plane (Nova/Neutron) will provide for a more robust solution as well as > allow for use outside of openstack if desired. > > ________________________________________ > > Ryan Clevenger > Manager, Cloud Engineering - US > m: 678.548.7261 > e: ryan.clevenger at rackspace.com > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikal at stillhq.com Sun Dec 7 22:10:50 2014 From: mikal at stillhq.com (Michael Still) Date: Mon, 8 Dec 2014 09:10:50 +1100 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <20141207213344.GE26706@sventech.com> References: <54834007.5070007@gmail.com> <54848820.7060802@gmail.com> <20141207213344.GE26706@sventech.com> Message-ID: On Mon, Dec 8, 2014 at 8:33 AM, Johannes Erdfelt wrote: > On Mon, Dec 08, 2014, Michael Still wrote: >> There are other things happening behind the scenes as well -- we have >> a veto process for current cores when we propose a new core. It has >> been made clear to me that several current core members believe we >> have reached "the maximum effective size" for core, and that they will >> therefore veto new additions. Therefore, we need to make room in core >> for people who are able to keep up with our review workload. > > I've heard this before, but I've never understood this. > > Can you (or someone else) elaborate on why they believe that there is an > upper limit on the size of nova-core and why that is the current size? I'm not particularly advocating this stance, but it is the context I need to operate in (where a single veto can kill a nomination). 
The argument boils down to there is a communications cost to adding someone to core, and therefore there is a maximum size before the communications burden becomes too great. I will say that I am disappointed that we have cores who don't regularly attend our IRC meetings. That makes the communication much more complicated. Michael -- Rackspace Australia From dms at danplanet.com Sun Dec 7 22:27:00 2014 From: dms at danplanet.com (Dan Smith) Date: Sun, 07 Dec 2014 14:27:00 -0800 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: References: <54834007.5070007@gmail.com> <54848820.7060802@gmail.com> <20141207213344.GE26706@sventech.com> Message-ID: <5484D434.7080507@danplanet.com> > The argument boils down to there is a communications cost to adding > someone to core, and therefore there is a maximum size before the > communications burden becomes too great. I'm definitely of the mindset that the core team is something that has a maximum effective size. Nova is complicated and always changing; keeping everyone on top of current development themes is difficult. Just last week, we merged a patch that bumped the version of an RPC API without making the manager tolerant of the previous version. That's a theme we've had for a while, and yet it was still acked by two cores. A major complaint I hear a lot is "one core told me to do X and then another core told me to do !X". Obviously this will always happen, but I do think that the larger and more disconnected the core team becomes, the more often this will occur. If all the cores reviewed at the rate of the top five and we still had a throughput problem, then evaluating the optimal size would be a thing we'd need to do. However, even at the current size, we have (IMHO) communication problems, mostly uninvolved cores, and patches going in that break versioning rules. Making the team arbitrarily larger doesn't seem like a good idea to me.
> I will say that I am disappointed that we have cores who don't > regularly attend our IRC meetings. That makes the communication much > more complicated. Agreed. We alternate the meeting times such that this shouldn't be hard, IMHO. --Dan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From tony at bakeyournoodle.com Sun Dec 7 22:56:35 2014 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 8 Dec 2014 09:56:35 +1100 Subject: [openstack-dev] [nova] Fixing the console.log grows forever bug. In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E50102585779@CERNXCHG43.cern.ch> References: <20141206053852.GB11931@thor.bakeyournoodle.com> <5D7F9996EA547448BC6C54C8C5AAF4E50102585779@CERNXCHG43.cern.ch> Message-ID: <20141207225635.GA19363@thor.bakeyournoodle.com> On Sun, Dec 07, 2014 at 08:47:28AM +0000, Tim Bell wrote: > Would the nova view console be able to see the older versions also ? Ideally, > we'd also improve on the current situation where the console contents are > limited to the current file which causes problems around hard reboot > operations such as watchdog restarts. Thus, if qemu is logrotating the log > files, the view console OpenStack operations would ideally be able to count > all the rotated files as part of the console output. So I think the TL;DR is: yup, we can do that, regardless of which process owns the logfile. Having said that I think there are at least 2 related topics in your question. As I see it here are the 2 issues I know about. - Currently if you restart an instance the console.log is overwritten which means you lose console logs from older boots. * With the 'helper app' this issue wouldn't happen anymore. * With the qemu approach extra code would need to be added to ensure we also close that bug. - nova console-log, only shows the current boot.
* regardless of which approach we use to solve this bug we'd need to enhance nova console-log to be able to detect other logfiles and display them. I assume something similar would be needed for horizon. I don't think it'd be hard to do but I'm not promising to hack on horizon. > Can we just say that the console for qemu 2.2 would remain as currently and > for the new functionality, you need qemu 2.3 ? Yes, but that leaves operators using qemu < 2.3.0 open to this bug. The LP bug was opened about 3 years ago, so I'm not sure if that's a problem. I just want to know how much and what work I'll be doing to fix this. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From damon.devops at gmail.com Mon Dec 8 03:11:35 2014 From: damon.devops at gmail.com (Damon Wang) Date: Mon, 8 Dec 2014 11:11:35 +0800 Subject: [openstack-dev] [neutron] Neutron Priorities for Kilo In-Reply-To: References: Message-ID: Nice to see it :-) 2014-12-04 23:52 GMT+08:00 Kyle Mestery : > Note: Similar to Nova, cross-posting to operators. > > We've published the list of priorities for Neutron during the Kilo cycle, > and it's available here [1]. The team has been discussing these since > before the Paris Summit, and we're in the process of approving individual > specs around these right now. I'm sending this email to let the broader > community know what the priorities will be and to keep everyone aware. If > you're interested in helping with some of these, please reach out to us in > #openstack-neutron. > > Thanks!
> Kyle > > [1] > http://specs.openstack.org/openstack/neutron-specs/priorities/kilo-priorities.html > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From loywolfe at gmail.com Mon Dec 8 03:13:46 2014 From: loywolfe at gmail.com (loy wolfe) Date: Mon, 8 Dec 2014 11:13:46 +0800 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: On Mon, Dec 8, 2014 at 1:51 AM, Gary Kotton wrote: > Hi Kyle, > I am not missing the point. I understand the proposal. I just think that it > has some shortcomings (unless I misunderstand, which will certainly not be > the first time and most definitely not the last). The thinning out is to > have a shim in place. I understand this and this will be the entry point for > the plugin. I do not have a concern for this. My concern is that we are not > doing this with the ML2 off the bat. That should lead by example as it is > our reference architecture. Lets not kid anyone, but we are going to hit > some problems with the decomposition. I would prefer that it be done with > the default implementation. Why? > On the contrary, any effort of refactoring ML2+OVS is totally meaningless, if the only reason is separating it from Neutron core, not to say moving it out of tree in the future. In fact the word "reference implementation" is not so appropriate, "baseline implementation" is preferred, because ML2+OVS COULD BE used for large scale commercial deployment already. Today over half of Openstack users deployed native Neutron/Quantum built-in backend, especially OVS. Although some are not ML2 based yet, but they can move to ML2 OVS MD so easily and then enjoy rich features newly introduced from Ice and Juno. 
So if we spend time doing some kind of separation on ML2+OVS, the natural question would be: "does the community want to slow the progress of the default built-in implementation, to give 3rd-party plugins/drivers more opportunity to compete with the free and open ML2+OVS?" Although this decomposition spec is highly appreciated, separating the built-in OVS support is worrying, because it would mean slower response to customers and hurt the overall competitiveness of our project. Standalone fast iteration on new features is strongly needed for our built-in baseline OVS implementation. E.g. I already know of a customer who chose OpenStack because of the exciting OVS DVR feature from Juno (although it is still early beta quality). This is not the place to talk about fair play between vendor plugins and the open built-in backend. Fairness is only needed between vendors/3rd-party controllers, not for the de facto built-in free backend that is pervasively adopted. An off-the-shelf built-in backend of fully tested commercial quality would lay a solid foundation for the success of our community. Best Regards, Loy > Cause we will fix them quicker as it is something that prevents Neutron from > moving forwards > We will just need to fix in one place first and not in N (where N is the > vendor plugins) > This is a community effort -- so we will have a lot more eyes on it > It will provide a reference architecture for all new plugins that want to be > added to the tree > It will provide a working example for plugins that are already in tree and > are to be replaced by the shim > > If we really want to do this, we can say freeze all development (which is > just approvals for patches) for a few days so that we can just focus on > this. I stated what I think should be the process on the review.
For those > who do not feel like finding the link: > > Create a stack forge project for ML2 > Create the shim in Neutron > Update devstack to use the two repos and the shim > > When #3 is up and running we switch to that as the gate. Then we start a stopwatch on all other plugins. > > Sure, I'll catch you on IRC tomorrow. I guess that you guys will bash out > the details at the meetup. Sadly I will not be able to attend -- so you will > have to delay on the tar and feathers. > Thanks > Gary > > > From: "mestery at mestery.com" > Reply-To: OpenStack List > Date: Sunday, December 7, 2014 at 7:19 PM > To: OpenStack List > Cc: "openstack at lists.openstack.org" > Subject: Re: [openstack-dev] [Neutron] Core/Vendor code decomposition > > Gary, you are still missing the point of this proposal. Please see my comments > in review. We are not forcing things out of tree, we are thinning them. The > text you quoted in the review makes that clear. We will look at further > decomposing ML2 post Kilo, but we have to be realistic with what we can > accomplish during Kilo. > > Find me on IRC Monday morning and we can discuss further if you still have > questions and concerns. > > Thanks! > Kyle > > On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton wrote: >> >> Hi, >> I have raised my concerns on the proposal. I think that all plugins should >> be treated on an equal footing. My main concern is that having the ML2 plugin in >> tree whilst the others are moved out of tree will be problematic. I >> think that the model will be complete if ML2 was also out of tree. This >> will help crystallize the idea and make sure that the model works correctly. >> Thanks >> Gary >> >> From: "Armando M." >> Reply-To: OpenStack List >> Date: Saturday, December 6, 2014 at 1:04 AM >> To: OpenStack List , >> "openstack at lists.openstack.org" >> Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition >> >> Hi folks, >> >> For a few weeks now the Neutron team has worked tirelessly on [1].
>> >> This initiative stems from the fact that as the project matures, evolution >> of processes and contribution guidelines need to evolve with it. This is to >> ensure that the project can keep on thriving in order to meet the needs of >> an ever growing community. >> >> The effort of documenting intentions, and fleshing out the various details >> of the proposal is about to reach an end, and we'll soon kick the tires to >> put the proposal into practice. Since the spec has grown pretty big, I'll >> try to capture the tl;dr below. >> >> If you have any comments please do not hesitate to raise them here and/or >> reach out to us. >> >> tl;dr >>> >> >> From the Kilo release, we'll initiate a set of steps to change the >> following areas: >> >> Code structure: every plugin or driver that exists or wants to exist as >> part of the Neutron project is decomposed into a slim vendor integration (which >> lives in the Neutron repo), plus a bulkier vendor library (which lives in an >> independent publicly available repo); >> Contribution process: this extends to the following aspects: >> >> Design and Development: the process is largely unchanged for the part that >> pertains to the vendor integration; the maintainer team is fully self-governed >> for the design and development of the vendor library; >> Testing and Continuous Integration: maintainers will be required to >> support their vendor integration with 3rd-party CI testing; the requirements for >> 3rd-party CI testing are largely unchanged; >> Defect management: the process is largely unchanged; issues affecting the >> vendor library can be tracked with whichever tool/process the maintainer sees >> fit. In cases where vendor library fixes need to be reflected in the vendor >> integration, the usual OpenStack defect management applies. >> Documentation: there will be some changes to the way plugins and drivers >> are documented with the intention of promoting discoverability of the >> integrated solutions.
>> >> Adoption and transition plan: we strongly advise maintainers to stay >> abreast of the developments of this effort, as their code, their CI, etc >> will be affected. The core team will provide guidelines and support >> throughout this cycle to ensure a smooth transition. >> >> To learn more, please refer to [1]. >> >> Many thanks, >> Armando >> >> [1] https://review.openstack.org/#/c/134680 >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From wuhongning at huawei.com Mon Dec 8 04:00:59 2014 From: wuhongning at huawei.com (Wuhongning) Date: Mon, 8 Dec 2014 04:00:59 +0000 Subject: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup? In-Reply-To: <8761dptrrv.fsf@metaswitch.com> References: <87ppc090ne.fsf@metaswitch.com> <87d27z8kxj.fsf@metaswitch.com> , <8761dptrrv.fsf@metaswitch.com> Message-ID: <185155243D60FE48B384BC144F7C15A58E1686B0@SZXEML506-MBS.china.huawei.com> Hi Neil, @Neil, could you please also add VIF_TYPE_VHOSTUSER in your spec (as I commented on it)? There has been active VHOSTUSER discussion in the Juno nova BP, and it has the same usefulness as VIF_TYPE_TAP. Best Regards Wu ________________________________________ From: Neil Jerram [Neil.Jerram at metaswitch.com] Sent: Saturday, December 06, 2014 10:51 AM To: Kevin Benton Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup? Kevin Benton writes: > I see the difference now.
> The main concern I see with the NOOP type is that creating the virtual > interface could require different logic for certain hypervisors. In > that case Neutron would now have to know things about nova and to me > it seems like that's slightly too far the other direction. Many thanks, Kevin. I see this now too, as I've just written more fully in my response to Ian. Based on your and others' insight, I've revised and reuploaded my VIF_TYPE_TAP spec, and hope it's a lot clearer now. Regards, Neil _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ayshihanzhang at 126.com Mon Dec 8 04:03:52 2014 From: ayshihanzhang at 126.com (shihanzhang) Date: Mon, 8 Dec 2014 12:03:52 +0800 (CST) Subject: [openstack-dev] [neutron][sriov] PciDeviceRequestFailed error In-Reply-To: References: Message-ID: <3bef26e6.f22e.14a281154ac.Coremail.ayshihanzhang@126.com> I think the problem is in nova, can you show your "pci_passthrough_whitelist" in nova.conf? At 2014-12-04 18:26:21, "Akilesh K" wrote: Hi, I am using neutron-plugin-sriov-agent. I have configured pci_whitelist in nova.conf I have configured ml2_conf_sriov.ini. But when I launch instance I get the exception in subject. On further checking with the help of some forum messages, I discovered that pci_stats are empty. mysql> select hypervisor_hostname,pci_stats from compute_nodes; +---------------------+-----------+ | hypervisor_hostname | pci_stats | +---------------------+-----------+ | openstack | [] | +---------------------+-----------+ 1 row in set (0.00 sec) Further to this I found that PciDeviceStats.pools is an empty list too. Can anyone tell me what I am missing. Thank you, Ageeleshwar K -------------- next part -------------- An HTML attachment was scrubbed... 
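As a point of reference for the whitelist question above: in Juno-era nova, pci_passthrough_whitelist takes a JSON object in nova.conf, and empty pci_stats typically means that no device discovered on the compute node matched it. The sketch below shows a plausible entry and a simplified version of the matching idea; the vendor/product IDs are illustrative (8086:10ed is an Intel 82599 VF), not taken from this report, and the real values come from `lspci -nn` on the compute host.

```python
# Illustrative sketch: a Juno-era pci_passthrough_whitelist entry is a
# JSON object embedded in nova.conf.  The IDs below are examples only;
# check the actual vendor:product pair on your host with `lspci -nn`.
import json

whitelist_line = (
    'pci_passthrough_whitelist = '
    '{"vendor_id": "8086", "product_id": "10ed", '
    '"physical_network": "physnet1"}'
)

# Parse the value the way an operator can sanity-check it by hand:
value = whitelist_line.split("=", 1)[1].strip()
spec = json.loads(value)

def matches(spec, device):
    """Simplified matching: every whitelist key (except the networking
    tag) must equal the corresponding attribute of the discovered device."""
    return all(device.get(k) == v for k, v in spec.items()
               if k != "physical_network")

device = {"vendor_id": "8086", "product_id": "10ed",
          "address": "0000:06:10.0"}
print(matches(spec, device))  # a matching VF would populate pci_stats
```

If nothing on the host matches, pci_stats stays empty exactly as in the mysql output above, and scheduling fails with PciDeviceRequestFailed.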
URL: From soulxu at gmail.com Mon Dec 8 05:01:49 2014 From: soulxu at gmail.com (Alex Xu) Date: Mon, 8 Dec 2014 13:01:49 +0800 Subject: [openstack-dev] [nova] V3 API support In-Reply-To: References: Message-ID: I think Chris is on vacation. We moved the V3 API to V2.1. V2.1 has some improvements compared to V2. You can find more detail at http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/v2-on-v3-api.html We need to support instance tags for V2.1. And in your patch, we don't need json-schema for V2, just for V2.1. Thanks Alex 2014-12-04 20:50 GMT+08:00 Sergey Nikitin : > Hi, Christopher, > > I am working on an API extension for instance tags ( > https://review.openstack.org/#/c/128940/). Recently one reviewer asked me > to add V3 API support. I talked with Jay Pipes about it and he told me > that the V3 API became useless. So I wanted to ask you and our community: "Do > we need to support the v3 API in future nova patches?" > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Mon Dec 8 05:13:54 2014 From: soulxu at gmail.com (Alex Xu) Date: Mon, 8 Dec 2014 13:13:54 +0800 Subject: [openstack-dev] [api] Using query string or request body to pass parameter Message-ID: Hi, I have a question about using the query string or the request body to pass parameters in a REST API. This question came up when I reviewed this spec: https://review.openstack.org/#/c/131633/6..7/specs/kilo/approved/judge-service-state-when-deleting.rst Using the request body has more benefits: 1. The request body can be validated by json-schema 2. The json-schema can document what the parameter accepts Should we have a guideline for this? -------------- next part -------------- An HTML attachment was scrubbed...
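Alex's first point can be illustrated with a small sketch. Nova's v2.1 API does this with the jsonschema library; the minimal validator below is a dependency-free stand-in that shows the idea, and the DELETE-body schema itself is hypothetical.

```python
# Illustrative sketch of validating a request body against a schema
# before the API acts on it.  Real Nova v2.1 uses the jsonschema
# library; this tiny validator is a stand-in, and the schema for a
# hypothetical DELETE body is invented for the example.
delete_schema = {
    "type": "object",
    "properties": {
        "force": {"type": "boolean"},
        "reason": {"type": "string"},
    },
    "required": ["force"],
    "additionalProperties": False,
}

TYPES = {"object": dict, "boolean": bool, "string": str}

def validate(body, schema):
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    if not isinstance(body, TYPES[schema["type"]]):
        return ["body is not an object"]
    for key in schema.get("required", []):
        if key not in body:
            errors.append("missing required property: %s" % key)
    for key, value in body.items():
        prop = schema["properties"].get(key)
        if prop is None:
            if not schema.get("additionalProperties", True):
                errors.append("unexpected property: %s" % key)
        elif not isinstance(value, TYPES[prop["type"]]):
            errors.append("%s has wrong type" % key)
    return errors

print(validate({"force": True}, delete_schema))   # valid
print(validate({"force": "yes"}, delete_schema))  # wrong type
```

A query string, by contrast, arrives as flat strings, so typed checks like the boolean one above have to be hand-rolled, which is the benefit Alex describes.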
URL: From taget at linux.vnet.ibm.com Mon Dec 8 06:07:53 2014 From: taget at linux.vnet.ibm.com (Eli Qiao) Date: Mon, 08 Dec 2014 14:07:53 +0800 Subject: [openstack-dev] [api] Using query string or request body to pass parameter In-Reply-To: References: Message-ID: <54854039.3030405@linux.vnet.ibm.com> On 2014-12-08 13:13, Alex Xu wrote: > Hi, > > I have a question about using the query string or the request body to pass parameters in a REST API. I wonder if we can use a body in DELETE; currently there isn't any case of this in the v2/v3 API. > > This question came up when I reviewed this spec: > https://review.openstack.org/#/c/131633/6..7/specs/kilo/approved/judge-service-state-when-deleting.rst > > Using the request body has more benefits: > 1. The request body can be validated by json-schema > 2. The json-schema can document what the parameter accepts > > Should we have a guideline for this? > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Thanks, Eli (Li Yong) Qiao -------------- next part -------------- An HTML attachment was scrubbed... URL: From jichenjc at cn.ibm.com Mon Dec 8 06:19:07 2014 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Mon, 8 Dec 2014 14:19:07 +0800 Subject: [openstack-dev] [api] Using query string or request body to pass parameter In-Reply-To: <54854039.3030405@linux.vnet.ibm.com> References: <54854039.3030405@linux.vnet.ibm.com> Message-ID: Found something that might be helpful for you: http://stackoverflow.com/questions/299628/is-an-entity-body-allowed-for-an-http-delete-request Best Regards! Kevin (Chen) Ji
Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82454158 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: Eli Qiao To: "OpenStack Development Mailing List (not for usage questions)" Date: 12/08/2014 02:10 PM Subject: Re: [openstack-dev] [api] Using query string or request body to pass parameter ? 2014?12?08? 13:13, Alex Xu ??: Hi, I have question about using query string or request body for REST API. I wonder if we can use body in delete, currently , there isn't any case used in v2/v3 api. This question found when I review this spec: https://review.openstack.org/#/c/131633/6..7/specs/kilo/approved/judge-service-state-when-deleting.rst Think about use request body will have more benefit: 1. Request body can be validated by json-schema 2. json-schema can doc what can be accepted by the parameter Should we have guideline for this? _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Thanks, Eli (Li Yong) Qiao_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From SamuelB at Radware.com Mon Dec 8 07:43:31 2014 From: SamuelB at Radware.com (Samuel Bercovici) Date: Mon, 8 Dec 2014 07:43:31 +0000 Subject: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. 
In-Reply-To: References: <1416867738.3960.19.camel@localhost> <1417737958.4577.25.camel@localhost> Message-ID: +1 From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] Sent: Friday, December 05, 2014 7:59 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. German-- but the point is that sharing apparently has no effect on the number of permutations for status information. The only difference here is that without sharing it's more work for the user to maintain and modify trees of objects. On Fri, Dec 5, 2014 at 9:36 AM, Eichberger, German > wrote: Hi Brandon + Stephen, Having all those permutations (and potentially testing them) made us lean against the sharing case in the first place. It?s just a lot of extra work for only a small number of our customers. German From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] Sent: Thursday, December 04, 2014 9:17 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. Hi Brandon, Yeah, in your example, member1 could potentially have 8 different statuses (and this is a small example!)... If that member starts flapping, it means that every time it flaps there are 8 notifications being passed upstream. Note that this problem actually doesn't get any better if we're not sharing objects but are just duplicating them (ie. not sharing objects but the user makes references to the same back-end machine as 8 different members.) To be honest, I don't see sharing entities at many levels like this being the rule for most of our installations-- maybe a few percentage points of installations will do an excessive sharing of members, but I doubt it. So really, even though reporting status like this is likely to generate a pretty big tree of data, I don't think this is actually a problem, eh. 
And I don't see sharing entities actually reducing the workload of what needs to happen behind the scenes. (It just allows us to conceal more of this work from the user.) Stephen On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan > wrote: Sorry it's taken me a while to respond to this. So I wasn't thinking about this correctly. I was afraid you would have to pass in a full tree of parent child representations to /loadbalancers to update anything a load balancer is associated with (including down to members). However, after thinking about it, a user would just make an association call on each object. For example, associate member1 with pool1, associate pool1 with listener1, then associate loadbalancer1 with listener1. Updating is just as simple as updating each entity. This does bring up another problem though. If a listener can live on many load balancers, and a pool can live on many listeners, and a member can live on many pools, there's a lot of permutations to keep track of for status. You can't just link a member's status to a load balancer because a member can exist on many pools under that load balancer, and each pool can exist under many listeners under that load balancer. For example, say I have these: lb1 lb2 listener1 listener2 pool1 pool2 member1 member2 lb1 -> [listener1, listener2] lb2 -> [listener1] listener1 -> [pool1, pool2] listener2 -> [pool1] pool1 -> [member1, member2] pool2 -> [member1] member1 can now have different statuses under pool1 and pool2. Since listener1 and listener2 both have pool1, this means member1 will now have a different status for the listener1 -> pool1 and listener2 -> pool1 combinations. And so forth for load balancers. Basically there's a lot of permutations and combinations to keep track of with this model for statuses. Showing these in the body of load balancer details can get quite large. I hope this makes sense because my brain is ready to explode.
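Brandon's point about permutations can be made concrete with a short sketch that enumerates the distinct (load balancer, listener, pool, member) status paths implied by his example topology:

```python
# Illustrative sketch: enumerate the distinct status "paths" implied by
# Brandon's example topology.  With sharing, each (lb, listener, pool,
# member) chain needs its own status entry.
lbs = {"lb1": ["listener1", "listener2"], "lb2": ["listener1"]}
listeners = {"listener1": ["pool1", "pool2"], "listener2": ["pool1"]}
pools = {"pool1": ["member1", "member2"], "pool2": ["member1"]}

def status_paths():
    """Return every (lb, listener, pool, member) chain in the topology."""
    paths = []
    for lb, ls in sorted(lbs.items()):
        for listener in ls:
            for pool in listeners[listener]:
                for member in pools[pool]:
                    paths.append((lb, listener, pool, member))
    return paths

paths = status_paths()
print(len(paths))  # distinct member status entries to track
print(sum(1 for p in paths if p[3] == "member1"))
```

Even with this small topology there are already 8 member status entries to track, 5 of them for member1 alone, which is the combinatorial growth Brandon describes.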
Thanks, Brandon On Thu, 2014-11-27 at 08:52 +0000, Samuel Bercovici wrote: > Brandon, can you please explain further (1) bellow? > > -----Original Message----- > From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM] > Sent: Tuesday, November 25, 2014 12:23 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. > > My impression is that the statuses of each entity will be shown on a detailed info request of a loadbalancer. The root level objects would not have any statuses. For example a user makes a GET request to /loadbalancers/{lb_id} and the status of every child of that load balancer is show in a "status_tree" json object. For example: > > {"name": "loadbalancer1", > "status_tree": > {"listeners": > [{"name": "listener1", "operating_status": "ACTIVE", > "default_pool": > {"name": "pool1", "status": "ACTIVE", > "members": > [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}} > > Sam, correct me if I am wrong. > > I generally like this idea. I do have a few reservations with this: > > 1) Creating and updating a load balancer requires a full tree configuration with the current extension/plugin logic in neutron. Since updates will require a full tree, it means the user would have to know the full tree configuration just to simply update a name. Solving this would require nested child resources in the URL, which the current neutron extension/plugin does not allow. Maybe the new one will. > > 2) The status_tree can get quite large depending on the number of listeners and pools being used. This is a minor issue really as it will make horizon's (or any other UI tool's) job easier to show statuses. 
> > Thanks, > Brandon > > On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote: > > Hi Samuel, > > > > > > We've actually been avoiding having a deeper discussion about status > > in Neutron LBaaS since this can get pretty hairy as the back-end > > implementations get more complicated. I suspect managing that is > > probably one of the bigger reasons we have disagreements around object > > sharing. Perhaps it's time we discussed representing state "correctly" > > (whatever that means), instead of a round-a-bout discussion about > > object sharing (which, I think, is really just avoiding this issue)? > > > > > > Do you have a proposal about how status should be represented > > (possibly including a description of the state machine) if we collapse > > everything down to be logical objects except the loadbalancer object? > > (From what you're proposing, I suspect it might be too general to, for > > example, represent the UP/DOWN status of members of a given pool.) > > > > > > Also, from an haproxy perspective, sharing pools within a single > > listener actually isn't a problem. That is to say, having the same > > L7Policy pointing at the same pool is OK, so I personally don't have a > > problem allowing sharing of objects within the scope of parent > > objects. What do the rest of y'all think? > > > > > > Stephen > > > > > > > > On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici > > > wrote: > > Hi Stephen, > > > > > > > > 1. The issue is that if we do 1:1 and allow status/state > > to proliferate throughout all objects we will then get an > > issue to fix it later, hence even if we do not do sharing, I > > would still like to have all objects besides LB be treated as > > logical. > > > > 2. The 3rd use case bellow will not be reasonable without > > pool sharing between different policies. Specifying different > > pools which are the same for each policy make it non-started > > to me. > > > > > > > > -Sam. 
> > > > > > > > > > > > > > > > From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] > > Sent: Friday, November 21, 2014 10:26 PM > > To: OpenStack Development Mailing List (not for usage > > questions) > > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects > > in LBaaS - Use Cases that led us to adopt this. > > > > > > > > I think the idea was to implement 1:1 initially to reduce the > > amount of code and operational complexity we'd have to deal > > with in initial revisions of LBaaS v2. Many to many can be > > simulated in this scenario, though it does shift the burden of > > maintenance to the end user. It does greatly simplify the > > initial code for v2, in any case, though. > > > > > > > > > > > > Did we ever agree to allowing listeners to be shared among > > load balancers? I think that still might be a N:1 > > relationship even in our latest models. > > > > > > > > > > There's also the difficulty introduced by supporting different > > flavors: Since flavors are essentially an association between > > a load balancer object and a driver (with parameters), once > > flavors are introduced, any sub-objects of a given load > > balancer objects must necessarily be purely logical until they > > are associated with a load balancer. I know there was talk of > > forcing these objects to be sub-objects of a load balancer > > which can't be accessed independently of the load balancer > > (which would have much the same effect as what you discuss: > > State / status only make sense once logical objects have an > > instantiation somewhere.) However, the currently proposed API > > treats most objects as root objects, which breaks this > > paradigm. > > > > > > > > > > > > How we handle status and updates once there's an instantiation > > of these logical objects is where we start getting into real > > complexity. 
> > > > > > > > > > > > It seems to me there's a lot of complexity introduced when we > > allow a lot of many to many relationships without a whole lot > > of benefit in real-world deployment scenarios. In most cases, > > objects are not going to be shared, and in those cases with > > sufficiently complicated deployments in which shared objects > > could be used, the user is likely to be sophisticated enough > > and skilled enough to manage updating what are essentially > > "copies" of objects, and would likely have an opinion about > > how individual failures should be handled which wouldn't > > necessarily coincide with what we developers of the system > > would assume. That is to say, allowing too many many to many > > relationships feels like a solution to a problem that doesn't > > really exist, and introduces a lot of unnecessary complexity. > > > > > > > > > > > > In any case, though, I feel like we should walk before we run: > > Implementing 1:1 initially is a good idea to get us rolling. > > Whether we then implement 1:N or M:N after that is another > > question entirely. But in any case, it seems like a bad idea > > to try to start with M:N. > > > > > > > > > > > > Stephen > > > > > > > > > > > > > > > > On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici > > > wrote: > > > > Hi, > > > > Per discussion I had at OpenStack Summit/Paris with Brandon > > and Doug, I would like to remind everyone why we choose to > > follow a model where pools and listeners are shared (many to > > many relationships). > > > > Use Cases: > > 1. The same application is being exposed via different LB > > objects. > > For example: users coming from the internal "private" > > organization network, have an LB1(private_VIP) --> > > Listener1(TLS) -->Pool1 and user coming from the "internet", > > have LB2(public_vip)-->Listener1(TLS)-->Pool1. 
> > This may also happen to support ipv4 and ipv6: LB_v4(ipv4_VIP) > > --> Listener1(TLS) -->Pool1 and LB_v6(ipv6_VIP) --> > > Listener1(TLS) -->Pool1 > > The operator would like to be able to manage the pool > > membership in cases of updates and error in a single place. > > > > 2. The same group of servers is being used via different > > listeners optionally also connected to different LB objects. > > For example: users coming from the internal "private" > > organization network, have an LB1(private_VIP) --> > > Listener1(HTTP) -->Pool1 and user coming from the "internet", > > have LB2(public_vip)-->Listener2(TLS)-->Pool1. > > The LBs may use different flavors as LB2 needs TLS termination > > and may prefer a different "stronger" flavor. > > The operator would like to be able to manage the pool > > membership in cases of updates and error in a single place. > > > > 3. The same group of servers is being used in several > > different L7_Policies connected to a listener. Such listener > > may be reused as in use case 1. > > For example: LB1(VIP1)-->Listener_L7(TLS) > > | > > > > +-->L7_Policy1(rules..)-->Pool1 > > | > > > > +-->L7_Policy2(rules..)-->Pool2 > > | > > > > +-->L7_Policy3(rules..)-->Pool1 > > | > > > > +-->L7_Policy3(rules..)-->Reject > > > > > > I think that the "key" issue handling correctly the > > "provisioning" state and the operation state in a many to many > > model. > > This is an issue as we have attached status fields to each and > > every object in the model. > > A side effect of the above is that to understand the > > "provisioning/operation" status one needs to check many > > different objects. > > > > To remedy this, I would like to turn all objects besides the > > LB to be logical objects. This means that the only place to > > manage the status/state will be on the LB object. 
> > Such status should be hierarchical so that logical object > > attached to an LB, would have their status consumed out of the > > LB object itself (in case of an error). > > We also need to discuss how modifications of a logical object > > will be "rendered" to the concrete LB objects. > > You may want to revisit > > https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r the "Logical Model + Provisioning Status + Operation Status + Statistics" for a somewhat more detailed explanation albeit it uses the LBaaS v1 model as a reference. > > > > Regards, > > -Sam. > > > > > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > > > -- > > > > Stephen Balukoff > > Blue Box Group, LLC > > (800)613-4305 x807 > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > -- > > Stephen Balukoff > > Blue Box Group, LLC > > (800)613-4305 x807 > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
-- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807

_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From daya_k at yahoo.com Mon Dec 8 07:44:23 2014 From: daya_k at yahoo.com (daya kamath) Date: Mon, 8 Dec 2014 07:44:23 +0000 (UTC) Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: References: Message-ID: <476497005.1536685.1418024663452.JavaMail.yahoo@jws10706.mail.gq1.yahoo.com>

+1 for revolving meetings

From: Kurt Taylor To: OpenStack Development Mailing List (not for usage questions) ; openstack-infra at lists.openstack.org Sent: Friday, December 5, 2014 8:38 PM Subject: Re: [OpenStack-Infra] [openstack-dev] [third-party]Time for Additional Meeting for third-party

In my opinion, further discussion is needed. The proposal on the table is to have 2 weekly meetings, one at the existing time of 1800 UTC on Monday and, also in the same week, to have another meeting at 0800 UTC on Tuesday. Here are some of the problems that I see with this approach:

1. Meeting content: Having 2 meetings per week is more than is needed at this stage of the working group. There just isn't enough meeting content to justify having two meetings every week.
2. Decisions: Any decision made at one meeting will potentially be undone at the next, or at least not fully explained. It will be difficult to keep consistent direction with the overall work group.
3. Meeting chair(s): Currently we do not have a commitment for a long-term chair of this new second weekly meeting. I will not be able to attend this new meeting at the proposed time.
4. Current meeting time: I am not aware of anyone that likes the current time of 1800 UTC on Monday.
The current time is the main reason it is hard for EU and APAC CI Operators to attend. My proposal was to have only 1 meeting per week at alternating times, just as other work groups have done to solve this problem. (See examples at: https://wiki.openstack.org/wiki/Meetings) ?I volunteered to chair, then ask other CI Operators to chair as the meetings evolved. The meeting times could be any between 1300-0300 UTC. That way, one week we are good for US and Europe, the next week for APAC. Kurt Taylor (krtaylor) On Wed, Dec 3, 2014 at 11:10 PM, trinath.somanchi at freescale.com wrote: +1. -- Trinath Somanchi - B39208 trinath.somanchi at freescale.com | extn: 4048 -----Original Message----- From: Anita Kuno [mailto:anteaya at anteaya.info] Sent: Thursday, December 04, 2014 3:55 AM To: openstack-infra at lists.openstack.org Subject: Re: [OpenStack-Infra] [openstack-dev] [third-party]Time for Additional Meeting for third-party On 12/03/2014 03:15 AM, Omri Marcovitch wrote: > Hello Anteaya, > > A meeting between 8:00 - 16:00 UTC time will be great (Israel). > > > Thanks > Omri > > -----Original Message----- > From: Joshua Hesketh [mailto:joshua.hesketh at rackspace.com] > Sent: Wednesday, December 03, 2014 9:04 AM > To: He, Yongli; OpenStack Development Mailing List (not for usage > questions); openstack-infra at lists.openstack.org > Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for > Additional Meeting for third-party > > Hey, > > 0700 -> 1000 UTC would work for me most weeks fwiw. > > Cheers, > Josh > > Rackspace Australia > > On 12/3/14 11:17 AM, He, Yongli wrote: >> anteaya, >> >> UTC 7:00 AM to UTC9:00, or UTC11:30 to UTC13:00 is ideal time for china. >> >> if there is no time slot there, just pick up any time between UTC >> 7:00 AM to UCT 13:00. ( UTC9:00 to UTC 11:30 is on road to home and >> dinner.) 
>> >> Yongi He >> -----Original Message----- >> From: Anita Kuno [mailto:anteaya at anteaya.info] >> Sent: Tuesday, December 02, 2014 4:07 AM >> To: openstack Development Mailing List; >> openstack-infra at lists.openstack.org >> Subject: [openstack-dev] [third-party]Time for Additional Meeting for >> third-party >> >> One of the actions from the Kilo Third-Party CI summit session was to start up an additional meeting for CI operators to participate from non-North American time zones. >> >> Please reply to this email with times/days that would work for you. The current third party meeting is on Mondays at 1800 utc which works well since Infra meetings are on Tuesdays. If we could find a time that works for Europe and APAC that is also on Monday that would be ideal. >> >> Josh Hesketh has said he will try to be available for these meetings, he is in Australia. >> >> Let's get a sense of what days and timeframes work for those interested and then we can narrow it down and pick a channel. >> >> Thanks everyone, >> Anita. >> Okay first of all thanks to everyone who replied. Again, to clarify, the purpose of this thread has been to find a suitable additional third-party meeting time geared towards folks in EU and APAC. We live on a sphere, there is no time that will suit everyone. It looks like we are converging on 0800 UTC as a time and I am going to suggest Tuesdays. We have very little competition for space at that date + time combination so we can use #openstack-meeting (I have already booked the space on the wikipage). So barring further discussion, see you then! Thanks everyone, Anita. 
_______________________________________________ OpenStack-Infra mailing list OpenStack-Infra at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra _______________________________________________ OpenStack-Infra mailing list OpenStack-Infra at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra _______________________________________________ OpenStack-Infra mailing list OpenStack-Infra at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -------------- next part -------------- An HTML attachment was scrubbed... URL: From eduard.matei at cloudfounders.com Mon Dec 8 08:05:27 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Mon, 8 Dec 2014 10:05:27 +0200 Subject: [openstack-dev] [git] Rebase failed Message-ID: Hi, My review got approved and it was ready to merge but automatic merge failed. I tried to rebase manually but it still fails. I'm not very familiar with git, can someone give a hand? git rebase -i master error: could not apply db4a3bb... New Cinder volume driver for openvstorage. When you have resolved this problem, run "git rebase --continue". If you prefer to skip this patch, run "git rebase --skip" instead. To check out the original branch and stop rebasing, run "git rebase --abort". Could not apply db4a3bb3645b27de7b12c0aa405bde3530dd19f8... New Cinder volume driver for openvstorage. git rebase --continue etc/cinder/cinder.conf.sample: needs merge You must edit all merge conflicts and then mark them as resolved using git add Review is: https://review.openstack.org/#/c/130733/ Thanks, Eduard -- *Eduard Biceri Matei, Senior Software Developer* www.cloudfounders.com | eduard.matei at cloudfounders.com *CloudFounders, The Private Cloud Software Company* Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. 
If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From skalinowski at mirantis.com Mon Dec 8 08:07:29 2014 From: skalinowski at mirantis.com (Sebastian Kalinowski) Date: Mon, 8 Dec 2014 09:07:29 +0100 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> <547F3D48.5020401@gmail.com> <1A3C52DFCD06494D8528644858247BF017812DEB@EX10MBOX03.pnnl.gov> Message-ID:

2014-12-04 14:01 GMT+01:00 Igor Kalnitsky : > Ok, guys, > > It became obvious that most of us either vote for Pecan or abstain from > voting. >

Yes, and it's been 4 days since the last message in this thread and no objections, so it seems that Pecan is now our framework-of-choice for Nailgun and future apps/projects.

> > So I propose to stop fighting this battle (Flask vs Pecan) and start > thinking about moving to Pecan.
> You know, there are many questions > that need to be discussed (such as 'should we change API version' or > 'should it be done iteratively or as one patchset'). >

IMHO small, iterative changes are rather obvious. For the other questions maybe we need (a draft of) a blueprint and a separate mail thread?

> - Igor

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From taget at linux.vnet.ibm.com Mon Dec 8 08:13:13 2014 From: taget at linux.vnet.ibm.com (Eli Qiao) Date: Mon, 08 Dec 2014 16:13:13 +0800 Subject: [openstack-dev] [git] Rebase failed In-Reply-To: References: Message-ID: <54855D99.1030002@linux.vnet.ibm.com>

On 2014-12-08 16:05, Eduard Matei wrote: > Hi, > My review got approved and it was ready to merge but automatic merge > failed. > I tried to rebase manually but it still fails. > > I'm not very familiar with git, can someone give a hand? > > git rebase -i master > error: could not apply db4a3bb... New Cinder volume driver for > openvstorage.

hi Eduard, yeah, you need to resolve the conflicts manually. Run 'git status' to check which files need changes (file names shown in red), then find the <<< conflict markers in each file and modify it.
> git rebase --continue > etc/cinder/cinder.conf.sample: needs merge > You must edit all merge conflicts and then > mark them as resolved using git add > > Review is: https://review.openstack.org/#/c/130733/ > > Thanks, > Eduard > > -- > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com | > eduard.matei at cloudfounders.com > > *CloudFounders, The Private Cloud Software Company* > Disclaimer: > This email and any files transmitted with it are confidential and > intended solely for the use of the individual or entity to whom they > are addressed.If you are not the named addressee or an employee or > agent responsible for delivering this message to the named addressee, > you are hereby notified that you are not authorized to read, print, > retain, copy or disseminate this message or any part of it. If you > have received this email in error we request you to notify us by reply > e-mail and to delete all electronic files of the message. If you are > not the intended recipient you are notified that disclosing, copying, > distributing or taking any action in reliance on the contents of this > information is strictly prohibited. E-mail transmission cannot be > guaranteed to be secure or error free as information could be > intercepted, corrupted, lost, destroyed, arrive late or incomplete, or > contain viruses. The sender therefore does not accept liability for > any errors or omissions in the content of this message, and shall have > no liability for any loss or damage suffered by the user, which arise > as a result of e-mail transmission. > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Thanks, Eli (Li Yong) Qiao -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From sbauza at redhat.com Mon Dec 8 08:15:27 2014 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 08 Dec 2014 09:15:27 +0100 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <5484D434.7080507@danplanet.com> References: <54834007.5070007@gmail.com> <54848820.7060802@gmail.com> <20141207213344.GE26706@sventech.com> <5484D434.7080507@danplanet.com> Message-ID: <54855E1F.2050108@redhat.com>

Le 07/12/2014 23:27, Dan Smith a écrit :
>> The argument boils down to there is a communications cost to adding >> someone to core, and therefore there is a maximum size before the >> communications burden becomes too great.
> I'm definitely of the mindset that the core team is something that has a > maximum effective size. Nova is complicated and always changing; keeping > everyone on top of current development themes is difficult. Just last > week, we merged a patch that bumped the version of an RPC API without > making the manager tolerant of the previous version. That's a theme > we've had for a while, and yet it was still acked by two cores.
> > A major complaint I hear a lot is "one core told me to do X and then > another core told me to do !X". Obviously this will always happen, but I > do think that the larger and more disconnected the core team becomes, > the more often this will occur. If all the cores reviewed at the rate of > the top five and we still had a throughput problem, then evaluating the > optimal size would be a thing we'd need to do. However, even at the > current size, we have (IMHO) communication problems, mostly uninvolved > cores, and patches going in that break versioning rules. Making the team > arbitrarily larger doesn't seem like a good idea to me.

As a non-core, I can't speak about how cores communicate within the team.
That said, I can just say it is sometimes very hard to review all the codepaths that Nova has, in particular when some new rules are coming in (for example, API microversions, online data migrations or reducing the tech debt in the Scheduler). As a consequence, I can understand that some people can make mistakes when reviewing a specific change because they are not experts or because they missed some important unwritten good practice.

That said, I think this situation doesn't necessarily mean that it can't be improved by simple rules. For example, the revert policy is a good thing: errors can happen, and admitting that it's normal for a revert to happen in the next couple of days seems fine by me. Also, why not consider that some cores are more expert than others in a given codepath? I mean, we all know who to address if we have some specific questions about a change (like impacting virt drivers, objects, or API). So, why shouldn't a change be at least +1'd by these expert cores before *approving* it?

As Nova is growing, I'm not sure if it's good to cap the team. IMHO, mistakes are human; that shouldn't be the reason why the team is not growing, but rather how we can make sure that disagreements wouldn't be a problem.

(Now going back in my cavern)

-Sylvain

>> I will say that I am disappointed that we have cores who don't >> regularly attend our IRC meetings. That makes the communication much >> more complicated.
> Agreed. We alternate the meeting times such that this shouldn't be hard, > IMHO.
> > --Dan
> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From gkotton at vmware.com Mon Dec 8 08:16:43 2014 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 8 Dec 2014 08:16:43 +0000 Subject: [openstack-dev] [git] Rebase failed In-Reply-To: References: Message-ID:

Hi, I do the following:
1. Go to the master branch: git checkout master
2. Get the latest master code: git pull
3. Check out your code: git checkout yourbranchname
4. Rebase this on the latest master: git rebase -i master
5. Resolve conflicts: git status (this will show conflicts)
6. Resolve them - look in the file and search for the <<<<<<< HEAD markers
7. Fix and then do git add filename-that-was-updated
8. Continue: git rebase --continue
9. Commit ...

Hope that helps Thanks Gary

From: Eduard Matei > Reply-To: OpenStack List > Date: Monday, December 8, 2014 at 10:05 AM To: OpenStack List > Subject: [openstack-dev] [git] Rebase failed

Hi, My review got approved and it was ready to merge but automatic merge failed. I tried to rebase manually but it still fails. I'm not very familiar with git, can someone give a hand?

git rebase -i master error: could not apply db4a3bb... New Cinder volume driver for openvstorage. When you have resolved this problem, run "git rebase --continue". If you prefer to skip this patch, run "git rebase --skip" instead. To check out the original branch and stop rebasing, run "git rebase --abort". Could not apply db4a3bb3645b27de7b12c0aa405bde3530dd19f8... New Cinder volume driver for openvstorage. git rebase --continue etc/cinder/cinder.conf.sample: needs merge You must edit all merge conflicts and then mark them as resolved using git add

Review is: https://review.openstack.org/#/c/130733/ Thanks, Eduard

-- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.matei at cloudfounders.com CloudFounders, The Private Cloud Software Company Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed.
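The rebase-and-resolve cycle described in this thread can be tried safely end to end in a throwaway repository. The sketch below is a demo under invented names (the temp repo, the cinder.conf.sample file, and the mybranch branch are all made up for illustration, not taken from Eduard's actual review): it manufactures a conflict, shows where the rebase stops, resolves the file, and continues.

```shell
#!/usr/bin/env bash
# Demo only: replay the conflict -> edit -> git add -> rebase --continue cycle
# in a scratch repo, so nothing real is touched.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
base=$(git symbolic-ref --short HEAD)   # usually "master" or "main"

echo "driver = old" > cinder.conf.sample
git add cinder.conf.sample
git commit -qm "base"

# Our feature branch changes the file one way...
git checkout -qb mybranch
echo "driver = openvstorage" > cinder.conf.sample
git commit -qam "New Cinder volume driver"

# ...while the base branch moves on and changes it another way.
git checkout -q "$base"
echo "driver = lvm" > cinder.conf.sample
git commit -qam "base branch moved on"

# Back on our branch, rebase onto the updated base; this stops on a conflict.
git checkout -q mybranch
git rebase "$base" || true
git status --short                  # shows the conflicted file as "UU"

# Resolve by editing the file (here we simply keep our version), mark it
# resolved with git add, then continue the rebase.
echo "driver = openvstorage" > cinder.conf.sample
git add cinder.conf.sample
GIT_EDITOR=true git rebase --continue
git log --oneline                   # our commit now sits on top of the base branch
```

In the real case the edit step means opening the file, finding the <<<<<<< / ======= / >>>>>>> markers, and keeping the correct combination of both sides rather than blindly taking one version.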
If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Mon Dec 8 08:18:16 2014 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 08 Dec 2014 09:18:16 +0100 Subject: [openstack-dev] [git] Rebase failed In-Reply-To: <54855D99.1030002@linux.vnet.ibm.com> References: <54855D99.1030002@linux.vnet.ibm.com> Message-ID: <54855EC8.3040301@redhat.com> Le 08/12/2014 09:13, Eli Qiao a ?crit : > > ? 2014?12?08? 16:05, Eduard Matei ??: >> Hi, >> My review got approved and it was ready to merge but automatic merge >> failed. >> I tried to rebase manually but it still fails. >> >> I'm not very familiar with git, can someone give a hand? >> >> git rebase -i master >> error: could not apply db4a3bb... New Cinder volume driver for >> openvstorage. >> > hi Eduard > yeah, you need to manually rebase the changes. > git status check which files should be changes (file name in red) and > find <<< in > the file , modify it. 
> after doing that, git add; all conflicted files need to be done, then git rebase --continue.

Or you can just raise the magical wand and invoke "git mergetool" with your favourite editor (vimdiff, kdiff3 or whatever else) and let this tool show you the difference between the base branch (when the code branched), the local branch (the master branch) and the remote branch (your changes)

My USD0.02 -Sylvain

>> When you have resolved this problem, run "git rebase --continue". >> If you prefer to skip this patch, run "git rebase --skip" instead. >> To check out the original branch and stop rebasing, run "git rebase >> --abort". >> Could not apply db4a3bb3645b27de7b12c0aa405bde3530dd19f8... New >> Cinder volume driver for openvstorage. >> git rebase --continue >> etc/cinder/cinder.conf.sample: needs merge >> You must edit all merge conflicts and then >> mark them as resolved using git add >> >> Review is: https://review.openstack.org/#/c/130733/ >> >> Thanks, >> Eduard >> >> -- >> *Eduard Biceri Matei, Senior Software Developer* >> www.cloudfounders.com | >> eduard.matei at cloudfounders.com >> >> *CloudFounders, The Private Cloud Software Company*

>> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> -- > Thanks, > Eli (Li Yong) Qiao

> _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From gkotton at vmware.com Mon Dec 8 08:25:07 2014 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 8 Dec 2014 08:25:07 +0000 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <54855E1F.2050108@redhat.com> References: <54834007.5070007@gmail.com> <54848820.7060802@gmail.com> <20141207213344.GE26706@sventech.com> <5484D434.7080507@danplanet.com> <54855E1F.2050108@redhat.com> Message-ID:

Hi, I would expect that if a core does not understand a piece of code then he/she would not approve it; they can always give a +1 and be honest that it is not part of the code base that they understand. That is legitimate in such a complex and large project. We all make mistakes; it is the only way that we can learn and grow. Limiting the size of the core team is limiting the growth, quality and pulse of the project.
Thanks
Gary

From: Sylvain Bauza
Reply-To: OpenStack List
Date: Monday, December 8, 2014 at 10:15 AM
To: OpenStack List
Subject: Re: [openstack-dev] [Nova] Spring cleaning nova-core

Le 07/12/2014 23:27, Dan Smith a écrit :

The argument boils down to there is a communications cost to adding someone to core, and therefore there is a maximum size before the communications burden becomes too great.

I'm definitely of the mindset that the core team is something that has a maximum effective size. Nova is complicated and always changing; keeping everyone on top of current development themes is difficult. Just last week, we merged a patch that bumped the version of an RPC API without making the manager tolerant of the previous version. That's a theme we've had for a while, and yet it was still acked by two cores.

A major complaint I hear a lot is "one core told me to do X and then another core told me to do !X". Obviously this will always happen, but I do think that the larger and more disconnected the core team becomes, the more often this will occur.

If all the cores reviewed at the rate of the top five and we still had a throughput problem, then evaluating the optimal size would be a thing we'd need to do. However, even at the current size, we have (IMHO) communication problems, mostly uninvolved cores, and patches going in that break versioning rules. Making the team arbitrarily larger doesn't seem like a good idea to me.

As a non-core, I can't speak about how cores communicate within the team. That said, I can just say it is sometimes very hard to review all the codepaths that Nova has, in particular when some new rules are coming in (for example, API microversions, online data migrations or reducing the tech debt in the Scheduler). As a consequence, I can understand that some people can make mistakes when reviewing a specific change because they are not experts or because they missed some important unwritten good practice.
That said, I think this situation doesn't necessarily mean that it can't be improved by simple rules. For example, the revert policy is a good thing: errors can happen, and admitting that it's normal that a revert can happen in the next couple of days seems fine by me.

Also, why not consider that some cores are more expert than others in a single codepath? I mean, we all know who to address if we have some specific questions about a change (like impacting virt drivers, objects, or API). So, why wouldn't a change be at least +1'd by these expert cores before *approving* it?

As Nova is growing, I'm not sure if it's good to cap the team. IMHO, mistakes are human; that shouldn't be the reason why the team is not growing, but rather a prompt for how we can make sure that disagreements wouldn't be a problem.

(Now going back to my cavern)
-Sylvain

I will say that I am disappointed that we have cores who don't regularly attend our IRC meetings. That makes the communication much more complicated.

Agreed. We alternate the meeting times such that this shouldn't be hard, IMHO.

--Dan

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From eduard.matei at cloudfounders.com  Mon Dec  8 08:25:43 2014
From: eduard.matei at cloudfounders.com (Eduard Matei)
Date: Mon, 8 Dec 2014 10:25:43 +0200
Subject: [openstack-dev] [git] Rebase failed
In-Reply-To: <54855EC8.3040301@redhat.com>
References: <54855D99.1030002@linux.vnet.ibm.com> <54855EC8.3040301@redhat.com>
Message-ID: 

Thanks guys,
It seems one of the files i was trying to merge was actually removed by remote (?).
So i removed it locally (git rm ... ) and now rebase worked, but now i had an extra patchset.
Will this be merged automatically or does it need reviewers again?
Thanks,
Eduard

On Mon, Dec 8, 2014 at 10:18 AM, Sylvain Bauza wrote:
>
> Le 08/12/2014 09:13, Eli Qiao a écrit :
>
> 在 2014年12月08日 16:05, Eduard Matei 写道:
>
> Hi,
> My review got approved and it was ready to merge but automatic merge
> failed.
> I tried to rebase manually but it still fails.
>
> I'm not very familiar with git, can someone give a hand?
>
> git rebase -i master
> error: could not apply db4a3bb... New Cinder volume driver for
> openvstorage.
>
> hi Eduard
> yeah, you need to manually rebase the changes.
> Run git status to check which files have conflicts (file names in red), then
> find the <<< conflict markers in each file and resolve them. After doing that,
> run git add on all the conflicted files, then git rebase --continue.
>
> Or you can just wave the magic wand and invoke "git mergetool" with
> your favourite editor (vimdiff, kdiff3 or whatever else) and let this
> tool show you the difference between the base branch (where the code
> branched), the local branch (the master branch) and the remote branch (your
> changes)
>
> My $0.02
>
> -Sylvain
>
> When you have resolved this problem, run "git rebase --continue".
> If you prefer to skip this patch, run "git rebase --skip" instead.
> To check out the original branch and stop rebasing, run "git rebase
> --abort".
> Could not apply db4a3bb3645b27de7b12c0aa405bde3530dd19f8... New Cinder
> volume driver for openvstorage.
> git rebase --continue
> etc/cinder/cinder.conf.sample: needs merge
> You must edit all merge conflicts and then
> mark them as resolved using git add
>
> Review is: https://review.openstack.org/#/c/130733/
>
> Thanks,
> Eduard
>
> --
> *Eduard Biceri Matei, Senior Software Developer*
> www.cloudfounders.com | eduard.matei at cloudfounders.com
>
> *CloudFounders, The Private Cloud Software Company*
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Thanks,
> Eli (Li Yong) Qiao
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com | eduard.matei at cloudfounders.com

*CloudFounders, The Private Cloud Software Company*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thierry at openstack.org  Mon Dec  8 08:38:30 2014
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 08 Dec 2014 09:38:30 +0100
Subject: [openstack-dev] [nova] Fixing the console.log grows forever bug.
In-Reply-To: <20141206053852.GB11931@thor.bakeyournoodle.com>
References: <20141206053852.GB11931@thor.bakeyournoodle.com>
Message-ID: <54856386.1030109@openstack.org>

Tony Breeds wrote:
> [...]
> So if that timeline is approximately correct:
>
> - Can we wait this long to fix the bug? As opposed to having it squashed in Kilo.
> - What do we do in nova for the next ~12 months while we know there isn't a qemu to fix this?
> - Then once there is a qemu that fixes the issue, do we just say 'thou must use
>   qemu 2.3.0' or would nova still need to support old and new qemu's?

Fixing it in qemu looks like the right way to fix this issue. If it was simple to fix, it would have been fixed already: this is one of our oldest bugs with security impact. So I'd say yes, this should be fixed in qemu, even if that takes a long time to propagate.

If someone finds an interesting way to work around this issue in Nova, then by all means, add the workaround to Kilo and deprecate it once we can assume everyone moved to newer qemu. But given it's been 3 years this bug has been around, I wouldn't hold my breath.

--
Thierry Carrez (ttx)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: 

From gkotton at vmware.com  Mon Dec  8 08:56:16 2014
From: gkotton at vmware.com (Gary Kotton)
Date: Mon, 8 Dec 2014 08:56:16 +0000
Subject: [openstack-dev] [git] Rebase failed
In-Reply-To: 
References: <54855D99.1030002@linux.vnet.ibm.com> <54855EC8.3040301@redhat.com>
Message-ID: 

Hi,
The whole review process starts again from scratch :). You can feel free to reach out to the guys who originally reviewed and then go from there. Good luck!
Thanks
Gary

From: Eduard Matei
Reply-To: OpenStack List
Date: Monday, December 8, 2014 at 10:25 AM
To: OpenStack List
Subject: Re: [openstack-dev] [git] Rebase failed

Thanks guys,
It seems one of the files i was trying to merge was actually removed by remote (?).
So i removed it locally (git rm ... ) and now rebase worked, but now i had an extra patchset.
Will this be merged automatically or does it need reviewers again?

Thanks,
Eduard

On Mon, Dec 8, 2014 at 10:18 AM, Sylvain Bauza wrote:

Le 08/12/2014 09:13, Eli Qiao a écrit :

在 2014年12月08日 16:05, Eduard Matei 写道:

Hi,
My review got approved and it was ready to merge but automatic merge failed.
I tried to rebase manually but it still fails.

I'm not very familiar with git, can someone give a hand?

git rebase -i master
error: could not apply db4a3bb... New Cinder volume driver for openvstorage.

hi Eduard
yeah, you need to manually rebase the changes.
Run git status to check which files have conflicts (file names in red), then find the <<< conflict markers in each file and resolve them. After doing that, run git add on all the conflicted files, then git rebase --continue.

Or you can just wave the magic wand and invoke "git mergetool" with your favourite editor (vimdiff, kdiff3 or whatever else) and let this tool show you the difference between the base branch (where the code branched), the local branch (the master branch) and the remote branch (your changes)

My $0.02

-Sylvain

When you have resolved this problem, run "git rebase --continue".
If you prefer to skip this patch, run "git rebase --skip" instead.
To check out the original branch and stop rebasing, run "git rebase --abort".
Could not apply db4a3bb3645b27de7b12c0aa405bde3530dd19f8... New Cinder volume driver for openvstorage.
git rebase --continue
etc/cinder/cinder.conf.sample: needs merge
You must edit all merge conflicts and then
mark them as resolved using git add

Review is: https://review.openstack.org/#/c/130733/

Thanks,
Eduard

--
Eduard Biceri Matei, Senior Software Developer
www.cloudfounders.com | eduard.matei at cloudfounders.com

CloudFounders, The Private Cloud Software Company
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Thanks,
Eli (Li Yong) Qiao

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Eduard Biceri Matei, Senior Software Developer
www.cloudfounders.com | eduard.matei at cloudfounders.com

CloudFounders, The Private Cloud Software Company
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jkt at kde.org  Mon Dec  8 08:57:15 2014
From: jkt at kde.org (Jan Kundrát)
Date: Mon, 08 Dec 2014 09:57:15 +0100
Subject: [openstack-dev] [git] Rebase failed
In-Reply-To: 
References: <54855D99.1030002@linux.vnet.ibm.com> <54855EC8.3040301@redhat.com>
Message-ID: 

On Monday, 8 December 2014 09:25:43 CEST, Eduard Matei wrote:
> So i removed it locally (git rm ... ) and now rebase worked, but now i had
> an extra patchset.
> Will this be merged automatically or does it need again reviewers ?

Make sure you have a single commit which includes both the `git rm` and your original changes, and that this commit still has the same Change-Id line as the one you uploaded originally.

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/

From eduard.matei at cloudfounders.com  Mon Dec  8 09:02:08 2014
From: eduard.matei at cloudfounders.com (Eduard Matei)
Date: Mon, 8 Dec 2014 11:02:08 +0200
Subject: [openstack-dev] [git] Rebase failed
In-Reply-To: 
References: <54855D99.1030002@linux.vnet.ibm.com> <54855EC8.3040301@redhat.com>
Message-ID: 

Indeed, it started again the review process.
Thanks for your input.
Eduard

On Mon, Dec 8, 2014 at 10:56 AM, Gary Kotton wrote:
> Hi,
> The whole review process starts again from scratch :). You can feel free
> to reach out to the guys who originally reviewed and then go from there. Good
> luck!
> Thanks
> Gary
>
> From: Eduard Matei
> Reply-To: OpenStack List
> Date: Monday, December 8, 2014 at 10:25 AM
> To: OpenStack List
> Subject: Re: [openstack-dev] [git] Rebase failed
>
> Thanks guys,
> It seems one of the files i was trying to merge was actually removed by
> remote (?).
> So i removed it locally (git rm ... ) and now rebase worked, but now i had
> an extra patchset.
> Will this be merged automatically or does it need again reviewers ?
>
> Thanks,
> Eduard
>
> On Mon, Dec 8, 2014 at 10:18 AM, Sylvain Bauza wrote:
>
>> Le 08/12/2014 09:13, Eli Qiao a écrit :
>>
>> 在 2014年12月08日 16:05, Eduard Matei 写道:
>>
>> Hi,
>> My review got approved and it was ready to merge but automatic merge
>> failed.
>> I tried to rebase manually but it still fails.
>>
>> I'm not very familiar with git, can someone give a hand?
>>
>> git rebase -i master
>> error: could not apply db4a3bb... New Cinder volume driver for
>> openvstorage.
>>
>> hi Eduard
>> yeah, you need to manually rebase the changes.
>> Run git status to check which files have conflicts (file names in red), then
>> find the <<< conflict markers in each file and resolve them. After doing that,
>> run git add on all the conflicted files, then git rebase --continue.
>>
>> Or you can just wave the magic wand and invoke "git mergetool" with
>> your favourite editor (vimdiff, kdiff3 or whatever else) and let this
>> tool show you the difference between the base branch (where the code
>> branched), the local branch (the master branch) and the remote branch (your
>> changes)
>>
>> My $0.02
>>
>> -Sylvain
>>
>> When you have resolved this problem, run "git rebase --continue".
>> If you prefer to skip this patch, run "git rebase --skip" instead.
>> To check out the original branch and stop rebasing, run "git rebase
>> --abort".
>> Could not apply db4a3bb3645b27de7b12c0aa405bde3530dd19f8... New Cinder
>> volume driver for openvstorage.
>> git rebase --continue
>> etc/cinder/cinder.conf.sample: needs merge
>> You must edit all merge conflicts and then
>> mark them as resolved using git add
>>
>> Review is: https://review.openstack.org/#/c/130733/
>>
>> Thanks,
>> Eduard
>>
>> --
>> *Eduard Biceri Matei, Senior Software Developer*
>> www.cloudfounders.com | eduard.matei at cloudfounders.com
>>
>> *CloudFounders, The Private Cloud Software Company*
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> --
>> Thanks,
>> Eli (Li Yong) Qiao
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> *Eduard Biceri Matei, Senior Software Developer*
> www.cloudfounders.com | eduard.matei at cloudfounders.com
>
> *CloudFounders, The Private Cloud Software Company*
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com | eduard.matei at cloudfounders.com

*CloudFounders, The Private Cloud Software Company*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jkt at kde.org  Mon Dec  8 09:02:17 2014
From: jkt at kde.org (Jan Kundrát)
Date: Mon, 08 Dec 2014 10:02:17 +0100
Subject: [openstack-dev] [git] Rebase failed
In-Reply-To: 
References: <54855D99.1030002@linux.vnet.ibm.com> <54855EC8.3040301@redhat.com>
Message-ID: 

On Monday, 8 December 2014 09:57:15 CEST, Jan Kundrát wrote:
> On Monday, 8 December 2014 09:25:43 CEST, Eduard Matei wrote:
>> So i removed it locally (git rm ... ) and now rebase worked, but now i had
>> an extra patchset.
>> Will this be merged automatically or does it need again reviewers ?
>
> Make sure you have a single commit which includes both the `git
> rm` and your original changes, and that this commit still has
> the same Change-Id line as the one you uploaded originally.

And on a re-read, it seems that you already know that very well, so in that case, sorry for the noise. /me grabs that coffee again.

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/

From ndipanov at redhat.com  Mon Dec  8 09:50:19 2014
From: ndipanov at redhat.com (Nikola Đipanov)
Date: Mon, 08 Dec 2014 10:50:19 +0100
Subject: [openstack-dev] [Nova] Spring cleaning nova-core
In-Reply-To: 
References: <54834007.5070007@gmail.com> <54848820.7060802@gmail.com>
Message-ID: <5485745B.4030207@redhat.com>

On 12/07/2014 08:52 PM, Michael Still wrote:
>
> You know what makes me really sad? No one has suggested that perhaps
> Padraig could just pick up his review rate a little. I've repeatedly
> said we can re-add reviewers if that happens.
>

This is of course not true - everybody *but* the people on this thread agrees with it (otherwise they would have responded). Since re-adding cores is a well-known process, him picking it up and getting re-added is not what is discussed here.
What we (or at least I and afaict Jay) are saying is - even though his numbers are low - we still think he should be core, because the thoughtfulness of his reviews matters more to us than the fact that he is otherwise engaged besides Nova.

N.

From nmarkov at mirantis.com  Mon Dec  8 10:10:01 2014
From: nmarkov at mirantis.com (Nikolay Markov)
Date: Mon, 8 Dec 2014 14:10:01 +0400
Subject: [openstack-dev] [Fuel][Nailgun] Web framework
In-Reply-To: 
References: <547DE513.1080203@redhat.com> <547EF7E5.7020509@mirantis.com> <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> <547F3D48.5020401@gmail.com> <1A3C52DFCD06494D8528644858247BF017812DEB@EX10MBOX03.pnnl.gov>
Message-ID: 

> Yes, and it's been 4 days since last message in this thread and no
> objections, so it seems
> that Pecan is now our framework-of-choice for Nailgun and future
> apps/projects.

We still have some research to do on the technical issues and on how easily we can move to Pecan. Thanks to Ryan, we now have multiple links to solutions and docs on the discussed issues. I guess we'll dedicate some engineer(s) responsible for doing such research and then make all our decisions on the subject.

On Mon, Dec 8, 2014 at 11:07 AM, Sebastian Kalinowski wrote:
> 2014-12-04 14:01 GMT+01:00 Igor Kalnitsky :
>>
>> Ok, guys,
>>
>> It became obvious that most of us either vote for Pecan or abstain from
>> voting.
>
> Yes, and it's been 4 days since last message in this thread and no
> objections, so it seems
> that Pecan is now our framework-of-choice for Nailgun and future
> apps/projects.
>
>>
>> So I propose to stop fighting this battle (Flask vs Pecan) and start
>> thinking about moving to Pecan. You know, there are many questions
>> that need to be discussed (such as 'should we change API version' or
>> 'should it be done iteratively or as one patchset').
>
> IMHO small, iterative changes are rather obvious.
> For other questions maybe we need (a draft of) a blueprint and a separate
> mail thread?
> >> >> >> - Igor > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Nick Markov From berrange at redhat.com Mon Dec 8 10:17:25 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Mon, 8 Dec 2014 10:17:25 +0000 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: References: <20141205134159.GK2383@redhat.com> <5481C3A1.5030900@redhat.com> Message-ID: <20141208101725.GE29159@redhat.com> On Sat, Dec 06, 2014 at 07:56:21AM +1100, Michael Still wrote: > I used Russell's 60 day stats in making this decision. I can't find a > documented historical precedent on what period the stats should be > generated over, however 60 days seems entirely reasonable to me. > > 2014-12-05 15:41:11.212927 > > Reviews for the last 60 days in nova > ** -- nova-core team member > +-----------------------------+---------------------------------------+----------------+ > | Reviewer | Reviews -2 -1 +1 +2 +A +/- % > | Disagreements* | > +-----------------------------+---------------------------------------+----------------+ > | berrange ** | 669 13 134 1 521 194 78.0% > | 47 ( 7.0%) | > | jogo ** | 431 38 161 2 230 117 53.8% > | 19 ( 4.4%) | > | oomichi ** | 309 1 106 4 198 58 65.4% > | 3 ( 1.0%) | > | danms ** | 293 34 133 15 111 43 43.0% > | 12 ( 4.1%) | > | jaypipes ** | 290 10 108 14 158 42 59.3% > | 15 ( 5.2%) | > | ndipanov ** | 192 10 78 6 98 24 54.2% > | 24 ( 12.5%) | > | klmitch ** | 190 1 22 0 167 12 87.9% > | 21 ( 11.1%) | > | cyeoh-0 ** | 184 0 70 10 104 41 62.0% > | 9 ( 4.9%) | > | mriedem ** | 173 3 86 8 76 31 48.6% > | 8 ( 4.6%) | > | johngarbutt ** | 164 19 79 6 60 24 40.2% > | 7 ( 4.3%) | > | cerberus ** | 151 0 9 40 102 38 94.0% > | 7 ( 4.6%) | > | mikalstill ** | 145 2 8 1 134 48 93.1% > | 3 ( 2.1%) | > | alaski ** | 104 0 7 6 91 54 93.3% > | 5 ( 4.8%) | > | sdague ** | 98 6 21 2 69 40 72.4% > 
| 4 ( 4.1%) | > | russellb ** | 86 1 10 0 75 29 87.2% > | 5 ( 5.8%) | > | p-draigbrady ** | 60 0 12 1 47 10 80.0% > | 4 ( 6.7%) | > | belliott ** | 32 0 8 1 23 15 75.0% > | 4 ( 12.5%) | > | vishvananda ** | 8 0 2 0 6 1 75.0% > | 2 ( 25.0%) | > | dan-prince ** | 7 0 0 0 7 3 100.0% > | 4 ( 57.1%) | > | cbehrens ** | 4 0 2 0 2 0 50.0% > | 1 ( 25.0%) | > > The previously held standard for core reviewer activity has been an > _average_ of two reviews per day, which is why I used the 60 days > stats (to eliminate vacations and so forth). It should be noted that > the top ten or so reviewers are doing a lot more than that. > > All of the reviewers I dropped are valued members of the team, and I > am sad to see all of them go. However, it is important that reviewers > remain active. Given that the Nova core is horrifically overworked & understaffed, I really think this is really counterproductive, and that the project does not need to do this. It is just making the bad situation we're in even worse :-( Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From isviridov at mirantis.com Mon Dec 8 10:20:04 2014 From: isviridov at mirantis.com (isviridov) Date: Mon, 08 Dec 2014 12:20:04 +0200 Subject: [openstack-dev] [MagnetoDB] Intercycle release package versioning In-Reply-To: <3E6DF46E67210942B7919419E636D029165310258F@TUS1XCHEVSPIN34.SYMC.SYMANTEC.COM> References: <3E6DF46E67210942B7919419E636D029165310258F@TUS1XCHEVSPIN34.SYMC.SYMANTEC.COM> Message-ID: <54857B54.6050200@mirantis.com> Hello Aleksei, Thanks for raising it. I'm ok with it, and let me clarify the source code repo state for each of the releases.
> 1:2014.2-0ubuntu1 tag: 2014.2 fixes in stable/juno > 1:2014.2~rc2-0ubuntu1 tag: 2014.2.rc2 fixes in master > 1:2014.2~b2-0ubuntu1 tag: 2014.2.b2 fixes in master > 1:2014.2~b2.dev{YYYYMMDD}_{GIT_SHA1}-0ubuntu1 no tag fixes in master Any thoughts? Thanks, Ilya 05.12.2014 15:39, Aleksei Chuprin (CS): > Hello everyone, > > Because the MagnetoDB project uses more frequent releases than other OpenStack projects, I propose using the following versioning strategy for MagnetoDB packages: > > 1:2014.2-0ubuntu1 > 1:2014.2~rc2-0ubuntu1 > 1:2014.2~rc1-0ubuntu1 > 1:2014.2~b2-0ubuntu1 > 1:2014.2~b2.dev{YYYYMMDD}_{GIT_SHA1}-0ubuntu1 > 1:2014.2~b2.dev{YYYYMMDD}_{GIT_SHA1}-0ubuntu1 > 1:2014.2~b1-0ubuntu1 > > What do you think about this? > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From berrange at redhat.com Mon Dec 8 10:22:57 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Mon, 8 Dec 2014 10:22:57 +0000 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: References: <54834007.5070007@gmail.com> Message-ID: <20141208102257.GF29159@redhat.com> On Sun, Dec 07, 2014 at 08:19:54PM +1100, Michael Still wrote: > On Sun, Dec 7, 2014 at 7:03 PM, Gary Kotton wrote: > > On 12/6/14, 7:42 PM, "Jay Pipes" wrote: > > [snip] > > >>-1 on pixelbeat, since he's been active in reviews on > >>various things AFAICT in the last 60-90 days and seems to be still a > >>considerate reviewer in various areas. > > > > I agree -1 for Padraig > > I'm going to be honest and say I'm confused here. > > We've always said we expect cores to maintain an average of two > reviews per day. That's not new, nor a rule created by me. Padraig is > a great guy, but has been working on other things -- he's done 60 > reviews in the last 60 days -- which is about half of what we expect > from a core.
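As an aside on the MagnetoDB version strings quoted earlier in this thread: the `~` ordering they rely on can be sketched in a few lines of Python. This is only an illustration, not MagnetoDB or packaging code — dpkg treats `~` as sorting before everything, even the end of the string, and real comparisons should use `dpkg --compare-versions`. The simplified key below compares character by character and ignores dpkg's numeric-run handling, which these particular versions don't exercise.

```python
# Toy sketch of the dpkg '~' rule: '~' sorts below everything,
# including end-of-string, so pre-release versions order below the
# final release. Simplified: character-by-character comparison only.
def debianish_key(version: str):
    # Map '~' to -1, below the end-of-string sentinel (0); every
    # other character keeps its ASCII ordering above the sentinel.
    return tuple(-1 if c == "~" else ord(c) for c in version) + (0,)

versions = [
    "1:2014.2-0ubuntu1",
    "1:2014.2~b1-0ubuntu1",
    "1:2014.2~rc2-0ubuntu1",
    "1:2014.2~b2-0ubuntu1",
]
print(sorted(versions, key=debianish_key))
```

Sorted ascending this yields `~b1`, `~b2`, `~rc2`, and then the final 2014.2 package, matching the intent of the scheme: betas and release candidates upgrade cleanly to the release.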
Even that limited 60 reviews is still having a notable positive impact on the ability of Nova core to get things done. > Are we talking about removing the two reviews a day requirement? If > so, how do we balance that with the widespread complaints that core > isn't keeping up with its workload? We could add more people to core, > but there is also a maximum practical size to the group if we're going > to keep everyone on the same page, especially when the less active > cores don't generally turn up to our IRC meetings and are therefore > more "expensive" to keep up to date. > > How can we say we are doing our best to keep up with the incoming > review workload if all reviewers aren't doing at least the minimum > level of reviews? How exactly is cutting more people from core helping us to keep up with the incoming review workload? It just makes it worse. The only way to majorly help with that is to either get about 10-20 more people onto core which is unlikely, or to majorly split up the project as I've suggested in the past, or something in between, e.g. give the top 40 people in the review count list the ability to +2 things, leaving Nova core to just toggle the +A bit. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From berrange at redhat.com Mon Dec 8 10:28:02 2014 From: berrange at redhat.com (Daniel P.
Berrange) Date: Mon, 8 Dec 2014 10:28:02 +0000 Subject: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support In-Reply-To: <5481FE7D.7020804@linux.vnet.ibm.com> References: <5481FE7D.7020804@linux.vnet.ibm.com> Message-ID: <20141208102802.GG29159@redhat.com> On Fri, Dec 05, 2014 at 12:50:37PM -0600, Matt Riedemann wrote: > In Juno we effectively disabled live snapshots with libvirt due to bug > 1334398 [1] failing the gate about 25% of the time. > > I was going through the Juno release notes today and saw this as a known > issue, which reminded me of it and was wondering if there is anything being > done about it? > > As I recall, it *works* but it wasn't working under the stress our > check/gate system puts on that code path. Yep, I've tried to reproduce the problem in countless different ways and never succeeded, even when replicating the gate test VM config & setup exactly. IOW it is a highly load-dependent edge case. IMHO we did a disservice to users by disabling this. Based on my experience trying to reproduce it, it is something that would work fine for end users 9999 times out of 10000. I think we should just put a temporary hack into Nova that only disables the code when running under the gate systems, leaving it enabled for users. > One thing I'm thinking is, couldn't we make this an experimental config > option and by default it's disabled but we could run it in the experimental > queue, or people could use it without having to patch the code to remove the > artificial minimum version constraint put in the code. > > Something like: > > if CONF.libvirt.live_snapshot_supported: > # do your thing I don't really think we need that. Just enable it permanently, except for under the gate.
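A minimal sketch of the gate-only disable Daniel suggests might look like the following. This is illustrative only, not Nova code: the environment variable name is an assumption standing in for whatever marker the CI system actually sets, and `qemu_new_enough` stands in for the existing hypervisor version check.

```python
import os

def use_live_snapshot(qemu_new_enough: bool) -> bool:
    """Take the live-snapshot path unless we appear to be in the gate.

    The environment marker is a hypothetical gate indicator; the real
    patch would pick something the gate jobs are known to export.
    """
    under_gate = os.environ.get("DEVSTACK_GATE_TEMPEST") == "1"
    return qemu_new_enough and not under_gate
```

The point of the shape is that end users get live snapshots by default, while the gate falls back to cold snapshots without any code patching.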
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From berrange at redhat.com Mon Dec 8 10:33:46 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Mon, 8 Dec 2014 10:33:46 +0000 Subject: [openstack-dev] [nova] Fixing the console.log grows forever bug. In-Reply-To: <20141206053852.GB11931@thor.bakeyournoodle.com> References: <20141206053852.GB11931@thor.bakeyournoodle.com> Message-ID: <20141208103346.GH29159@redhat.com> On Sat, Dec 06, 2014 at 04:38:52PM +1100, Tony Breeds wrote: > Hi All, > In the most recent team meeting we briefly discussed: [1] where the > console.log grows indefinitely, eventually causing guest stalls. I mentioned > that I was working on a spec to fix this issue. > > My original plan was fairly similar to [2] In that we'd switch libvirt/qemu to > using a unix domain socket and write a simple helper to read from that socket > and write to disk. That helper would close and reopen the on disk file upon > receiving a HUP (so logrotate just works). Life would be good. and we could > all move on. > > However I was encouraged to investigate fixing this in qemu, such that qemu > could process the HUP and make life better for all. This is certainly doable > and I'm happy[3] to do this work. I've floated the idea past qemu-devel and > they seem okay with the idea. My main concern is in lag and supporting > qemu/libvirt that can't handle this option. As mentioned in my reply on qemu-devel, I think the right long term solution for this is to fix it in libvirt. We have a general security goal to remove QEMU's ability to open any files whatsoever, instead having it receive all host resources as pre-opened file descriptors from libvirt. So what we anticipate is a new libvirt daemon for processing logs, virtlogd. 
Anywhere where QEMU currently gets a file to log to ( devices, and its stdout/stderr), it would instead be given a FD that's connected to virtlogd. virtlogd would simply write the data out to file & would be able to close & re-open files to integrate with logrotate. > For the sake of discussion I'll lay out my best guess right now on fixing this > in qemu. > > qemu 2.2.0 /should/ release this year; the ETA is 2014-12-09[4], so the fix I'm > proposing would be available in qemu 2.3.0 which I think will be available in > June/July 2015. So we'd be into 'L' development before this fix is available > and possibly 'M' before the community distros (Fedora and Ubuntu)[5] include > it, and almost certainly longer for Enterprise distros. Along with the qemu > development I expect there to be some libvirt development as well but right now > I don't think that's critical to the feature or this discussion. > > So if that timeline is approximately correct: > > - Can we wait this long to fix the bug? As opposed to having it squashed in Kilo. > - What do we do in nova for the next ~12 months while we know there isn't a qemu to fix this? > - Then once there is a qemu that fixes the issue, do we just say 'thou must use > qemu 2.3.0' or would nova still need to support old and new qemu's? FWIW, by comparison libvirt is on a monthly release schedule, so a fix done in libvirt has potential to be available sooner, though obviously there's bigger dev work to be done in libvirt for this.
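The helper Tony originally proposed (drain the console stream to disk, reopen the on-disk file on HUP so logrotate just works) can be sketched roughly as below. This is a hand-written illustration, not the spec's helper or virtlogd: the real helper would read from the unix domain socket QEMU writes to, whereas this sketch only shows the signal/reopen mechanics (POSIX-only, since it uses SIGHUP).

```python
import signal

class ConsoleLogSink:
    """Write console data to a file; reopen the file after SIGHUP.

    Reopening on HUP is what lets logrotate move the old file aside
    and have subsequent data land in a fresh console.log.
    """

    def __init__(self, path):
        self.path = path
        self._fh = open(path, "ab")
        self._reopen = False
        signal.signal(signal.SIGHUP, self._on_hup)

    def _on_hup(self, signum, frame):
        # Only set a flag in the handler; do file work in the write path.
        self._reopen = True

    def write(self, data: bytes):
        if self._reopen:
            # logrotate has renamed the old file away; start a new one.
            self._fh.close()
            self._fh = open(self.path, "ab")
            self._reopen = False
        self._fh.write(data)
        self._fh.flush()
```

With this shape, a standard logrotate stanza with `postrotate kill -HUP <pid>` is all the integration needed.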
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From kchamart at redhat.com Mon Dec 8 10:45:36 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 8 Dec 2014 11:45:36 +0100 Subject: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support In-Reply-To: <548211BC.6070900@linux.vnet.ibm.com> References: <5481FE7D.7020804@linux.vnet.ibm.com> <54820841.6080200@dague.net> <548211BC.6070900@linux.vnet.ibm.com> Message-ID: <20141208104536.GC14205@tesla.redhat.com> On Fri, Dec 05, 2014 at 02:12:44PM -0600, Matt Riedemann wrote: > > > On 12/5/2014 1:32 PM, Sean Dague wrote: > >On 12/05/2014 01:50 PM, Matt Riedemann wrote: > >>In Juno we effectively disabled live snapshots with libvirt due to bug > >>1334398 [1] failing the gate about 25% of the time. > >> > >>I was going through the Juno release notes today and saw this as a known > >>issue, which reminded me of it and was wondering if there is anything > >>being done about it? As Dan Berrangé noted, it's nearly impossible to reproduce this issue independently outside of the OpenStack gating environment. I brought this up at the recently concluded KVM Forum earlier this October. To debug this any further, one of the QEMU block layer developers asked if we can get the QEMU instance running in the gate run under `gdb` (IIRC, danpb suggested this too, previously) to get further tracing details. > >>As I recall, it *works* but it wasn't working under the stress our > >>check/gate system puts on that code path. FWIW, I myself couldn't reproduce it independently via libvirt alone or via QMP (QEMU Machine Protocol) commands. Dan's workaround ("enable it permanently, except for under the gate") sounds sensible to me.
> >>One thing I'm thinking is, couldn't we make this an experimental config > >>option and by default it's disabled but we could run it in the > >>experimental queue, or people could use it without having to patch the > >>code to remove the artificial minimum version constraint put in the code. > >> > >>Something like: > >> > >>if CONF.libvirt.live_snapshot_supported: > >> # do your thing > >> > >>[1] https://bugs.launchpad.net/nova/+bug/1334398 > > > >So, it works. If you aren't booting / shutting down guests at exactly > >the same time as snapshotting. Tried this exact case independently, and cannot reproduce, as stated by Dan (and others on the bug) in this thread. > >I believe cburgess said in IRC yesterday > >he was going to take another look at it next week. > > > >I'm happy to put this into dansmith's pattented [workarounds] config > >group (coming soon to fix the qemu-convert bug). But I don't think this > >should be a normal libvirt option. > > > > -Sean > > > > Yeah the [workarounds] group Is there any URL where I can read about this more? > is what got me thinking about it too as a config option, otherwise I > think the idea of an [experimental] config group has come up before as > a place to put 'not tested, here be dragons' type stuff. 
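The "[workarounds]" group discussed here amounts to a plain boolean knob gating the risky path. A rough stdlib sketch follows — Nova itself would use oslo.config rather than `configparser`, and the option name below is only a guess at what such a flag might be called:

```python
import configparser

# Hypothetical nova.conf fragment a gate job could ship.
GATE_CONF = """
[workarounds]
disable_libvirt_livesnapshot = true
"""

def live_snapshot_allowed(raw_conf: str) -> bool:
    """Live snapshots stay enabled unless the workaround flag is set."""
    conf = configparser.ConfigParser()
    conf.read_string(raw_conf)
    # fallback=False keeps the default behaviour (enabled) when the
    # section or option is absent from the deployment's config.
    return not conf.getboolean(
        "workarounds", "disable_libvirt_livesnapshot", fallback=False)
```

The design point is that the default config leaves the feature on for users, and only deployments that opt in (such as the gate) flip the flag, without any code patching.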
-- /kashyap From visnusaran.murugan at hp.com Mon Dec 8 12:00:22 2014 From: visnusaran.murugan at hp.com (Murugan, Visnusaran) Date: Mon, 8 Dec 2014 12:00:22 +0000 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <547FEEEB.3070507@redhat.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> Message-ID: <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> Hi Zane & Michael, Please have a look @ https://etherpad.openstack.org/p/execution-stream-and-aggregator-based-convergence Updated with a combined approach which does not require persisting graph and backup stack removal. This approach reduces DB queries by waiting for completion notification on a topic. The drawback I see is that the delete stack stream will be huge as it will have the entire graph. We can always dump such data in ResourceLock.data Json and pass a simple flag "load_stream_from_db" to the converge RPC call as a workaround for the delete operation. To stop the current stack operation, we will use your traversal_id based approach. If you feel the Aggregator model creates more queues, then we might have to poll the DB to get resource status. (Which will impact performance adversely :) ) Lock table: name(Unique - Resource_id), stack_id, engine_id, data (Json to store stream dict) Your thoughts.
Vishnu (irc: ckmvishnu) Unmesh (irc: unmeshg) -----Original Message----- From: Zane Bitter [mailto:zbitter at redhat.com] Sent: Thursday, December 4, 2014 10:50 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown On 01/12/14 02:02, Anant Patil wrote: > On GitHub:https://github.com/anantpatil/heat-convergence-poc I'm trying to review this code at the moment, and finding some stuff I don't understand: https://github.com/anantpatil/heat-convergence-poc/blob/master/heat/engine/stack.py#L911-L916 This appears to loop through all of the resources *prior* to kicking off any actual updates to check if the resource will change. This is impossible to do in general, since a resource may obtain a property value from an attribute of another resource and there is no way to know whether an update to said other resource would cause a change in the attribute value. In addition, no attempt to catch UpdateReplace is made. Although that looks like a simple fix, I'm now worried about the level to which this code has been tested. I'm also trying to wrap my head around how resources are cleaned up in dependency order. If I understand correctly, you store in the ResourceGraph table the dependencies between various resource names in the current template (presumably there could also be some left around from previous templates too?). For each resource name there may be a number of rows in the Resource table, each with an incrementing version. As far as I can tell though, there's nowhere that the dependency graph for _previous_ templates is persisted? So if the dependency order changes in the template we have no way of knowing the correct order to clean up in any more? (There's not even a mechanism to associate a resource version with a particular template, which might be one avenue by which to recover the dependencies.) 
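The cleanup-order problem Zane raises is easy to see with a toy dependency graph. The sketch below is not Heat code; it just shows that the reverse (cleanup) order has to be derived from the *previous* template's graph, which is why that graph needs to be persisted somewhere:

```python
from collections import defaultdict, deque

def topological(requires):
    """Kahn's algorithm over {resource: set(resources it requires)}."""
    pending = {node: set(reqs) for node, reqs in requires.items()}
    required_by = defaultdict(set)
    for node, reqs in requires.items():
        for req in reqs:
            required_by[req].add(node)
    ready = deque(sorted(n for n, r in pending.items() if not r))
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for dependent in sorted(required_by[node]):
            pending[dependent].discard(node)
            if not pending[dependent]:
                ready.append(dependent)
    return order

# Previous template: server requires port, port requires net.
old_requires = {"net": set(), "port": {"net"}, "server": {"port"}}
create_order = topological(old_requires)          # net, port, server
cleanup_order = list(reversed(create_order))      # server, port, net
```

If the new template reverses an edge (say the port now requires the server), recomputing the cleanup order from the new graph would tear resources down in the wrong order — exactly the failure mode that persisting the old graph (or associating resource versions with templates) avoids.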
I think this is an important case we need to be able to handle, so I added a scenario to my test framework to exercise it and discovered that my implementation was also buggy. Here's the fix: https://github.com/zaneb/heat-convergence-prototype/commit/786f367210ca0acf9eb22bea78fd9d51941b0e40 > It was difficult, for me personally, to completely understand Zane's > PoC and how it would lay the foundation for the aforementioned design > goals. It would be very helpful to have Zane's understanding here. I > could understand that there are ideas like async message passing and > notifying the parent which we also subscribe to. So I guess the thing to note is that there are essentially two parts to my PoC: 1) A simulation framework that takes what will be, in the final implementation, multiple tasks running in parallel in separate processes and talking to a database, and replaces it with an event loop that runs the tasks sequentially in a single process with an in-memory data store. I could have built a more realistic simulator using Celery or something, but I preferred this way as it offers deterministic tests. 2) A toy implementation of Heat on top of this framework. The files map roughly to Heat something like this: converge.engine -> heat.engine.service converge.stack -> heat.engine.stack converge.resource -> heat.engine.resource converge.template -> heat.engine.template converge.dependencies -> actually is heat.engine.dependencies converge.sync_point -> no equivalent converge.converger -> no equivalent (this is convergence "worker") converge.reality -> represents the actual OpenStack services For convenience, I just use the @asynchronous decorator to turn an ordinary method call into a simulated message. The concept is essentially as follows: At the start of a stack update (creates and deletes are also just updates) we create any new resources in the DB and calculate the dependency graph for the update from the data in the DB and template.
This graph is the same one used by updates in Heat currently, so it contains both the forward and reverse (cleanup) dependencies. The stack update then kicks off checks of all the leaf nodes, passing the pre-calculated dependency graph. Each resource check may result in a call to the create(), update() or delete() methods of a Resource plugin. The resource also reads any attributes that will be required from it. Once this is complete, it triggers any dependent resources that are ready, or updates a SyncPoint in the database if there are dependent resources that have multiple requirements. The message triggering the next resource will contain the dependency graph again, as well as the RefIds and required attributes of any resources it depends on. The new dependencies thus created are added to the resource itself in the database at the time it is checked, allowing it to record the changes caused by a requirement being unexpectedly replaced without needing a global lock on anything. When cleaning up resources, we also endeavour to remove any that are successfully deleted from the dependencies graph. Each traversal has a unique ID that is both stored in the stack and passed down through the resource check triggers. (At present this is the template ID, but it may make more sense to have a unique ID since old template IDs can be resurrected in the case of a rollback.) As soon as these fail to match the resource checks stop propagating, so only an update of a single field is required (rather than locking an entire table) before beginning a new stack update. Hopefully that helps a little. Please let me know if you have specific questions. I'm *very* happy to incorporate other ideas into it, since it's pretty quick to change, has tests to check for regressions, and is intended to be thrown away anyhow (so I genuinely don't care if some bits get thrown away earlier than others). > In retrospective, we had to struggle a lot to understand the existing > Heat engine. 
We couldn't have done justice by just creating another > project in GitHub without any concrete understanding of the existing > state of affairs. I completely agree, and you guys did the right thing by starting out looking at Heat. But remember, the valuable thing isn't the code, it's what you learned. My concern is that now that you have Heat pretty well figured out, you won't be able to continue to learn nearly as fast trying to wrestle with the Heat codebase as you could with the simulator. We don't want to fall into the trap of just shipping whatever we have because it's too hard to explore the other options, we want to identify a promising design and iterate it as quickly as possible. cheers, Zane. _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sombrafam at gmail.com Mon Dec 8 12:28:14 2014 From: sombrafam at gmail.com (Erlon Cruz) Date: Mon, 8 Dec 2014 10:28:14 -0200 Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: References: <547CCA69.4000909@anteaya.info> <837B116B6E5B934DA06D9AD0FD79C6A3018E9C60@SHSMSX104.ccr.corp.intel.com> <547EB5C9.8020008@rackspace.com> <547F8DC1.5000202@anteaya.info> Message-ID: I agree that 2 meetings per week will mess things up. +1 for alternating meetings. On Fri, Dec 5, 2014 at 1:08 PM, Kurt Taylor wrote: > In my opinion, further discussion is needed. The proposal on the table is > to have 2 weekly meetings, one at the existing time of 1800UTC on Monday > and, also in the same week, to have another meeting at 0800 UTC on Tuesday. > > Here are some of the problems that I see with this approach: > > 1. Meeting content: Having 2 meetings per week is more than is needed at > this stage of the working group. There just isn't enough meeting content to > justify having two meetings every week. > > 2.
Decisions: Any decision made at one meeting will potentially be undone > at the next, or at least not fully explained. It will be difficult to keep > consistent direction with the overall work group. > > 3. Meeting chair(s): Currently we do not have a commitment for a long-term > chair of this new second weekly meeting. I will not be able to attend this > new meeting at the proposed time. > > 4. Current meeting time: I am not aware of anyone that likes the current > time of 1800 UTC on Monday. The current time is the main reason it is hard > for EU and APAC CI Operators to attend. > > My proposal was to have only 1 meeting per week at alternating times, just > as other work groups have done to solve this problem. (See examples at: > https://wiki.openstack.org/wiki/Meetings) I volunteered to chair, then > ask other CI Operators to chair as the meetings evolved. The meeting times > could be any between 1300-0300 UTC. That way, one week we are good for US > and Europe, the next week for APAC. > > Kurt Taylor (krtaylor) > > > On Wed, Dec 3, 2014 at 11:10 PM, trinath.somanchi at freescale.com < > trinath.somanchi at freescale.com> wrote: > >> +1. >> >> -- >> Trinath Somanchi - B39208 >> trinath.somanchi at freescale.com | extn: 4048 >> >> -----Original Message----- >> From: Anita Kuno [mailto:anteaya at anteaya.info] >> Sent: Thursday, December 04, 2014 3:55 AM >> To: openstack-infra at lists.openstack.org >> Subject: Re: [OpenStack-Infra] [openstack-dev] [third-party]Time for >> Additional Meeting for third-party >> >> On 12/03/2014 03:15 AM, Omri Marcovitch wrote: >> > Hello Anteaya, >> > >> > A meeting between 8:00 - 16:00 UTC time will be great (Israel). 
>> > >> > >> > Thanks >> > Omri >> > >> > -----Original Message----- >> > From: Joshua Hesketh [mailto:joshua.hesketh at rackspace.com] >> > Sent: Wednesday, December 03, 2014 9:04 AM >> > To: He, Yongli; OpenStack Development Mailing List (not for usage >> > questions); openstack-infra at lists.openstack.org >> > Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for >> > Additional Meeting for third-party >> > >> > Hey, >> > >> > 0700 -> 1000 UTC would work for me most weeks fwiw. >> > >> > Cheers, >> > Josh >> > >> > Rackspace Australia >> > >> > On 12/3/14 11:17 AM, He, Yongli wrote: >> >> anteaya, >> >> >> >> UTC 7:00 AM to UTC9:00, or UTC11:30 to UTC13:00 is ideal time for >> china. >> >> >> >> if there is no time slot there, just pick up any time between UTC >> >> 7:00 AM to UCT 13:00. ( UTC9:00 to UTC 11:30 is on road to home and >> >> dinner.) >> >> >> >> Yongi He >> >> -----Original Message----- >> >> From: Anita Kuno [mailto:anteaya at anteaya.info] >> >> Sent: Tuesday, December 02, 2014 4:07 AM >> >> To: openstack Development Mailing List; >> >> openstack-infra at lists.openstack.org >> >> Subject: [openstack-dev] [third-party]Time for Additional Meeting for >> >> third-party >> >> >> >> One of the actions from the Kilo Third-Party CI summit session was to >> start up an additional meeting for CI operators to participate from >> non-North American time zones. >> >> >> >> Please reply to this email with times/days that would work for you. >> The current third party meeting is on Mondays at 1800 utc which works well >> since Infra meetings are on Tuesdays. If we could find a time that works >> for Europe and APAC that is also on Monday that would be ideal. >> >> >> >> Josh Hesketh has said he will try to be available for these meetings, >> he is in Australia. >> >> >> >> Let's get a sense of what days and timeframes work for those >> interested and then we can narrow it down and pick a channel. >> >> >> >> Thanks everyone, >> >> Anita. 
>> >> >> >> Okay first of all thanks to everyone who replied. >> >> Again, to clarify, the purpose of this thread has been to find a suitable >> additional third-party meeting time geared towards folks in EU and APAC. We >> live on a sphere, there is no time that will suit everyone. >> >> It looks like we are converging on 0800 UTC as a time and I am going to >> suggest Tuesdays. We have very little competition for space at that date >> + time combination so we can use #openstack-meeting (I have already >> booked the space on the wikipage). >> >> So barring further discussion, see you then! >> >> Thanks everyone, >> Anita. >> >> _______________________________________________ >> OpenStack-Infra mailing list >> OpenStack-Infra at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra >> >> _______________________________________________ >> OpenStack-Infra mailing list >> OpenStack-Infra at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean at dague.net Mon Dec 8 13:00:11 2014 From: sean at dague.net (Sean Dague) Date: Mon, 08 Dec 2014 08:00:11 -0500 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: <54848820.7060802@gmail.com> References: <54834007.5070007@gmail.com> <54848820.7060802@gmail.com> Message-ID: <5485A0DB.6080104@dague.net> On 12/07/2014 12:02 PM, Jay Pipes wrote: > On 12/07/2014 04:19 AM, Michael Still wrote: >> On Sun, Dec 7, 2014 at 7:03 PM, Gary Kotton wrote: >>> On 12/6/14, 7:42 PM, "Jay Pipes" wrote: >> >> [snip] >> >>>> -1 on pixelbeat, since he's been active in reviews on >>>> various things AFAICT in the last 60-90 days and seems to be still a >>>> considerate reviewer in various areas. >>> >>> I agree -1 for Padraig >> >> I'm going to be honest and say I'm confused here. >> >> We've always said we expect cores to maintain an average of two >> reviews per day. That's not new, nor a rule created by me. Padraig is >> a great guy, but has been working on other things -- he's done 60 >> reviews in the last 60 days -- which is about half of what we expect >> from a core. >> >> Are we talking about removing the two reviews a day requirement? If >> so, how do we balance that with the widespread complaints that core >> isn't keeping up with its workload? We could add more people to core, >> but there is also a maximum practical size to the group if we're going >> to keep everyone on the same page, especially when the less active >> cores don't generally turn up to our IRC meetings and are therefore >> more "expensive" to keep up to date. >> >> How can we say we are doing our best to keep up with the incoming >> review workload if all reviewers aren't doing at least the minimum >> level of reviews? > > Personally, I care more about the quality of reviews than the quantity. 
That said, I understand that we have a small number of core reviewers > relative to the number of open reviews in Nova (~650-700 open reviews > most days) and agree with Dan Smith that 2 reviews per day doesn't sound > like too much of a hurdle for core reviewers. > > The reason I think it's important to keep Padraig as a core is that he > has done considerate, thoughtful code reviews, albeit in a smaller > quantity. By saying we only look at the number of reviews in our > estimation of keeping contributors on the core team, we are > incentivizing the wrong behaviour, IMO. We should be pushing that the > thought that goes into reviews is more important than the sheer number > of reviews. > > Is it critical that we get more eyeballs reviewing code? Yes, absolutely > it is. Is it critical that we get more reviews from core reviewers as > well as non-core reviewers? Yes, absolutely. > > Bottom line, we need to balance between quality and quantity, and > kicking out a core reviewer who has quality code reviews because they > don't have that many of them sends the wrong message, IMO. Maybe. I'm kind of torn on it. I think we need to separate "providing insightful reviews" from "actively engaged in Nova". I feel like there are tons of community members that provide insightful reviews, such that we hold a patch until we've seen their relevant +1 in an area of their expertise. If our concern is missing expertise, then I don't think this changes things. I could go either way on this one in particular. But I'm also happy to drop and move forward. Padraig's commit history in OpenStack atm shows that his focus right now isn't upstream. He's not currently very active in IRC regularly, on the ML, triaging bugs, or fixing bugs, which are all ways we know folks are engaged enough to have a feel for where the norms of Nova have evolved. Which is cool, folks change focus.
I think in the past we've erred very heavily on making it tough to let people into the core reviewer team because it's so hard to remove people. Which doesn't help us grow, we stagnate. I think the fear of a fight on removal of core reviewers every time makes people even more cautious in supporting adds. Maybe this is erring in the other direction, but I'm happy to take Michael's judgement call on that that it isn't. If Padraig gets more engaged, I'd be happy adding him back in. -Sean -- Sean Dague http://dague.net From email at daviey.com Mon Dec 8 13:20:19 2014 From: email at daviey.com (Dave Walker) Date: Mon, 8 Dec 2014 13:20:19 +0000 Subject: [openstack-dev] [nova] Fixing the console.log grows forever bug. In-Reply-To: <20141208103346.GH29159@redhat.com> References: <20141206053852.GB11931@thor.bakeyournoodle.com> <20141208103346.GH29159@redhat.com> Message-ID: On 8 December 2014 at 10:33, Daniel P. Berrange wrote: > On Sat, Dec 06, 2014 at 04:38:52PM +1100, Tony Breeds wrote: >> Hi All, >> In the most recent team meeting we briefly discussed: [1] where the >> console.log grows indefinitely, eventually causing guest stalls. I mentioned >> that I was working on a spec to fix this issue. >> >> My original plan was fairly similar to [2] In that we'd switch libvirt/qemu to >> using a unix domain socket and write a simple helper to read from that socket >> and write to disk. That helper would close and reopen the on disk file upon >> receiving a HUP (so logrotate just works). Life would be good. and we could >> all move on. >> >> However I was encouraged to investigate fixing this in qemu, such that qemu >> could process the HUP and make life better for all. This is certainly doable >> and I'm happy[3] to do this work. I've floated the idea past qemu-devel and >> they seem okay with the idea. My main concern is in lag and supporting >> qemu/libvirt that can't handle this option. 
> > As mentioned in my reply on qemu-devel, I think the right long term solution > for this is to fix it in libvirt. We have a general security goal to remove > QEMU's ability to open any files whatsoever, instead having it receive all > host resources as pre-opened file descriptors from libvirt. So what we > anticipate is a new libvirt daemon for processing logs, virtlogd. Anywhere > where QEMU currently gets a file to log to ( devices, and its > stdout/stderr), it would instead be given a FD that's connected to virtlogd. > virtlogd would simply write the data out to file & would be able to close > & re-open files to integrate with logrotate. > >> For the sake of discussion I'll lay out my best guess right now on fixing this >> in qemu. >> >> qemu 2.2.0 /should/ release this year the ETA is 2014-12-09[4] so the fix I'm >> proposing would be available in qemu 2.3.0 which I think will be available in >> June/July 2015. So we'd be into 'L' development before this fix is available >> and possibly 'M' before the community distros (Fedora and Ubuntu)[5] include >> and almost certainly longer for Enterprise distros. Along with the qemu >> development I expect there to be some libvirt development as well but right now >> I don't think that's critical to the feature or this discussion. >> >> So if that timeline is approximately correct: >> >> - Can we wait this long to fix the bug? As opposed to having it squashed in Kilo. >> - What do we do in nova for the next ~12 months while know there isn't a qemu to fix this? >> - Then once there is a qemu that fixes the issue, do we just say 'thou must use >> qemu 2.3.0' or would nova still need to support old and new qemu's ? > > FWIW, by comparison libvirt is on a monthly release schedule, so a fix done in > libvirt has potential to be available sooner, though obviously there's bigger > dev work to be done in libvirt for this. 
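The helper-process approach Tony sketches above (read from a unix domain socket, append to a log file, and reopen the file on SIGHUP so logrotate just works) is small enough to show concretely. This is only an illustrative sketch, not code from the thread: drain_console and its arguments are invented names, and a real helper would also need daemonization, reconnection, and error handling.

```python
import signal
import socket

def drain_console(sock, log_path):
    """Copy bytes from sock into log_path, reopening the log file
    whenever a SIGHUP has arrived (so a logrotate rename takes effect)."""
    state = {"reopen": False}
    signal.signal(signal.SIGHUP, lambda signum, frame: state.update(reopen=True))
    log = open(log_path, "ab")
    while True:
        data = sock.recv(4096)
        if not data:            # peer closed the socket; we're done
            break
        if state["reopen"]:     # logrotate moved the old file aside
            log.close()
            log = open(log_path, "ab")
            state["reopen"] = False
        log.write(data)
        log.flush()
    log.close()
```

In a real deployment the socket would be the unix-domain chardev socket qemu writes console output to, and logrotate's postrotate script would send the helper process a HUP.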
> > Regards, > Daniel Hey, This thread started by suggesting a scheduled task to read from a unix socket. I don't think this can really be considered an acceptable fix, as the guest does indeed lock up when the buffer is full. Initially, I proposed a quick fix for this back in 2011 which provided a config option to enable a kernel-level ring buffer via a non-mainline module called emlog. This was not merged for understandable reasons. (pre gerrit) - http://bazaar.launchpad.net/~davewalker/nova/832507_with_emlog/revision/1509/nova/virt/libvirt/connection.py Later that same year, Robie Basak presented a change which introduced similar ring-buffer logic in the nova code itself, making use of eventlet. This seems quite a reasonable fix, but there was concern it might lock up guests: https://review.openstack.org/#/c/706/ I think shortly after this, it was pretty widely agreed that fixing this in Nova is not the correct layer. Personally, I struggle to see qemu or libvirt as the right layer either. I don't think treating a console as a flat log file is the best default behavior. I still quite like the emlog approach, as having a ring-buffer device type in the kernel provides exactly what we need and is pretty simple to implement. Does anyone know if this generic ring-buffer kernel support was ever proposed for the mainline kernel? -- Kind Regards, Dave Walker From berrange at redhat.com Mon Dec 8 13:39:35 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Mon, 8 Dec 2014 13:39:35 +0000 Subject: [openstack-dev] [nova] Fixing the console.log grows forever bug. In-Reply-To: References: <20141206053852.GB11931@thor.bakeyournoodle.com> <20141208103346.GH29159@redhat.com> Message-ID: <20141208133935.GJ29159@redhat.com> On Mon, Dec 08, 2014 at 01:20:19PM +0000, Dave Walker wrote: > On 8 December 2014 at 10:33, Daniel P.
Berrange wrote: > > On Sat, Dec 06, 2014 at 04:38:52PM +1100, Tony Breeds wrote: > >> Hi All, > >> In the most recent team meeting we briefly discussed: [1] where the > >> console.log grows indefinitely, eventually causing guest stalls. I mentioned > >> that I was working on a spec to fix this issue. > >> > >> My original plan was fairly similar to [2] In that we'd switch libvirt/qemu to > >> using a unix domain socket and write a simple helper to read from that socket > >> and write to disk. That helper would close and reopen the on disk file upon > >> receiving a HUP (so logrotate just works). Life would be good. and we could > >> all move on. > >> > >> However I was encouraged to investigate fixing this in qemu, such that qemu > >> could process the HUP and make life better for all. This is certainly doable > >> and I'm happy[3] to do this work. I've floated the idea past qemu-devel and > >> they seem okay with the idea. My main concern is in lag and supporting > >> qemu/libvirt that can't handle this option. > > > > As mentioned in my reply on qemu-devel, I think the right long term solution > > for this is to fix it in libvirt. We have a general security goal to remove > > QEMU's ability to open any files whatsoever, instead having it receive all > > host resources as pre-opened file descriptors from libvirt. So what we > > anticipate is a new libvirt daemon for processing logs, virtlogd. Anywhere > > where QEMU currently gets a file to log to ( devices, and its > > stdout/stderr), it would instead be given a FD that's connected to virtlogd. > > virtlogd would simply write the data out to file & would be able to close > > & re-open files to integrate with logrotate. > > > >> For the sake of discussion I'll lay out my best guess right now on fixing this > >> in qemu. > >> > >> qemu 2.2.0 /should/ release this year the ETA is 2014-12-09[4] so the fix I'm > >> proposing would be available in qemu 2.3.0 which I think will be available in > >> June/July 2015. 
So we'd be into 'L' development before this fix is available > >> and possibly 'M' before the community distros (Fedora and Ubuntu)[5] include > >> and almost certainly longer for Enterprise distros. Along with the qemu > >> development I expect there to be some libvirt development as well but right now > >> I don't think that's critical to the feature or this discussion. > >> > >> So if that timeline is approximately correct: > >> > >> - Can we wait this long to fix the bug? As opposed to having it squashed in Kilo. > >> - What do we do in nova for the next ~12 months while know there isn't a qemu to fix this? > >> - Then once there is a qemu that fixes the issue, do we just say 'thou must use > >> qemu 2.3.0' or would nova still need to support old and new qemu's ? > > > > FWIW, by comparison libvirt is on a monthly release schedule, so a fix done in > > libvirt has potential to be available sooner, though obviously there's bigger > > dev work to be done in libvirt for this. > > > > Regards, > > Daniel > > Hey, > > This thread started by suggesting having a scheduled task to read from > a unix socket. I don't think this can really be considered an > acceptable fix, as the guest does indeed lock up when the buffer is > full. > > Initially, I proposed a quick fix for this back in 2011 which provided > a config option to enable a kernel level ring buffer via a > non-mainline module called emlog. This was not merged for > understandable reasons. (pre gerrit) - > http://bazaar.launchpad.net/~davewalker/nova/832507_with_emlog/revision/1509/nova/virt/libvirt/connection.py > > Later that same year, Robie Basak presented a change which introduced > similar logic ringbuffer support in the nova code itself making use of > eventlet. This seems quite a reasonable fix, but there was concern it > might lock-up guests.. https://review.openstack.org/#/c/706/ > > I think shortly after this, it was pretty widely agreed that fixing > this in Nova is not the correct layer. 
> Personally, I struggle > thinking qemu or libvirt is right layer either. I can't think that > treating a console as a flat log file is the best default behavior. > > I still quite like the emlog approach, as having a ringbuffer device > type in the kernel provides exactly what we need and is pretty simple > to implement. > > Does anyone know if this generic ringbuffer kernel support was > proposed to mainline kernel? The emlog approach means the data would only ever be stored in RAM on the host, so in the event of a host reboot/crash you lose all guest logs. While that might be ok for some people, I think we need to support persistent storage of the logs on disk for historical / auditing record purposes. We don't need kernel support to provide a ring buffer. A more or less identical solution can be done in userspace with just a pair of fixed-size files, e.g. write to one file; when it hits a limit, switch to the second file, then back to the original, etc. We can easily do this in a libvirt-based solution. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From Neil.Jerram at metaswitch.com Mon Dec 8 13:42:47 2014 From: Neil.Jerram at metaswitch.com (Neil Jerram) Date: Mon, 8 Dec 2014 13:42:47 +0000 Subject: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?
In-Reply-To: <185155243D60FE48B384BC144F7C15A58E1686B0@SZXEML506-MBS.china.huawei.com> (wuhongning@huawei.com's message of "Mon, 8 Dec 2014 04:00:59 +0000") References: <87ppc090ne.fsf@metaswitch.com> <87d27z8kxj.fsf@metaswitch.com> <8761dptrrv.fsf@metaswitch.com> <185155243D60FE48B384BC144F7C15A58E1686B0@SZXEML506-MBS.china.huawei.com> Message-ID: <87sigqs1ew.fsf@metaswitch.com> Hi Wu, As I've also written in a comment at https://review.openstack.org/#/c/130732/6, it appears that VIF_TYPE_VHOSTUSER is already covered by the approved spec at https://review.openstack.org/#/c/96138/. Given that, is there any reason to consider adding VIF_TYPE_VHOSTUSER into the VIF_TYPE_TAP spec as well? Thanks, Neil Wuhongning writes: > Hi Neil, > > @Neil, could you please also add VIF_TYPE_VHOSTUSER in your spec (as I > commented on it)? There has been active VHOSTUSER discussion in the Juno nova > BP, and it would be as useful as VIF_TYPE_TAP. > > Best Regards > Wu > ________________________________________ > From: Neil Jerram [Neil.Jerram at metaswitch.com] > Sent: Saturday, December 06, 2014 10:51 AM > To: Kevin Benton > Cc: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup? > > Kevin Benton writes: > >> I see the difference now. >> The main concern I see with the NOOP type is that creating the virtual >> interface could require different logic for certain hypervisors. In >> that case Neutron would now have to know things about nova and to me >> it seems like that's slightly too far in the other direction. > > Many thanks, Kevin. I see this now too, as I've just written more fully > in my response to Ian. > > Based on your and others' insight, I've revised and reuploaded my > VIF_TYPE_TAP spec, and hope it's a lot clearer now.
> > Regards, > Neil > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From trinath.somanchi at freescale.com Mon Dec 8 13:48:13 2014 From: trinath.somanchi at freescale.com (trinath.somanchi at freescale.com) Date: Mon, 8 Dec 2014 13:48:13 +0000 Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: References: <547CCA69.4000909@anteaya.info> <837B116B6E5B934DA06D9AD0FD79C6A3018E9C60@SHSMSX104.ccr.corp.intel.com> <547EB5C9.8020008@rackspace.com> <547F8DC1.5000202@anteaya.info> Message-ID: With Kurt Taylor. +1 Very nice idea to start with. -- Trinath Somanchi - B39208 trinath.somanchi at freescale.com | extn: 4048 From: Kurt Taylor [mailto:kurt.r.taylor at gmail.com] Sent: Friday, December 05, 2014 8:39 PM To: OpenStack Development Mailing List (not for usage questions); openstack-infra at lists.openstack.org Subject: Re: [OpenStack-Infra] [openstack-dev] [third-party]Time for Additional Meeting for third-party In my opinion, further discussion is needed. The proposal on the table is to have 2 weekly meetings, one at the existing time of 1800UTC on Monday and, also in the same week, to have another meeting at 0800 UTC on Tuesday. Here are some of the problems that I see with this approach: 1. Meeting content: Having 2 meetings per week is more than is needed at this stage of the working group. There just isn't enough meeting content to justify having two meetings every week. 2. Decisions: Any decision made at one meeting will potentially be undone at the next, or at least not fully explained. It will be difficult to keep consistent direction with the overall work group. 3. 
Meeting chair(s): Currently we do not have a commitment for a long-term chair of this new second weekly meeting. I will not be able to attend this new meeting at the proposed time. 4. Current meeting time: I am not aware of anyone who likes the current time of 1800 UTC on Monday. The current time is the main reason it is hard for EU and APAC CI Operators to attend. My proposal was to have only 1 meeting per week at alternating times, just as other work groups have done to solve this problem. (See examples at: https://wiki.openstack.org/wiki/Meetings) I volunteered to chair, and then to ask other CI Operators to chair as the meetings evolved. The meeting times could be anywhere between 1300 and 0300 UTC. That way, one week we are good for US and Europe, the next week for APAC. Kurt Taylor (krtaylor) On Wed, Dec 3, 2014 at 11:10 PM, trinath.somanchi at freescale.com > wrote: +1. -- Trinath Somanchi - B39208 trinath.somanchi at freescale.com | extn: 4048 -----Original Message----- From: Anita Kuno [mailto:anteaya at anteaya.info] Sent: Thursday, December 04, 2014 3:55 AM To: openstack-infra at lists.openstack.org Subject: Re: [OpenStack-Infra] [openstack-dev] [third-party]Time for Additional Meeting for third-party On 12/03/2014 03:15 AM, Omri Marcovitch wrote: > Hello Anteaya, > > A meeting between 8:00 - 16:00 UTC time will be great (Israel). > > > Thanks > Omri > > -----Original Message----- > From: Joshua Hesketh [mailto:joshua.hesketh at rackspace.com] > Sent: Wednesday, December 03, 2014 9:04 AM > To: He, Yongli; OpenStack Development Mailing List (not for usage > questions); openstack-infra at lists.openstack.org > Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for > Additional Meeting for third-party > > Hey, > > 0700 -> 1000 UTC would work for me most weeks fwiw. > > Cheers, > Josh > > Rackspace Australia > > On 12/3/14 11:17 AM, He, Yongli wrote: >> anteaya, >> >> UTC 7:00 AM to UTC9:00, or UTC11:30 to UTC13:00 is ideal time for china.
>> >> if there is no time slot there, just pick up any time between UTC >> 7:00 AM to UCT 13:00. ( UTC9:00 to UTC 11:30 is on road to home and >> dinner.) >> >> Yongi He >> -----Original Message----- >> From: Anita Kuno [mailto:anteaya at anteaya.info] >> Sent: Tuesday, December 02, 2014 4:07 AM >> To: openstack Development Mailing List; >> openstack-infra at lists.openstack.org >> Subject: [openstack-dev] [third-party]Time for Additional Meeting for >> third-party >> >> One of the actions from the Kilo Third-Party CI summit session was to start up an additional meeting for CI operators to participate from non-North American time zones. >> >> Please reply to this email with times/days that would work for you. The current third party meeting is on Mondays at 1800 utc which works well since Infra meetings are on Tuesdays. If we could find a time that works for Europe and APAC that is also on Monday that would be ideal. >> >> Josh Hesketh has said he will try to be available for these meetings, he is in Australia. >> >> Let's get a sense of what days and timeframes work for those interested and then we can narrow it down and pick a channel. >> >> Thanks everyone, >> Anita. >> Okay first of all thanks to everyone who replied. Again, to clarify, the purpose of this thread has been to find a suitable additional third-party meeting time geared towards folks in EU and APAC. We live on a sphere, there is no time that will suit everyone. It looks like we are converging on 0800 UTC as a time and I am going to suggest Tuesdays. We have very little competition for space at that date + time combination so we can use #openstack-meeting (I have already booked the space on the wikipage). So barring further discussion, see you then! Thanks everyone, Anita. 
_______________________________________________ OpenStack-Infra mailing list OpenStack-Infra at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -------------- next part -------------- An HTML attachment was scrubbed... URL: From snikitin at mirantis.com Mon Dec 8 13:57:05 2014 From: snikitin at mirantis.com (Sergey Nikitin) Date: Mon, 8 Dec 2014 17:57:05 +0400 Subject: [openstack-dev] [nova] V3 API support In-Reply-To: References: Message-ID: Thank you guys for the helpful information. Alex, I'll remove the v2 schema and add v2.1 support. 2014-12-08 8:01 GMT+03:00 Alex Xu : > I think Chris is on vacation. We moved the V3 API to V2.1. V2.1 has some > improvements compared to V2. You can find more detail at > http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/v2-on-v3-api.html > > We need to support instance tags for V2.1. And in your patch, we don't need > the json-schema for V2, just for V2.1. > > Thanks > Alex > > 2014-12-04 20:50 GMT+08:00 Sergey Nikitin : > >> Hi, Christopher, >> >> I am working on an API extension for instance tags ( >> https://review.openstack.org/#/c/128940/). Recently one reviewer asked >> me to add V3 API support. I talked with Jay Pipes about it and he told me >> that the V3 API became useless. So I wanted to ask you and our community: "Do >> we need to support the v3 API in future nova patches?"
>> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eduard.matei at cloudfounders.com Mon Dec 8 14:33:02 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Mon, 8 Dec 2014 16:33:02 +0200 Subject: [openstack-dev] [OpenStack-Infra][ThirdPartyCI] Need help setting up CI In-Reply-To: References: Message-ID: Resending this to the dev ML as it seems I get a quicker response :) I created a job in Jenkins, added as Build Trigger: "Gerrit Event: Patchset Created", chose as server the configured Gerrit server that was previously tested, then added the project openstack-dev/sandbox and saved. I made a change on the dev sandbox repo but couldn't trigger my job. Any ideas? Thanks, Eduard On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei < eduard.matei at cloudfounders.com> wrote: > Hello everyone, > > Thanks to the latest changes to the service account creation process > we're one step closer to setting up our own CI platform for Cinder. > > So far we've got: > - Jenkins master (with Gerrit plugin) and slave (with DevStack and our > storage solution) > - Service account configured and tested (can manually connect to > review.openstack.org and get events and publish comments) > > Next step would be to set up a job to do the actual testing; this is where > we're stuck. > Can someone please point us to a clear example of what a job should look > like (preferably for testing Cinder on Kilo)? Most links we've found are > broken, or tools/scripts are no longer working.
> Also, we cannot change the Jenkins master too much (it's owned by Ops team > and they need a list of tools/scripts to review before installing/running > so we're not allowed to experiment). > > Thanks, > Eduard > > -- > > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > > > *CloudFounders, The Private Cloud Software Company* > > Disclaimer: > This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. > If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. > E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. > > -- *Eduard Biceri Matei, Senior Software Developer* www.cloudfounders.com | eduard.matei at cloudfounders.com *CloudFounders, The Private Cloud Software Company* Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From roman.dobosz at intel.com Mon Dec 8 14:20:04 2014 From: roman.dobosz at intel.com (Roman Dobosz) Date: Mon, 8 Dec 2014 15:20:04 +0100 Subject: [openstack-dev] [nova] Host health monitoring In-Reply-To: <20141203084457.b2fbb17d004166e43560f91a@intel.com> References: <20141203084457.b2fbb17d004166e43560f91a@intel.com> Message-ID: <20141208152004.eaa8ca60b226a5ba187253a9@intel.com> On Wed, 3 Dec 2014 08:44:57 +0100 Roman Dobosz wrote: > I've just started to work on the topic of detection if host is alive or > not: https://blueprints.launchpad.net/nova/+spec/host-health-monitoring > > I'll appreciate any comments :) I've submitted another blueprint, which is closely bound to the previous one: https://blueprints.launchpad.net/nova/+spec/pacemaker-servicegroup-driver The idea behind those two blueprints is to enable Nova to be aware of host status, not only of the services that run on it.
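To make the intent concrete, here is a toy sketch of a servicegroup-style driver that answers "is this service up?" from the cluster's view of the host rather than from service heartbeats alone. Every class and method name here is invented for illustration; this is neither Nova's actual servicegroup interface nor a real Pacemaker API.

```python
# Toy model of a host-aware servicegroup driver.  All names are invented.

class PacemakerView:
    """Stand-in for a query layer over the cluster's membership state."""

    def __init__(self, online_nodes):
        self._online = set(online_nodes)

    def node_online(self, host):
        return host in self._online


class HostAwareServiceGroupDriver:
    """Reports a service as up only if the cluster sees its host as up,
    so a seemingly-alive service on a fenced/dead host is never trusted."""

    def __init__(self, cluster_view):
        self._cluster = cluster_view
        self._members = {}

    def join(self, member_id, host):
        self._members[member_id] = host

    def is_up(self, member_id):
        host = self._members.get(member_id)
        return host is not None and self._cluster.node_online(host)


if __name__ == "__main__":
    view = PacemakerView(online_nodes={"compute-1"})
    driver = HostAwareServiceGroupDriver(view)
    driver.join("nova-compute:compute-1", "compute-1")
    driver.join("nova-compute:compute-2", "compute-2")
    print(driver.is_up("nova-compute:compute-1"))  # True
    print(driver.is_up("nova-compute:compute-2"))  # False
```

The point of the design is the second return path: a service whose host is offline according to the cluster is reported down regardless of its own last heartbeat.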
Bringing in Pacemaker as a servicegroup driver will give us two things: fencing, and reliable information about host state, so we can avoid situations where some action misinterprets a service's state as the host's state. Comments? -- Kind regards Roman From sandy.walsh at RACKSPACE.COM Mon Dec 8 14:39:30 2014 From: sandy.walsh at RACKSPACE.COM (Sandy Walsh) Date: Mon, 8 Dec 2014 14:39:30 +0000 Subject: [openstack-dev] Where should Schema files live? In-Reply-To: <60A3427EF882A54BA0A1971AE6EF0388A5362028@ORD1EXD02.RACKSPACE.CORP> References: <60A3427EF882A54BA0A1971AE6EF0388A535151D@ORD1EXD02.RACKSPACE.CORP> <547644AA.1010207@gmail.com> <60A3427EF882A54BA0A1971AE6EF0388A5360749@ORD1EXD02.RACKSPACE.CORP>, , <60A3427EF882A54BA0A1971AE6EF0388A5362028@ORD1EXD02.RACKSPACE.CORP> Message-ID: <60A3427EF882A54BA0A1971AE6EF0388A53669FF@ORD1EXD02.RACKSPACE.CORP> >From: Sandy Walsh [sandy.walsh at RACKSPACE.COM] Monday, December 01, 2014 9:29 AM > >>From: Duncan Thomas [duncan.thomas at gmail.com] >>Sent: Sunday, November 30, 2014 5:40 AM >>To: OpenStack Development Mailing List >>Subject: Re: [openstack-dev] Where should Schema files live? >> >>Duncan Thomas >>On Nov 27, 2014 10:32 PM, "Sandy Walsh" wrote: >>> >>> We were thinking each service API would expose its schema via a new /schema resource (or something). Nova would expose its schema. Glance its own. etc. This would also work well for installations still using older deployments. >>This feels like externally exposing info that need not be external (since the notifications are not external to the deploy), and it sounds like it will potentially leak fine-detailed version and maybe deployment config details that you don't want to make public - either for commercial reasons or to make targeted attacks harder. >> > >Yep, good point. Makes a good case for standing up our own service or just relying on the tarballs being in a well-known place.
Hmm, I wonder if it makes sense to limit the /schema resource to service accounts. Expose it by role. There's something in the back of my head that doesn't like calling out to the public API though. Perhaps unfounded. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sushma_korati at persistent.com Mon Dec 8 14:41:55 2014 From: sushma_korati at persistent.com (Sushma Korati) Date: Mon, 8 Dec 2014 14:41:55 +0000 Subject: [openstack-dev] [Mistral] Query on creating multiple resources In-Reply-To: <1418048476946.19042@persistent.com> References: <1418048204426.95510@persistent.com>, <1418048476946.19042@persistent.com> Message-ID: <1418050120681.46696@persistent.com> Hello All, Can we create multiple resources using a single task, like multiple keypairs or security-groups or networks etc? I am trying to extend the existing "create_vm" workflow, such that it accepts a list of security groups. In the workflow, before create_vm I am trying to create the security group if it does not exist. Just to test the security group functionality individually I wrote a sample workflow:
--------------------
version: '2.0'

name: secgroup_actions

workflows:
  create_security_group:
    type: direct
    input:
      - name
      - description
    tasks:
      create_secgroups:
        action: nova.security_groups_create name={$.name} description={$.description}
------------
This is a straightforward workflow, but I am unable to figure out how to pass multiple security groups to the above workflow. I tried passing multiple dicts in the context file but it did not work.
------
{ "name": "secgrp1", "description": "using mistral" },
{ "name": "secgrp2", "description": "using mistral" }
-----
Is there any way to modify this workflow such that it creates more than one security group? Please help. Regards, Sushma DISCLAIMER ========== This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd.
It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.petrello at dreamhost.com Mon Dec 8 14:47:51 2014 From: ryan.petrello at dreamhost.com (Ryan Petrello) Date: Mon, 8 Dec 2014 09:47:51 -0500 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> <547F3D48.5020401@gmail.com> <1A3C52DFCD06494D8528644858247BF017812DEB@EX10MBOX03.pnnl.gov> Message-ID: <20141208144751.GA6497@Ryans-MBP> Feel free to ask any questions you have in #pecanpy on IRC; I can answer a lot more quickly than researching docs, and if you have a special need, I can usually accommodate with changes to Pecan (I've done so with several OpenStack projects in the past). On 12/08/14 02:10 PM, Nikolay Markov wrote: > > Yes, and it's been 4 days since the last message in this thread and no > > objections, so it seems > > that Pecan is now our framework-of-choice for Nailgun and future > > apps/projects. > > We still need to do some research on the technical issues and on how easily > we can move to Pecan. Thanks to Ryan, we now have multiple links to > solutions and docs on the discussed issues. I guess we'll dedicate an > engineer (or engineers) to doing this research and then make all > our decisions on the subject. > > On Mon, Dec 8, 2014 at 11:07 AM, Sebastian Kalinowski > wrote: > > 2014-12-04 14:01 GMT+01:00 Igor Kalnitsky : > >> > >> Ok, guys, > >> > >> It became obvious that most of us either vote for Pecan or abstain from > >> voting.
> > > > > Yes, and it's been 4 days since the last message in this thread and no > > objections, so it seems > > that Pecan is now our framework-of-choice for Nailgun and future > > apps/projects. > > > >> > >> > >> So I propose to stop fighting this battle (Flask vs Pecan) and start > >> thinking about moving to Pecan. You know, there are many questions > >> that need to be discussed (such as 'should we change the API version' or > >> 'should it be done iteratively or as one patchset'). > > > > > > IMHO small, iterative changes are rather obvious. > > For other questions maybe we need (a draft of) a blueprint and a separate > > mail thread? > > > >> > >> > >> - Igor > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Best regards, > Nick Markov > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Ryan Petrello Senior Developer, DreamHost ryan.petrello at dreamhost.com From everett.toews at RACKSPACE.COM Mon Dec 8 14:58:46 2014 From: everett.toews at RACKSPACE.COM (Everett Toews) Date: Mon, 8 Dec 2014 14:58:46 +0000 Subject: [openstack-dev] [MagnetoDB][api] Hello from the API WG Message-ID: Hello MagnetoDB! During the latest meeting [1] of the API Working Group (WG) we noticed that MagnetoDB made use of the APIImpact flag [2]. That's excellent and exactly how we were hoping the use of the flag as a discovery mechanism would work! We were wondering if the MagnetoDB team would like to designate a cross-project liaison [3] for the API WG? We would communicate with that person a bit more closely and figure out how we can best help your project. Perhaps they could attend an API WG Meeting [4] to get started.
One thing that came up during the meeting was my suggestion that, if MagnetoDB had an API definition (like Swagger [5]), we could review the API design independently of the source code that implements the API. There are many other benefits of an API definition for documentation, testing, validation, and client creation. Does an API definition exist for the MagnetoDB API or would you be interested in creating one? Either way we'd like to hear your thoughts on the subject. Cheers, Everett P.S. Just to set expectations properly, please note that review of the API by the WG does not endorse the project in any way. We're just trying to help design better APIs that are consistent with the rest of the OpenStack APIs. [1] http://eavesdrop.openstack.org/meetings/api_wg/2014/api_wg.2014-12-04-16.01.html [2] https://review.openstack.org/#/c/138059/ [3] https://wiki.openstack.org/wiki/CrossProjectLiaisons#API_Working_Group [4] https://wiki.openstack.org/wiki/Meetings/API-WG [5] http://swagger.io/ From michal.dulko at intel.com Mon Dec 8 15:18:17 2014 From: michal.dulko at intel.com (Dulko, Michal) Date: Mon, 8 Dec 2014 15:18:17 +0000 Subject: [openstack-dev] [cinder] HA issues Message-ID: <3895CB36EABD4E49B816E6081F3B00171F601C36@IRSMSX108.ger.corp.intel.com> Hi all! At the summit, during the cross-project HA session, there were multiple Cinder issues mentioned. These can be found in this etherpad: https://etherpad.openstack.org/p/kilo-crossproject-ha-integration Is there any ongoing effort to fix these issues? Is there an idea of how to approach any of them? -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyz at princessleia.com Mon Dec 8 15:21:37 2014 From: lyz at princessleia.com (Elizabeth K.
Joseph) Date: Mon, 8 Dec 2014 07:21:37 -0800 Subject: [openstack-dev] [Infra] Meeting Tuesday December 9th at 19:00 UTC Message-ID: Hi everyone, The OpenStack Infrastructure (Infra) team is hosting our weekly meeting on Tuesday December 9th, at 19:00 UTC in #openstack-meeting Meeting agenda available here: https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is welcome to add agenda items) Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend. And in case you missed it, meeting log and minutes from the last meeting are available here: Minutes: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-02-19.01.html Minutes (text): http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-02-19.01.txt Log: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-02-19.01.log.html -- Elizabeth Krumbach Joseph || Lyz || pleia2 From derekh at redhat.com Mon Dec 8 15:28:49 2014 From: derekh at redhat.com (Derek Higgins) Date: Mon, 08 Dec 2014 15:28:49 +0000 Subject: [openstack-dev] [TripleO] CI report : 1/11/2014 - 4/12/2014 In-Reply-To: <1417700253.2112.23.camel@dovetail.localdomain> References: <54804AA5.9090902@redhat.com> <1417700253.2112.23.camel@dovetail.localdomain> Message-ID: <5485C3B1.1010503@redhat.com> On 04/12/14 13:37, Dan Prince wrote: > On Thu, 2014-12-04 at 11:51 +0000, Derek Higgins wrote: >> A month since my last update, sorry my bad >> >> since the last email we've had 5 incidents causing ci failures >> >> 26/11/2014 : Lots of ubuntu jobs failed over 24 hours (maybe half) >> - We seem to suffer any time an ubuntu mirror isn't in sync causing hash >> mismatch errors. For now I've pinned DNS on our proxy to a specific >> server so we stop DNS round robining > > This sounds fine to me. I personally like the model where you pin to a > specific mirror, perhaps one that is geographically closer to your > datacenter.
This also makes Squid caching (in the rack) happier in some > cases. > >> >> 21/11/2014 : All tripleo jobs failed for about 16 hours >> - Neutron started asserting that local_ip be set to a valid ip address, >> on the seed we had been leaving it blank >> - Cinder moved to using oslo.concurrency which in turn requires that >> lock_path be set, we are now setting it > > > Thinking about how we might catch these ahead of time with our limited > resources ATM. These sorts of failures all seem related to configuration > and/or requirements changes. I wonder if we were to selectively > (automatically) run check experimental jobs on all reviews with > associated tickets which have either doc changes or modify > requirements.txt. Probably a bit of work to pull this off but if we had > a report containing these results "coming down the pike" we might be > able to catch them ahead of time. Yup, this sounds like it could be beneficial; alternatively, if we soon have the capacity to run on more projects (capacity is increasing) we'll be running on all reviews and we'll be able to generate the report you're talking about. Either way we should do something like this soon. > > >> >> 8/11/2014 : All fedora tripleo jobs failed for about 60 hours (over a >> weekend) >> - A URL being accessed on https://bzr.linuxfoundation.org is no longer >> available, we removed the dependency >> >> 7/11/2014 : All tripleo tests failed for about 24 hours >> - Options were removed from nova.conf that had been deprecated (although >> no deprecation warnings were being reported), we were still using these >> in tripleo >> >> as always more details can be found here >> https://etherpad.openstack.org/p/tripleo-ci-breakages > > Thanks for sending this out! Very useful. no problem > > Dan > >> >> thanks, >> Derek.
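The idea above of selectively triggering extra check jobs for reviews that touch doc or requirements files could be sketched as a simple file filter. This is only an illustrative sketch, not part of any existing CI tooling; the function name `should_run_experimental` and the file patterns in `RISKY_PATTERNS` are assumptions made for the example:

```python
import fnmatch

# Hypothetical patterns: reviews touching these files tend to signal the
# configuration or requirements changes that have broken TripleO CI before.
RISKY_PATTERNS = [
    "requirements.txt",
    "test-requirements.txt",
    "doc/*",
    "etc/*.conf*",
]

def should_run_experimental(changed_files):
    """Return True if a review touches files that warrant extra CI jobs."""
    return any(
        fnmatch.fnmatch(path, pattern)
        for path in changed_files
        for pattern in RISKY_PATTERNS
    )
```

A gating system could call such a filter with the list of files changed by a review and queue the experimental jobs only when it returns True, keeping scarce CI resources focused on the risky changes.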
>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zbitter at redhat.com Mon Dec 8 15:36:00 2014 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 08 Dec 2014 10:36:00 -0500 Subject: [openstack-dev] [Mistral] Query on creating multiple resources In-Reply-To: <1418050120681.46696@persistent.com> References: <1418048204426.95510@persistent.com>, <1418048476946.19042@persistent.com> <1418050120681.46696@persistent.com> Message-ID: <5485C560.9060605@redhat.com> On 08/12/14 09:41, Sushma Korati wrote: > Can we create multiple resources using a single task, like multiple > keypairs or security-groups or networks etc? Define them in a Heat template and create the Heat stack as a single task. - ZB From ryan.clevenger at RACKSPACE.COM Mon Dec 8 15:43:40 2014 From: ryan.clevenger at RACKSPACE.COM (Ryan Clevenger) Date: Mon, 8 Dec 2014 15:43:40 +0000 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration In-Reply-To: References: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP>, Message-ID: <9EDBC83C95615E4A97C964A30A3E18AC2F695F28@ORD1EXD01.RACKSPACE.CORP> Thanks for getting back, Carl. I think we may be able to make this week's meeting. Jason Kölker is the engineer doing all of the lifting on this side. Let me get with him to review what you all have so far and check our availability.
________________________________________ Ryan Clevenger Manager, Cloud Engineering - US m: 678.548.7261 e: ryan.clevenger at rackspace.com ________________________________ From: Carl Baldwin [carl at ecbaldwin.net] Sent: Sunday, December 07, 2014 4:04 PM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration Ryan, I have been working with the L3 sub team in this direction. Progress has been slow because of other priorities but we have made some. I have written a blueprint detailing some changes needed to the code to enable the flexibility to one day run floating IPs on an L3 routed network [1]. Jaime has been working on one that integrates Ryu (or other speakers) with neutron [2]. DVR was also a step in this direction. I'd like to invite you to the L3 weekly meeting [3] to discuss further. I'm very happy to see interest in this area and to have someone new to collaborate with. Carl [1] https://review.openstack.org/#/c/88619/ [2] https://review.openstack.org/#/c/125401/ [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam On Dec 3, 2014 4:04 PM, "Ryan Clevenger" > wrote: Hi, At Rackspace, we have a need to create a higher level networking service primarily for the purpose of creating a Floating IP solution in our environment. The current solutions for Floating IPs, being tied to plugin implementations, do not meet our needs at scale for the following reasons: 1. Limited endpoint H/A mainly targeting failover only and not multi-active endpoints, 2. Lack of noisy neighbor and DDOS mitigation, 3. IP fragmentation (with cells, public connectivity is terminated inside each cell leading to fragmentation and IP stranding when cell CPU/Memory use doesn't line up with allocated IP blocks. Abstracting public connectivity away from nova installations allows for much more efficient use of those precious IPv4 blocks). 4.
Diversity in transit (multiple encapsulation and transit types on a per floating ip basis). We realize that network infrastructures are often unique and such a solution would likely diverge from provider to provider. However, we would love to collaborate with the community to see if such a project could be built that would meet the needs of providers at scale. We believe that, at its core, this solution would boil down to terminating north<->south traffic temporarily at a massively horizontally scalable centralized core and then encapsulating traffic east<->west to a specific host based on the association setup via the current L3 router's extension's 'floatingips' resource. Our current idea, involves using Open vSwitch for header rewriting and tunnel encapsulation combined with a set of Ryu applications for management: https://i.imgur.com/bivSdcC.png The Ryu application uses Ryu's BGP support to announce up to the Public Routing layer individual floating ips (/32's or /128's) which are then summarized and announced to the rest of the datacenter. If a particular floating ip is experiencing unusually large traffic (DDOS, slashdot effect, etc.), the Ryu application could change the announcements up to the Public layer to shift that traffic to dedicated hosts setup for that purpose. It also announces a single /32 "Tunnel Endpoint" ip downstream to the TunnelNet Routing system which provides transit to and from the cells and their hypervisors. Since traffic from either direction can then end up on any of the FLIP hosts, a simple flow table to modify the MAC and IP in either the SRC or DST fields (depending on traffic direction) allows the system to be completely stateless. We have proven this out (with static routing and flows) to work reliably in a small lab setup. On the hypervisor side, we currently plumb networks into separate OVS bridges. 
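The stateless MAC/IP rewrite described above can be illustrated with a toy sketch. This is purely illustrative: the dict-based "packet", the `FLIP_MAP` structure, and the `rewrite` function are inventions for the example and not the actual OVS flow programming, which would install equivalent match/rewrite rules as flow table entries:

```python
# Toy model of the stateless NAT idea: a FLIP host looks up a static mapping
# from floating IP to (hypervisor MAC, fixed IP) and rewrites the destination
# fields on inbound traffic, or restores the floating IP as the source on the
# return path. No per-connection state is kept, so any FLIP host can handle
# either direction of a flow.

# Illustrative mapping only; in the real design this would be derived from
# the L3 extension's 'floatingips' associations.
FLIP_MAP = {
    "203.0.113.10": {"mac": "fa:16:3e:00:00:01", "ip": "10.0.0.5"},
}

def rewrite(packet, southbound):
    """Rewrite MAC/IP headers for one direction; packet is a plain dict."""
    pkt = dict(packet)
    if southbound:  # traffic from the internet toward the VM
        target = FLIP_MAP[pkt["dst_ip"]]
        pkt["dst_mac"], pkt["dst_ip"] = target["mac"], target["ip"]
    else:  # return traffic: put the floating IP back as the source
        for flip, target in FLIP_MAP.items():
            if pkt["src_ip"] == target["ip"]:
                pkt["src_ip"] = flip
    return pkt
```

Because the rewrite depends only on the static mapping and the packet headers themselves, the lookup can run on any FLIP host with no shared connection state, which is what makes the design stateless.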
Another Ryu application would control the bridge that handles overlay networking to selectively divert traffic destined for the default gateway up to the FLIP NAT systems, taking into account any configured logical routing and local L2 traffic to pass out into the existing overlay fabric undisturbed. Adding in support for L2VPN EVPN (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to the Ryu BGP speaker will allow the hypervisor side Ryu application to advertise up to the FLIP system reachability information to take into account VM failover, live-migrate, and supported encapsulation types. We believe that decoupling the tunnel endpoint discovery from the control plane (Nova/Neutron) will provide for a more robust solution as well as allow for use outside of OpenStack if desired. ________________________________________ Ryan Clevenger Manager, Cloud Engineering - US m: 678.548.7261 e: ryan.clevenger at rackspace.com _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From gotouday at gmail.com Mon Dec 8 15:44:36 2014 From: gotouday at gmail.com (uday bhaskar) Date: Mon, 8 Dec 2014 21:14:36 +0530 Subject: [openstack-dev] Fwd: [Horizon] drag and drop widget in horizon. In-Reply-To: References: Message-ID: We are looking for documentation for the widget used in the launch instance form of Horizon, where on the network tab, you are able to select networks in a particular order. How is this implemented? Is there a widget available to reuse? Any help is appreciated. Thanks Uday Bhaskar -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dougw at a10networks.com Mon Dec 8 15:49:36 2014 From: dougw at a10networks.com (Doug Wiegley) Date: Mon, 8 Dec 2014 15:49:36 +0000 Subject: [openstack-dev] [neutron] services split starting today Message-ID: Hi all, The neutron advanced services split is starting today at 9am PDT, as described here: https://review.openstack.org/#/c/136835/ .. The remove change from neutron can be seen here: https://review.openstack.org/#/c/139901/ .. While the new repos are being sorted out, advanced services will be broken, and services tempest tests will be disabled. Either grab Juno, or an earlier rev of neutron. The new repos are: neutron-lbaas, neutron-fwaas, neutron-vpnaas. Thanks, Doug From nmakhotkin at mirantis.com Mon Dec 8 15:54:04 2014 From: nmakhotkin at mirantis.com (Nikolay Makhotkin) Date: Mon, 8 Dec 2014 19:54:04 +0400 Subject: [openstack-dev] [Mistral] Query on creating multiple resources In-Reply-To: <5485C560.9060605@redhat.com> References: <1418048204426.95510@persistent.com> <1418048476946.19042@persistent.com> <1418050120681.46696@persistent.com> <5485C560.9060605@redhat.com> Message-ID: Hi, Sushma! Can we create multiple resources using a single task, like multiple > keypairs or security-groups or networks etc? Yes, we can. This feature is in development now and is considered experimental - https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections Just clone the last master branch from mistral.
You can specify the "for-each" task property and provide the array of data to your workflow:

--------------------
version: '2.0'

name: secgroup_actions

workflows:
  create_security_group:
    type: direct
    input:
      - array_with_names_and_descriptions
    tasks:
      create_secgroups:
        for-each:
          data: $.array_with_names_and_descriptions
        action: nova.security_groups_create name={$.data.name} description={$.data.description}
------------

On Mon, Dec 8, 2014 at 6:36 PM, Zane Bitter wrote: > On 08/12/14 09:41, Sushma Korati wrote: > >> Can we create multiple resources using a single task, like multiple >> keypairs or security-groups or networks etc? >> > > Define them in a Heat template and create the Heat stack as a single task. > > - ZB > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best Regards, Nikolay -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougw at a10networks.com Mon Dec 8 16:19:50 2014 From: dougw at a10networks.com (Doug Wiegley) Date: Mon, 8 Dec 2014 16:19:50 +0000 Subject: [openstack-dev] [neutron] services split starting today Message-ID: To all neutron cores, Please do not approve any gerrit reviews for advanced services code for the next few days. We will post again when those reviews can resume. Thanks, Doug On 12/8/14, 8:49 AM, "Doug Wiegley" wrote: >Hi all, > >The neutron advanced services split is starting today at 9am PDT, as >described here: > >https://review.openstack.org/#/c/136835/ > > >.. The remove change from neutron can be seen here: > >https://review.openstack.org/#/c/139901/ > > >.. While the new repos are being sorted out, advanced services will be >broken, and services tempest tests will be disabled. Either grab Juno, or >an earlier rev of neutron. > >The new repos are: neutron-lbaas, neutron-fwaas, neutron-vpnaas.
> >Thanks, >Doug > > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Mon Dec 8 16:33:51 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 08 Dec 2014 11:33:51 -0500 Subject: [openstack-dev] [third-party] Third-party CI account creation is now self-serve In-Reply-To: <547F7918.6000700@anteaya.info> References: <547F7918.6000700@anteaya.info> Message-ID: <5485D2EF.2090400@gmail.com> On 12/03/2014 03:56 PM, Anita Kuno wrote: > As of now third-party CI account creation is now self-serve. I think > this makes everybody happy. > > What does this mean? > > Well for a new third-party account this means you follow the new > process, outlined here: > http://ci.openstack.org/third_party.html#creating-a-service-account > > If you don't have enough information from these docs, please contact the > infra team then we will work on a patch once you learn what you needed, > to fill in the holes for others. > > If you currently have a third-party CI account on Gerrit, this is what > will happen with your account: > http://ci.openstack.org/third_party.html#permissions-on-your-third-party-system > > Short story is we will be moving voting accounts into project specific > voting groups. Your voting rights will not change, but will be directly > managed by project release groups. > Non voting accounts will be removed from the now redundant Third-Party > CI group and otherwise will not be changed. > > If you are a member of a -release group for a project currently > receiving third-party CI votes, you will find that you have access to > manage membership in a new group in Gerrit called -ci. To > allow a CI system to vote on your project, add it to the -ci > group, and to disable voting on your project, remove them from that group. > > We hope you are as excited about this change as we are. 
> > Let us know if you have questions, do try to work with third-party > project representatives as much as you can. Excellent work, Anita and the infra team, thank you so much! -jay From davanum at gmail.com Mon Dec 8 16:50:57 2014 From: davanum at gmail.com (Davanum Srinivas) Date: Mon, 8 Dec 2014 11:50:57 -0500 Subject: [openstack-dev] Remove XML support from Nova API Message-ID: Hi Team, We currently have disabled XML support when https://review.openstack.org/#/c/134332/ merged. I've prepared a followup patch series to entirely remove XML support [1] soon after we ship K1. I've marked it as WIP for now though all tests are working fine. Looking forward to your feedback. thanks, dims [1] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:nuke-xml,n,z -- Davanum Srinivas :: https://twitter.com/dims From carl at ecbaldwin.net Mon Dec 8 17:05:28 2014 From: carl at ecbaldwin.net (Carl Baldwin) Date: Mon, 8 Dec 2014 10:05:28 -0700 Subject: [openstack-dev] [Neutron] Freeze on L3 agent Message-ID: For the next few weeks, we'll be tackling L3 agent restructuring [1] in earnest. This will require some heavy lifting, especially initially, in the l3_agent.py file. Because of this, I'd like to ask that we not approve any non-critical changes to the L3 agent that are unrelated to this restructuring starting today. After the heavy lifting has merged, I will notify again. I imagine that this effort will take a few weeks realistically. Carl [1] https://review.openstack.org/#/c/131535/ From nmakhotkin at mirantis.com Mon Dec 8 17:07:33 2014 From: nmakhotkin at mirantis.com (Nikolay Makhotkin) Date: Mon, 8 Dec 2014 20:07:33 +0300 Subject: [openstack-dev] [mistral] Team meeting minutes/log 12/08/2014 Message-ID: Thanks for joining our team meeting today! 
* Meeting minutes: *http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-12-08-16.04.log.html * * Meeting log: *http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-12-08-16.04.html * The next meeting is scheduled for Dec 15 at 16.00 UTC. -- Best Regards, Nikolay -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.mitchell at rackspace.com Mon Dec 8 17:11:35 2014 From: kevin.mitchell at rackspace.com (Kevin L. Mitchell) Date: Mon, 8 Dec 2014 11:11:35 -0600 Subject: [openstack-dev] [api] Using query string or request body to pass parameter In-Reply-To: <54854039.3030405@linux.vnet.ibm.com> References: <54854039.3030405@linux.vnet.ibm.com> Message-ID: <1418058695.6209.0.camel@einstein.kev> On Mon, 2014-12-08 at 14:07 +0800, Eli Qiao wrote: > I wonder if we can use body in delete, currently , there isn't any > case used in v2/v3 api. No, many frameworks raise an error if you try to include a body with a DELETE request. -- Kevin L. Mitchell Rackspace From eglynn at redhat.com Mon Dec 8 17:22:36 2014 From: eglynn at redhat.com (Eoghan Glynn) Date: Mon, 8 Dec 2014 12:22:36 -0500 (EST) Subject: [openstack-dev] Where should Schema files live? In-Reply-To: <60A3427EF882A54BA0A1971AE6EF0388A53669FF@ORD1EXD02.RACKSPACE.CORP> References: <60A3427EF882A54BA0A1971AE6EF0388A535151D@ORD1EXD02.RACKSPACE.CORP> <547644AA.1010207@gmail.com> <60A3427EF882A54BA0A1971AE6EF0388A5360749@ORD1EXD02.RACKSPACE.CORP> <60A3427EF882A54BA0A1971AE6EF0388A5362028@ORD1EXD02.RACKSPACE.CORP> <60A3427EF882A54BA0A1971AE6EF0388A53669FF@ORD1EXD02.RACKSPACE.CORP> Message-ID: <1710546841.17420569.1418059356293.JavaMail.zimbra@redhat.com> > >From: Sandy Walsh [sandy.walsh at RACKSPACE.COM] Monday, December 01, 2014 9:29 > >AM > > > >>From: Duncan Thomas [duncan.thomas at gmail.com] > >>Sent: Sunday, November 30, 2014 5:40 AM > >>To: OpenStack Development Mailing List > >>Subject: Re: [openstack-dev] Where should Schema files live? 
> >> > >>Duncan Thomas > >>On Nov 27, 2014 10:32 PM, "Sandy Walsh" wrote: > >>> > >>> We were thinking each service API would expose their schema via a new > >>> /schema resource (or something). Nova would expose its schema. Glance > >>> its own. etc. This would also work well for installations still using > >>> older deployments. > >>This feels like externally exposing info that need not be external (since > >>the notifications are not external to the deploy) and it sounds like it > >>will potentially leak fine detailed version and maybe deployment config > >>details that you don't want to make public - either for commercial reasons > >>or to make targeted attacks harder > >> > > > >Yep, good point. Makes a good case for standing up our own service or just > >relying on the tarballs being in a well know place. > > Hmm, I wonder if it makes sense to limit the /schema resource to service > accounts. Expose it by role. > > There's something in the back of my head that doesn't like calling out to the > public API though. Perhaps unfounded. I'm wondering here how this relates to the other URLs in the service catalog that aren't intended for external consumption, e.g. the internalURL and adminURL. I had assumed that these URLs would be visible to external clients, but protected by firewall rules such that clients would be unable to do anything in anger with those raw addresses from the outside. So would including a schemaURL in the service catalog actually expose an attack surface, assuming this was in general safely firewalled off in any realistic deployment? Cheers, Eoghan From mestery at mestery.com Mon Dec 8 17:59:55 2014 From: mestery at mestery.com (Kyle Mestery) Date: Mon, 8 Dec 2014 11:59:55 -0600 Subject: [openstack-dev] [neutron] services split starting today In-Reply-To: References: Message-ID: Copying the operators list here to gain additional visibility for trunk chasers. 
On Mon, Dec 8, 2014 at 10:19 AM, Doug Wiegley wrote: > To all neutron cores, > > Please do not approve any gerrit reviews for advanced services code for > the next few days. We will post again when those reviews can resume. > > Thanks, > Doug > > > > On 12/8/14, 8:49 AM, "Doug Wiegley" wrote: > > >Hi all, > > > >The neutron advanced services split is starting today at 9am PDT, as > >described here: > > > >https://review.openstack.org/#/c/136835/ > > > > > >.. The remove change from neutron can be seen here: > > > >https://review.openstack.org/#/c/139901/ > > > > > >.. While the new repos are being sorted out, advanced services will be > >broken, and services tempest tests will be disabled. Either grab Juno, or > >an earlier rev of neutron. > > > >The new repos are: neutron-lbaas, neutron-fwaas, neutron-vpnaas. > > > >Thanks, > >Doug > > > > > >_______________________________________________ > >OpenStack-dev mailing list > >OpenStack-dev at lists.openstack.org > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mestery at mestery.com Mon Dec 8 18:02:07 2014 From: mestery at mestery.com (Kyle Mestery) Date: Mon, 8 Dec 2014 12:02:07 -0600 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: On Tue, Dec 2, 2014 at 9:59 AM, Kyle Mestery wrote: > Now that we're in the thick of working hard on Kilo deliverables, I'd > like to make some changes to the neutron core team. Reviews are the > most important part of being a core reviewer, so we need to ensure > cores are doing reviews. The stats for the 180 day period [1] indicate > some changes are needed for cores who are no longer reviewing. 
> > First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from > neutron-core. Bob and Nachi have been core members for a while now. > They have contributed to Neutron over the years in reviews, code and > leading sub-teams. I'd like to thank them for all that they have done > over the years. I'd also like to propose that should they start > reviewing more going forward the core team looks to fast track them > back into neutron-core. But for now, their review stats place them > below the rest of the team for 180 days. > > As part of the changes, I'd also like to propose two new members to > neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have > been very active in reviews, meetings, and code for a while now. Henry > lead the DB team which fixed Neutron DB migrations during Juno. Kevin > has been actively working across all of Neutron, he's done some great > work on security fixes and stability fixes in particular. Their > comments in reviews are insightful and they have helped to onboard new > reviewers and taken the time to work with people on their patches. > > Existing neutron cores, please vote +1/-1 for the addition of Henry > and Kevin to the core team. > > Enough time has passed now, and Kevin and Henry have received enough +1 votes. So I'd like to welcome them to the core team! Thanks, Kyle > Thanks! > Kyle > > [1] http://stackalytics.com/report/contribution/neutron-group/180 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anteaya at anteaya.info Mon Dec 8 18:23:38 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Mon, 08 Dec 2014 11:23:38 -0700 Subject: [openstack-dev] [third-party] Third-party CI account creation is now self-serve In-Reply-To: <5485D2EF.2090400@gmail.com> References: <547F7918.6000700@anteaya.info> <5485D2EF.2090400@gmail.com> Message-ID: <5485ECAA.4090604@anteaya.info> On 12/08/2014 09:33 AM, Jay Pipes wrote: > On 12/03/2014 03:56 PM, Anita Kuno wrote: >> As of now third-party CI account creation is now self-serve. I think >> this makes everybody happy. >> >> What does this mean? >> >> Well for a new third-party account this means you follow the new >> process, outlined here: >> http://ci.openstack.org/third_party.html#creating-a-service-account >> >> If you don't have enough information from these docs, please contact the >> infra team then we will work on a patch once you learn what you needed, >> to fill in the holes for others. >> >> If you currently have a third-party CI account on Gerrit, this is what >> will happen with your account: >> http://ci.openstack.org/third_party.html#permissions-on-your-third-party-system >> >> >> Short story is we will be moving voting accounts into project specific >> voting groups. Your voting rights will not change, but will be directly >> managed by project release groups. >> Non voting accounts will be removed from the now redundant Third-Party >> CI group and otherwise will not be changed. >> >> If you are a member of a -release group for a project currently >> receiving third-party CI votes, you will find that you have access to >> manage membership in a new group in Gerrit called -ci. To >> allow a CI system to vote on your project, add it to the -ci >> group, and to disable voting on your project, remove them from that >> group. >> >> We hope you are as excited about this change as we are. >> >> Let us know if you have questions, do try to work with third-party >> project representatives as much as you can. 
> > Excellent work, Anita and the infra team, thank you so much! > > -jay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Thanks Jay. Clark and Jeremy did the heavy lifting here. We are glad this is in place; hopefully it will work well for all concerned. Thank you, Anita. From pcm at cisco.com Mon Dec 8 18:33:37 2014 From: pcm at cisco.com (Paul Michali (pcm)) Date: Mon, 8 Dec 2014 18:33:37 +0000 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: Way to go Kevin and Henry! PCM (Paul Michali) MAIL ..... pcm at cisco.com IRC ..... pc_m (irc.freenode.com) TW ..... @pmichali GPG Key .. 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 On Dec 8, 2014, at 11:02 AM, Kyle Mestery wrote: > On Tue, Dec 2, 2014 at 9:59 AM, Kyle Mestery wrote: > Now that we're in the thick of working hard on Kilo deliverables, I'd > like to make some changes to the neutron core team. Reviews are the > most important part of being a core reviewer, so we need to ensure > cores are doing reviews. The stats for the 180 day period [1] indicate > some changes are needed for cores who are no longer reviewing. > > First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from > neutron-core. Bob and Nachi have been core members for a while now. > They have contributed to Neutron over the years in reviews, code and > leading sub-teams. I'd like to thank them for all that they have done > over the years. I'd also like to propose that should they start > reviewing more going forward the core team looks to fast track them > back into neutron-core. But for now, their review stats place them > below the rest of the team for 180 days. > > As part of the changes, I'd also like to propose two new members to > neutron-core: Henry Gessau and Kevin Benton.
Both Henry and Kevin have > been very active in reviews, meetings, and code for a while now. Henry > lead the DB team which fixed Neutron DB migrations during Juno. Kevin > has been actively working across all of Neutron, he's done some great > work on security fixes and stability fixes in particular. Their > comments in reviews are insightful and they have helped to onboard new > reviewers and taken the time to work with people on their patches. > > Existing neutron cores, please vote +1/-1 for the addition of Henry > and Kevin to the core team. > > Enough time has passed now, and Kevin and Henry have received enough +1 votes. So I'd like to welcome them to the core team! > > Thanks, > Kyle > > Thanks! > Kyle > > [1] http://stackalytics.com/report/contribution/neutron-group/180 > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ayoung at redhat.com Mon Dec 8 19:06:05 2014 From: ayoung at redhat.com (Adam Young) Date: Mon, 08 Dec 2014 14:06:05 -0500 Subject: [openstack-dev] Where should Schema files live? 
In-Reply-To: <60A3427EF882A54BA0A1971AE6EF0388A5360754@ORD1EXD02.RACKSPACE.CORP> References: <60A3427EF882A54BA0A1971AE6EF0388A535151D@ORD1EXD02.RACKSPACE.CORP> <244070269.6015180.1416519285449.JavaMail.zimbra@redhat.com> <60A3427EF882A54BA0A1971AE6EF0388A535222C@ORD1EXD02.RACKSPACE.CORP> <1641271722.6367928.1416582229769.JavaMail.zimbra@redhat.com> <60A3427EF882A54BA0A1971AE6EF0388A5352E0B@ORD1EXD02.RACKSPACE.CORP>, <1357304690.9046926.1416937758405.JavaMail.zimbra@redhat.com> <60A3427EF882A54BA0A1971AE6EF0388A5360754@ORD1EXD02.RACKSPACE.CORP> Message-ID: <5485F69D.8020705@redhat.com> Isn't this what the API repos are for? Should EG the Keystone schemes be served from https://github.com/openstack/identity-api/ From lbragstad at gmail.com Mon Dec 8 19:30:51 2014 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 8 Dec 2014 13:30:51 -0600 Subject: [openstack-dev] Where should Schema files live? In-Reply-To: <5485F69D.8020705@redhat.com> References: <60A3427EF882A54BA0A1971AE6EF0388A535151D@ORD1EXD02.RACKSPACE.CORP> <244070269.6015180.1416519285449.JavaMail.zimbra@redhat.com> <60A3427EF882A54BA0A1971AE6EF0388A535222C@ORD1EXD02.RACKSPACE.CORP> <1641271722.6367928.1416582229769.JavaMail.zimbra@redhat.com> <60A3427EF882A54BA0A1971AE6EF0388A5352E0B@ORD1EXD02.RACKSPACE.CORP> <1357304690.9046926.1416937758405.JavaMail.zimbra@redhat.com> <60A3427EF882A54BA0A1971AE6EF0388A5360754@ORD1EXD02.RACKSPACE.CORP> <5485F69D.8020705@redhat.com> Message-ID: Keystone also has API documentation in the keystone-spec repo [1], which went in with [2] and [3]. [1] https://github.com/openstack/keystone-specs/tree/master/api [2] https://review.openstack.org/#/c/128712/ [3] https://review.openstack.org/#/c/130577/ On Mon, Dec 8, 2014 at 1:06 PM, Adam Young wrote: > Isn't this what the API repos are for? 
Should EG the Keystone schemes be > served from > > https://github.com/openstack/identity-api/ > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carl at ecbaldwin.net Mon Dec 8 19:57:03 2014 From: carl at ecbaldwin.net (Carl Baldwin) Date: Mon, 8 Dec 2014 12:57:03 -0700 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration In-Reply-To: <9EDBC83C95615E4A97C964A30A3E18AC2F695F28@ORD1EXD01.RACKSPACE.CORP> References: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> <9EDBC83C95615E4A97C964A30A3E18AC2F695F28@ORD1EXD01.RACKSPACE.CORP> Message-ID: Ryan, I'll be traveling around the time of the L3 meeting this week. My flight leaves 40 minutes after the meeting and I might have trouble attending. It might be best to put it off a week or to plan another time -- maybe Friday -- when we could discuss it in IRC or in a Hangout. Carl On Mon, Dec 8, 2014 at 8:43 AM, Ryan Clevenger wrote: > Thanks for getting back Carl. I think we may be able to make this week's > meeting. Jason Kölker is the engineer doing all of the lifting on this side. > Let me get with him to review what you all have so far and check our > availability. > > ________________________________________ > > Ryan Clevenger > Manager, Cloud Engineering - US > m: 678.548.7261 > e: ryan.clevenger at rackspace.com > > ________________________________ > From: Carl Baldwin [carl at ecbaldwin.net] > Sent: Sunday, December 07, 2014 4:04 PM > To: OpenStack Development Mailing List > Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation > and collaboration > > Ryan, > > I have been working with the L3 sub team in this direction. Progress has > been slow because of other priorities but we have made some.
I have written > a blueprint detailing some changes needed to the code to enable the > flexibility to one day run floating IPs on an L3-routed network [1]. Jaime > has been working on one that integrates Ryu (or other speakers) with Neutron > [2]. DVR was also a step in this direction. > > I'd like to invite you to the L3 weekly meeting [3] to discuss further. I'm > very happy to see interest in this area and have someone new to collaborate with. > > Carl > > [1] https://review.openstack.org/#/c/88619/ > [2] https://review.openstack.org/#/c/125401/ > [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam > > On Dec 3, 2014 4:04 PM, "Ryan Clevenger" > wrote: >> >> Hi, >> >> At Rackspace, we have a need to create a higher level networking service >> primarily for the purpose of creating a Floating IP solution in our >> environment. The current solutions for Floating IPs, being tied to plugin >> implementations, do not meet our needs at scale for the following reasons: >> >> 1. Limited endpoint H/A mainly targeting failover only and not >> multi-active endpoints, >> 2. Lack of noisy neighbor and DDOS mitigation, >> 3. IP fragmentation (with cells, public connectivity is terminated inside >> each cell leading to fragmentation and IP stranding when cell CPU/Memory use >> doesn't line up with allocated IP blocks. Abstracting public connectivity >> away from nova installations allows for much more efficient use of those >> precious IPv4 blocks). >> 4. Diversity in transit (multiple encapsulation and transit types on a per >> floating ip basis). >> >> We realize that network infrastructures are often unique and such a >> solution would likely diverge from provider to provider. However, we would >> love to collaborate with the community to see if such a project could be >> built that would meet the needs of providers at scale.
We believe that, at >> its core, this solution would boil down to terminating north<->south traffic >> temporarily at a massively horizontally scalable centralized core and then >> encapsulating traffic east<->west to a specific host based on the >> association setup via the current L3 router's extension's 'floatingips' >> resource. >> >> Our current idea, involves using Open vSwitch for header rewriting and >> tunnel encapsulation combined with a set of Ryu applications for management: >> >> https://i.imgur.com/bivSdcC.png >> >> The Ryu application uses Ryu's BGP support to announce up to the Public >> Routing layer individual floating ips (/32's or /128's) which are then >> summarized and announced to the rest of the datacenter. If a particular >> floating ip is experiencing unusually large traffic (DDOS, slashdot effect, >> etc.), the Ryu application could change the announcements up to the Public >> layer to shift that traffic to dedicated hosts setup for that purpose. It >> also announces a single /32 "Tunnel Endpoint" ip downstream to the TunnelNet >> Routing system which provides transit to and from the cells and their >> hypervisors. Since traffic from either direction can then end up on any of >> the FLIP hosts, a simple flow table to modify the MAC and IP in either the >> SRC or DST fields (depending on traffic direction) allows the system to be >> completely stateless. We have proven this out (with static routing and >> flows) to work reliably in a small lab setup. >> >> On the hypervisor side, we currently plumb networks into separate OVS >> bridges. Another Ryu application would control the bridge that handles >> overlay networking to selectively divert traffic destined for the default >> gateway up to the FLIP NAT systems, taking into account any configured >> logical routing and local L2 traffic to pass out into the existing overlay >> fabric undisturbed. 
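The stateless rewrite described above can be sketched in a few lines. This is a minimal Python illustration only: the addresses, MACs, and the `FLIP_MAP` structure are invented for the example, and a real FLIP host would program equivalent match/rewrite rules into OVS flow tables rather than touch packets in Python.

```python
# Hypothetical floating-ip -> (fixed-ip, VM MAC) association, as a FLIP host
# might derive it from the L3 extension's 'floatingips' resource.
FLIP_MAP = {"203.0.113.10": ("10.0.0.5", "fa:16:3e:aa:bb:cc")}
# Reverse lookup for the south -> north direction.
FIXED_TO_FLIP = {fixed: flip for flip, (fixed, _mac) in FLIP_MAP.items()}

def rewrite(pkt):
    """Rewrite the MAC and IP in the SRC or DST fields based on direction.

    No per-flow state is consulted: the same static mapping answers both
    directions, which is what lets the FLIP tier stay stateless.
    """
    pkt = dict(pkt)
    if pkt["dst_ip"] in FLIP_MAP:
        # north -> south: traffic aimed at a floating IP is redirected
        # to the VM's fixed address and MAC.
        fixed_ip, vm_mac = FLIP_MAP[pkt["dst_ip"]]
        pkt["dst_ip"], pkt["dst_mac"] = fixed_ip, vm_mac
    elif pkt["src_ip"] in FIXED_TO_FLIP:
        # south -> north: replies from the VM get the floating IP as source.
        pkt["src_ip"] = FIXED_TO_FLIP[pkt["src_ip"]]
    return pkt

inbound = rewrite({"src_ip": "198.51.100.7", "dst_ip": "203.0.113.10",
                   "src_mac": "00:00:00:00:00:01",
                   "dst_mac": "00:00:00:00:00:02"})
# inbound now carries the VM's fixed IP and MAC in its DST fields.
```

Because the mapping is the only input, any FLIP host that holds a copy of it can rewrite traffic in either direction, which matches the claim that traffic may land on any of the FLIP hosts.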
>> >> Adding in support for L2VPN EVPN >> (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN >> Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to the >> Ryu BGP speaker will allow the hypervisor side Ryu application to advertise >> up to the FLIP system reachability information to take into account VM >> failover, live-migrate, and supported encapsulation types. We believe that >> decoupling the tunnel endpoint discovery from the control plane >> (Nova/Neutron) will provide for a more robust solution as well as allow for >> use outside of openstack if desired. >> >> ________________________________________ >> >> Ryan Clevenger >> Manager, Cloud Engineering - US >> m: 678.548.7261 >> e: ryan.clevenger at rackspace.com >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From vadivel.openstack at gmail.com Mon Dec 8 20:10:29 2014 From: vadivel.openstack at gmail.com (Vadivel Poonathan) Date: Mon, 8 Dec 2014 12:10:29 -0800 Subject: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository? In-Reply-To: References: Message-ID: Hi Anne, I provided my comment in the review itself.. and pasted below for your quick view. Thanks, Vad -- Vadivel Poonathan12:05 PM I understand the "other drivers" mean the external drivers which are not part of the official openstack repository. So the reference drivers which are part of the official openstack repository will be "fully" documented and for the other such external drivers, this new idea of listing them with a short description and a link to vendor provided webpage is proposed!. 
So why again it is mentioned as ""Only drivers are covered that are contained in the official OpenStack project repository"" ???? - i 'm confused!. If this new proposal (short desc and external link) is again meant for only the drivers that are part of official openstack main repository - then why do we need this proposal itself?... I believe it originally triggered from the fact that we need a placeholder for listing the out-of-tree drivers. Since they are not part of the official openstack release/repository, they can not be documented or listed in the current existing documentation. Hence this idea of providing a placeholder with short desc and external link is proposed. Hence the out-of-tree vendors will maintain their plugin/drivers and detailed documentation. Pls. let me know if i missing something. On Thu, Dec 4, 2014 at 9:18 AM, Anne Gentle wrote: > Hi Vadivel, > We do have a blueprint in the docs-specs repo under review for driver > documentation and I'd like to get your input. > https://review.openstack.org/#/c/133372/ > > Here's a relevant excerpt: > > The documentation team will fully document the reference drivers as > specified below and just add short sections for other drivers. > > Guidelines for drivers that will be documented fully in the OpenStack > documentation: > > * The complete solution must be open source and use standard hardware > * The driver must be part of the respective OpenStack repository > * The driver is considered one of the reference drivers > > For documentation of other drivers, the following guidelines apply: > > * The Configuration Reference will contain a small section for each > driver, see below for details > * Only drivers are covered that are contained in the official > OpenStack project repository for drivers (for example in the main > project repository or the official "third party" repository). 
> > With this policy, the docs team will document in their guides the > following: > > * For cinder: volume drivers: document LVM only (TBD later: Samba, > glusterfs); backup drivers: document swift (TBD later: ceph) > * For glance: Document local storage, cinder, and swift as backends > * For neutron: document ML2 plug-in with the mechanisms drivers > OpenVSwitch and LinuxBridge > * For nova: document KVM (mostly), send Xen open source call for help > * For sahara: apache hadoop > * For trove: document all supported Open Source database engines like > MySQL. > > Let us know in the review itself if this answers your question about > third-party drivers not in an official repository. > Thanks, > Anne > > On Thu, Dec 4, 2014 at 9:59 AM, Vadivel Poonathan < > vadivel.openstack at gmail.com> wrote: > >> Hi Kyle and all, >> >> Was there any conclusion in the design summit or the meetings afterward >> about splitting the vendor plugins/drivers from the mainstream neutron and >> documentation of out-of-tree plugins/drivers?... >> >> Thanks, >> Vad >> -- >> >> >> On Thu, Oct 23, 2014 at 11:27 AM, Kyle Mestery >> wrote: >> >>> On Thu, Oct 23, 2014 at 12:35 PM, Vadivel Poonathan >>> wrote: >>> > Hi Kyle and Anne, >>> > >>> > Thanks for the clarifications... understood and it makes sense. >>> > >>> > However, per my understanding, the drivers (aka plugins) are meant to >>> be >>> > developed and supported by third-party vendors, outside of the >>> OpenStack >>> > community, and they are supposed to work as plug-n-play... they are >>> not part >>> > of the core OpenStack development, nor any of its components. If that >>> is the >>> > case, then why should OpenStack community include and maintain them as >>> part >>> > of it, for every release?... Wouldnt it be enough to limit the scope >>> with >>> > the plugin framework and built-in drivers such as LinuxBridge or OVS >>> etc?... >>> > not extending to commercial vendors?... 
(It is just a curious >>> question, >>> > forgive me if i missed something and correct me!). >>> > >>> You haven't misunderstood anything, we're in the process of splitting >>> these things out, and this will be a prime focus of the Neutron design >>> summit track at the upcoming summit. >>> >>> Thanks, >>> Kyle >>> >>> > At the same time, IMHO, there must be some reference or a page within >>> the >>> > scope of OpenStack documentation (not necessarily the core docs, but >>> some >>> > wiki page or reference link or so - as Anne suggested) to mention the >>> list >>> > of the drivers/plugins supported as of given release and may be an >>> external >>> > link to know more details about the driver, if the link is provided by >>> > respective vendor. >>> > >>> > >>> > Anyway, besides my opinion, the wiki page similar to hypervisor driver >>> would >>> > be good for now atleast, until the direction/policy level decision is >>> made >>> > to maintain out-of-tree plugins/drivers. >>> > >>> > >>> > Thanks, >>> > Vad >>> > -- >>> > >>> > >>> > >>> > >>> > On Thu, Oct 23, 2014 at 9:46 AM, Edgar Magana < >>> edgar.magana at workday.com> >>> > wrote: >>> >> >>> >> I second Anne?s and Kyle comments. Actually, I like very much the wiki >>> >> part to provide some visibility for out-of-tree plugins/drivers but >>> not into >>> >> the official documentation. >>> >> >>> >> Thanks, >>> >> >>> >> Edgar >>> >> >>> >> From: Anne Gentle >>> >> Reply-To: "OpenStack Development Mailing List (not for usage >>> questions)" >>> >> >>> >> Date: Thursday, October 23, 2014 at 8:51 AM >>> >> To: Kyle Mestery >>> >> Cc: "OpenStack Development Mailing List (not for usage questions)" >>> >> >>> >> Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update >>> >> about new vendor plugin, but without code in repository? 
>>> >> >>> >> >>> >> >>> >> On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery >>> >> wrote: >>> >>> >>> >>> Vad: >>> >>> >>> >>> The third-party CI is required for your upstream driver. I think >>> >>> what's different from my reading of this thread is the question of >>> >>> what is the requirement to have a driver listed in the upstream >>> >>> documentation which is not in the upstream codebase. To my knowledge, >>> >>> we haven't done this. Thus, IMHO, we should NOT be utilizing upstream >>> >>> documentation to document drivers which are themselves not upstream. >>> >>> When we split out the drivers which are currently upstream in neutron >>> >>> into a separate repo, they will still be upstream. So my opinion here >>> >>> is that if your driver is not upstream, it shouldn't be in the >>> >>> upstream documentation. But I'd like to hear others opinions as well. >>> >>> >>> >> >>> >> This is my sense as well. >>> >> >>> >> The hypervisor drivers are documented on the wiki, sometimes they're >>> >> in-tree, sometimes they're not, but the state of testing is >>> documented on >>> >> the wiki. I think we could take this approach for network and storage >>> >> drivers as well. >>> >> >>> >> https://wiki.openstack.org/wiki/HypervisorSupportMatrix >>> >> >>> >> Anne >>> >> >>> >>> >>> >>> Thanks, >>> >>> Kyle >>> >>> >>> >>> On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan >>> >>> wrote: >>> >>> > Kyle, >>> >>> > Gentle reminder... when you get a chance!.. >>> >>> > >>> >>> > Anne, >>> >>> > In case, if i need to send it to different group or email-id to >>> reach >>> >>> > Kyle >>> >>> > Mestery, pls. let me know. Thanks for your help. >>> >>> > >>> >>> > Regards, >>> >>> > Vad >>> >>> > -- >>> >>> > >>> >>> > >>> >>> > On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan >>> >>> > wrote: >>> >>> >> >>> >>> >> Hi Kyle, >>> >>> >> >>> >>> >> Can you pls. 
comment on this discussion and confirm the >>> requirements >>> >>> >> for >>> >>> >> getting out-of-tree mechanism_driver listed in the supported >>> >>> >> plugin/driver >>> >>> >> list of the Openstack Neutron docs. >>> >>> >> >>> >>> >> Thanks, >>> >>> >> Vad >>> >>> >> -- >>> >>> >> >>> >>> >> On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle >> > >>> >>> >> wrote: >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan >>> >>> >>> wrote: >>> >>> >>>> >>> >>> >>>> Hi, >>> >>> >>>> >>> >>> >>>> >>>> On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton >>> >>> >>>> >>>> >>> >>> >>>> >>>> wrote: >>> >>> >>>> >>>>> >>> >>> >>>> >>>>> I think you will probably have to wait until after the >>> summit >>> >>> >>>> >>>>> so >>> >>> >>>> >>>>> we can >>> >>> >>>> >>>>> see the direction that will be taken with the rest of the >>> >>> >>>> >>>>> in-tree >>> >>> >>>> >>>>> drivers/plugins. It seems like we are moving towards >>> removing >>> >>> >>>> >>>>> all >>> >>> >>>> >>>>> of them so >>> >>> >>>> >>>>> we would definitely need a solution to documenting >>> out-of-tree >>> >>> >>>> >>>>> drivers as >>> >>> >>>> >>>>> you suggested. >>> >>> >>>> >>> >>> >>>> [Vad] while i 'm waiting for the conclusion on this subject, i >>> 'm >>> >>> >>>> trying >>> >>> >>>> to setup the third-party CI/Test system and meet its >>> requirements to >>> >>> >>>> get my >>> >>> >>>> mechanism_driver listed in the Kilo's documentation, in >>> parallel. >>> >>> >>>> >>> >>> >>>> Couple of questions/confirmations before i proceed further on >>> this >>> >>> >>>> direction... >>> >>> >>>> >>> >>> >>>> 1) Is there anything more required other than the third-party >>> >>> >>>> CI/Test >>> >>> >>>> requirements ??.. like should I still need to go-through the >>> entire >>> >>> >>>> development process of submit/review/approval of the blue-print >>> and >>> >>> >>>> code of >>> >>> >>>> my ML2 driver which was already developed and in-use?... 
>>> >>> >>>> >>> >>> >>> >>> >>> >>> The neutron PTL Kyle Mestery can answer if there are any >>> additional >>> >>> >>> requirements. >>> >>> >>> >>> >>> >>>> >>> >>> >>>> 2) Who is the authority to clarify and confirm the above (and >>> how do >>> >>> >>>> i >>> >>> >>>> contact them)?... >>> >>> >>> >>> >>> >>> >>> >>> >>> Elections just completed, and the newly elected PTL is Kyle >>> Mestery, >>> >>> >>> >>> >>> >>> >>> http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html >>> . >>> >>> >>> >>> >>> >>>> >>> >>> >>>> >>> >>> >>>> Thanks again for your inputs... >>> >>> >>>> >>> >>> >>>> Regards, >>> >>> >>>> Vad >>> >>> >>>> -- >>> >>> >>>> >>> >>> >>>> On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle < >>> anne at openstack.org> >>> >>> >>>> wrote: >>> >>> >>>>> >>> >>> >>>>> >>> >>> >>>>> >>> >>> >>>>> On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan >>> >>> >>>>> wrote: >>> >>> >>>>>> >>> >>> >>>>>> Agreed on the requirements of test results to qualify the >>> vendor >>> >>> >>>>>> plugin to be listed in the upstream docs. >>> >>> >>>>>> Is there any procedure/infrastructure currently available for >>> this >>> >>> >>>>>> purpose?.. >>> >>> >>>>>> Pls. fwd any link/pointers on those info. >>> >>> >>>>>> >>> >>> >>>>> >>> >>> >>>>> Here's a link to the third-party testing setup information. >>> >>> >>>>> >>> >>> >>>>> http://ci.openstack.org/third_party.html >>> >>> >>>>> >>> >>> >>>>> Feel free to keep asking questions as you dig deeper. >>> >>> >>>>> Thanks, >>> >>> >>>>> Anne >>> >>> >>>>> >>> >>> >>>>>> >>> >>> >>>>>> Thanks, >>> >>> >>>>>> Vad >>> >>> >>>>>> -- >>> >>> >>>>>> >>> >>> >>>>>> On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki >>> >>> >>>>>> >>> >>> >>>>>> wrote: >>> >>> >>>>>>> >>> >>> >>>>>>> I agree with Kevin and Kyle. Even if we decided to use >>> separate >>> >>> >>>>>>> tree >>> >>> >>>>>>> for neutron >>> >>> >>>>>>> plugins and drivers, they still will be regarded as part of >>> the >>> >>> >>>>>>> upstream. 
>>> >>> >>>>>>> These plugins/drivers need to prove they are well integrated >>> with >>> >>> >>>>>>> Neutron master >>> >>> >>>>>>> in some way and gating integration proves it is well tested >>> and >>> >>> >>>>>>> integrated. >>> >>> >>>>>>> I believe it is a reasonable assumption and requirement that >>> a >>> >>> >>>>>>> vendor >>> >>> >>>>>>> plugin/driver >>> >>> >>>>>>> is listed in the upstream docs. This is a same kind of >>> question >>> >>> >>>>>>> as >>> >>> >>>>>>> what vendor plugins >>> >>> >>>>>>> are tested and worth documented in the upstream docs. >>> >>> >>>>>>> I hope you work with the neutron team and run the third party >>> >>> >>>>>>> requirements. >>> >>> >>>>>>> >>> >>> >>>>>>> Thanks, >>> >>> >>>>>>> Akihiro >>> >>> >>>>>>> >>> >>> >>>>>>> On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery >>> >>> >>>>>>> >>> >>> >>>>>>> wrote: >>> >>> >>>>>>> > On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton >>> >>> >>>>>>> > >>> >>> >>>>>>> > wrote: >>> >>> >>>>>>> >>>The OpenStack dev and docs team dont have to worry about >>> >>> >>>>>>> >>> gating/publishing/maintaining the vendor specific >>> >>> >>>>>>> >>> plugins/drivers. >>> >>> >>>>>>> >> >>> >>> >>>>>>> >> I disagree about the gating part. If a vendor wants to >>> have a >>> >>> >>>>>>> >> link >>> >>> >>>>>>> >> that >>> >>> >>>>>>> >> shows they are compatible with openstack, they should be >>> >>> >>>>>>> >> reporting >>> >>> >>>>>>> >> test >>> >>> >>>>>>> >> results on all patches. A link to a vendor driver in the >>> docs >>> >>> >>>>>>> >> should signify >>> >>> >>>>>>> >> some form of testing that the community is comfortable >>> with. >>> >>> >>>>>>> >> >>> >>> >>>>>>> > I agree with Kevin here. 
If you want to play upstream, in >>> >>> >>>>>>> > whatever >>> >>> >>>>>>> > form that takes by the end of Kilo, you have to work with >>> the >>> >>> >>>>>>> > existing >>> >>> >>>>>>> > third-party requirements and team to take advantage of >>> being a >>> >>> >>>>>>> > part >>> >>> >>>>>>> > of >>> >>> >>>>>>> > things like upstream docs. >>> >>> >>>>>>> > >>> >>> >>>>>>> > Thanks, >>> >>> >>>>>>> > Kyle >>> >>> >>>>>>> > >>> >>> >>>>>>> >> On Mon, Oct 13, 2014 at 11:33 AM, Vadivel Poonathan >>> >>> >>>>>>> >> wrote: >>> >>> >>>>>>> >>> >>> >>> >>>>>>> >>> Hi, >>> >>> >>>>>>> >>> >>> >>> >>>>>>> >>> If the plan is to move ALL existing vendor specific >>> >>> >>>>>>> >>> plugins/drivers >>> >>> >>>>>>> >>> out-of-tree, then having a place-holder within the >>> OpenStack >>> >>> >>>>>>> >>> domain would >>> >>> >>>>>>> >>> suffice, where the vendors can list their plugins/drivers >>> >>> >>>>>>> >>> along >>> >>> >>>>>>> >>> with their >>> >>> >>>>>>> >>> documentation as how to install and use etc. >>> >>> >>>>>>> >>> >>> >>> >>>>>>> >>> The main Openstack Neutron documentation page can >>> explain the >>> >>> >>>>>>> >>> plugin >>> >>> >>>>>>> >>> framework (ml2 type drivers, mechanism drivers, serviec >>> >>> >>>>>>> >>> plugin >>> >>> >>>>>>> >>> and so on) >>> >>> >>>>>>> >>> and its purpose/usage etc, then provide a link to refer >>> the >>> >>> >>>>>>> >>> currently >>> >>> >>>>>>> >>> supported vendor specific plugins/drivers for more >>> details. >>> >>> >>>>>>> >>> That >>> >>> >>>>>>> >>> way the >>> >>> >>>>>>> >>> documentation will be accurate to what is "in-tree" and >>> limit >>> >>> >>>>>>> >>> the >>> >>> >>>>>>> >>> documentation of external plugins/drivers to have just a >>> >>> >>>>>>> >>> reference link. So >>> >>> >>>>>>> >>> its now vendor's responsibility to keep their driver's >>> >>> >>>>>>> >>> up-to-date and their >>> >>> >>>>>>> >>> documentation accurate. 
The OpenStack dev and docs team >>> dont >>> >>> >>>>>>> >>> have >>> >>> >>>>>>> >>> to worry >>> >>> >>>>>>> >>> about gating/publishing/maintaining the vendor specific >>> >>> >>>>>>> >>> plugins/drivers. >>> >>> >>>>>>> >>> >>> >>> >>>>>>> >>> The built-in drivers such as LinuxBridge or OpenVSwitch >>> etc >>> >>> >>>>>>> >>> can >>> >>> >>>>>>> >>> continue >>> >>> >>>>>>> >>> to be "in-tree" and their documentation will be part of >>> main >>> >>> >>>>>>> >>> Neutron's docs. >>> >>> >>>>>>> >>> So the Neutron is guaranteed to work with built-in >>> >>> >>>>>>> >>> plugins/drivers as per >>> >>> >>>>>>> >>> the documentation and the user is informed to refer the >>> >>> >>>>>>> >>> "external >>> >>> >>>>>>> >>> vendor >>> >>> >>>>>>> >>> plug-in page" for additional/specific plugins/drivers. >>> >>> >>>>>>> >>> >>> >>> >>>>>>> >>> >>> >>> >>>>>>> >>> Thanks, >>> >>> >>>>>>> >>> Vad >>> >>> >>>>>>> >>> -- >>> >>> >>>>>>> >>> >>> >>> >>>>>>> >>> >>> >>> >>>>>>> >>> On Fri, Oct 10, 2014 at 8:10 PM, Anne Gentle >>> >>> >>>>>>> >>> >>> >>> >>>>>>> >>> wrote: >>> >>> >>>>>>> >>>> >>> >>> >>>>>>> >>>> >>> >>> >>>>>>> >>>> >>> >>> >>>>>>> >>>> On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton >>> >>> >>>>>>> >>>> wrote: >>> >>> >>>>>>> >>>>> >>> >>> >>>>>>> >>>>> I think you will probably have to wait until after the >>> >>> >>>>>>> >>>>> summit >>> >>> >>>>>>> >>>>> so we can >>> >>> >>>>>>> >>>>> see the direction that will be taken with the rest of >>> the >>> >>> >>>>>>> >>>>> in-tree >>> >>> >>>>>>> >>>>> drivers/plugins. It seems like we are moving towards >>> >>> >>>>>>> >>>>> removing >>> >>> >>>>>>> >>>>> all of them so >>> >>> >>>>>>> >>>>> we would definitely need a solution to documenting >>> >>> >>>>>>> >>>>> out-of-tree >>> >>> >>>>>>> >>>>> drivers as >>> >>> >>>>>>> >>>>> you suggested. 
>>> >>> >>>>>>> >>>>> >>> >>> >>>>>>> >>>>> However, I think the minimum requirements for having a >>> >>> >>>>>>> >>>>> driver >>> >>> >>>>>>> >>>>> being >>> >>> >>>>>>> >>>>> documented should be third-party testing of Neutron >>> >>> >>>>>>> >>>>> patches. >>> >>> >>>>>>> >>>>> Otherwise the >>> >>> >>>>>>> >>>>> docs will become littered with a bunch of links to >>> >>> >>>>>>> >>>>> drivers/plugins with no >>> >>> >>>>>>> >>>>> indication of what actually works, which ultimately >>> makes >>> >>> >>>>>>> >>>>> Neutron look bad. >>> >>> >>>>>>> >>>> >>> >>> >>>>>>> >>>> >>> >>> >>>>>>> >>>> This is my line of thinking as well, expanded to >>> "ultimately >>> >>> >>>>>>> >>>> makes >>> >>> >>>>>>> >>>> OpenStack docs look bad" -- a perception I want to >>> avoid. >>> >>> >>>>>>> >>>> >>> >>> >>>>>>> >>>> Keep the viewpoints coming. We have a crucial balancing >>> act >>> >>> >>>>>>> >>>> ahead: users >>> >>> >>>>>>> >>>> need to trust docs and trust the drivers. Ultimately the >>> >>> >>>>>>> >>>> responsibility for >>> >>> >>>>>>> >>>> the docs is in the hands of the driver contributors so >>> it >>> >>> >>>>>>> >>>> seems >>> >>> >>>>>>> >>>> those should >>> >>> >>>>>>> >>>> be on a domain name where drivers control publishing and >>> >>> >>>>>>> >>>> OpenStack docs are >>> >>> >>>>>>> >>>> not a gatekeeper, quality checker, reviewer, or >>> publisher. >>> >>> >>>>>>> >>>> >>> >>> >>>>>>> >>>> We have documented the status of hypervisor drivers on >>> an >>> >>> >>>>>>> >>>> OpenStack wiki >>> >>> >>>>>>> >>>> page. [1] To me, that type of list could be maintained >>> on >>> >>> >>>>>>> >>>> the >>> >>> >>>>>>> >>>> wiki page >>> >>> >>>>>>> >>>> better than in the docs themselves. Thoughts? Feelings? >>> More >>> >>> >>>>>>> >>>> discussion, >>> >>> >>>>>>> >>>> please. And thank you for the responses so far. 
>>> >>> >>>>>>> >>>> Anne >>> >>> >>>>>>> >>>> >>> >>> >>>>>>> >>>> [1] >>> https://wiki.openstack.org/wiki/HypervisorSupportMatrix >>> >>> >>>>>>> >>>> >>> >>> >>>>>>> >>>>> >>> >>> >>>>>>> >>>>> >>> >>> >>>>>>> >>>>> On Fri, Oct 10, 2014 at 1:28 PM, Vadivel Poonathan >>> >>> >>>>>>> >>>>> wrote: >>> >>> >>>>>>> >>>>>> >>> >>> >>>>>>> >>>>>> Hi Anne, >>> >>> >>>>>>> >>>>>> >>> >>> >>>>>>> >>>>>> Thanks for your immediate response!... >>> >>> >>>>>>> >>>>>> >>> >>> >>>>>>> >>>>>> Just to clarify... I have developed and maintaining a >>> >>> >>>>>>> >>>>>> Neutron >>> >>> >>>>>>> >>>>>> plug-in >>> >>> >>>>>>> >>>>>> (ML2 mechanism_driver) since Grizzly and now it is >>> >>> >>>>>>> >>>>>> up-to-date >>> >>> >>>>>>> >>>>>> with Icehouse. >>> >>> >>>>>>> >>>>>> But it was never listed nor part of the main Openstack >>> >>> >>>>>>> >>>>>> releases. Now i would >>> >>> >>>>>>> >>>>>> like to have my plugin mentioned as "supported >>> >>> >>>>>>> >>>>>> plugin/mechanism_driver for >>> >>> >>>>>>> >>>>>> so and so vendor equipments" in the >>> docs.openstack.org, >>> >>> >>>>>>> >>>>>> but >>> >>> >>>>>>> >>>>>> without having >>> >>> >>>>>>> >>>>>> the actual plugin code to be posted in the main >>> Openstack >>> >>> >>>>>>> >>>>>> GIT >>> >>> >>>>>>> >>>>>> repository. >>> >>> >>>>>>> >>>>>> >>> >>> >>>>>>> >>>>>> Reason is that I dont have plan/bandwidth to go thru >>> the >>> >>> >>>>>>> >>>>>> entire process >>> >>> >>>>>>> >>>>>> of new plugin blue-print/development/review/testing >>> etc as >>> >>> >>>>>>> >>>>>> required by the >>> >>> >>>>>>> >>>>>> Openstack development community. Bcos this is already >>> >>> >>>>>>> >>>>>> developed, tested and >>> >>> >>>>>>> >>>>>> released to some customers directly. Now I just want >>> to >>> >>> >>>>>>> >>>>>> get it >>> >>> >>>>>>> >>>>>> to the >>> >>> >>>>>>> >>>>>> official Openstack documentation, so that more people >>> can >>> >>> >>>>>>> >>>>>> get >>> >>> >>>>>>> >>>>>> this and use. 
> The plugin package is made available to the public from the Ubuntu
> repository along with the necessary documentation, so people can get it
> directly from the Ubuntu repository and use it. All I need is to get it
> listed on docs.openstack.org so that people know that it exists and can
> be used with any OpenStack.
>
> Pls. confirm whether this is something possible?...
>
> Thanks again!..
>
> Vad
> --
>
> On Fri, Oct 10, 2014 at 12:18 PM, Anne Gentle wrote:
>> On Fri, Oct 10, 2014 at 2:11 PM, Vadivel Poonathan wrote:
>>> Hi,
>>>
>>> How to include a new vendor plug-in (aka mechanism_driver in the ML2
>>> framework) into the OpenStack documentation?.. In other words, is it
>>> possible to include a new plug-in in the OpenStack documentation page
>>> without having the actual plug-in code as part of the OpenStack neutron
>>> repository?... The actual plug-in is posted and available for the
>>> public to download as an Ubuntu package. But I need to mention
>>> somewhere in the OpenStack documentation that this new plugin is
>>> available for the public to use, along with its documentation.
>>
>> We definitely want you to include pointers to vendor documentation in
>> the OpenStack docs, but I'd prefer to make sure they're gate tested
>> before they get listed on docs.openstack.org. Drivers change enough
>> release-to-release that it's difficult to keep up maintenance.
>>
>> Lately I've been talking to driver contributors (hypervisor, storage,
>> networking) about the out-of-tree changes possible. I'd like to
>> encourage even out-of-tree drivers to get listed, but to store their
>> main documents outside of docs.openstack.org, if they are gate-tested.
>>
>> Anyone have other ideas here?
>>
>> Looping in the OpenStack-docs mailing list also.
>> Anne
>>
>>> Pls. provide some insights into whether it is possible?.. and any
>>> further info on this?..
>>>
>>> Thanks,
>>> Vad
>>> --

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jaypipes at gmail.com Mon Dec 8 20:19:40 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 08 Dec 2014 15:19:40 -0500 Subject: [openstack-dev] [nova] Adding temporary code to nova to work around bugs in system utilities In-Reply-To: <20141203090022.GS84915@thor.bakeyournoodle.com> References: <20141203090022.GS84915@thor.bakeyournoodle.com> Message-ID: <548607DC.3020305@gmail.com> On 12/03/2014 04:00 AM, Tony Breeds wrote: > Hi All, > I'd like to accomplish 2 things with this message: > 1) Unblock (one way or another) https://review.openstack.org/#/c/123957 > 2) Create some form of consensus on when it's okay to add temporary code to > nova to work around bugs in external utilities. > > So some background on this specific issue. The issue was first reported in > July 2014 at [1] and then clarified at [2]. The synopsis of the bug is that > calling qemu-img convert -O raw /may/ generate a corrupt output file if the > source image isn't fully flushed to disk. The coreutils folk discovered > something similar in 2011 *sigh* > > The clear and correct solution is to ensure that qemu-img uses > FIEMAP_FLAG_SYNC. This in turn produces a measurable slowdown in that code > path, so additionally it's best if qemu-img uses an alternate method to > determine data status in a disk image. This has been done and will be included > in qemu 2.2.0 when it's released. These fixes prompted a more substantial > rework of that code in qemu. Which is awesome but not *required* to fix the > bug in qemu. > > While we wait for $distros to get the fixed qemu nova is still vulnerable to > the bug. To that end I proposed a work around in nova that forces images > retrieved from glance to disk with an fsync() prior to calling qemu-img on > them. I admit that this is ugly and has a performance impact. > > In order to reduce the impact of the fsync() I considered: > 1) Testing the qemu version and only fsync()ing on affected versions. 
> - Vendors will backport the fix to their version of qemu. The fixed version
> will still claim to be 2.1.0 (for example) and therefore trigger the
> fsync() when not required. Given how unreliable this will be I dismissed
> it as an option
>
> 2) API Change
> - In the case of this specific bug we only need to fsync() in certain
> scenarios. It would be easy to add a flag to IMAGE_API.download() to
> determine if this fsync() is required. This has the nice property of only
> having a performance impact in the suspect case (personally I'll take
> slow-and-correct over fast-and-buggy any day). My hesitation is that
> after we've modified the API it's very hard to remove that change when we
> decide the workaround is redundant.
>
> 3) Config file option
> - For many of the same reasons as the API change this seemed like a bad
> idea.
>
> Does anyone have any other ideas?
>
> One thing that I haven't done is measure the impact of the fsync() on any
> reasonable workload. This is mainly because I don't really know how. Sure I
> could do some statistics in devstack but I don't really think they'd be
> meaningful. Also the size of the image in glance is fairly important. An
> fsync() of a 100Gb image is many times more painful than a 1Gb image.
>
> While in Paris I was asked to look at other code paths in nova where we use
> qemu-img convert. I'm doing this analysis. To date I have some suspicions
> that snapshot (and migration) are affected, but no data that confirms or
> disproves that. I continue to look at the appropriate code in nova, libvirt and
> qemu.
>
> I understand that there is more work to be done in this area, and I'm happy to
> do it. Having said that from where I sit that work is not directly related to
> the bug that started this.
>
> As the idea is to remove this code as soon as all the distros we care about
> have a fixed qemu I started an albeit brief discussion here[3] on which distros
> are in that list.
Armed with that list I have opened (or am in the process of > opening) bugs for each version of each distribution to make them aware of the > issue and the fix. I have a status page at [4]. > > okay I think I'm done raving. > > So moving forward: > > 1) So what should I do with the open review? I reviewed the patch. I don't mind the idea of a [workarounds] section of configuration options, but I had an issue with where that code was executed. > 2) What can we learn from this in terms of how we work around key utilities > that are not in our direct power to change. > - Is taking ugly code for "some time" okay? I understand that this is a > complex issue as we're relying on $developer to be around (or leave enough > information for those that follow) to determine when it's okay to remove > the ugliness. I think it would be fine to have a [workarounds] config section for just this purpose. Best, -jay From mestery at mestery.com Mon Dec 8 20:29:01 2014 From: mestery at mestery.com (Kyle Mestery) Date: Mon, 8 Dec 2014 14:29:01 -0600 Subject: [openstack-dev] [neutron] FYI: I just abandoned a large pile of specs Message-ID: Folks, not only is today Spec Proposal Deadline (SPD), I also made it "clear out all old specs" day. I went in and abandoned all specs which were still out there against Juno. If your spec was abandoned and you were miraculously going to propose a new version today during SPD, please do so. Otherwise, I think this will clear up the review queue quite a bit. Thanks! Kyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Dec 8 20:58:28 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 8 Dec 2014 15:58:28 -0500 Subject: [openstack-dev] [oslo] interesting problem with config filter Message-ID: <4BCA7D02-38D6-4B5B-A496-8EB259C1A792@doughellmann.com> As we?ve discussed a few times, we want to isolate applications from the configuration options defined by libraries. 
One way we have of doing that is the ConfigFilter class in oslo.config. When a regular ConfigOpts instance is wrapped with a filter, a library can register new options on the filter that are not visible to anything that doesn't have the filter object.

Unfortunately, the Neutron team has identified an issue with this approach. We have a bug report [1] from them about the way we're using config filters in oslo.concurrency specifically, but the issue applies to their use everywhere. The neutron tests set the default for oslo.concurrency's lock_path variable to "$state_path/lock", and the state_path option is defined in their application. With the filter in place, interpolation of $state_path to generate the lock_path value fails because state_path is not known to the ConfigFilter instance.

The reverse would also happen (if the value of state_path was somehow defined to depend on lock_path), and that's actually a bigger concern to me. A deployer should be able to use interpolation anywhere, and not worry about whether the options are in parts of the code that can see each other. The values are all in one file, as far as they know, and so interpolation should "just work".

I see a few solutions:

1. Don't use the config filter at all.
2. Make the config filter able to add new options and still see everything else that is already defined (only filter in one direction).
3. Leave things as they are, and make the error message better.

Because of the deployment implications of using the filter, I'm inclined to go with choice 1 or 2. However, choice 2 leaves open the possibility of a deployer wanting to use the value of an option defined by one filtered set of code when defining another. I don't know how frequently that might come up, but it seems like the error would be very confusing, especially if both options are set in the same config file. I think that leaves option 1, which means our plans for hiding options from applications need to be rethought.
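To make the failure mode concrete, here is a toy model of filtered option visibility with $name interpolation. This is illustrative only: the Conf class below is not oslo.config's real API, just a sketch of the lookup rules under discussion.

```python
import string


class Conf:
    """Toy option store with $name interpolation (NOT oslo.config)."""

    def __init__(self, parent=None, see_parent=False):
        self._opts = {}
        self._parent = parent
        self._see_parent = see_parent

    def register(self, name, default):
        self._opts[name] = default

    def _visible(self):
        # A one-directional filter sees its own options plus the parent's;
        # the parent never sees the filter's options.
        opts = {}
        if self._parent is not None and self._see_parent:
            opts.update(self._parent._visible())
        opts.update(self._opts)
        return opts

    def get(self, name):
        visible = self._visible()
        return string.Template(visible[name]).substitute(visible)


app = Conf()
app.register('state_path', '/var/lib/neutron')

# Fully isolated filter (roughly the current behaviour): $state_path
# cannot be resolved from inside the filtered view.
isolated = Conf(parent=app)
isolated.register('lock_path', '$state_path/lock')
try:
    isolated.get('lock_path')
except KeyError:
    print('interpolation fails: state_path is not visible to the filter')

# Choice 2, filtering in one direction only: the library's view resolves.
one_way = Conf(parent=app, see_parent=True)
one_way.register('lock_path', '$state_path/lock')
print(one_way.get('lock_path'))  # /var/lib/neutron/lock

# But the reverse direction still fails: the application cannot
# reference the filtered lock_path option.
try:
    app.get('lock_path')
except KeyError:
    print('reverse direction still fails')
```

The one-directional variant corresponds to choice 2: the library can interpolate application options, but an application (or deployer) referencing lock_path still hits the confusing error described above.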
Does anyone else see another solution that I'm missing?

Doug

[1] https://bugs.launchpad.net/oslo.config/+bug/1399897

From fungi at yuggoth.org Mon Dec 8 21:12:24 2014
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 8 Dec 2014 21:12:24 +0000
Subject: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support
In-Reply-To: <20141208104536.GC14205@tesla.redhat.com>
References: <5481FE7D.7020804@linux.vnet.ibm.com> <54820841.6080200@dague.net> <548211BC.6070900@linux.vnet.ibm.com> <20141208104536.GC14205@tesla.redhat.com>
Message-ID: <20141208211224.GN2497@yuggoth.org>

On 2014-12-08 11:45:36 +0100 (+0100), Kashyap Chamarthy wrote:
> As Dan Berrangé noted, it's nearly impossible to reproduce this issue
> independently outside of the OpenStack gating environment. I brought this up
> at the recently concluded KVM Forum earlier this October. To debug this
> any further, one of the QEMU block layer developers asked if we can get
> the QEMU instance running on a gate run under `gdb` (IIRC, danpb suggested
> this too, previously) to get further tracing details.

We document thoroughly how to reproduce the environments we use for testing OpenStack. There's nothing rarified about "a Gate run" that anyone with access to a public cloud provider would be unable to reproduce, save being able to run it over and over enough times to expose less frequent failures.

> FWIW, I myself couldn't reproduce it independently via libvirt
> alone or via QMP (QEMU Machine Protocol) commands.
>
> Dan's workaround ("enable it permanently, except for under the
> gate") sounds sensible to me. [...]

I'm dubious of this as it basically says "we know this breaks sometimes, so we're going to stop testing that it works at all and possibly let it get even more broken, but you should be safe to rely on it anyway." The QA team tries very hard to make our integration testing environment mimic real-world deployment configurations as closely as possible.
If these sorts of bugs emerge more often because of, for example, resource constraints in the test environment, then it should be entirely likely they'd also be seen in production with the same frequency if run on similarly constrained equipment. And as we've observed in the past, any code path we stop testing quickly accumulates new bugs that go unnoticed until they impact someone's production environment at 3am.
--
Jeremy Stanley

From marun at redhat.com Mon Dec 8 21:51:27 2014
From: marun at redhat.com (Maru Newby)
Date: Mon, 8 Dec 2014 14:51:27 -0700
Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition
In-Reply-To: References: Message-ID:

On Dec 7, 2014, at 10:51 AM, Gary Kotton wrote:
> Hi Kyle,
> I am not missing the point. I understand the proposal. I just think that it
> has some shortcomings (unless I misunderstand, which will certainly not be
> the first time and most definitely not the last). The thinning out is to have
> a shim in place. I understand this and this will be the entry point for the
> plugin. I do not have a concern for this. My concern is that we are not doing
> this with ML2 off the bat. That should lead by example as it is our reference
> architecture. Let's not kid anyone: we are going to hit some problems with
> the decomposition. I would prefer that it be done with the default
> implementation. Why?

The proposal is to move vendor-specific logic out of the tree to increase vendor control over such code while decreasing load on reviewers. ML2 doesn't contain vendor-specific logic - that's the province of ML2 drivers - so it is not a good target for the proposed decomposition by itself.

> - Because we will fix them quicker, as it is something that prevents Neutron from moving forwards
> - We will just need to fix in one place first and not in N (where N is the number of vendor plugins)
> - This is a community effort, so we will have a lot more eyes on it
> - It will provide a reference architecture for all new plugins that want to be added to the tree
> - It will provide a working example for plugins that are already in tree and are to be replaced by the shim
>
> If we really want to do this, we can say freeze all development (which is just approvals for patches) for a few days so that we can just focus on this. I stated what I think should be the process on the review. For those who do not feel like finding the link:
> - Create a StackForge project for ML2
> - Create the shim in Neutron
> - Update devstack to use the two repos and the shim
>
> When #3 is up and running we switch to that as the gate. Then we start a stopwatch on all other plugins.

As was pointed out on the spec (see Miguel's comment on r15), the ML2 plugin and the OVS mechanism driver need to remain in the main Neutron repo for now. Neutron gates on ML2+OVS, and landing a breaking change in the Neutron repo along with its corresponding fix to a separate ML2 repo would be all but impossible under the current integrated gating scheme. Plugins/drivers that do not gate Neutron have no such constraint.

Maru

> Sure, I'll catch you on IRC tomorrow. I guess that you guys will bash out the details at the meetup. Sadly I will not be able to attend, so you will have to delay on the tar and feathers.
> Thanks
> Gary
>
>
> From: "mestery at mestery.com"
> Reply-To: OpenStack List
> Date: Sunday, December 7, 2014 at 7:19 PM
> To: OpenStack List
> Cc: "openstack at lists.openstack.org"
> Subject: Re: [openstack-dev] [Neutron] Core/Vendor code decomposition
>
> Gary, you are still missing the point of this proposal. Please see my comments in review. We are not forcing things out of tree, we are thinning them. The text you quoted in the review makes that clear. We will look at further decomposing ML2 post Kilo, but we have to be realistic with what we can accomplish during Kilo.
> Find me on IRC Monday morning and we can discuss further if you still have questions and concerns.
>
> Thanks!
> Kyle
>
> On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton wrote:
>> Hi,
>> I have raised my concerns on the proposal. I think that all plugins should be treated on an equal footing. My main concern is that having the ML2 plugin in tree whilst the others are moved out of tree will be problematic. I think that the model will be complete if ML2 was also out of tree. This will help crystallize the idea and make sure that the model works correctly.
>> Thanks
>> Gary
>>
>> From: "Armando M."
>> Reply-To: OpenStack List
>> Date: Saturday, December 6, 2014 at 1:04 AM
>> To: OpenStack List , "openstack at lists.openstack.org"
>> Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition
>>
>> Hi folks,
>>
>> For a few weeks now the Neutron team has worked tirelessly on [1].
>>
>> This initiative stems from the fact that as the project matures, evolution of processes and contribution guidelines need to evolve with it. This is to ensure that the project can keep on thriving in order to meet the needs of an ever growing community.
>>
>> The effort of documenting intentions, and fleshing out the various details of the proposal is about to reach an end, and we'll soon kick the tires to put the proposal into practice. Since the spec has grown pretty big, I'll try to capture the tl;dr below.
>>
>> If you have any comments please do not hesitate to raise them here and/or reach out to us.
>>
>> tl;dr
>>
>> From the Kilo release, we'll initiate a set of steps to change the following areas:
>> - Code structure: every plugin or driver that exists or wants to exist as part of the Neutron project is decomposed into a slim vendor integration (which lives in the Neutron repo), plus a bulkier vendor library (which lives in an independent publicly available repo);
>> - Contribution process: this extends to the following aspects:
>>   - Design and Development: the process is largely unchanged for the part that pertains to the vendor integration; the maintainer team is fully self-governed for the design and development of the vendor library;
>>   - Testing and Continuous Integration: maintainers will be required to support their vendor integration with 3rd party CI testing; the requirements for 3rd party CI testing are largely unchanged;
>>   - Defect management: the process is largely unchanged; issues affecting the vendor library can be tracked with whichever tool/process the maintainer sees fit. In cases where vendor library fixes need to be reflected in the vendor integration, the usual OpenStack defect management applies.
>> - Documentation: there will be some changes to the way plugins and drivers are documented, with the intention of promoting discoverability of the integrated solutions.
>> - Adoption and transition plan: we strongly advise maintainers to stay abreast of the developments of this effort, as their code, their CI, etc. will be affected. The core team will provide guidelines and support throughout this cycle to ensure a smooth transition.
>> To learn more, please refer to [1].
>> >> Many thanks, >> Armando >> >> [1] https://review.openstack.org/#/c/134680 >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From melwittt at gmail.com Mon Dec 8 22:07:17 2014 From: melwittt at gmail.com (melanie witt) Date: Mon, 8 Dec 2014 14:07:17 -0800 Subject: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support In-Reply-To: <20141208211224.GN2497@yuggoth.org> References: <5481FE7D.7020804@linux.vnet.ibm.com> <54820841.6080200@dague.net> <548211BC.6070900@linux.vnet.ibm.com> <20141208104536.GC14205@tesla.redhat.com> <20141208211224.GN2497@yuggoth.org> Message-ID: On Dec 8, 2014, at 13:12, Jeremy Stanley wrote: > I'm dubious of this as it basically says "we know this breaks > sometimes, so we're going to stop testing that it works at all and > possibly let it get even more broken, but you should be safe to rely > on it anyway." +1, it seems bad to enable something everywhere *except* the gate. I prefer the original suggestion to include a config option that is by default disabled that a user can enable if they want. From what I understand, the feature works "most of the time" and I don't see why a user is guaranteed not to encounter the same conditions that happen in the gate. For that reason I think it makes sense to be an experimental, opt-in by config, feature. melanie (melwitt) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 496 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From mikal at stillhq.com Mon Dec 8 22:10:42 2014
From: mikal at stillhq.com (Michael Still)
Date: Tue, 9 Dec 2014 09:10:42 +1100
Subject: [openstack-dev] [Compute][Nova] Mid-cycle meetup details for kilo
In-Reply-To: References: Message-ID:

Just a reminder that registration for the Nova mid-cycle is now open. We're currently 50% "sold", so early signup will help us work out if we need to add more seats or not.

https://www.eventbrite.com.au/e/openstack-nova-kilo-mid-cycle-developer-meetup-tickets-14767182039

Thanks,
Michael

On Thu, Dec 4, 2014 at 10:18 AM, Michael Still wrote:
> Sigh, sorry. It is of course the Kilo meetup:
>
> https://www.eventbrite.com.au/e/openstack-nova-kilo-mid-cycle-developer-meetup-tickets-14767182039
>
> Michael
>
> On Thu, Dec 4, 2014 at 10:16 AM, Michael Still wrote:
>> I've just created the signup page for this event. It's here:
>>
>> https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-14767182039
>>
>> Cheers,
>> Michael
>>
>> On Wed, Oct 15, 2014 at 3:45 PM, Michael Still wrote:
>>> Hi.
>>>
>>> I am pleased to announce details for the Kilo Compute mid-cycle
>>> meetup, but first some background about how we got here.
>>>
>>> Two companies actively involved in OpenStack came forward with offers
>>> to host the Compute meetup. However, one of those companies has
>>> gracefully decided to wait until the L release because of the cold
>>> conditions at their proposed location (think several feet of snow).
>>>
>>> So instead, we're left with California!
>>>
>>> The mid-cycle meetup will be from 26 to 28 January 2015, at the VMWare
>>> offices in Palo Alto, California.
>>>
>>> Thanks to VMWare for stepping up and offering to host. It sure does
>>> make my life easy.
>>> >>> More details will be forthcoming closer to the event, but I wanted to >>> give people as much notice as possible about dates and location so >>> they can start negotiating travel if they want to come. >>> >>> Cheers, >>> Michael >>> >>> -- >>> Rackspace Australia >> >> >> >> -- >> Rackspace Australia > > > > -- > Rackspace Australia -- Rackspace Australia From john.griffith8 at gmail.com Mon Dec 8 22:19:33 2014 From: john.griffith8 at gmail.com (John Griffith) Date: Mon, 8 Dec 2014 15:19:33 -0700 Subject: [openstack-dev] [Cinder] In-Reply-To: References: Message-ID: cinder service-list will show all the backends Cinder knows about and their status. On Sun, Dec 7, 2014 at 2:52 AM, Pradip Mukhopadhyay wrote: > Hi, > > > Is there a way to find out/list down the backends discovered for Cinder? > > > There is, I guess, no API for get the list of backends. > > > > Thanks, > Pradip > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zbitter at redhat.com Mon Dec 8 22:19:45 2014 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 08 Dec 2014 17:19:45 -0500 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> Message-ID: <54862401.3020508@redhat.com> On 08/12/14 07:00, Murugan, Visnusaran wrote: > > Hi Zane & Michael, > > Please have a look @ https://etherpad.openstack.org/p/execution-stream-and-aggregator-based-convergence > > Updated with a combined approach which does not require persisting graph and backup stack removal. 
Well, we still have to persist the dependencies of each version of a resource _somehow_, because otherwise we can't know how to clean them up in the correct order. But what I think you meant to say is that this approach doesn't require it to be persisted in a separate table where the rows are marked as traversed as we work through the graph. > This approach reduces DB queries by waiting for completion notification on a topic. The drawback I see is that delete stack stream will be huge as it will have the entire graph. We can always dump such data in ResourceLock.data Json and pass a simple flag "load_stream_from_db" to converge RPC call as a workaround for delete operation. This seems to be essentially equivalent to my 'SyncPoint' proposal[1], with the key difference that the data is stored in-memory in a Heat engine rather than the database. I suspect it's probably a mistake to move it in-memory for similar reasons to the argument Clint made against synchronising the marking off of dependencies in-memory. The database can handle that and the problem of making the DB robust against failures of a single machine has already been solved by someone else. If we do it in-memory we are just creating a single point of failure for not much gain. (I guess you could argue it doesn't matter, since if any Heat engine dies during the traversal then we'll have to kick off another one anyway, but it does limit our options if that changes in the future.) It's not clear to me how the 'streams' differ in practical terms from just passing a serialisation of the Dependencies object, other than being incomprehensible to me ;). 
The current Dependencies implementation (1) is a very generic implementation of a DAG, (2) works and has plenty of unit tests, (3) has, with I think one exception, a pretty straightforward API, (4) has a very simple serialisation, returned by the edges() method, which can be passed back into the constructor to recreate it, and (5) has an API that is to some extent relied upon by resources, and so won't likely be removed outright in any event. Whatever code we need to handle dependencies ought to just build on this existing implementation.

I think the difference may be that the streams only include the *shortest* paths (there will often be more than one) to each resource. i.e.

    A <------- B <------- C
    ^                     |
    |                     |
    +---------------------+

can just be written as:

    A <------- B <------- C

because there's only one order in which that can execute anyway. (If we're going to do this though, we should just add a method to the dependencies.Graph class to delete redundant edges, not create a whole new data structure.)

There is a big potential advantage here in that it reduces the theoretical maximum number of edges in the graph from O(n^2) to O(n). (Although in practice real templates are typically not likely to have such dense graphs.)

There's a downside to this too though: say that A in the above diagram is replaced during an update. In that case not only B but also C will need to figure out what the latest version of A is. One option here is to pass that data along via B, but that will become very messy to implement in a non-trivial example. The other would be for C to go search in the database for resources with the same name as A and the current traversal_id marked as the latest.
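As an aside, the "delete redundant edges" step is a transitive reduction, which for a small dependency graph can be sketched roughly as follows. This is illustrative only; it is not the actual heat dependencies.Graph code, and it assumes the graph is acyclic.

```python
def prune_redundant_edges(graph):
    """Drop every edge u->v for which some longer u->...->v path exists.

    graph maps each node to the set of nodes it depends on directly;
    the input is assumed to be acyclic.
    """
    def reachable(start, target, skip_edge):
        # Depth-first search for a path start->...->target that does not
        # use the single edge being tested for redundancy.
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(nxt for nxt in graph[node]
                         if (node, nxt) != skip_edge)
        return False

    return {u: {v for v in vs if not reachable(u, v, (u, v))}
            for u, vs in graph.items()}


# C depends on both B and A, but B already depends on A, so the direct
# C->A edge is redundant (the example in the diagram above).
deps = {'C': {'B', 'A'}, 'B': {'A'}, 'A': set()}
print(prune_redundant_edges(deps))  # {'C': {'B'}, 'B': {'A'}, 'A': set()}
```

This is the "shortest paths only" form: the reduced graph preserves execution order while capping edge count, at the cost of losing the direct edge on which a distant dependent could have recorded which version of A it needs.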
But that not only creates a concurrency problem we didn't have before (A could have been updated with a new traversal_id at some point after C had established that the current traversal was still valid but before it went looking for A), it also eliminates all of the performance gains from removing that edge in the first place. [1] https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/sync_point.py > To Stop current stack operation, we will use your traversal_id based approach. +1 :) > If in case you feel Aggregator model creates more queues, then we might have to poll DB to get resource status. (Which will impact performance adversely :) ) For the reasons given above I would vote for doing this in the DB. I agree there will be a performance penalty for doing so, because we'll be paying for robustness. > Lock table: name(Unique - Resource_id), stack_id, engine_id, data (Json to store stream dict) Based on our call on Thursday, I think you're taking the idea of the Lock table too literally. The point of referring to locks is that we can use the same concepts as the Lock table relies on to do atomic updates on a particular row of the database, and we can use those atomic updates to prevent race conditions when implementing SyncPoints/Aggregators/whatever you want to call them. It's not that we'd actually use the Lock table itself, which implements a mutex and therefore offers only a much slower and more stateful way of doing what we want (lock mutex, change data, unlock mutex). cheers, Zane. > Your thoughts. > Vishnu (irc: ckmvishnu) > Unmesh (irc: unmeshg) From john.griffith8 at gmail.com Mon Dec 8 22:22:13 2014 From: john.griffith8 at gmail.com (John Griffith) Date: Mon, 8 Dec 2014 15:22:13 -0700 Subject: [openstack-dev] [Cinder] Listing of backends In-Reply-To: References: Message-ID: On Sun, Dec 7, 2014 at 5:35 AM, Pradip Mukhopadhyay wrote: > Thanks! > > One more question. 
>
> Is there any equivalent API to add keys to the volume-type? I understand we
> have APIs for creating a volume-type, but how about adding a key-value pair (say
> I want to add a key to the volume-type as backend-name="my_iscsi_backend")?
>
> Thanks,
> Pradip
>
> On Sun, Dec 7, 2014 at 4:25 PM, Duncan Thomas
> wrote:
>>
>> See https://review.openstack.org/#/c/119938/ - now merged. I don't believe
>> the python-cinderclient side work has been done yet, nor anything in
>> Horizon, but the API itself is now there.
>>
>> On 7 December 2014 at 09:53, Pradip Mukhopadhyay
>> wrote:
>>>
>>> Hi,
>>>
>>> Is there a way to find out/list down the backends discovered for Cinder?
>>>
>>> There is, I guess, no API to get the list of backends.
>>>
>>> Thanks,
>>> Pradip
>>>
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> --
>> Duncan Thomas
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Please try not to double post if you could... Again, in answer to your first question, you can do "cinder service-list" to show the backends that are in use and their status. Extra specs are added with the "type-key" command, so say you have volume-type = foo:

cinder type-key foo set key=value

From sbalukoff at bluebox.net Mon Dec 8 22:23:06 2014
From: sbalukoff at bluebox.net (Stephen Balukoff)
Date: Mon, 8 Dec 2014 14:23:06 -0800
Subject: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.
In-Reply-To: References: <1416867738.3960.19.camel@localhost> <1417737958.4577.25.camel@localhost> Message-ID:

So... I should probably note that I see the case where a user actually shares objects as being the exception. I expect that 90% of deployments will never need to share objects, except for a few cases; those cases, all 1:N relationships, are:

* Loadbalancers must be able to have many Listeners
* When L7 functionality is introduced, L7 policies must be able to refer to the same Pool under a single Listener. (That is to say, sharing Pools under the scope of a single Listener makes sense, but only after L7 policies are introduced.)

I specifically see the following kinds of sharing having near zero demand:

* Listeners shared across multiple loadbalancers
* Pools shared across multiple listeners
* Members shared across multiple pools

So, despite the fact that sharing doesn't make status reporting any more or less complex, I'm still in favor of starting with 1:1 relationships between most kinds of objects and then changing those to 1:N or M:N as we get user demand for this. As I said in my first response, allowing too many many-to-many relationships feels like a solution to a problem that doesn't really exist, and introduces a lot of unnecessary complexity.

Stephen

On Sun, Dec 7, 2014 at 11:43 PM, Samuel Bercovici wrote:
> +1
>
> *From:* Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> *Sent:* Friday, December 05, 2014 7:59 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS -
> Use Cases that led us to adopt this.
>
> German-- but the point is that sharing apparently has no effect on the
> number of permutations for status information. The only difference here is
> that without sharing it's more work for the user to maintain and modify
> trees of objects.
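(The permutation point being debated here is easy to quantify with a short sketch -- illustrative plain Python only, not LBaaS/Neutron code; the topology names mirror Brandon's example later in this thread. It counts the distinct loadbalancer -> listener -> pool paths that reach each member, each of which is a separate status to track under M:N sharing:)

```python
# Illustrative sketch (not LBaaS code): count the distinct
# (loadbalancer, listener, pool) paths reaching each member in the
# example topology from this thread. Each path is one more status
# permutation that has to be tracked and reported.
from collections import Counter

lbs = {"lb1": ["listener1", "listener2"], "lb2": ["listener1"]}
listeners = {"listener1": ["pool1", "pool2"], "listener2": ["pool1"]}
pools = {"pool1": ["member1", "member2"], "pool2": ["member1"]}

def member_status_paths(lbs, listeners, pools):
    statuses = Counter()  # member -> number of independent statuses
    for lb, lb_listeners in lbs.items():
        for listener in lb_listeners:
            for pool in listeners[listener]:
                for member in pools[pool]:
                    statuses[member] += 1
    return statuses

print(member_status_paths(lbs, listeners, pools))
# member1 is reachable via 5 distinct paths, member2 via 3
```

(Even in this tiny example the shared member accumulates five independent statuses, which is the combinatorial growth the thread is worried about.)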
> > On Fri, Dec 5, 2014 at 9:36 AM, Eichberger, German <
> german.eichberger at hp.com> wrote:
>
> Hi Brandon + Stephen,
>
> Having all those permutations (and potentially testing them) made us lean
> against the sharing case in the first place. It's just a lot of extra work
> for only a small number of our customers.
>
> German
>
> *From:* Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> *Sent:* Thursday, December 04, 2014 9:17 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS -
> Use Cases that led us to adopt this.
>
> Hi Brandon,
>
> Yeah, in your example, member1 could potentially have 8 different statuses
> (and this is a small example!)... If that member starts flapping, it means
> that every time it flaps there are 8 notifications being passed upstream.
>
> Note that this problem actually doesn't get any better if we're not
> sharing objects but are just duplicating them (i.e. not sharing objects but
> the user makes references to the same back-end machine as 8 different
> members.)
>
> To be honest, I don't see sharing entities at many levels like this being
> the rule for most of our installations -- maybe a few percentage points of
> installations will do an excessive sharing of members, but I doubt it. So
> really, even though reporting status like this is likely to generate a
> pretty big tree of data, I don't think this is actually a problem, eh. And
> I don't see sharing entities actually reducing the workload of what needs
> to happen behind the scenes. (It just allows us to conceal more of this
> work from the user.)
>
> Stephen
>
> On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan
> wrote:
>
> Sorry it's taken me a while to respond to this.
>
> So I wasn't thinking about this correctly.
I was afraid you would have
> to pass in a full tree of parent-child representations to /loadbalancers
> to update anything a load balancer is associated with (including down
> to members). However, after thinking about it, a user would just make
> an association call on each object. For example, associate member1 with
> pool1, associate pool1 with listener1, then associate loadbalancer1 with
> listener1. Updating is just as simple as updating each entity.
>
> This does bring up another problem though. If a listener can live on
> many load balancers, and a pool can live on many listeners, and a member
> can live on many pools, there are lots of permutations to keep track of for
> status. You can't just link a member's status to a load balancer, because a
> member can exist on many pools under that load balancer, and each pool
> can exist under many listeners under that load balancer. For example,
> say I have these:
>
> lb1
> lb2
> listener1
> listener2
> pool1
> pool2
> member1
> member2
>
> lb1 -> [listener1, listener2]
> lb2 -> [listener1]
> listener1 -> [pool1, pool2]
> listener2 -> [pool1]
> pool1 -> [member1, member2]
> pool2 -> [member1]
>
> member1 can now have different statuses under pool1 and pool2. Since
> listener1 and listener2 both have pool1, this means member1 will now
> have a different status for the listener1 -> pool1 and listener2 -> pool1
> combinations. And so forth for load balancers.
>
> Basically there are a lot of permutations and combinations to keep track
> of with this model for statuses. Showing these in the body of load
> balancer details can get quite large.
>
> I hope this makes sense because my brain is ready to explode.
>
> Thanks,
> Brandon
>
> On Thu, 2014-11-27 at 08:52 +0000, Samuel Bercovici wrote:
> > Brandon, can you please explain further (1) below?
> > -----Original Message-----
> > From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM]
> > Sent: Tuesday, November 25, 2014 12:23 AM
> > To: openstack-dev at lists.openstack.org
> > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.
> >
> > My impression is that the statuses of each entity will be shown on a detailed info request of a loadbalancer. The root-level objects would not have any statuses. For example, a user makes a GET request to /loadbalancers/{lb_id} and the status of every child of that load balancer is shown in a "status_tree" JSON object. For example:
> >
> > {"name": "loadbalancer1",
> >  "status_tree":
> >   {"listeners":
> >     [{"name": "listener1", "operating_status": "ACTIVE",
> >       "default_pool":
> >         {"name": "pool1", "status": "ACTIVE",
> >          "members":
> >           [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}]}}
> >
> > Sam, correct me if I am wrong.
> >
> > I generally like this idea. I do have a few reservations with this:
> >
> > 1) Creating and updating a load balancer requires a full tree configuration with the current extension/plugin logic in neutron. Since updates will require a full tree, it means the user would have to know the full tree configuration just to simply update a name. Solving this would require nested child resources in the URL, which the current neutron extension/plugin does not allow. Maybe the new one will.
> >
> > 2) The status_tree can get quite large depending on the number of listeners and pools being used. This is a minor issue really, as it will make horizon's (or any other UI tool's) job easier to show statuses.
> >
> > Thanks,
> > Brandon
> >
> > On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote:
> > > Hi Samuel,
> > >
> > > We've actually been avoiding having a deeper discussion about status
> > > in Neutron LBaaS since this can get pretty hairy as the back-end
> > > implementations get more complicated.
I suspect managing that is
> > > probably one of the bigger reasons we have disagreements around object
> > > sharing. Perhaps it's time we discussed representing state "correctly"
> > > (whatever that means), instead of a roundabout discussion about
> > > object sharing (which, I think, is really just avoiding this issue)?
> > >
> > > Do you have a proposal about how status should be represented
> > > (possibly including a description of the state machine) if we collapse
> > > everything down to be logical objects except the loadbalancer object?
> > > (From what you're proposing, I suspect it might be too general to, for
> > > example, represent the UP/DOWN status of members of a given pool.)
> > >
> > > Also, from an haproxy perspective, sharing pools within a single
> > > listener actually isn't a problem. That is to say, having the same
> > > L7Policy pointing at the same pool is OK, so I personally don't have a
> > > problem allowing sharing of objects within the scope of parent
> > > objects. What do the rest of y'all think?
> > >
> > > Stephen
> > >
> > > On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici
> > > wrote:
> > >
> > > Hi Stephen,
> > >
> > > 1. The issue is that if we do 1:1 and allow status/state
> > > to proliferate throughout all objects we will then get an
> > > issue to fix it later, hence even if we do not do sharing, I
> > > would still like to have all objects besides LB be treated as
> > > logical.
> > >
> > > 2. The 3rd use case below will not be reasonable without
> > > pool sharing between different policies. Specifying different
> > > pools which are the same for each policy makes it a non-starter
> > > to me.
> > >
> > > -Sam.
> > > > > > > > > > > > > > > > > > > > > > > > From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] > > > Sent: Friday, November 21, 2014 10:26 PM > > > To: OpenStack Development Mailing List (not for usage > > > questions) > > > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects > > > in LBaaS - Use Cases that led us to adopt this. > > > > > > > > > > > > I think the idea was to implement 1:1 initially to reduce the > > > amount of code and operational complexity we'd have to deal > > > with in initial revisions of LBaaS v2. Many to many can be > > > simulated in this scenario, though it does shift the burden of > > > maintenance to the end user. It does greatly simplify the > > > initial code for v2, in any case, though. > > > > > > > > > > > > > > > > > > Did we ever agree to allowing listeners to be shared among > > > load balancers? I think that still might be a N:1 > > > relationship even in our latest models. > > > > > > > > > > > > > > > There's also the difficulty introduced by supporting different > > > flavors: Since flavors are essentially an association between > > > a load balancer object and a driver (with parameters), once > > > flavors are introduced, any sub-objects of a given load > > > balancer objects must necessarily be purely logical until they > > > are associated with a load balancer. I know there was talk of > > > forcing these objects to be sub-objects of a load balancer > > > which can't be accessed independently of the load balancer > > > (which would have much the same effect as what you discuss: > > > State / status only make sense once logical objects have an > > > instantiation somewhere.) However, the currently proposed API > > > treats most objects as root objects, which breaks this > > > paradigm. > > > > > > > > > > > > > > > > > > How we handle status and updates once there's an instantiation > > > of these logical objects is where we start getting into real > > > complexity. 
> > > > > > > > > > > > > > > > > > It seems to me there's a lot of complexity introduced when we > > > allow a lot of many to many relationships without a whole lot > > > of benefit in real-world deployment scenarios. In most cases, > > > objects are not going to be shared, and in those cases with > > > sufficiently complicated deployments in which shared objects > > > could be used, the user is likely to be sophisticated enough > > > and skilled enough to manage updating what are essentially > > > "copies" of objects, and would likely have an opinion about > > > how individual failures should be handled which wouldn't > > > necessarily coincide with what we developers of the system > > > would assume. That is to say, allowing too many many to many > > > relationships feels like a solution to a problem that doesn't > > > really exist, and introduces a lot of unnecessary complexity. > > > > > > > > > > > > > > > > > > In any case, though, I feel like we should walk before we run: > > > Implementing 1:1 initially is a good idea to get us rolling. > > > Whether we then implement 1:N or M:N after that is another > > > question entirely. But in any case, it seems like a bad idea > > > to try to start with M:N. > > > > > > > > > > > > > > > > > > Stephen > > > > > > > > > > > > > > > > > > > > > > > > On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici > > > wrote: > > > > > > Hi, > > > > > > Per discussion I had at OpenStack Summit/Paris with Brandon > > > and Doug, I would like to remind everyone why we choose to > > > follow a model where pools and listeners are shared (many to > > > many relationships). > > > > > > Use Cases: > > > 1. The same application is being exposed via different LB > > > objects. > > > For example: users coming from the internal "private" > > > organization network, have an LB1(private_VIP) --> > > > Listener1(TLS) -->Pool1 and user coming from the "internet", > > > have LB2(public_vip)-->Listener1(TLS)-->Pool1. 
This may also happen to support ipv4 and ipv6:
> > > LB_v4(ipv4_VIP) --> Listener1(TLS) -->Pool1 and
> > > LB_v6(ipv6_VIP) --> Listener1(TLS) -->Pool1
> > > The operator would like to be able to manage the pool
> > > membership in cases of updates and error in a single place.
> > >
> > > 2. The same group of servers is being used via different
> > > listeners, optionally also connected to different LB objects.
> > > For example: users coming from the internal "private"
> > > organization network have an LB1(private_VIP) -->
> > > Listener1(HTTP) -->Pool1, and users coming from the "internet"
> > > have LB2(public_vip)-->Listener2(TLS)-->Pool1.
> > > The LBs may use different flavors, as LB2 needs TLS termination
> > > and may prefer a different "stronger" flavor.
> > > The operator would like to be able to manage the pool
> > > membership in cases of updates and error in a single place.
> > >
> > > 3. The same group of servers is being used in several
> > > different L7_Policies connected to a listener. Such a listener
> > > may be reused as in use case 1.
> > > For example: LB1(VIP1)-->Listener_L7(TLS)
> > >                             |
> > >                             +-->L7_Policy1(rules..)-->Pool1
> > >                             |
> > >                             +-->L7_Policy2(rules..)-->Pool2
> > >                             |
> > >                             +-->L7_Policy3(rules..)-->Pool1
> > >                             |
> > >                             +-->L7_Policy4(rules..)-->Reject
> > >
> > > I think that the "key" issue is handling correctly the
> > > "provisioning" state and the operation state in a many-to-many
> > > model.
> > > This is an issue as we have attached status fields to each and
> > > every object in the model.
> > > A side effect of the above is that to understand the
> > > "provisioning/operation" status one needs to check many
> > > different objects.
> > >
> > > To remedy this, I would like to turn all objects besides the
> > > LB to be logical objects. This means that the only place to
> > > manage the status/state will be on the LB object.
> > > Such status should be hierarchical, so that logical objects
> > > attached to an LB would have their status consumed out of the
> > > LB object itself (in case of an error).
> > > We also need to discuss how modifications of a logical object
> > > will be "rendered" to the concrete LB objects.
> > > You may want to revisit
> > > https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r
> > > ("Logical Model + Provisioning Status + Operation Status + Statistics") for a somewhat more detailed explanation, albeit it uses the LBaaS v1 model as a reference.
> > >
> > > Regards,
> > > -Sam.
> > >
> > > _______________________________________________
> > > OpenStack-dev mailing list
> > > OpenStack-dev at lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > > --
> > > Stephen Balukoff
> > > Blue Box Group, LLC
> > > (800)613-4305 x807
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
_______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807

--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From devananda.vdv at gmail.com Mon Dec 8 22:23:58 2014
From: devananda.vdv at gmail.com (Devananda van der Veen)
Date: Mon, 08 Dec 2014 22:23:58 +0000
Subject: [openstack-dev] [Ironic] Fuel agent proposal
Message-ID:

I'd like to raise this topic for a wider discussion outside of the hallway track and code reviews, where it has thus far mostly remained.

In previous discussions, my understanding has been that the Fuel team sought to use Ironic to manage "pets" rather than "cattle" - and doing so required extending the API and the project's functionality in ways that no one else on the core team agreed with. Perhaps that understanding was wrong (or perhaps not), but in any case, there is now a proposal to add a FuelAgent driver to Ironic. The proposal claims this would meet that team's needs without requiring changes to the core of Ironic.

https://review.openstack.org/#/c/138115/

The Problem Description section calls out four things, which have all been discussed previously (some are here [0]).
I would like to address each one, invite discussion on whether or not these are, in fact, problems facing Ironic (not whether they are problems for someone, somewhere), and then ask why these necessitate a new driver be added to the project. They are, for reference:

1. limited partition support
2. no software RAID support
3. no LVM support
4. no support for hardware that lacks a BMC

#1. When deploying a partition image (e.g., QCOW format), Ironic's PXE deploy driver performs only the minimal partitioning necessary to fulfill its mission as an OpenStack service: respect the user's request for root, swap, and ephemeral partition sizes. When deploying a whole-disk image, Ironic does not perform any partitioning -- such is left up to the operator who created the disk image. Support for arbitrarily complex partition layouts is not required by, nor does it facilitate, the goal of provisioning physical servers via a common cloud API. Additionally, as with #3 below, nothing prevents a user from creating more partitions in unallocated disk space once they have access to their instance. Therefore, I don't see how Ironic's minimal support for partitioning is a problem for the project.

#2. There is no support for defining a RAID in Ironic today, at all, whether software or hardware. Several proposals were floated last cycle; one is under review right now for DRAC support [1], and there are multiple call-outs for RAID building in the state machine mega-spec [2]. Any such support for hardware RAID will necessarily be abstract enough to support multiple hardware vendors' driver implementations and both in-band creation (via IPA) and out-of-band creation (via vendor tools). Given the above, it may become possible to add software RAID support to IPA in the future, under the same abstraction.
This would closely tie the deploy agent to the images it deploys (the latter image's kernel would be dependent upon a software RAID built by the former), but this would necessarily be true for the proposed FuelAgent as well. I don't see this as a compelling reason to add a new driver to the project. Instead, we should (plan to) add support for software RAID to the deploy agent which is already part of the project.

#3. LVM volumes can easily be added by a user (after provisioning) within unallocated disk space for non-root partitions. I have not yet seen a compelling argument for doing this within the provisioning phase.

#4. There are already in-tree drivers [3] [4] [5] which do not require a BMC. One of these uses SSH to connect and run pre-determined commands. Like the spec proposal, which states at line 122, "Control via SSH access feature intended only for experiments in non-production environment," the current SSHPowerDriver is only meant for testing environments. We could probably extend this driver to do what the FuelAgent spec proposes, as far as remote power control for cheap always-on hardware in testing environments with a pre-shared key. (And if anyone wonders about a use case for Ironic without external power control ... I can only think of one situation where I would rationally ever want to have a control-plane agent running inside a user-instance: I am both the operator and the only user of the cloud.)

----------------

In summary, as far as I can tell, all of the problem statements upon which the FuelAgent proposal is based are solvable through incremental changes in existing drivers, or out of scope for the project entirely. As another software-based deploy agent, FuelAgent would duplicate the majority of the functionality which ironic-python-agent has today. Ironic's driver ecosystem benefits from a diversity of hardware-enablement drivers.
Today, we have two divergent software deployment drivers which approach image deployment differently: "agent" drivers use a local agent to prepare a system and download the image; "pxe" drivers use a remote agent and copy the image over iSCSI. I don't understand how a second driver which duplicates the functionality we already have, and shares the same goals as the drivers we already have, is beneficial to the project. Doing the same thing twice just increases the burden on the team; we're all working on the same problems, so let's do it together.

-Devananda

[0] https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition
[1] https://review.openstack.org/#/c/107981/
[2] https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst
[3] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py
[4] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py
[5] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From john.griffith8 at gmail.com Mon Dec 8 22:28:09 2014
From: john.griffith8 at gmail.com (John Griffith)
Date: Mon, 8 Dec 2014 15:28:09 -0700
Subject: [openstack-dev] [cinder] HA issues
In-Reply-To: <3895CB36EABD4E49B816E6081F3B00171F601C36@IRSMSX108.ger.corp.intel.com>
References: <3895CB36EABD4E49B816E6081F3B00171F601C36@IRSMSX108.ger.corp.intel.com>
Message-ID:

On Mon, Dec 8, 2014 at 8:18 AM, Dulko, Michal wrote:
> Hi all!
>
> At the summit during crossproject HA session there were multiple Cinder
> issues mentioned. These can be found in this etherpad:
> https://etherpad.openstack.org/p/kilo-crossproject-ha-integration
>
> Is there any ongoing effort to fix these issues? Is there an idea how to
> approach any of them?
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks for the nudge on this; personally I hadn't seen this. So the items are pretty vague; there are definitely plans to try and address a number of race conditions etc. I'm not aware of any specific plans to focus on HA from this perspective, or anybody stepping up to work on it, but it certainly would be great for somebody to dig in and start fleshing this out.

From kchamart at redhat.com Mon Dec 8 22:35:04 2014
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Mon, 8 Dec 2014 23:35:04 +0100
Subject: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support
In-Reply-To: <20141208211224.GN2497@yuggoth.org>
References: <5481FE7D.7020804@linux.vnet.ibm.com> <54820841.6080200@dague.net> <548211BC.6070900@linux.vnet.ibm.com> <20141208104536.GC14205@tesla.redhat.com> <20141208211224.GN2497@yuggoth.org>
Message-ID: <20141208223504.GE13521@tesla.redhat.com>

On Mon, Dec 08, 2014 at 09:12:24PM +0000, Jeremy Stanley wrote:
> On 2014-12-08 11:45:36 +0100 (+0100), Kashyap Chamarthy wrote:
> > As Dan Berrangé noted, it's nearly impossible to reproduce this issue
> > independently outside of the OpenStack gating environment. I brought this up
> > at the recently concluded KVM Forum earlier this October. To debug this
> > any further, one of the QEMU block layer developers asked if we can get
> > the QEMU instance running in the gate run under `gdb` (IIRC, danpb suggested
> > this too, previously) to get further tracing details.
>
> We document thoroughly how to reproduce the environments we use for
> testing OpenStack.

Yep, documentation is appreciated.

> There's nothing rarified about "a Gate run" that anyone with access to
> a public cloud provider would be unable to reproduce, save being able
> to run it over and over enough times to expose less frequent failures.

Sure.
To be fair, this was actually tried. At the risk of over-discussing the topic, allow me to provide a bit more context, quoting Dan's email from an old thread [1] ("Thoughts on the patch test failure rate and moving forward", Jul 23, 2014) here for convenience:

"In some of the harder gate bugs I've looked at (especially the infamous 'live snapshot' timeout bug), it has been damn hard to actually figure out what's wrong. AFAIK, no one has ever been able to reproduce it outside of the gate infrastructure. I've even gone as far as setting up identical Ubuntu VMs to the ones used in the gate on a local cloud, and running the tempest tests multiple times, but still can't reproduce what happens on the gate machines themselves :-(

As such we're relying on code inspection and the collected log messages to try and figure out what might be wrong. The gate collects a lot of info and publishes it, but in this case I have found the published logs to be insufficient - I needed to get the more verbose libvirtd.log file. devstack has the ability to turn this on via an environment variable, but it is disabled by default because it would add 3% to the total size of logs collected per gate job. There's no way for me to get that environment variable for devstack turned on for a specific review I want to test with. In the end I uploaded a change to nova which abused rootwrap to elevate privileges, install extra deb packages, reconfigure libvirtd logging and restart the libvirtd daemon.

https://review.openstack.org/#/c/103066/11/etc/nova/rootwrap.d/compute.filters
https://review.openstack.org/#/c/103066/11/nova/virt/libvirt/driver.py

My next attack is to build a custom QEMU binary and hack nova further so that it can download my custom QEMU binary from a website onto the gate machine and run the test with it. Failing that I'm going to be hacking things to try to attach to QEMU in the gate with GDB and get stack traces.
Anything is doable thanks to rootwrap giving us a way to elevate privileges from Nova, but it is a somewhat tedious approach."

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041148.html

To add to the above: from the bug, you can find that in one of the many invocations the above issue _was_ reproduced once, albeit with questionable likelihood (details in the bug). So it's not that what you're suggesting was never tried. But, from the above, you can clearly see what kind of convoluted methods you need to resort to.

One concrete point from the above: it'd be very useful to have an env variable that can be toggled to run libvirt/QEMU under `gdb` for $REVIEW. (Sure, it's a patch that needs to be worked on. . .)

[. . .]

> The QA team tries very hard to make our integration testing
> environment as closely as possible mimic real-world deployment
> configurations. If these sorts of bugs emerge more often because of,
> for example, resource constraints in the test environment then it
> should be entirely likely they'd also be seen in production with the
> same frequency if run on similarly constrained equipment. And as we've
> observed in the past, any code path we stop testing quickly
> accumulates new bugs that go unnoticed until they impact someone's
> production environment at 3am.

I realize you're raising the point that this should not be taken lightly -- I hope the context provided in this email demonstrates that it isn't.
-- /kashyap From carol.l.barrett at intel.com Mon Dec 8 22:46:40 2014 From: carol.l.barrett at intel.com (Barrett, Carol L) Date: Mon, 8 Dec 2014 22:46:40 +0000 Subject: [openstack-dev] [Telco Work Group][Ecosystem & Collateral Team] Meeting information Message-ID: <2D352D0CD819F64F9715B1B89695400D5C5F0E74@ORSMSX113.amr.corp.intel.com> The Ecosystem and Collateral team of the Telco Work Group is meeting on Tuesday 12/9 at 8:00 Pacific Time. If you're interested in collaborating to accelerate Telco adoption of OpenStack through ecosystem engagements and development of collateral (case studies, reference architectures, etc.), please join. Call details: Access: (888) 875-9370, Bridge: 3; PC: 7053780 Etherpad for meeting notes: https://etherpad.openstack.org/p/12_9_TWG_Ecosystem_and_Collateral Thanks Carol -------------- next part -------------- An HTML attachment was scrubbed... URL: From ayoung at redhat.com Mon Dec 8 22:52:12 2014 From: ayoung at redhat.com (Adam Young) Date: Mon, 08 Dec 2014 17:52:12 -0500 Subject: [openstack-dev] [Keystone] OSAAA-Policy Message-ID: <54862B9C.8060809@redhat.com> The Policy library has been nominated for promotion from Oslo incubator. The Keystone team was formerly known as the Identity Program, but now is Authentication, Authorization, and Audit, or AAA. Does the prefix OSAAA for the library make sense? It should not be Keystone-policy. From mestery at mestery.com Mon Dec 8 22:56:43 2014 From: mestery at mestery.com (Kyle Mestery) Date: Mon, 8 Dec 2014 16:56:43 -0600 Subject: [openstack-dev] [neutron] Abandoned patches for neutron and python-neutronclient Message-ID: As part of a broader cleanup I've been doing today, I went and abandoned a bunch of patches in both the neutron and python-neutronclient gerrit queues. All of these were more than 2 months old. If you plan to continue working on these, please activate them again and propose new changes. But this should help clean up the queues a bit. Thanks!
Kyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Mon Dec 8 23:05:50 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Mon, 8 Dec 2014 17:05:50 -0600 Subject: [openstack-dev] [Keystone] OSAAA-Policy In-Reply-To: <54862B9C.8060809@redhat.com> References: <54862B9C.8060809@redhat.com> Message-ID: I agree that this library should not have "Keystone" in the name. This is more along the lines of pycadf: something that is housed under the OpenStack Identity Program but is more interesting for general use cases than something tied exclusively to Keystone. Cheers, Morgan -- Morgan Fainberg On December 8, 2014 at 4:55:20 PM, Adam Young (ayoung at redhat.com) wrote: The Policy library has been nominated for promotion from Oslo incubator. The Keystone team was formerly known as the Identity Program, but now is Authentication, Authorization, and Audit, or AAA. Does the prefix OSAAA for the library make sense? It should not be Keystone-policy. _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mestery at mestery.com Mon Dec 8 23:13:50 2014 From: mestery at mestery.com (Kyle Mestery) Date: Mon, 8 Dec 2014 17:13:50 -0600 Subject: [openstack-dev] [neutron] mid-cycle arrival time Tuesday 12-9-2014 Message-ID: Folks, per a request from Jun, for tomorrow's mid-cycle, if you could all arrive no earlier than 8:45, that would be great. Since we're in the smaller meeting rooms and/or cafeteria area tomorrow, this will allow Jun and the other Adobe hosts to arrive in time and set up for us. Thanks! Kyle -------------- next part -------------- An HTML attachment was scrubbed...
URL: From morgan.fainberg at gmail.com Mon Dec 8 23:23:10 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Mon, 8 Dec 2014 17:23:10 -0600 Subject: [openstack-dev] [Keystone] Mid-Cycle Meetup Dates/Time/Location In-Reply-To: <3A0B18C7-0BC9-4AB2-9FAC-DA4BFB5CC190@gmail.com> References: <3A0B18C7-0BC9-4AB2-9FAC-DA4BFB5CC190@gmail.com> Message-ID: As promised, we have an update for venue, recommended hotels, and an RSVP form. I want to thank Geekdom and Rackspace for helping to put together and host the Keystone Midcycle. All updated information can be found at: https://www.morganfainberg.com/blog/2014/11/18/keystone-hackathon-kilo/ Cheers, Morgan -- Morgan Fainberg On November 18, 2014 at 2:57:58 PM, Morgan Fainberg (morgan.fainberg at gmail.com) wrote: I am happy to announce a bunch of the information for the Keystone mid-cycle meetup. The selection of dates, location, etc. is based upon the great feedback I received from the earlier poll. Currently the only thing left up in the air is the specific venue and recommended hotel(s). Location: San Antonio, TX Dates: January 19, 20, 21 (~2 weeks prior to Kilo Milestone 2). Venue: TBD (we have a couple options in San Antonio, and will provide an update as soon as it's all confirmed) Recommended Hotels: TBD (we are also working to get a Hotel discount again like last time; the recommended hotel will be based upon the final venue). I will be keeping the following page: https://www.morganfainberg.com/blog/2014/11/18/keystone-hackathon-kilo/ up-to-date with hotel recommendations, venue specific details, etc. I expect to have the Venue and Hotel recommendations ready shortly (full RSVP form will be sent out as well once the venue and hotel are confirmed). I look forward to seeing everyone in January! Cheers, Morgan Fainberg -------------- next part -------------- An HTML attachment was scrubbed...
URL: From morgan.fainberg at gmail.com Mon Dec 8 23:26:31 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Mon, 8 Dec 2014 17:26:31 -0600 Subject: [openstack-dev] [Keystone] OSAAA-Policy In-Reply-To: References: <54862B9C.8060809@redhat.com> Message-ID: As a quick note, the OpenStack Identity Program was not renamed to AAA (based upon the discussion with the TC); rather, its scope was increased to adopt Audit on top of the already included scope of Authorization and Authentication. Cheers, Morgan -- Morgan Fainberg On December 8, 2014 at 5:05:51 PM, Morgan Fainberg (morgan.fainberg at gmail.com) wrote: I agree that this library should not have "Keystone" in the name. This is more along the lines of pycadf: something that is housed under the OpenStack Identity Program but is more interesting for general use cases than something tied exclusively to Keystone. Cheers, Morgan -- Morgan Fainberg On December 8, 2014 at 4:55:20 PM, Adam Young (ayoung at redhat.com) wrote: The Policy library has been nominated for promotion from Oslo incubator. The Keystone team was formerly known as the Identity Program, but now is Authentication, Authorization, and Audit, or AAA. Does the prefix OSAAA for the library make sense? It should not be Keystone-policy. _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From soulxu at gmail.com Mon Dec 8 23:38:04 2014 From: soulxu at gmail.com (Alex Xu) Date: Tue, 9 Dec 2014 07:38:04 +0800 Subject: [openstack-dev] [api] Using query string or request body to pass parameter In-Reply-To: <1418058695.6209.0.camel@einstein.kev> References: <54854039.3030405@linux.vnet.ibm.com> <1418058695.6209.0.camel@einstein.kev> Message-ID: I'm not sure about all frameworks, but nova is limited at https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L79 and that is under our control. Maybe we should ask this question not just for DELETE but for the other methods as well. 2014-12-09 1:11 GMT+08:00 Kevin L. Mitchell : > On Mon, 2014-12-08 at 14:07 +0800, Eli Qiao wrote: > > I wonder if we can use body in delete, currently , there isn't any > > case used in v2/v3 api. > > No, many frameworks raise an error if you try to include a body with a > DELETE request. > -- > Kevin L. Mitchell > Rackspace > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne at openstack.org Mon Dec 8 23:46:51 2014 From: anne at openstack.org (Anne Gentle) Date: Mon, 8 Dec 2014 17:46:51 -0600 Subject: [openstack-dev] Where should Schema files live? In-Reply-To: <5485F69D.8020705@redhat.com> References: <60A3427EF882A54BA0A1971AE6EF0388A535151D@ORD1EXD02.RACKSPACE.CORP> <244070269.6015180.1416519285449.JavaMail.zimbra@redhat.com> <60A3427EF882A54BA0A1971AE6EF0388A535222C@ORD1EXD02.RACKSPACE.CORP> <1641271722.6367928.1416582229769.JavaMail.zimbra@redhat.com> <60A3427EF882A54BA0A1971AE6EF0388A5352E0B@ORD1EXD02.RACKSPACE.CORP> <1357304690.9046926.1416937758405.JavaMail.zimbra@redhat.com> <60A3427EF882A54BA0A1971AE6EF0388A5360754@ORD1EXD02.RACKSPACE.CORP> <5485F69D.8020705@redhat.com> Message-ID: On Mon, Dec 8, 2014 at 1:06 PM, Adam Young wrote: > Isn't this what the API repos are for?
Should, e.g., the Keystone schemas be > served from > > https://github.com/openstack/identity-api/ > > > The -api repos will go away once we have the merges completed for replacing with the -specs repo info. I wondered if anyone would draw a connection to the API schemas. We haven't made a plan to maintain XSDs for every API and many teams don't have XSDs available. So far only Identity, Compute, and Databases have made XSD files. Should those also go into the specs repository? Thanks, Anne > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.mitchell at rackspace.com Mon Dec 8 23:58:47 2014 From: kevin.mitchell at rackspace.com (Kevin L. Mitchell) Date: Mon, 8 Dec 2014 17:58:47 -0600 Subject: [openstack-dev] [api] Using query string or request body to pass parameter In-Reply-To: References: <54854039.3030405@linux.vnet.ibm.com> <1418058695.6209.0.camel@einstein.kev> Message-ID: <1418083127.6209.11.camel@einstein.kev> On Tue, 2014-12-09 at 07:38 +0800, Alex Xu wrote: > Not sure all, nova is limited > at https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L79 > That under our control. It is, but the client frameworks aren't, and some of them prohibit sending a body with a DELETE request. Further, RFC 7231 has this to say about DELETE request bodies: A payload within a DELETE request message has no defined semantics; sending a payload body on a DELETE request might cause some existing implementations to reject the request. (§4.3.5) I think we have to conclude that, if we need a request body, we cannot use the DELETE method. We can modify the operation, such as setting a "force" flag, with a query parameter on the URI, but a request body should be considered out of bounds with respect to DELETE.
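A minimal sketch of the alternative described above, carrying a "force" modifier in the query string of a DELETE URI rather than in a request body. The helper name and URL layout here are illustrative assumptions, not nova's actual code:

```python
from urllib.parse import urlsplit, parse_qs

def handle_delete(url):
    """Parse a DELETE request URI, reading modifiers from the query string.

    Per RFC 7231 (section 4.3.5) a DELETE payload has no defined semantics,
    so flags such as 'force' belong in the URI rather than the body.
    """
    parts = urlsplit(url)
    params = parse_qs(parts.query)
    # Treat ?force=true as the boolean modifier; default is a normal delete.
    force = params.get('force', ['false'])[0].lower() == 'true'
    server_id = parts.path.rsplit('/', 1)[-1]
    return {'server': server_id, 'force': force}
```

A client would then issue something like `DELETE /v2/servers/abc123?force=true` instead of sending a JSON body with the request.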
> Maybe not just ask question for delete, also for other method. > > 2014-12-09 1:11 GMT+08:00 Kevin L. Mitchell : > On Mon, 2014-12-08 at 14:07 +0800, Eli Qiao wrote: > > I wonder if we can use body in delete, currently , there isn't any > > case used in v2/v3 api. > > No, many frameworks raise an error if you try to include a body with a > DELETE request. > -- > Kevin L. Mitchell > Rackspace -- Kevin L. Mitchell Rackspace From gokrokvertskhov at mirantis.com Tue Dec 9 00:36:39 2014 From: gokrokvertskhov at mirantis.com (Georgy Okrokvertskhov) Date: Mon, 8 Dec 2014 16:36:39 -0800 Subject: [openstack-dev] [Mistral] Query on creating multiple resources In-Reply-To: References: <1418048204426.95510@persistent.com> <1418048476946.19042@persistent.com> <1418050120681.46696@persistent.com> <5485C560.9060605@redhat.com> Message-ID: Hi Sushma, Did you explore Heat templates? As Zane mentioned you can do this via Heat template without writing any workflows. Do you have any specific use cases which you can't solve with Heat template? Create VM workflow was a demo example. Mistral potentially can be used by Heat or other orchestration tools to do actual interaction with API, but for user it might be easier to use Heat functionality. Thanks, Georgy On Mon, Dec 8, 2014 at 7:54 AM, Nikolay Makhotkin wrote: > Hi, Sushma! > > Can we create multiple resources using a single task, like multiple >> keypairs or security-groups or networks etc? > > > Yes, we can. This feature is in the development now and it is considered > as experimental - > https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections > > Just clone the last master branch from mistral. 
> > You can specify "for-each" task property and provide the array of data to > your workflow: > > -------------------- > > version: '2.0' > > name: secgroup_actions > > workflows: > create_security_group: > type: direct > input: > - array_with_names_and_descriptions > > tasks: > create_secgroups: > > for-each: > > data: $.array_with_names_and_descriptions > action: nova.security_groups_create name={$.data.name} > description={$.data.description} > ------------ > > On Mon, Dec 8, 2014 at 6:36 PM, Zane Bitter wrote: > >> On 08/12/14 09:41, Sushma Korati wrote: >> >>> Can we create multiple resources using a single task, like multiple >>> keypairs or security-groups or networks etc? >>> >> >> Define them in a Heat template and create the Heat stack as a single task. >> >> - ZB >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Best Regards, > Nikolay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mestery at mestery.com Tue Dec 9 00:38:57 2014 From: mestery at mestery.com (Kyle Mestery) Date: Mon, 8 Dec 2014 18:38:57 -0600 Subject: [openstack-dev] [neutron] services split starting today In-Reply-To: References: Message-ID: Reminder: Neutron is still frozen for commits while we work through the services split. We hope to have this done tomorrow sometime so we can un-freeze neutron. Thanks!
Kyle On Mon, Dec 8, 2014 at 10:19 AM, Doug Wiegley wrote: > To all neutron cores, > > Please do not approve any gerrit reviews for advanced services code for > the next few days. We will post again when those reviews can resume. > > Thanks, > Doug > > > > On 12/8/14, 8:49 AM, "Doug Wiegley" wrote: > > >Hi all, > > > >The neutron advanced services split is starting today at 9am PDT, as > >described here: > > > >https://review.openstack.org/#/c/136835/ > > > > > >.. The remove change from neutron can be seen here: > > > >https://review.openstack.org/#/c/139901/ > > > > > >.. While the new repos are being sorted out, advanced services will be > >broken, and services tempest tests will be disabled. Either grab Juno, or > >an earlier rev of neutron. > > > >The new repos are: neutron-lbaas, neutron-fwaas, neutron-vpnaas. > > > >Thanks, > >Doug > > > > > >_______________________________________________ > >OpenStack-dev mailing list > >OpenStack-dev at lists.openstack.org > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Tue Dec 9 00:43:45 2014 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 9 Dec 2014 11:43:45 +1100 Subject: [openstack-dev] [nova] Adding temporary code to nova to work around bugs in system utilities In-Reply-To: <548607DC.3020305@gmail.com> References: <20141203090022.GS84915@thor.bakeyournoodle.com> <548607DC.3020305@gmail.com> Message-ID: <20141209004344.GD19363@thor.bakeyournoodle.com> On Mon, Dec 08, 2014 at 03:19:40PM -0500, Jay Pipes wrote: > I reviewed the patch. I don't mind the idea of a [workarounds] section of > configuration options, but I had an issue with where that code was executed. Thanks. Replied. 
> I think it would be fine to have a [workarounds] config section for just > this purpose. Okay good to know. Again thanks. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From jim at jimrollenhagen.com Tue Dec 9 00:46:56 2014 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 08 Dec 2014 16:46:56 -0800 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: References: Message-ID: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> On December 8, 2014 2:23:58 PM PST, Devananda van der Veen wrote: >I'd like to raise this topic for a wider discussion outside of the >hallway >track and code reviews, where it has thus far mostly remained. > >In previous discussions, my understanding has been that the Fuel team >sought to use Ironic to manage "pets" rather than "cattle" - and doing >so >required extending the API and the project's functionality in ways that >no >one else on the core team agreed with. Perhaps that understanding was >wrong >(or perhaps not), but in any case, there is now a proposal to add a >FuelAgent driver to Ironic. The proposal claims this would meet that >teams' >needs without requiring changes to the core of Ironic. > >https://review.openstack.org/#/c/138115/ I think it's clear from the review that I share the opinions expressed in this email. That said (and hopefully without derailing the thread too much), I'm curious how this driver could do software RAID or LVM without modifying Ironic's API or data model. How would the agent know how these should be built? How would an operator or user tell Ironic what the disk/partition/volume layout would look like? And before it's said - no, I don't think vendor passthru API calls are an appropriate answer here. // jim > >The Problem Description section calls out four things, which have all >been >discussed previously (some are here [0]). 
I would like to address each >one, >invite discussion on whether or not these are, in fact, problems facing >Ironic (not whether they are problems for someone, somewhere), and then >ask >why these necessitate adding a new driver to the project. > > >They are, for reference: > >1. limited partition support > >2. no software RAID support > >3. no LVM support > >4. no support for hardware that lacks a BMC > >#1. > >When deploying a partition image (e.g., QCOW format), Ironic's PXE deploy >driver performs only the minimal partitioning necessary to fulfill its >mission as an OpenStack service: respect the user's request for root, >swap, >and ephemeral partition sizes. When deploying a whole-disk image, >Ironic >does not perform any partitioning -- such is left up to the operator >who >created the disk image. > >Support for arbitrarily complex partition layouts is not required by, >nor >does it facilitate, the goal of provisioning physical servers via a >common >cloud API. Additionally, as with #3 below, nothing prevents a user from >creating more partitions in unallocated disk space once they have >access to >their instance. Therefore, I don't see how Ironic's minimal support for >partitioning is a problem for the project. > >#2. > >There is no support for defining a RAID in Ironic today, at all, >whether >software or hardware. Several proposals were floated last cycle; one is >under review right now for DRAC support [1], and there are multiple >call >outs for RAID building in the state machine mega-spec [2]. Any such >support >for hardware RAID will necessarily be abstract enough to support >multiple >hardware vendors' driver implementations and both in-band creation (via >IPA) and out-of-band creation (via vendor tools). > >Given the above, it may become possible to add software RAID support to >IPA >in the future, under the same abstraction.
This would closely tie the >deploy agent to the images it deploys (the latter image's kernel would >be >dependent upon a software RAID built by the former), but this would >necessarily be true for the proposed FuelAgent as well. > >I don't see this as a compelling reason to add a new driver to the >project. >Instead, we should (plan to) add support for software RAID to the >deploy >agent which is already part of the project. > >#3. > >LVM volumes can easily be added by a user (after provisioning) within >unallocated disk space for non-root partitions. I have not yet seen a >compelling argument for doing this within the provisioning phase. > >#4. > >There are already in-tree drivers [3] [4] [5] which do not require a >BMC. >One of these uses SSH to connect and run pre-determined commands. Like >the >spec proposal, which states at line 122, "Control via SSH access >feature >intended only for experiments in non-production environment," the >current >SSHPowerDriver is only meant for testing environments. We could >probably >extend this driver to do what the FuelAgent spec proposes, as far as >remote >power control for cheap always-on hardware in testing environments with >a >pre-shared key. > >(And if anyone wonders about a use case for Ironic without external >power >control ... I can only think of one situation where I would rationally >ever >want to have a control-plane agent running inside a user-instance: I am >both the operator and the only user of the cloud.) > > >---------------- > >In summary, as far as I can tell, all of the problem statements upon >which >the FuelAgent proposal are based are solvable through incremental >changes >in existing drivers, or out of scope for the project entirely. As >another >software-based deploy agent, FuelAgent would duplicate the majority of >the >functionality which ironic-python-agent has today. > >Ironic's driver ecosystem benefits from a diversity of >hardware-enablement >drivers. 
Today, we have two divergent software deployment drivers which >approach image deployment differently: "agent" drivers use a local >agent to >prepare a system and download the image; "pxe" drivers use a remote >agent >and copy the image over iSCSI. I don't understand how a second driver >which >duplicates the functionality we already have, and shares the same goals >as >the drivers we already have, is beneficial to the project. > >Doing the same thing twice just increases the burden on the team; we're >all >working on the same problems, so let's do it together. > >-Devananda > > >[0] >https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition > >[1] https://review.openstack.org/#/c/107981/ > >[2] >https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst > > >[3] >http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py > >[4] >http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py > >[5] >http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py > > >------------------------------------------------------------------------ > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From blk at acm.org Tue Dec 9 01:03:58 2014 From: blk at acm.org (Brant Knudson) Date: Mon, 8 Dec 2014 19:03:58 -0600 Subject: [openstack-dev] [keystone][all] Max Complexity Check Considered Harmful Message-ID: Not too long ago projects added a maximum complexity check to tox.ini, for example keystone has "max-complexity=24". Seemed like a good idea at the time, but in a recent attempt to lower the maximum complexity check in keystone[1][2], I found that the maximum complexity check can actually lead to less understandable code. This is because the check includes an embedded function's "complexity" in the function that it's in. 
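A minimal sketch of the pattern at issue. Both variants below behave identically (the names are illustrative, not taken from the keystone review); the difference is that mccabe-style counting charges the nested helper's branches to the enclosing function, while the top-level helper is scored on its own:

```python
def _matches_filters(user, filters):
    # Top-level helper: scored separately by a mccabe-style check,
    # but now callable (in principle) from anywhere in the module.
    for attr, wanted in filters.items():
        if user.get(attr) != wanted:
            return False
    return True

def list_users_toplevel(users, filters):
    return [u for u in users if _matches_filters(u, filters)]

def list_users_nested(users, filters):
    # Embedded helper: clearly called from only one place and easier to
    # reason about locally, yet its branches still count toward
    # list_users_nested's reported complexity.
    def matches(user):
        for attr, wanted in filters.items():
            if user.get(attr) != wanted:
                return False
        return True
    return [u for u in users if matches(u)]
```

Under a max-complexity limit, only the first variant reduces the enclosing function's score, which is exactly the pressure toward top-level functions described below.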
The way I would have lowered the complexity of the function in keystone is to extract the complex part into a new function. This can make the existing function much easier to understand for all the reasons that one defines a function for code. Since this new function is obviously only called from the function it's currently in, it makes sense to keep the new function inside the existing function. It's simpler to think about an embedded function because then you know it's only called from one place. The problem is, because of the existing complexity check behavior, this doesn't lower the "complexity" according to the complexity check, so you wind up making the function a new top-level function, and now a reader has to assume that the function could be called from anywhere and has to be much more cautious about changes to the function. Since the complexity check can lead to code that's harder to understand, it must be considered harmful and should be removed, at least until the incorrect behavior is corrected. [1] https://review.openstack.org/#/c/139835/ [2] https://review.openstack.org/#/c/139836/ [3] https://review.openstack.org/#/c/140188/ - Brant -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbalukoff at bluebox.net Tue Dec 9 01:48:21 2014 From: sbalukoff at bluebox.net (Stephen Balukoff) Date: Mon, 8 Dec 2014 17:48:21 -0800 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration In-Reply-To: References: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> <9EDBC83C95615E4A97C964A30A3E18AC2F695F28@ORD1EXD01.RACKSPACE.CORP> Message-ID: For what it's worth, I know that the Octavia project will need something which can do more advanced layer-3 networking in order to deliver an ACTIVE-ACTIVE topology of load balancing VMs / containers / machines.
That's still a "down the road" feature for us, but it would be great to be able to do more advanced layer-3 networking in earlier releases of Octavia as well. (Without this, we might have to go through back doors to get Neutron to do what we need it to, and I'd rather avoid that.) I'm definitely up for learning more about your proposal for this project, though I've not had any practical experience with Ryu yet. I would also like to see whether it's possible to do the sort of advanced layer-3 networking you've described without using OVS. (We have found that OVS tends to be not quite mature / stable enough for our needs and have moved most of our clouds to use ML2 / standard linux bridging.) Carl: I'll also take a look at the two gerrit reviews you've linked. Is this week's L3 meeting not happening then? (And man-- I wish it were an hour or two later in the day. Coming at y'all from PST timezone here.) Stephen On Mon, Dec 8, 2014 at 11:57 AM, Carl Baldwin wrote: > Ryan, > > I'll be traveling around the time of the L3 meeting this week. My > flight leaves 40 minutes after the meeting and I might have trouble > attending. It might be best to put it off a week or to plan another > time -- maybe Friday -- when we could discuss it in IRC or in a > Hangout. > > Carl > > On Mon, Dec 8, 2014 at 8:43 AM, Ryan Clevenger > wrote: > > Thanks for getting back Carl. I think we may be able to make this week's > > meeting. Jason Kölker is the engineer doing all of the lifting on this > side. > > Let me get with him to review what you all have so far and check our > > availability.
> > > > ________________________________________ > > > > Ryan Clevenger > > Manager, Cloud Engineering - US > > m: 678.548.7261 > > e: ryan.clevenger at rackspace.com > > > > ________________________________ > > From: Carl Baldwin [carl at ecbaldwin.net] > > Sent: Sunday, December 07, 2014 4:04 PM > > To: OpenStack Development Mailing List > > Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea > solicitation > > and collaboration > > > > Ryan, > > > > I have been working with the L3 sub team in this direction. Progress has > > been slow because of other priorities but we have made some. I have > written > > a blueprint detailing some changes needed to the code to enable the > > flexibility to one day run floating IPs on an l3 routed network [1]. > Jaime > > has been working on one that integrates ryu (or other speakers) with > neutron > > [2]. Dvr was also a step in this direction. > > > > I'd like to invite you to the l3 weekly meeting [3] to discuss further. > I'm > > very happy to see interest in this area and have someone new to > collaborate. > > > > Carl > > > > [1] https://review.openstack.org/#/c/88619/ > > [2] https://review.openstack.org/#/c/125401/ > > [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam > > > > On Dec 3, 2014 4:04 PM, "Ryan Clevenger" > > wrote: > >> > >> Hi, > >> > >> At Rackspace, we have a need to create a higher level networking service > >> primarily for the purpose of creating a Floating IP solution in our > >> environment. The current solutions for Floating IPs, being tied to > plugin > >> implementations, does not meet our needs at scale for the following > reasons: > >> > >> 1. Limited endpoint H/A mainly targeting failover only and not > >> multi-active endpoints, > >> 2. Lack of noisy neighbor and DDOS mitigation, > >> 3.
IP fragmentation (with cells, public connectivity is terminated > inside > >> each cell leading to fragmentation and IP stranding when cell > CPU/Memory use > >> doesn't line up with allocated IP blocks. Abstracting public > connectivity > >> away from nova installations allows for much more efficient use of those > >> precious IPv4 blocks). > >> 4. Diversity in transit (multiple encapsulation and transit types on a > per > >> floating ip basis). > >> > >> We realize that network infrastructures are often unique and such a > >> solution would likely diverge from provider to provider. However, we > would > >> love to collaborate with the community to see if such a project could be > >> built that would meet the needs of providers at scale. We believe that, > at > >> its core, this solution would boil down to terminating north<->south > traffic > >> temporarily at a massively horizontally scalable centralized core and > then > >> encapsulating traffic east<->west to a specific host based on the > >> association setup via the current L3 router's extension's 'floatingips' > >> resource. > >> > >> Our current idea, involves using Open vSwitch for header rewriting and > >> tunnel encapsulation combined with a set of Ryu applications for > management: > >> > >> https://i.imgur.com/bivSdcC.png > >> > >> The Ryu application uses Ryu's BGP support to announce up to the Public > >> Routing layer individual floating ips (/32's or /128's) which are then > >> summarized and announced to the rest of the datacenter. If a particular > >> floating ip is experiencing unusually large traffic (DDOS, slashdot > effect, > >> etc.), the Ryu application could change the announcements up to the > Public > >> layer to shift that traffic to dedicated hosts setup for that purpose. > It > >> also announces a single /32 "Tunnel Endpoint" ip downstream to the > TunnelNet > >> Routing system which provides transit to and from the cells and their > >> hypervisors. 
Since traffic from either direction can then end up on any > of > >> the FLIP hosts, a simple flow table to modify the MAC and IP in either > the > >> SRC or DST fields (depending on traffic direction) allows the system to > be > >> completely stateless. We have proven this out (with static routing and > >> flows) to work reliably in a small lab setup. > >> > >> On the hypervisor side, we currently plumb networks into separate OVS > >> bridges. Another Ryu application would control the bridge that handles > >> overlay networking to selectively divert traffic destined for the > default > >> gateway up to the FLIP NAT systems, taking into account any configured > >> logical routing and local L2 traffic to pass out into the existing > overlay > >> fabric undisturbed. > >> > >> Adding in support for L2VPN EVPN > >> (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN > >> Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) > to the > >> Ryu BGP speaker will allow the hypervisor side Ryu application to > advertise > >> up to the FLIP system reachability information to take into account VM > >> failover, live-migrate, and supported encapsulation types. We believe > that > >> decoupling the tunnel endpoint discovery from the control plane > >> (Nova/Neutron) will provide for a more robust solution as well as allow > for > >> use outside of openstack if desired. 
> >> > >> ________________________________________ > >> > >> Ryan Clevenger > >> Manager, Cloud Engineering - US > >> m: 678.548.7261 > >> e: ryan.clevenger at rackspace.com > >> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From m4d.coder at gmail.com Tue Dec 9 02:25:00 2014 From: m4d.coder at gmail.com (W Chan) Date: Mon, 8 Dec 2014 18:25:00 -0800 Subject: [openstack-dev] [Mistral] Event Subscription Message-ID: Renat, On sending events to an "exchange", I mean an exchange on the transport (i.e. a RabbitMQ exchange, https://www.rabbitmq.com/tutorials/amqp-concepts.html). For the implementation we can probably explore the notification feature in oslo.messaging. On second thought, though, that would limit the consumers to trusted subsystems or services. If we want the event consumers to be any 3rd party, including untrusted ones, then maybe we should keep it as HTTP calls. Winson From dannchoi at cisco.com Tue Dec 9 02:43:18 2014 From: dannchoi at cisco.com (Danny Choi (dannchoi)) Date: Tue, 9 Dec 2014 02:43:18 +0000 Subject: [openstack-dev] [qa] How to delete a VM which is in ERROR state? Message-ID: Both 'delete' and 'force-delete' did not work for me; they failed to remove the VM.
Danny Date: Sun, 7 Dec 2014 21:17:30 +0530 From: foss geek > To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state? Message-ID: > Content-Type: text/plain; charset="utf-8" Also try with nova force-delete after reset: $ nova help force-delete usage: nova force-delete Force delete a server. Positional arguments: Name or ID of server. -- Thanks & Regards E-Mail: thefossgeek at gmail.com IRC: neophy Blog : http://lmohanphy.livejournal.com/ On Sun, Dec 7, 2014 at 9:10 PM, foss geek > wrote: Have you tried to delete after reset? # nova reset-state --active # nova delete It works well for me if the VM state is error state. -- Thanks & Regards E-Mail: thefossgeek at gmail.com IRC: neophy Blog : http://lmohanphy.livejournal.com/ On Sun, Dec 7, 2014 at 7:17 PM, Danny Choi (dannchoi) > wrote: That does not work. It put the VM in ACTIVE Status, but in NOSTATE Power State. Subsequent delete still won?t remove the VM. +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ACTIVE | - | NOSTATE | | Regards, Danny -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikal at stillhq.com Tue Dec 9 03:38:56 2014 From: mikal at stillhq.com (Michael Still) Date: Tue, 9 Dec 2014 14:38:56 +1100 Subject: [openstack-dev] [Compute][Nova] Mid-cycle meetup details for kilo In-Reply-To: References: Message-ID: Wow, now we're up to 75% "sold" in just five hours. So... If you're stalling on registering please don't, as it sounds like I might need to ask the venue for more seats. 
Thanks, Michael On Tue, Dec 9, 2014 at 9:10 AM, Michael Still wrote: > Just a reminder that registration for the Nova mid-cycle is now open. > We're currently 50% "sold", so early signup will help us work out if > we need to add more seats or not. > > https://www.eventbrite.com.au/e/openstack-nova-kilo-mid-cycle-developer-meetup-tickets-14767182039 > > Thanks, > Michael > > On Thu, Dec 4, 2014 at 10:18 AM, Michael Still wrote: >> Sigh, sorry. It is of course the Kilo meetup: >> >> https://www.eventbrite.com.au/e/openstack-nova-kilo-mid-cycle-developer-meetup-tickets-14767182039 >> >> Michael >> >> On Thu, Dec 4, 2014 at 10:16 AM, Michael Still wrote: >>> I've just created the signup page for this event. Its here: >>> >>> https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-14767182039 >>> >>> Cheers, >>> Michael >>> >>> On Wed, Oct 15, 2014 at 3:45 PM, Michael Still wrote: >>>> Hi. >>>> >>>> I am pleased to announce details for the Kilo Compute mid-cycle >>>> meetup, but first some background about how we got here. >>>> >>>> Two companies actively involved in OpenStack came forward with offers >>>> to host the Compute meetup. However, one of those companies has >>>> gracefully decided to wait until the L release because of the cold >>>> conditions are their proposed location (think several feet of snow). >>>> >>>> So instead, we're left with California! >>>> >>>> The mid-cycle meetup will be from 26 to 28 January 2015, at the VMWare >>>> offices in Palo Alto California. >>>> >>>> Thanks for VMWare for stepping up and offering to host. It sure does >>>> make my life easy. >>>> >>>> More details will be forthcoming closer to the event, but I wanted to >>>> give people as much notice as possible about dates and location so >>>> they can start negotiating travel if they want to come. 
>>>> >>>> Cheers, >>>> Michael >>>> >>>> -- >>>> Rackspace Australia >>> >>> >>> >>> -- >>> Rackspace Australia >> >> >> >> -- >> Rackspace Australia > > > > -- > Rackspace Australia -- Rackspace Australia From skywalker.nick at gmail.com Tue Dec 9 04:25:49 2014 From: skywalker.nick at gmail.com (Li Ma) Date: Tue, 09 Dec 2014 12:25:49 +0800 Subject: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps In-Reply-To: <546B4DC7.9090309@ubuntu.com> References: <546A058C.7090800@ubuntu.com> <1C39F2D6-C600-4BA5-8F95-79E2E7DA7172@doughellmann.com> <6F5E7E0C-656F-4616-A816-BC2E340299AD@doughellmann.com> <37ecc12f660ddca2b5a76930c06c1ce6@sileht.net> <546B4DC7.9090309@ubuntu.com> Message-ID: <548679CD.4030205@gmail.com> Hi all, I tried to deploy zeromq by devstack and it definitely failed with lots of problems, like dependencies, topics, matchmaker setup, etc. I've already registered a blueprint for devstack-zeromq [1]. Besides, I suggest to build a wiki page in order to trace all the workitems related with ZeroMQ. The general sections may be [Why ZeroMQ], [Current Bugs & Reviews], [Future Plan & Blueprints], [Discussions], [Resources], etc. Any comments? [1] https://blueprints.launchpad.net/devstack/+spec/zeromq cheers, Li Ma On 2014/11/18 21:46, James Page wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > On 18/11/14 00:55, Denis Makogon wrote: >> >> So if zmq driver support in devstack is fixed, we can easily add a >> new job to run them in the same way. >> >> >> Btw this is a good question. I will take look at current state of >> zmq in devstack. > I don't think its that far off and its broken rather than missing - > the rpc backend code needs updating to use oslo.messaging rather than > project specific copies of the rpc common codebase (pre oslo). > Devstack should be able to run with the local matchmaker in most > scenarios but it looks like there was support for the redis matchmaker > as well. 
> > If you could take some time to fixup that would be awesome! > > - -- > James Page > Ubuntu and Debian Developer > james.page at ubuntu.com > jamespage at debian.org From donald.d.dugger at intel.com Tue Dec 9 05:11:35 2014 From: donald.d.dugger at intel.com (Dugger, Donald D) Date: Tue, 9 Dec 2014 05:11:35 +0000 Subject: [openstack-dev] [gantt] Scheduler sub-group meeting agenda 12/9 Message-ID: <6AF484C0160C61439DE06F17668F3BCB53448566@ORSMSX114.amr.corp.intel.com> Meeting on #openstack-meeting at 1500 UTC (8:00AM MST) 1) Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo -- Don Dugger "Censeo Toto nos in Kansa esse decisse." - D. Gale Ph: 303/443-3786 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sushma_korati at persistent.com Tue Dec 9 05:44:28 2014 From: sushma_korati at persistent.com (Sushma Korati) Date: Tue, 9 Dec 2014 05:44:28 +0000 Subject: [openstack-dev] [Mistral] Query on creating multiple resources In-Reply-To: <1418050120681.46696@persistent.com> References: <1418048204426.95510@persistent.com>, <1418048476946.19042@persistent.com>, <1418050120681.46696@persistent.com> Message-ID: <1418104273303.62515@persistent.com> Hello All, Can we create multiple resources using a single task, like multiple keypairs or security-groups or networks etc? I am trying to extend the existing "create_vm" workflow, such that it accepts a list of security groups. In the workflow, before create_vm I am trying to create the security group if it does not exist. Just to test the security group functionality individually I wrote a sample workflow: -------------------- version: '2.0' name: secgroup_actions workflows: create_security_group: type: direct input: - name - description tasks: create_secgroups: action: nova.security_groups_create name={$.name} description={$.description} ------------ This is a straight forward workflow, but I am unable figure out how to pass multiple security groups to the above workflow. I tried passing multiple dicts in context file but it did not work. ------ { "name": "secgrp1", "description": "using mistral" }, { "name": "secgrp2", "description": "using mistral" } ----- Is there any way to modify this workflow such that it creates more than one security group? Please help. Regards, Sushma DISCLAIMER ========== This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. 
If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails. From keshava.a at hp.com Tue Dec 9 05:56:28 2014 From: keshava.a at hp.com (A, Keshava) Date: Tue, 9 Dec 2014 05:56:28 +0000 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration In-Reply-To: References: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> <9EDBC83C95615E4A97C964A30A3E18AC2F695F28@ORD1EXD01.RACKSPACE.CORP> Message-ID: <891761EAFA335D44AD1FFDB9B4A8C063D7DF66@G4W3216.americas.hpqcorp.net> Stephen, Interested to know what an "ACTIVE-ACTIVE topology of load balancing VMs" is. What is the scenario: is it a Service-VM (for NFV) or a tenant VM? Curious to know the background of these thoughts. keshava From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] Sent: Tuesday, December 09, 2014 7:18 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration For what it's worth, I know that the Octavia project will need something which can do more advanced layer-3 networking in order to deliver an ACTIVE-ACTIVE topology of load balancing VMs / containers / machines. That's still a "down the road" feature for us, but it would be great to be able to do more advanced layer-3 networking in earlier releases of Octavia as well. (Without this, we might have to go through back doors to get Neutron to do what we need it to, and I'd rather avoid that.) I'm definitely up for learning more about your proposal for this project, though I've not had any practical experience with Ryu yet. I would also like to see whether it's possible to do the sort of advanced layer-3 networking you've described without using OVS.
(We have found that OVS tends to be not quite mature / stable enough for our needs and have moved most of our clouds to use ML2 / standard linux bridging.) Carl: I'll also take a look at the two gerrit reviews you've linked. Is this week's L3 meeting not happening then? (And man-- I wish it were an hour or two later in the day. Coming at y'all from PST timezone here.) Stephen On Mon, Dec 8, 2014 at 11:57 AM, Carl Baldwin > wrote: Ryan, I'll be traveling around the time of the L3 meeting this week. My flight leaves 40 minutes after the meeting and I might have trouble attending. It might be best to put it off a week or to plan another time -- maybe Friday -- when we could discuss it in IRC or in a Hangout. Carl On Mon, Dec 8, 2014 at 8:43 AM, Ryan Clevenger > wrote: > Thanks for getting back Carl. I think we may be able to make this weeks > meeting. Jason K?lker is the engineer doing all of the lifting on this side. > Let me get with him to review what you all have so far and check our > availability. > > ________________________________________ > > Ryan Clevenger > Manager, Cloud Engineering - US > m: 678.548.7261 > e: ryan.clevenger at rackspace.com > > ________________________________ > From: Carl Baldwin [carl at ecbaldwin.net] > Sent: Sunday, December 07, 2014 4:04 PM > To: OpenStack Development Mailing List > Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation > and collaboration > > Ryan, > > I have been working with the L3 sub team in this direction. Progress has > been slow because of other priorities but we have made some. I have written > a blueprint detailing some changes needed to the code to enable the > flexibility to one day run glaring ups on an l3 routed network [1]. Jaime > has been working on one that integrates ryu (or other speakers) with neutron > [2]. Dvr was also a step in this direction. > > I'd like to invite you to the l3 weekly meeting [3] to discuss further. 
I'm > very happy to see interest in this area and have someone new to collaborate. > > Carl > > [1] https://review.openstack.org/#/c/88619/ > [2] https://review.openstack.org/#/c/125401/ > [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam > > On Dec 3, 2014 4:04 PM, "Ryan Clevenger" > > wrote: >> >> Hi, >> >> At Rackspace, we have a need to create a higher level networking service >> primarily for the purpose of creating a Floating IP solution in our >> environment. The current solutions for Floating IPs, being tied to plugin >> implementations, does not meet our needs at scale for the following reasons: >> >> 1. Limited endpoint H/A mainly targeting failover only and not >> multi-active endpoints, >> 2. Lack of noisy neighbor and DDOS mitigation, >> 3. IP fragmentation (with cells, public connectivity is terminated inside >> each cell leading to fragmentation and IP stranding when cell CPU/Memory use >> doesn't line up with allocated IP blocks. Abstracting public connectivity >> away from nova installations allows for much more efficient use of those >> precious IPv4 blocks). >> 4. Diversity in transit (multiple encapsulation and transit types on a per >> floating ip basis). >> >> We realize that network infrastructures are often unique and such a >> solution would likely diverge from provider to provider. However, we would >> love to collaborate with the community to see if such a project could be >> built that would meet the needs of providers at scale. We believe that, at >> its core, this solution would boil down to terminating north<->south traffic >> temporarily at a massively horizontally scalable centralized core and then >> encapsulating traffic east<->west to a specific host based on the >> association setup via the current L3 router's extension's 'floatingips' >> resource. 
>> >> Our current idea, involves using Open vSwitch for header rewriting and >> tunnel encapsulation combined with a set of Ryu applications for management: >> >> https://i.imgur.com/bivSdcC.png >> >> The Ryu application uses Ryu's BGP support to announce up to the Public >> Routing layer individual floating ips (/32's or /128's) which are then >> summarized and announced to the rest of the datacenter. If a particular >> floating ip is experiencing unusually large traffic (DDOS, slashdot effect, >> etc.), the Ryu application could change the announcements up to the Public >> layer to shift that traffic to dedicated hosts setup for that purpose. It >> also announces a single /32 "Tunnel Endpoint" ip downstream to the TunnelNet >> Routing system which provides transit to and from the cells and their >> hypervisors. Since traffic from either direction can then end up on any of >> the FLIP hosts, a simple flow table to modify the MAC and IP in either the >> SRC or DST fields (depending on traffic direction) allows the system to be >> completely stateless. We have proven this out (with static routing and >> flows) to work reliably in a small lab setup. >> >> On the hypervisor side, we currently plumb networks into separate OVS >> bridges. Another Ryu application would control the bridge that handles >> overlay networking to selectively divert traffic destined for the default >> gateway up to the FLIP NAT systems, taking into account any configured >> logical routing and local L2 traffic to pass out into the existing overlay >> fabric undisturbed. >> >> Adding in support for L2VPN EVPN >> (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN >> Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to the >> Ryu BGP speaker will allow the hypervisor side Ryu application to advertise >> up to the FLIP system reachability information to take into account VM >> failover, live-migrate, and supported encapsulation types. 
We believe that >> decoupling the tunnel endpoint discovery from the control plane >> (Nova/Neutron) will provide for a more robust solution as well as allow for >> use outside of openstack if desired. >> >> ________________________________________ >> >> Ryan Clevenger >> Manager, Cloud Engineering - US >> m: 678.548.7261 >> e: ryan.clevenger at rackspace.com >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sushma_korati at persistent.com Tue Dec 9 05:57:35 2014 From: sushma_korati at persistent.com (Sushma Korati) Date: Tue, 9 Dec 2014 05:57:35 +0000 Subject: [openstack-dev] [Mistral] Query on creating multiple resources In-Reply-To: References: <1418048204426.95510@persistent.com> <1418048476946.19042@persistent.com> <1418050120681.46696@persistent.com> <5485C560.9060605@redhat.com> , Message-ID: <1418105060569.62922@persistent.com> Hi, Thank you guys. Yes I am able to do this with heat, but I faced issues while trying the same with mistral. As suggested will try with the latest mistral branch. Thank you once again. 
Regards, Sushma ________________________________ From: Georgy Okrokvertskhov [mailto:gokrokvertskhov at mirantis.com] Sent: Tuesday, December 09, 2014 6:07 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Mistral] Query on creating multiple resources Hi Sushma, Did you explore Heat templates? As Zane mentioned you can do this via Heat template without writing any workflows. Do you have any specific use cases which you can't solve with Heat template? Create VM workflow was a demo example. Mistral potentially can be used by Heat or other orchestration tools to do actual interaction with API, but for user it might be easier to use Heat functionality. Thanks, Georgy On Mon, Dec 8, 2014 at 7:54 AM, Nikolay Makhotkin > wrote: Hi, Sushma! Can we create multiple resources using a single task, like multiple keypairs or security-groups or networks etc? Yes, we can. This feature is in the development now and it is considered as experimental - https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections Just clone the last master branch from mistral. You can specify "for-each" task property and provide the array of data to your workflow: -------------------- version: '2.0' name: secgroup_actions workflows: create_security_group: type: direct input: - array_with_names_and_descriptions tasks: create_secgroups: for-each: data: $.array_with_names_and_descriptions action: nova.security_groups_create name={$.data.name} description={$.data.description} ------------ On Mon, Dec 8, 2014 at 6:36 PM, Zane Bitter > wrote: On 08/12/14 09:41, Sushma Korati wrote: Can we create multiple resources using a single task, like multiple keypairs or security-groups or networks etc? Define them in a Heat template and create the Heat stack as a single task. 
- ZB _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best Regards, Nikolay _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 DISCLAIMER ========== This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails. -------------- next part -------------- An HTML attachment was scrubbed... URL: From trinath.somanchi at freescale.com Tue Dec 9 05:57:44 2014 From: trinath.somanchi at freescale.com (trinath.somanchi at freescale.com) Date: Tue, 9 Dec 2014 05:57:44 +0000 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: Congratulation Kevin and Henry ? -- Trinath Somanchi - B39208 trinath.somanchi at freescale.com | extn: 4048 From: Kyle Mestery [mailto:mestery at mestery.com] Sent: Monday, December 08, 2014 11:32 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] Changes to the core team On Tue, Dec 2, 2014 at 9:59 AM, Kyle Mestery > wrote: Now that we're in the thick of working hard on Kilo deliverables, I'd like to make some changes to the neutron core team. 
Reviews are the most important part of being a core reviewer, so we need to ensure cores are doing reviews. The stats for the 180 day period [1] indicate some changes are needed for cores who are no longer reviewing. First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from neutron-core. Bob and Nachi have been core members for a while now. They have contributed to Neutron over the years in reviews, code and leading sub-teams. I'd like to thank them for all that they have done over the years. I'd also like to propose that should they start reviewing more going forward the core team looks to fast track them back into neutron-core. But for now, their review stats place them below the rest of the team for 180 days. As part of the changes, I'd also like to propose two new members to neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have been very active in reviews, meetings, and code for a while now. Henry lead the DB team which fixed Neutron DB migrations during Juno. Kevin has been actively working across all of Neutron, he's done some great work on security fixes and stability fixes in particular. Their comments in reviews are insightful and they have helped to onboard new reviewers and taken the time to work with people on their patches. Existing neutron cores, please vote +1/-1 for the addition of Henry and Kevin to the core team. Enough time has passed now, and Kevin and Henry have received enough +1 votes. So I'd like to welcome them to the core team! Thanks, Kyle Thanks! Kyle [1] http://stackalytics.com/report/contribution/neutron-group/180 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yamamoto at valinux.co.jp Tue Dec 9 05:58:04 2014 From: yamamoto at valinux.co.jp (YAMAMOTO Takashi) Date: Tue, 9 Dec 2014 14:58:04 +0900 (JST) Subject: [openstack-dev] [Neutron][OVS] ovs-ofctl-to-python blueprint Message-ID: <20141209055804.536E57094A@kuma.localdomain> hi, here's a blueprint to make OVS agent use Ryu to talk with OVS. https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python https://review.openstack.org/#/c/138980/ (kilo spec) given that ML2/OVS is one of the most popular plugins and the proposal has a few possible controversial points, i want to ask wider opinions. - it introduces a new requirement for OVS agent. (Ryu) - it makes OVS agent require newer OVS version than it currently does. - what to do for xenapi support is still under investigation/research. - possible security impact. please comment on gerrit if you have any opinions. thank you. YAMAMOTO Takashi From soulxu at gmail.com Tue Dec 9 06:28:33 2014 From: soulxu at gmail.com (Alex Xu) Date: Tue, 9 Dec 2014 14:28:33 +0800 Subject: [openstack-dev] [api] Using query string or request body to pass parameter In-Reply-To: <1418083127.6209.11.camel@einstein.kev> References: <54854039.3030405@linux.vnet.ibm.com> <1418058695.6209.0.camel@einstein.kev> <1418083127.6209.11.camel@einstein.kev> Message-ID: Kevin, thanks for the info! I agree with you. RFC is the authority. use payload in the DELETE isn't good way. 2014-12-09 7:58 GMT+08:00 Kevin L. Mitchell : > On Tue, 2014-12-09 at 07:38 +0800, Alex Xu wrote: > > Not sure all, nova is limited > > at > https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L79 > > That under our control. > > It is, but the client frameworks aren't, and some of them prohibit > sending a body with a DELETE request. 
Further, RFC7231 has this to say > about DELETE request bodies: > > A payload within a DELETE request message has no defined semantics; > sending a payload body on a DELETE request might cause some > existing > implementations to reject the request. > > (?4.3.5) > > I think we have to conclude that, if we need a request body, we cannot > use the DELETE method. We can modify the operation, such as setting a > "force" flag, with a query parameter on the URI, but a request body > should be considered out of bounds with respect to DELETE. > > > Maybe not just ask question for delete, also for other method. > > > > 2014-12-09 1:11 GMT+08:00 Kevin L. Mitchell < > kevin.mitchell at rackspace.com>: > > On Mon, 2014-12-08 at 14:07 +0800, Eli Qiao wrote: > > > I wonder if we can use body in delete, currently , there isn't > any > > > case used in v2/v3 api. > > > > No, many frameworks raise an error if you try to include a body > with a > > DELETE request. > > -- > > Kevin L. Mitchell > > Rackspace > > -- > Kevin L. Mitchell > Rackspace > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xianchaobo at huawei.com Tue Dec 9 06:29:50 2014 From: xianchaobo at huawei.com (xianchaobo) Date: Tue, 9 Dec 2014 06:29:50 +0000 Subject: [openstack-dev] [Ironic] Some questions about Ironic service Message-ID: Hello, all I'm trying to install and configure Ironic service, something confused me. I create two neutron networks, public network and private network. Private network is used to deploy physical machines Public network is used to provide floating ip. (1) Private network type can be VLAN or VXLAN? (In install guide, the network type is flat) (2) The network of deployed physical machines can be managed by neutron? 
(3) Can different tenants have their own networks to manage physical machines? (4) Does Ironic provide some mechanism for deployed physical machines to use storage such as shared storage or Cinder volumes? Thanks, XianChaobo From gpeeyush at linux.vnet.ibm.com Tue Dec 9 06:55:39 2014 From: gpeeyush at linux.vnet.ibm.com (Peeyush Gupta) Date: Tue, 09 Dec 2014 12:25:39 +0530 Subject: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader? Message-ID: <54869CEB.4040602@linux.vnet.ibm.com> Hi all, So, I have set up a devstack ironic setup for baremetal deployment. I have been able to deploy a baremetal node successfully using the pxe_ipmitool driver. Now, I am trying to boot a server where I already have a bootloader, i.e. I don't need pxelinux to go and fetch the kernel and initrd images for me. I want to transfer them directly. I checked out the code and figured out that there are dhcp opts available that are modified using pxe_utils.py; changing them didn't help. Then I moved to ironic.conf, but there I only see an option to set pxe_bootfile_name, which is exactly what I want to avoid. Can anyone please help me with this situation? I don't want to go through the pxelinux.0 bootloader, I just want to transfer the kernel and initrd images directly. Thanks. -- Peeyush Gupta gpeeyush at linux.vnet.ibm.com From sudhakar-babu.gariganti at hp.com Tue Dec 9 07:07:20 2014 From: sudhakar-babu.gariganti at hp.com (Gariganti, Sudhakar Babu) Date: Tue, 9 Dec 2014 07:07:20 +0000 Subject: [openstack-dev] [neutron] Changes to the core team In-Reply-To: References: Message-ID: Congrats Kevin and Henry!
From: Kyle Mestery [mailto:mestery at mestery.com] Sent: Monday, December 08, 2014 11:32 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] Changes to the core team On Tue, Dec 2, 2014 at 9:59 AM, Kyle Mestery > wrote: Now that we're in the thick of working hard on Kilo deliverables, I'd like to make some changes to the neutron core team. Reviews are the most important part of being a core reviewer, so we need to ensure cores are doing reviews. The stats for the 180 day period [1] indicate some changes are needed for cores who are no longer reviewing. First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from neutron-core. Bob and Nachi have been core members for a while now. They have contributed to Neutron over the years in reviews, code and leading sub-teams. I'd like to thank them for all that they have done over the years. I'd also like to propose that should they start reviewing more going forward the core team looks to fast track them back into neutron-core. But for now, their review stats place them below the rest of the team for 180 days. As part of the changes, I'd also like to propose two new members to neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have been very active in reviews, meetings, and code for a while now. Henry lead the DB team which fixed Neutron DB migrations during Juno. Kevin has been actively working across all of Neutron, he's done some great work on security fixes and stability fixes in particular. Their comments in reviews are insightful and they have helped to onboard new reviewers and taken the time to work with people on their patches. Existing neutron cores, please vote +1/-1 for the addition of Henry and Kevin to the core team. Enough time has passed now, and Kevin and Henry have received enough +1 votes. So I'd like to welcome them to the core team! Thanks, Kyle Thanks! 
Kyle [1] http://stackalytics.com/report/contribution/neutron-group/180 -------------- next part -------------- An HTML attachment was scrubbed... URL: From SamuelB at Radware.com Tue Dec 9 07:28:03 2014 From: SamuelB at Radware.com (Samuel Bercovici) Date: Tue, 9 Dec 2014 07:28:03 +0000 Subject: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. In-Reply-To: References: <1416867738.3960.19.camel@localhost> <1417737958.4577.25.camel@localhost> Message-ID: Hi, I agree that the most important thing is to conclude how status properties are being managed and handled so they will remain consistent as we move along. I am fine with starting with a simple model and expanding as needed. The L7 implementation is ready and waiting for the rest of the model to get in, so pool sharing under a listener is something that we should solve now. I think that pool sharing under listeners connected to the same LB is more common than what you describe. -Sam. From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] Sent: Tuesday, December 09, 2014 12:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. So... I should probably note that I see the case where a user actually shares objects as being the exception. I expect that 90% of deployments will never need to share objects, except for a few cases-- those (1:N) relationships are: * Loadbalancers must be able to have many Listeners * When L7 functionality is introduced, L7 policies must be able to refer to the same Pool under a single Listener. (That is to say, sharing Pools under the scope of a single Listener makes sense, but only after L7 policies are introduced.) 
I specifically see the following kind of sharing having near zero demand: * Listeners shared across multiple loadbalancers * Pools shared across multiple listeners * Members shared across multiple pools So, despite the fact that sharing doesn't make status reporting any more or less complex, I'm still in favor of starting with 1:1 relationships between most kinds of objects and then changing those to 1:N or M:N as we get user demand for this. As I said in my first response, allowing too many many-to-many relationships feels like a solution to a problem that doesn't really exist, and introduces a lot of unnecessary complexity. Stephen On Sun, Dec 7, 2014 at 11:43 PM, Samuel Bercovici > wrote: +1 From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] Sent: Friday, December 05, 2014 7:59 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. German-- but the point is that sharing apparently has no effect on the number of permutations for status information. The only difference here is that without sharing it's more work for the user to maintain and modify trees of objects. On Fri, Dec 5, 2014 at 9:36 AM, Eichberger, German > wrote: Hi Brandon + Stephen, Having all those permutations (and potentially testing them) made us lean against the sharing case in the first place. It's just a lot of extra work for only a small number of our customers. German From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] Sent: Thursday, December 04, 2014 9:17 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. Hi Brandon, Yeah, in your example, member1 could potentially have 8 different statuses (and this is a small example!)... If that member starts flapping, it means that every time it flaps there are 8 notifications being passed upstream. 
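The combinatorics behind that count can be sketched against Brandon's example topology (quoted further down the thread). This is a purely illustrative script using his object names; it is not the actual Neutron LBaaS data model:

```python
# Hypothetical sketch of the status-permutation problem using Brandon's
# example topology; NOT the actual Neutron LBaaS data model.
lbs = {"lb1": ["listener1", "listener2"], "lb2": ["listener1"]}
listeners = {"listener1": ["pool1", "pool2"], "listener2": ["pool1"]}
pools = {"pool1": ["member1", "member2"], "pool2": ["member1"]}

def member_status_paths(member):
    """Every (lb, listener, pool) path under which `member` needs its own status."""
    paths = []
    for lb in sorted(lbs):
        for listener in lbs[lb]:
            for pool in listeners[listener]:
                if member in pools[pool]:
                    paths.append((lb, listener, pool))
    return paths

for m in ("member1", "member2"):
    print(m, member_status_paths(m))
# member1 appears under 5 distinct paths and member2 under 3, so with this
# way of counting, the two members together account for 8 status entries.
```

Counted this way, a single back-end server flap can fan out into 8 status updates across the tree, which is the kind of notification amplification being discussed here.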
Note that this problem actually doesn't get any better if we're not sharing objects but are just duplicating them (ie. not sharing objects but the user makes references to the same back-end machine as 8 different members.) To be honest, I don't see sharing entities at many levels like this being the rule for most of our installations-- maybe a few percentage points of installations will do an excessive sharing of members, but I doubt it. So really, even though reporting status like this is likely to generate a pretty big tree of data, I don't think this is actually a problem, eh. And I don't see sharing entities actually reducing the workload of what needs to happen behind the scenes. (It just allows us to conceal more of this work from the user.) Stephen On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan > wrote: Sorry it's taken me a while to respond to this. So I wasn't thinking about this correctly. I was afraid you would have to pass in a full tree of parent child representations to /loadbalancers to update anything a load balancer it is associated to (including down to members). However, after thinking about it, a user would just make an association call on each object. For Example, associate member1 with pool1, associate pool1 with listener1, then associate loadbalancer1 with listener1. Updating is just as simple as updating each entity. This does bring up another problem though. If a listener can live on many load balancers, and a pool can live on many listeners, and a member can live on many pools, there's lot of permutations to keep track of for status. you can't just link a member's status to a load balancer bc a member can exist on many pools under that load balancer, and each pool can exist under many listeners under that load balancer. 
For example, say I have these: lb1 lb2 listener1 listener2 pool1 pool2 member1 member2 lb1 -> [listener1, listener2] lb2 -> [listener1] listener1 -> [pool1, pool2] listener2 -> [pool1] pool1 -> [member1, member2] pool2 -> [member1] member1 can now have different statuses under pool1 and pool2. since listener1 and listener2 both have pool1, this means member1 will now have a different status for the listener1 -> pool1 and listener2 -> pool1 combinations. And so forth for load balancers. Basically there's a lot of permutations and combinations to keep track of with this model for statuses. Showing these in the body of load balancer details can get quite large. I hope this makes sense because my brain is ready to explode. Thanks, Brandon On Thu, 2014-11-27 at 08:52 +0000, Samuel Bercovici wrote: > Brandon, can you please explain further (1) below? > > -----Original Message----- > From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM] > Sent: Tuesday, November 25, 2014 12:23 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. > > My impression is that the statuses of each entity will be shown on a detailed info request of a loadbalancer. The root level objects would not have any statuses. For example a user makes a GET request to /loadbalancers/{lb_id} and the status of every child of that load balancer is shown in a "status_tree" json object. For example: > > {"name": "loadbalancer1", > "status_tree": > {"listeners": > [{"name": "listener1", "operating_status": "ACTIVE", > "default_pool": > {"name": "pool1", "status": "ACTIVE", > "members": > [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}]}} > > Sam, correct me if I am wrong. > > I generally like this idea. I do have a few reservations with this: > > 1) Creating and updating a load balancer requires a full tree configuration with the current extension/plugin logic in neutron. 
Since updates will require a full tree, it means the user would have to know the full tree configuration just to simply update a name. Solving this would require nested child resources in the URL, which the current neutron extension/plugin does not allow. Maybe the new one will. > > 2) The status_tree can get quite large depending on the number of listeners and pools being used. This is a minor issue really as it will make horizon's (or any other UI tool's) job easier to show statuses. > > Thanks, > Brandon > > On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote: > > Hi Samuel, > > > > > > We've actually been avoiding having a deeper discussion about status > > in Neutron LBaaS since this can get pretty hairy as the back-end > > implementations get more complicated. I suspect managing that is > > probably one of the bigger reasons we have disagreements around object > > sharing. Perhaps it's time we discussed representing state "correctly" > > (whatever that means), instead of a round-a-bout discussion about > > object sharing (which, I think, is really just avoiding this issue)? > > > > > > Do you have a proposal about how status should be represented > > (possibly including a description of the state machine) if we collapse > > everything down to be logical objects except the loadbalancer object? > > (From what you're proposing, I suspect it might be too general to, for > > example, represent the UP/DOWN status of members of a given pool.) > > > > > > Also, from an haproxy perspective, sharing pools within a single > > listener actually isn't a problem. That is to say, having the same > > L7Policy pointing at the same pool is OK, so I personally don't have a > > problem allowing sharing of objects within the scope of parent > > objects. What do the rest of y'all think? > > > > > > Stephen > > > > > > > > On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici > > > wrote: > > Hi Stephen, > > > > > > > > 1. 
The issue is that if we do 1:1 and allow status/state > > to proliferate throughout all objects we will then get an > > issue to fix it later, hence even if we do not do sharing, I > > would still like to have all objects besides LB be treated as > > logical. > > > > 2. The 3rd use case below will not be reasonable without > > pool sharing between different policies. Specifying different > > pools which are the same for each policy makes it a non-starter > > to me. > > > > > > > > -Sam. > > > > > > > > > > > > > > > > From: Stephen Balukoff [mailto:sbalukoff at bluebox.net] > > Sent: Friday, November 21, 2014 10:26 PM > > To: OpenStack Development Mailing List (not for usage > > questions) > > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects > > in LBaaS - Use Cases that led us to adopt this. > > > > > > > > I think the idea was to implement 1:1 initially to reduce the > > amount of code and operational complexity we'd have to deal > > with in initial revisions of LBaaS v2. Many to many can be > > simulated in this scenario, though it does shift the burden of > > maintenance to the end user. It does greatly simplify the > > initial code for v2, in any case, though. > > > > > > > > > > > > Did we ever agree to allowing listeners to be shared among > > load balancers? I think that still might be a N:1 > > relationship even in our latest models. > > > > > > > > > > There's also the difficulty introduced by supporting different > > flavors: Since flavors are essentially an association between > > a load balancer object and a driver (with parameters), once > > flavors are introduced, any sub-objects of a given load > > balancer object must necessarily be purely logical until they > > are associated with a load balancer. 
I know there was talk of > > forcing these objects to be sub-objects of a load balancer > > which can't be accessed independently of the load balancer > > (which would have much the same effect as what you discuss: > > State / status only make sense once logical objects have an > > instantiation somewhere.) However, the currently proposed API > > treats most objects as root objects, which breaks this > > paradigm. > > > > > > > > > > > > How we handle status and updates once there's an instantiation > > of these logical objects is where we start getting into real > > complexity. > > > > > > > > > > > > It seems to me there's a lot of complexity introduced when we > > allow a lot of many to many relationships without a whole lot > > of benefit in real-world deployment scenarios. In most cases, > > objects are not going to be shared, and in those cases with > > sufficiently complicated deployments in which shared objects > > could be used, the user is likely to be sophisticated enough > > and skilled enough to manage updating what are essentially > > "copies" of objects, and would likely have an opinion about > > how individual failures should be handled which wouldn't > > necessarily coincide with what we developers of the system > > would assume. That is to say, allowing too many many to many > > relationships feels like a solution to a problem that doesn't > > really exist, and introduces a lot of unnecessary complexity. > > > > > > > > > > > > In any case, though, I feel like we should walk before we run: > > Implementing 1:1 initially is a good idea to get us rolling. > > Whether we then implement 1:N or M:N after that is another > > question entirely. But in any case, it seems like a bad idea > > to try to start with M:N. 
> > > > > > > > > > > > Stephen > > > > > > > > > > > > > > > > On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici > > > wrote: > > > > Hi, > > > > Per discussion I had at OpenStack Summit/Paris with Brandon > > and Doug, I would like to remind everyone why we choose to > > follow a model where pools and listeners are shared (many to > > many relationships). > > > > Use Cases: > > 1. The same application is being exposed via different LB > > objects. > > For example: users coming from the internal "private" > > organization network, have an LB1(private_VIP) --> > > Listener1(TLS) -->Pool1 and user coming from the "internet", > > have LB2(public_vip)-->Listener1(TLS)-->Pool1. > > This may also happen to support ipv4 and ipv6: LB_v4(ipv4_VIP) > > --> Listener1(TLS) -->Pool1 and LB_v6(ipv6_VIP) --> > > Listener1(TLS) -->Pool1 > > The operator would like to be able to manage the pool > > membership in cases of updates and error in a single place. > > > > 2. The same group of servers is being used via different > > listeners optionally also connected to different LB objects. > > For example: users coming from the internal "private" > > organization network, have an LB1(private_VIP) --> > > Listener1(HTTP) -->Pool1 and user coming from the "internet", > > have LB2(public_vip)-->Listener2(TLS)-->Pool1. > > The LBs may use different flavors as LB2 needs TLS termination > > and may prefer a different "stronger" flavor. > > The operator would like to be able to manage the pool > > membership in cases of updates and error in a single place. > > > > 3. The same group of servers is being used in several > > different L7_Policies connected to a listener. Such listener > > may be reused as in use case 1. 
> > For example: LB1(VIP1)-->Listener_L7(TLS) > > | > > > > +-->L7_Policy1(rules..)-->Pool1 > > | > > > > +-->L7_Policy2(rules..)-->Pool2 > > | > > > > +-->L7_Policy3(rules..)-->Pool1 > > | > > > > +-->L7_Policy3(rules..)-->Reject > > > > > > I think that the "key" issue handling correctly the > > "provisioning" state and the operation state in a many to many > > model. > > This is an issue as we have attached status fields to each and > > every object in the model. > > A side effect of the above is that to understand the > > "provisioning/operation" status one needs to check many > > different objects. > > > > To remedy this, I would like to turn all objects besides the > > LB to be logical objects. This means that the only place to > > manage the status/state will be on the LB object. > > Such status should be hierarchical so that logical object > > attached to an LB, would have their status consumed out of the > > LB object itself (in case of an error). > > We also need to discuss how modifications of a logical object > > will be "rendered" to the concrete LB objects. > > You may want to revisit > > https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r the "Logical Model + Provisioning Status + Operation Status + Statistics" for a somewhat more detailed explanation albeit it uses the LBaaS v1 model as a reference. > > > > Regards, > > -Sam. 
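Sam's third use case (several L7 policies under one listener, two of them pointing at the same Pool1) boils down to ordered first-match policy evaluation. A minimal sketch; the rule predicates below are invented for illustration, while the targets mirror his diagram:

```python
# First-match L7 policy evaluation, as in Sam's use case 3. Predicates are
# made up; the targets (Pool1 twice, Pool2, Reject) follow his diagram,
# with Pool1 deliberately shared by two policies under the same listener.
REJECT = object()

L7_POLICIES = [
    (lambda req: req["path"].startswith("/images"), "Pool1"),   # L7_Policy1
    (lambda req: req["path"].startswith("/api"), "Pool2"),      # L7_Policy2
    (lambda req: req["host"] == "static.example.com", "Pool1"), # shares Pool1
    (lambda req: req["path"].startswith("/admin"), REJECT),     # reject policy
]

def route(request, default_pool="Pool1"):
    """Return the pool (or REJECT) chosen by the first matching policy."""
    for matches, target in L7_POLICIES:
        if matches(request):
            return target
    return default_pool

print(route({"path": "/api/v1", "host": "www.example.com"}))  # Pool2
```

Because two policies resolve to the same "Pool1" object, an update to that pool's membership takes effect for both policies at once, which is exactly the single-place-to-manage property the use case asks for.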
> > > > > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > > > -- > > > > Stephen Balukoff > > Blue Box Group, LLC > > (800)613-4305 x807 > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > -- > > Stephen Balukoff > > Blue Box Group, LLC > > (800)613-4305 x807 > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From m4d.coder at gmail.com Tue Dec 9 07:39:38 2014 From: m4d.coder at gmail.com (W Chan) Date: Mon, 8 Dec 2014 23:39:38 -0800 Subject: [openstack-dev] [Mistral] Action context passed to all action executions by default Message-ID: Renat, Is there any reason why Mistral does not pass action context such as workflow ID, execution ID, task ID, etc. to all of the action executions? I think it makes a lot of sense for that information to be made available by default. The action can then decide what to do with the information. It doesn't require a special signature in the __init__ method of the Action classes. What do you think? Thanks. Winson -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Dec 9 08:38:50 2014 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 09 Dec 2014 09:38:50 +0100 Subject: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC Message-ID: <5486B51A.6080001@openstack.org> Dear PTLs, cross-project liaisons and anyone else interested, We'll have a cross-project meeting Tuesday at 21:00 UTC, with the following agenda: * Convergence on specs process (johnthetubaguy) * Approval process differences * Path structure differences * specs.o.o aspect differences (toc) * osprofiler config options (kragniz) * Glance uses a different name from other projects * Consensus on what name to use * Open discussion & announcements See you there ! 
For more details, please see: https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting -- Thierry Carrez (ttx) From rakhmerov at mirantis.com Tue Dec 9 08:48:10 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Tue, 9 Dec 2014 14:48:10 +0600 Subject: [openstack-dev] [Mistral] Query on creating multiple resources In-Reply-To: References: <1418048204426.95510@persistent.com> <1418048476946.19042@persistent.com> <1418050120681.46696@persistent.com> <5485C560.9060605@redhat.com> Message-ID: <8CC8FE04-056A-4574-926B-BAF5B7749C84@mirantis.com> Hey, I think it's a question of what the final goal is. For just creating security groups as a resource I think Georgy and Zane are right: just use Heat. If the goal is to try Mistral, or to have this simple workflow as part of a more complex one, then it's totally fine to use Mistral. Sorry, I'm probably biased because Mistral is our baby :). Anyway, Nikolay has already answered the question technically; this 'for-each' feature will be available officially in about 2 weeks. > Create VM workflow was a demo example. Mistral potentially can be used by Heat or other orchestration tools to do actual interaction with API, but for user it might be easier to use Heat functionality. I kind of disagree with that statement. Mistral can be used by whoever finds it useful for their needs. The standard 'create_instance' workflow (which is in 'resources/workflows/create_instance.yaml') is not just a demo example either. It does a lot of good stuff you may really need in your case (e.g. retry policies), even though it's true that it has some limitations we're aware of. For example, when it comes to configuring a network for a newly created instance, it's currently missing the network-related parameters needed to alter that behavior. One more thing: not only will Heat be able to call Mistral somewhere underneath the surface. Mistral already has integration with Heat so it can call Heat if needed, and there's a plan to make it even more useful and usable. 
Thanks Renat Akhmerov @ Mirantis Inc. From rakhmerov at mirantis.com Tue Dec 9 08:49:32 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Tue, 9 Dec 2014 14:49:32 +0600 Subject: [openstack-dev] [Mistral] Query on creating multiple resources In-Reply-To: <1418105060569.62922@persistent.com> References: <1418048204426.95510@persistent.com> <1418048476946.19042@persistent.com> <1418050120681.46696@persistent.com> <5485C560.9060605@redhat.com> <, > <1418105060569.62922@persistent.com> Message-ID: <33D7E423-9FDD-4D3D-9D0B-65ADA937852F@mirantis.com> No problem, let us know if you have any other questions. Renat Akhmerov @ Mirantis Inc. > On 09 Dec 2014, at 11:57, Sushma Korati wrote: > > > Hi, > > Thank you guys. > > Yes I am able to do this with heat, but I faced issues while trying the same with mistral. > As suggested will try with the latest mistral branch. Thank you once again. > > Regards, > Sushma > > > > > From: Georgy Okrokvertskhov [mailto:gokrokvertskhov at mirantis.com ] > Sent: Tuesday, December 09, 2014 6:07 AM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [Mistral] Query on creating multiple resources > > Hi Sushma, > > Did you explore Heat templates? As Zane mentioned you can do this via Heat template without writing any workflows. > Do you have any specific use cases which you can't solve with Heat template? > > Create VM workflow was a demo example. Mistral potentially can be used by Heat or other orchestration tools to do actual interaction with API, but for user it might be easier to use Heat functionality. > > Thanks, > Georgy > > On Mon, Dec 8, 2014 at 7:54 AM, Nikolay Makhotkin > wrote: > Hi, Sushma! > > Can we create multiple resources using a single task, like multiple keypairs or security-groups or networks etc? > > Yes, we can. 
This feature is in development now and is considered experimental: https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections > > Just clone the latest master branch of Mistral. > > You can specify the "for-each" task property and provide the array of data to your workflow: > > --------------------
version: '2.0'

name: secgroup_actions

workflows:
  create_security_group:
    type: direct
    input:
      - array_with_names_and_descriptions

    tasks:
      create_secgroups:
        for-each:
          data: $.array_with_names_and_descriptions
        action: nova.security_groups_create name={$.data.name} description={$.data.description}
------------ > > On Mon, Dec 8, 2014 at 6:36 PM, Zane Bitter > wrote: > On 08/12/14 09:41, Sushma Korati wrote: > Can we create multiple resources using a single task, like multiple > keypairs or security-groups or networks etc? > > Define them in a Heat template and create the Heat stack as a single task. > > - ZB > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Best Regards, > Nikolay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Georgy Okrokvertskhov > Architect, > OpenStack Platform Products, > Mirantis > http://www.mirantis.com > Tel. +1 650 963 9828 > Mob. +1 650 996 3284 > DISCLAIMER ========== This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. 
If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails. > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Tue Dec 9 08:52:38 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Tue, 9 Dec 2014 14:52:38 +0600 Subject: [openstack-dev] [Mistral] Event Subscription In-Reply-To: References: Message-ID: <17C46BD9-F6F6-43AE-883E-2512EE698CDD@mirantis.com> Ok, got it. So my general suggestion here is: let's keep it as simple as possible for now, create something that works and then let's see how to improve it. And yes, consumers may be and mostly will be 3rd parties. Thanks Renat Akhmerov @ Mirantis Inc. > On 09 Dec 2014, at 08:25, W Chan wrote: > > Renat, > > On sending events to an "exchange", I mean an exchange on the transport (i.e. rabbitMQ exchange https://www.rabbitmq.com/tutorials/amqp-concepts.html ). On implementation we can probably explore the notification feature in oslo.messaging. But on second thought, this would limit the consumers to trusted subsystems or services though. If we want the event consumers to be any 3rd party, including untrusted, then maybe we should keep it as HTTP calls. > > Winson > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rakhmerov at mirantis.com Tue Dec 9 09:22:28 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Tue, 9 Dec 2014 15:22:28 +0600 Subject: [openstack-dev] [Mistral] Action context passed to all action executions by default In-Reply-To: References: Message-ID: <74548DCF-F417-4EA8-B3AE-CDD3024A0753@mirantis.com> Hi Winson, I think it makes perfect sense. The reason I think is mostly historical and this can be reviewed now. Can you pls file a BP and describe your suggested design on that? I mean how we need to alter interface Action etc. Thanks Renat Akhmerov @ Mirantis Inc. > On 09 Dec 2014, at 13:39, W Chan wrote: > > Renat, > > Is there any reason why Mistral do not pass action context such as workflow ID, execution ID, task ID, and etc to all of the action executions? I think it makes a lot of sense for that information to be made available by default. The action can then decide what to do with the information. It doesn't require a special signature in the __init__ method of the Action classes. What do you think? > > Thanks. > Winson > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From joehuang at huawei.com Tue Dec 9 09:33:47 2014 From: joehuang at huawei.com (joehuang) Date: Tue, 9 Dec 2014 09:33:47 +0000 Subject: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC In-Reply-To: <5486B51A.6080001@openstack.org> References: <5486B51A.6080001@openstack.org> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FB758@szxema505-mbs.china.huawei.com> Hi, If time is available, how about adding one agenda to guide the direction for cascading to move forward? Thanks in advance. The topic is : " Need cross-program decision to run cascading as an incubated project mode or register BP separately in each involved project. 
CI for cascading is quite different from a traditional test environment; at least 3 OpenStack instances are required for cross-OpenStack networking test cases. " -------------------------------------------------------------------------------------------------------------------------------- In the 40-minute cross-project summit session "Approaches for scaling out"[1], almost 100 people attended the meeting, and the conclusion was that cells cannot cover the use cases and requirements which the OpenStack cascading solution[2] aims to address; the background, including use cases and requirements, is also described in this mail. After the summit, we ported the PoC[3] source code from Icehouse-based to Juno-based. Now, let's move forward: The major task is to introduce new drivers/agents to existing core projects, for the core idea of cascading is to add Nova as the hypervisor backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, and Ceilometer as the store of Ceilometer. a). Need a cross-program decision to run cascading as an incubated project, or to register BPs separately in each involved project. CI for cascading is quite different from a traditional test environment; at least 3 OpenStack instances are required for cross-OpenStack networking test cases. b). Volunteer as the cross-project coordinator. c). Volunteers for implementation and CI. (Already 6 engineers are working on cascading in the project StackForge/tricircle.) Background of OpenStack cascading vs cells: 1. Use cases a). Vodafone use case[4] (OpenStack summit speech video from 9'02" to 12'30"), establishing globally addressable tenants, which results in efficient service deployment. b). Telefonica use case[5], creating a virtual DC (data center) across multiple physical DCs with a seamless experience. c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, #8. 
For an NFV cloud, the cloud will by nature be distributed across, yet inter-connected among, many data centers. 2. Requirements a). The operator has a multi-site cloud; each site can use one or multiple vendors' OpenStack distributions. b). Each site has its own requirements and upgrade schedule while maintaining a standard OpenStack API. c). The multi-site cloud must provide unified resource management with a global open API exposed, for example creating a virtual DC across multiple physical DCs with a seamless experience. Although a proprietary orchestration layer could be developed for the multi-site cloud, it would expose a proprietary API at the north-bound interface. The cloud operators want an ecosystem-friendly global open API for the multi-site cloud for global access. 3. What problems does cascading solve that cells don't cover: The OpenStack cascading solution is "OpenStack orchestrates OpenStacks". The core architecture idea of OpenStack cascading is to add Nova as the hypervisor backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, and Ceilometer as the store of Ceilometer. Thus OpenStack is able to orchestrate OpenStacks (from different vendors' distributions, or different versions) which may be located in different sites (or data centers) through the OpenStack API, while the cloud still exposes the OpenStack API as the north-bound API at the cloud level. 4. Why cells can't do that: Cells provide scale-out capability to Nova, but from the point of view of OpenStack as a whole, it still works like one OpenStack instance. a). If Cells is deployed with shared Cinder, Neutron, Glance, and Ceilometer, this approach provides the multi-site cloud with one unified API endpoint and unified resource management, but consolidation of multi-vendor/multi-version OpenStack instances across one or more data centers cannot be fulfilled. b). 
Each site installs one child cell accompanied by standalone Cinder, Neutron (or Nova-network), Glance and Ceilometer. This approach makes multi-vendor/multi-version OpenStack distribution co-existence across multiple sites seem feasible, but the requirement for a unified API endpoint and unified resource management cannot be fulfilled. Cross-Neutron networking automation is also missing; it would otherwise have to be done manually or via a proprietary orchestration layer. For more information about cascading and cells, please refer to the discussion thread before the Paris summit [7]. [1]Approaches for scaling out: https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack [2]OpenStack cascading solution: https://wiki.openstack.org/wiki/OpenStack_cascading_solution [3]Cascading PoC: https://github.com/stackforge/tricircle [4]Vodafone use case (9'02" to 12'30"): https://www.youtube.com/watch?v=-KOJYvhmxQI [5]Telefonica use case: http://www.telefonica.com/en/descargas/mwc/present_20140224.pdf [6]ETSI NFV use cases: http://www.etsi.org/deliver/etsi_gs/nfv/001_099/001/01.01.01_60/gs_nfv001v010101p.pdf [7]Cascading thread before design summit: http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-td54115.html Best Regards Chaoyi Huang ( Joe Huang ) -----Original Message----- From: Thierry Carrez [mailto:thierry at openstack.org] Sent: Tuesday, December 09, 2014 4:39 PM To: OpenStack Development Mailing List Subject: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC Dear PTLs, cross-project liaisons and anyone else interested, We'll have a cross-project meeting Tuesday at 21:00 UTC, with the following agenda: * Convergence on specs process (johnthetubaguy) * Approval process differences * Path structure differences * specs.o.o aspect differences (toc) * osprofiler config options (kragniz) * Glance uses a different name from other projects * Consensus on what name to use * Open discussion & announcements See you there !
For more details, please see: https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting -- Thierry Carrez (ttx) _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Kevin.Fox at pnnl.gov Tue Dec 9 09:52:11 2014 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 9 Dec 2014 09:52:11 +0000 Subject: [openstack-dev] [Ironic] Some questions about Ironic service Message-ID: <1A3C52DFCD06494D8528644858247BF017815FE1@EX10MBOX03.pnnl.gov> No to questions 1, 3, and 4. Yes to 2, but very minimally. ________________________________ From: xianchaobo Sent: Monday, December 08, 2014 10:29:50 PM To: openstack-dev at lists.openstack.org Cc: Luohao (brian) Subject: [openstack-dev] [Ironic] Some questions about Ironic service Hello, all I'm trying to install and configure the Ironic service, and a few things confuse me. I create two neutron networks, a public network and a private network. The private network is used to deploy physical machines; the public network is used to provide floating IPs. (1) Can the private network type be VLAN or VXLAN? (In the install guide, the network type is flat.) (2) Can the network of deployed physical machines be managed by neutron? (3) Can different tenants have their own networks to manage physical machines? (4) Does Ironic provide some mechanism for deployed physical machines to use storage such as shared storage or cinder volumes? Thanks, XianChaobo -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxime.leroy at 6wind.com Tue Dec 9 09:53:19 2014 From: maxime.leroy at 6wind.com (Maxime Leroy) Date: Tue, 9 Dec 2014 10:53:19 +0100 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver Message-ID: Hi there. I would like some clarification regarding support for out-of-tree plugins in nova and in neutron.
First, on the neutron side, there are mechanisms to support out-of-tree plugins for the L2 plugin (core_plugin) and ML2 mechanism drivers (via stevedore entry points). Most ML2/L2 plugins need a specific VIF driver. As the vif_driver configuration option in nova was removed in Juno, it is no longer possible to have an external mechanism driver/L2 plugin. The nova community took the decision to stop supporting VIF driver classes as a public extension point. (ref http://lists.openstack.org/pipermail/openstack-dev/2014-August/043174.html) In contrast, the neutron community continues to support external L2/ML2 mechanism driver plugins. Moreover, the decision to move the monolithic plugins and ML2 mechanism drivers out of tree was taken at the Paris summit (ref https://review.openstack.org/#/c/134680/15/specs/kilo/core-vendor-decomposition.rst) I am feeling a bit confused by these two opposite decisions of the two communities. What am I missing? I have also proposed a blueprint for a new plugin mechanism in nova to load external VIF drivers. (nova-specs: https://review.openstack.org/#/c/136827/ and nova (rfc patch): https://review.openstack.org/#/c/136857/) From my point of view as a developer, having a plugin framework for internal/external VIF drivers seems to be a good thing. It makes the code more modular and introduces a clear API for VIF driver classes. So far, it raises legitimate questions concerning API stability and the public API that require a wider discussion on the ML (as asked by John Garbutt). I think having a plugin mechanism and a clear API for VIF drivers does not go against this policy: http://docs.openstack.org/developer/nova/devref/policies.html#out-of-tree-support. There is no need to have a stable API. It is up to the owner of the external VIF driver to ensure that the driver is supported by the latest API, and not up to the nova community to manage a stable API for this external VIF driver. Does it make sense?
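To make the idea concrete, here is a minimal sketch of name-based VIF driver loading. The VIFDriver interface, the registry and the driver names below are assumptions for illustration only, not Nova's actual code; a real implementation would resolve names through stevedore entry points rather than an in-process dict:

```python
# Minimal sketch of name-based VIF driver loading, in the spirit of the
# removed vif_driver option.  Everything here is illustrative: the
# interface, registry and names are NOT Nova's actual API, and a real
# implementation would use stevedore entry points instead of a dict.

class VIFDriver(object):
    """Interface an out-of-tree VIF driver would implement."""

    def plug(self, instance, vif):
        raise NotImplementedError

    def unplug(self, instance, vif):
        raise NotImplementedError


_REGISTRY = {}  # stand-in for a stevedore entry-point namespace


def register(name, cls):
    _REGISTRY[name] = cls


def load_vif_driver(name):
    """Resolve a configured driver name to a driver instance."""
    try:
        return _REGISTRY[name]()
    except KeyError:
        raise RuntimeError("unknown vif driver: %s" % name)


class NoopVIFDriver(VIFDriver):
    """Trivial driver used here only to exercise the loader."""

    def plug(self, instance, vif):
        return ("plugged", instance, vif["id"])

    def unplug(self, instance, vif):
        return ("unplugged", instance, vif["id"])


register("noop", NoopVIFDriver)
```

The point of the sketch is that only the lookup key needs to stay stable on the nova side; keeping the loaded class's API current would be the external driver owner's job, as argued above.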
Considering the network v2 API, the L2/ML2 mechanism driver and the VIF driver need to exchange information such as binding:vif_type and binding:vif_details. From my understanding, 'binding:vif_type' and 'binding:vif_details' are fields of the public network API. There are no validation constraints for these fields (see http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html), which means that any value is accepted by the API. So, the values set in 'binding:vif_type' and 'binding:vif_details' are not part of the public API. Is my understanding correct? What other reasons am I missing for not having VIF driver classes as a public extension point? Thanks in advance for your help. Maxime From Kevin.Fox at pnnl.gov Tue Dec 9 09:54:21 2014 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 9 Dec 2014 09:54:21 +0000 Subject: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader? In-Reply-To: <54869CEB.4040602@linux.vnet.ibm.com> References: <54869CEB.4040602@linux.vnet.ibm.com> Message-ID: <1A3C52DFCD06494D8528644858247BF017815FEF@EX10MBOX03.pnnl.gov> You probably want to use the agent driver, not the pxe one. It lets you use bootloaders from the image. ________________________________ From: Peeyush Gupta Sent: Monday, December 08, 2014 10:55:39 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader? Hi all, So, I have set up a devstack ironic setup for baremetal deployment. I have been able to deploy a baremetal node successfully using the pxe_ipmitool driver. Now, I am trying to boot a server where I already have a bootloader, i.e. I don't need pxelinux to go and fetch the kernel and initrd images for me. I want to transfer them directly. I checked out the code and figured out that there are dhcp opts available that are modified using pxe_utils.py; changing them didn't help.
Then I moved to ironic.conf, but here also I only see an option to add pxe_bootfile_name, which is exactly what I want to avoid. Can anyone please help me with this situation? I don't want to go through the pxelinux.0 bootloader, I just want to transfer the kernel and initrd images directly. Thanks. -- Peeyush Gupta gpeeyush at linux.vnet.ibm.com _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rprikhodchenko at mirantis.com Tue Dec 9 10:09:02 2014 From: rprikhodchenko at mirantis.com (Roman Prykhodchenko) Date: Tue, 9 Dec 2014 11:09:02 +0100 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> Message-ID: <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> It is true that IPA and FuelAgent share a lot of functionality. However, there is a major difference between them, which is that they are intended to solve different problems. IPA is a solution for the provision-use-destroy-use_by_different_user use-case and is really great for providing BM nodes for other OS services or in services like Rackspace OnMetal. FuelAgent itself serves the provision-use-use-...-use use-case that Fuel or TripleO have. Those two use-cases require concentration on different details in the first place. For instance, for IPA proper decommissioning is more important than advanced disk management, but for FuelAgent the priorities are the opposite, for obvious reasons. Putting all the functionality into a single driver and a single agent may cause conflicts in priorities and make a mess inside both the driver and the agent. In fact, changes to IPA were previously blocked precisely because of this conflict of priorities.
Therefore, replacing FuelAgent with IPA where FuelAgent is currently used does not seem like a good option, because some people (and I'm not talking about Mirantis) might lose required features because of the different priorities. Having two separate drivers along with two separate agents for those different use-cases will allow two independent teams that are concentrated on what's really important for a specific use-case. I don't see any problem in overlapping functionality if it's used differently. P. S. I realise that people may also be confused by the fact that FuelAgent is called like that and is used only in Fuel atm. Our point is to make it a simple, powerful and, what's more important, a generic tool for provisioning. It is not bound to Fuel or Mirantis, and if it causes confusion in the future we will even be happy to give it a different and less confusing name. P. P. S. Some of the points of this integration do not look generic enough or nice enough. We take a pragmatic view and are trying to implement what's possible to implement as the first step. For sure this is going to take a lot more steps to make it better and more generic. > On 09 Dec 2014, at 01:46, Jim Rollenhagen wrote: > > > > On December 8, 2014 2:23:58 PM PST, Devananda van der Veen > wrote: >> I'd like to raise this topic for a wider discussion outside of the >> hallway >> track and code reviews, where it has thus far mostly remained. >> >> In previous discussions, my understanding has been that the Fuel team >> sought to use Ironic to manage "pets" rather than "cattle" - and doing >> so >> required extending the API and the project's functionality in ways that >> no >> one else on the core team agreed with. Perhaps that understanding was >> wrong >> (or perhaps not), but in any case, there is now a proposal to add a >> FuelAgent driver to Ironic. The proposal claims this would meet that >> teams' >> needs without requiring changes to the core of Ironic.
>> >> https://review.openstack.org/#/c/138115/ > > I think it's clear from the review that I share the opinions expressed in this email. > > That said (and hopefully without derailing the thread too much), I'm curious how this driver could do software RAID or LVM without modifying Ironic's API or data model. How would the agent know how these should be built? How would an operator or user tell Ironic what the disk/partition/volume layout would look like? > > And before it's said - no, I don't think vendor passthru API calls are an appropriate answer here. > > // jim > >> >> The Problem Description section calls out four things, which have all >> been >> discussed previously (some are here [0]). I would like to address each >> one, >> invite discussion on whether or not these are, in fact, problems facing >> Ironic (not whether they are problems for someone, somewhere), and then >> ask >> why these necessitate a new driver be added to the project. >> >> >> They are, for reference: >> >> 1. limited partition support >> >> 2. no software RAID support >> >> 3. no LVM support >> >> 4. no support for hardware that lacks a BMC >> >> #1. >> >> When deploying a partition image (eg, QCOW format), Ironic's PXE deploy >> driver performs only the minimal partitioning necessary to fulfill its >> mission as an OpenStack service: respect the user's request for root, >> swap, >> and ephemeral partition sizes. When deploying a whole-disk image, >> Ironic >> does not perform any partitioning -- such is left up to the operator >> who >> created the disk image. >> >> Support for arbitrarily complex partition layouts is not required by, >> nor >> does it facilitate, the goal of provisioning physical servers via a >> common >> cloud API. Additionally, as with #3 below, nothing prevents a user from >> creating more partitions in unallocated disk space once they have >> access to >> their instance. 
Therefore, I don't see how Ironic's minimal support for >> partitioning is a problem for the project. >> >> #2. >> >> There is no support for defining a RAID in Ironic today, at all, >> whether >> software or hardware. Several proposals were floated last cycle; one is >> under review right now for DRAC support [1], and there are multiple >> call >> outs for RAID building in the state machine mega-spec [2]. Any such >> support >> for hardware RAID will necessarily be abstract enough to support >> multiple >> hardware vendor's driver implementations and both in-band creation (via >> IPA) and out-of-band creation (via vendor tools). >> >> Given the above, it may become possible to add software RAID support to >> IPA >> in the future, under the same abstraction. This would closely tie the >> deploy agent to the images it deploys (the latter image's kernel would >> be >> dependent upon a software RAID built by the former), but this would >> necessarily be true for the proposed FuelAgent as well. >> >> I don't see this as a compelling reason to add a new driver to the >> project. >> Instead, we should (plan to) add support for software RAID to the >> deploy >> agent which is already part of the project. >> >> #3. >> >> LVM volumes can easily be added by a user (after provisioning) within >> unallocated disk space for non-root partitions. I have not yet seen a >> compelling argument for doing this within the provisioning phase. >> >> #4. >> >> There are already in-tree drivers [3] [4] [5] which do not require a >> BMC. >> One of these uses SSH to connect and run pre-determined commands. Like >> the >> spec proposal, which states at line 122, "Control via SSH access >> feature >> intended only for experiments in non-production environment," the >> current >> SSHPowerDriver is only meant for testing environments.
We could >> probably >> extend this driver to do what the FuelAgent spec proposes, as far as >> remote >> power control for cheap always-on hardware in testing environments with >> a >> pre-shared key. >> >> (And if anyone wonders about a use case for Ironic without external >> power >> control ... I can only think of one situation where I would rationally >> ever >> want to have a control-plane agent running inside a user-instance: I am >> both the operator and the only user of the cloud.) >> >> >> ---------------- >> >> In summary, as far as I can tell, all of the problem statements upon >> which >> the FuelAgent proposal are based are solvable through incremental >> changes >> in existing drivers, or out of scope for the project entirely. As >> another >> software-based deploy agent, FuelAgent would duplicate the majority of >> the >> functionality which ironic-python-agent has today. >> >> Ironic's driver ecosystem benefits from a diversity of >> hardware-enablement >> drivers. Today, we have two divergent software deployment drivers which >> approach image deployment differently: "agent" drivers use a local >> agent to >> prepare a system and download the image; "pxe" drivers use a remote >> agent >> and copy the image over iSCSI. I don't understand how a second driver >> which >> duplicates the functionality we already have, and shares the same goals >> as >> the drivers we already have, is beneficial to the project. >> >> Doing the same thing twice just increases the burden on the team; we're >> all >> working on the same problems, so let's do it together. 
>> >> -Devananda >> >> >> [0] >> https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition >> >> [1] https://review.openstack.org/#/c/107981/ >> >> [2] >> https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst >> >> >> [3] >> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py >> >> [4] >> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py >> >> [5] >> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py >> >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From gpeeyush at linux.vnet.ibm.com Tue Dec 9 10:21:39 2014 From: gpeeyush at linux.vnet.ibm.com (Peeyush Gupta) Date: Tue, 09 Dec 2014 15:51:39 +0530 Subject: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader? In-Reply-To: <1A3C52DFCD06494D8528644858247BF017815FEF@EX10MBOX03.pnnl.gov> References: <54869CEB.4040602@linux.vnet.ibm.com> <1A3C52DFCD06494D8528644858247BF017815FEF@EX10MBOX03.pnnl.gov> Message-ID: <5486CD33.4000602@linux.vnet.ibm.com> So, basically if I am using pxe driver, I would "have to" provide pxelinux.0? On 12/09/2014 03:24 PM, Fox, Kevin M wrote: > You probably want to use the agent driver, not the pxe one. It lets you use bootloaders from the image. 
> > ________________________________ > From: Peeyush Gupta > Sent: Monday, December 08, 2014 10:55:39 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader? > > Hi all, > > So, I have set up a devstack ironic setup for baremetal deployment. I > have been able to deploy a baremetal node successfully using > pxe_ipmitool driver. Now, I am trying to boot a server where I already > have a bootloader i.e. I don't need pxelinux to go and fetch kernel and > initrd images for me. I want to transfer them directly. > > I checked out the code and figured out that there are dhcp opts > available, that are modified using pxe_utils.py, changing it didn't > help. Then I moved to ironic.conf, but here also I only see an option to > add pxe_bootfile_name, which is exactly what I want to avoid. Can anyone > please help me with this situation? I don't want to go through > pxelinux.0 bootloader, I just directly want to transfer kernel and > initrd images. > > Thanks. > > -- > Peeyush Gupta > gpeeyush at linux.vnet.ibm.com > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Peeyush Gupta gpeeyush at linux.vnet.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Tue Dec 9 10:32:26 2014 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 09 Dec 2014 11:32:26 +0100 Subject: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF541FB758@szxema505-mbs.china.huawei.com> References: <5486B51A.6080001@openstack.org> <5E7A3D1BF5FD014E86E5F971CF446EFF541FB758@szxema505-mbs.china.huawei.com> Message-ID: <5486CFBA.8030204@openstack.org> joehuang wrote: > If time is available, how about adding one agenda to guide the direction for cascading to move forward? Thanks in advance. > > The topic is : " Need cross-program decision to run cascading as an incubated project mode or register BP separately in each involved project. CI for cascading is quite different from traditional test environment, at least 3 OpenStack instance required for cross OpenStack networking test cases. " Hi Joe, we close the agenda one day before the meeting to let people arrange their attendance based on the published agenda. I added your topic to the backlog for next week's agenda: https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting Regards, -- Thierry Carrez (ttx) From majopela at redhat.com Tue Dec 9 10:33:01 2014 From: majopela at redhat.com (=?utf-8?Q?Miguel_=C3=81ngel_Ajo?=) Date: Tue, 9 Dec 2014 11:33:01 +0100 Subject: [openstack-dev] [neutron] mid-cycle "hot reviews" Message-ID: <7A64F4A9F9054721A45DB25C9E5A181B@redhat.com> Hi all! It would be great if you could use this thread to post hot reviews on stuff that is being worked on during the mid-cycle, where others from different timezones could participate. I know posting reviews to the list is not permitted, but I think an exception in this case would be beneficial. Best regards, Miguel Ángel Ajo -------------- next part -------------- An HTML attachment was scrubbed...
URL: From t.trifonov at gmail.com Tue Dec 9 10:36:08 2014 From: t.trifonov at gmail.com (Tihomir Trifonov) Date: Tue, 9 Dec 2014 12:36:08 +0200 Subject: [openstack-dev] [horizon] REST and Django In-Reply-To: <547DD08A.7000402@redhat.com> References: <547DD08A.7000402@redhat.com> Message-ID: Sorry for the late reply, just a few thoughts on the matter. IMO the REST middleware should be as thin as possible. And I mean thin in terms of processing - it should not do pre/post processing of the requests, but just unpack/pack. So here is an example: instead of making AJAX calls that contain instructions:

    POST --json --data {"action": "delete", "data": [{"name": "item1"}, {"name": "item2"}, {"name": "item3"}]}

I think a better approach is just to pack/unpack batch commands, and leave execution to the frontend/backend and not the middleware:

    POST --json --data {"batch": [{"action": "delete", "payload": {"name": "item1"}},
                                  {"action": "delete", "payload": {"name": "item2"}},
                                  {"action": "delete", "payload": {"name": "item3"}}]}

The idea is that the middleware should not know the actual data. It should ideally just unpack the data:

    responses = []
    for cmd in request.POST['batch']:
        responses.append(getattr(controller, cmd['action'])(**cmd['payload']))
    return responses

and the frontend (JS) will just send batches of simple commands, and will receive a list of responses for each command in the batch. The error handling will be done in the frontend (JS) as well.
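The pack/unpack loop above can be fleshed out into a runnable sketch. The Controller class and its delete action are invented for the example, and the per-command status wrapper is just one possible way of handing error handling back to the frontend:

```python
# Sketch of the thin "pack/unpack" middleware described above.  The
# Controller class and its actions are made up for the example; a real
# dispatcher would call through to the OpenStack API clients.

class Controller(object):
    def delete(self, name):
        if name == "item2":
            raise ValueError("item2 is protected")
        return {"deleted": name}


def dispatch_batch(controller, batch):
    """Run each command and collect one result per command.

    Failures are reported per command instead of aborting the whole
    batch, leaving the actual error handling to the frontend (JS).
    """
    responses = []
    for cmd in batch:
        handler = getattr(controller, cmd["action"])
        try:
            responses.append({"ok": True, "result": handler(**cmd["payload"])})
        except Exception as exc:
            responses.append({"ok": False, "error": str(exc)})
    return responses


batch = [{"action": "delete", "payload": {"name": n}}
         for n in ("item1", "item2", "item3")]
results = dispatch_batch(Controller(), batch)
```

Note that the dispatcher never inspects the payloads; it only routes them, which is the "thin middleware" property being argued for.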
For the more complex example of 'put()' where we have dependent objects:

    project = api.keystone.tenant_get(request, id)
    kwargs = self._tenant_kwargs_from_DATA(request.DATA, enabled=None)
    api.keystone.tenant_update(request, project, **kwargs)

In practice the project data should already be present in the frontend (assuming that we already loaded it to render the project form/view), so:

    POST --json --data {"batch": [{"action": "tenant_update", "payload": {"project": js_project_object.id, "name": "some name", "prop1": "some prop", "prop2": "other prop, etc."}}]}

So in general we don't need to recreate the full state on each REST call, if we make the frontend a full-featured application. This way the frontend will construct the object, hold the cached value, and just send the needed requests as single ones or in batches, receive the responses from the API backend, and render the results. The whole processing logic will be held in the frontend (JS), while the middleware will just act as a proxy (un/packer). This way we maintain the logic only in the frontend, and do not need to duplicate it in the middleware. On Tue, Dec 2, 2014 at 4:45 PM, Adam Young wrote: > On 12/02/2014 12:39 AM, Richard Jones wrote: > > On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran wrote: > >> I agree that keeping the API layer thin would be ideal. I should add >> that having discrete API calls would allow dynamic population of table. >> However, I will make a case where it *might* be necessary to add >> additional APIs. Consider that you want to delete 3 items in a given table.
>> >> If you do this on the client side, you would need to perform: n * (1 API >> request + 1 AJAX request) >> If you have some logic on the server side that batches delete actions: n * >> (1 API request) + 1 AJAX request >> >> Consider the following: >> n = 1, client = 2 trips, server = 2 trips >> n = 3, client = 6 trips, server = 4 trips >> n = 10, client = 20 trips, server = 11 trips >> n = 100, client = 200 trips, server = 101 trips >> >> As you can see, this does not scale very well.... something to consider... >> > This is not something Horizon can fix. Horizon can make matters worse, > but cannot make things better. > > If you want to delete 3 users, Horizon still needs to make 3 distinct > calls to Keystone. > > To fix this, we need either batch calls or a standard way to do multiples > of the same operation. > > The unified API effort is the right place to drive this. > > > > > > > > Yep, though in the above cases the client is still going to be hanging, > waiting for those server-backend calls, with no feedback until it's all > done. I would hope that the client-server call overhead is minimal, but I > guess that's probably wishful thinking when in the land of random Internet > users hitting some provider's Horizon :) > > So yeah, having mulled it over myself I agree that it's useful to have > batch operations implemented in the POST handler, the most common operation > being DELETE. > > Maybe one day we could transition to a batch call with user feedback > using a websocket connection.
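The trip counts quoted above follow from two simple formulas; a quick sketch reproduces the table (the function names are ours, only the arithmetic comes from the thread):

```python
# Reproduce the trip counts discussed above: deleting n items from the
# client costs one AJAX request plus one backend API request per item,
# while batching on the server costs n API requests plus a single AJAX
# request in total.

def client_side_trips(n):
    return n * (1 + 1)  # n AJAX requests + n API requests

def server_batched_trips(n):
    return n * 1 + 1    # n API requests + 1 AJAX request

table = {n: (client_side_trips(n), server_batched_trips(n))
         for n in (1, 3, 10, 100)}
```

The client-side count grows at 2n versus n + 1 for the batched case, so the saving approaches a factor of two as n grows, which is the scaling point being made.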
> > > Richard > >> Richard Jones wrote: >> >> From: Richard Jones >> To: "Tripp, Travis S" , OpenStack List < >> openstack-dev at lists.openstack.org> >> Date: 11/27/2014 05:38 PM >> Subject: Re: [openstack-dev] [horizon] REST and Django >> ------------------------------ >> >> >> >> >> On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S wrote: >> >> Hi Richard, >> >> You are right, we should put this out on the main ML, so copying >> thread out to there. ML: FYI that this started after some impromptu IRC >> discussions about a specific patch led into an impromptu google hangout >> discussion with all the people on the thread below. >> >> >> Thanks Travis! >> >> >> >> As I mentioned in the review[1], Thai and I were mainly discussing >> the possible performance implications of network hops from client to >> horizon server and whether or not any aggregation should occur server side. >> In other words, some views require several APIs to be queried before any >> data can be displayed and it would eliminate some extra network requests from >> client to server if some of the data was first collected on the server side >> across service APIs. For example, the launch instance wizard will need to >> collect data from quite a few APIs before even the first step is displayed >> (I've listed those out in the blueprint [2]). >> >> The flip side to that (as you also pointed out) is that if we keep >> the APIs fine grained then the wizard will be able to optimize in one >> place the calls for data as it is needed. For example, the first step may >> only need half of the API calls. It also could lead to perceived >> performance increases just due to the wizard making a call for different >> data independently and displaying it as soon as it can.
>> >> >> Indeed, looking at the current launch wizard code it seems like you >> wouldn't need to load all that data for the wizard to be displayed, since >> only some subset of it would be necessary to display any given panel of the >> wizard. >> >> >> >> I tend to lean towards your POV and starting with discrete API calls >> and letting the client optimize calls. If there are performance problems >> or other reasons then doing data aggregation on the server side could be >> considered at that point. >> >> >> I'm glad to hear it. I'm a fan of optimising when necessary, and not >> beforehand :) >> >> >> >> Of course if anybody is able to do some performance testing between >> the two approaches then that could affect the direction taken. >> >> >> I would certainly like to see us take some measurements when performance >> issues pop up. Optimising without solid metrics is bad idea :) >> >> >> Richard >> >> >> >> [1] >> *https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py* >> >> [2] >> *https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign* >> >> >> -Travis >> >> *From: *Richard Jones <*r1chardj0n3s at gmail.com* >> > >> * Date: *Wednesday, November 26, 2014 at 11:55 PM >> * To: *Travis Tripp <*travis.tripp at hp.com* >, Thai >> Q Tran/Silicon Valley/IBM <*tqtran at us.ibm.com* >, >> David Lyle <*dklyle0 at gmail.com* >, Maxime Vidori < >> *maxime.vidori at enovance.com* >, >> "Wroblewski, Szymon" <*szymon.wroblewski at intel.com* >> >, "Wood, Matthew David (HP Cloud - >> Horizon)" <*matt.wood at hp.com* >, "Chen, Shaoquan" < >> *sean.chen2 at hp.com* >, "Farina, Matt (HP Cloud)" < >> *matthew.farina at hp.com* >, Cindy Lu/Silicon >> Valley/IBM <*clu at us.ibm.com* >, Justin >> Pomeroy/Rochester/IBM <*jpomero at us.ibm.com* >, >> Neill Cox <*neill.cox at ingenious.com.au* > >> * Subject: *Re: REST and Django >> >> I'm not sure whether this is the appropriate place to discuss this, >> or whether I should be posting to the 
list under [Horizon] but I think we >> need to have a clear idea of what goes in the REST API and what goes in the >> client (angular) code. >> >> In my mind, the thinner the REST API the better. Indeed if we can get >> away with proxying requests through without touching any *client code, that >> would be great. >> >> Coding additional logic into the REST API means that a developer >> would need to look in two places, instead of one, to determine what was >> happening for a particular call. If we keep it thin then the API presented >> to the client developer is very, very similar to the API presented by the >> services. Minimum surprise. >> >> Your thoughts? >> >> >> Richard >> >> >> On Wed Nov 26 2014 at 2:40:52 PM Richard Jones < >> *r1chardj0n3s at gmail.com* > wrote: >> >> >> Thanks for the great summary, Travis. >> >> I've completed the work I pledged this morning, so now the REST >> API change set has: >> >> - no rest framework dependency >> - AJAX scaffolding in openstack_dashboard.api.rest.utils >> - code in openstack_dashboard/api/rest/ >> - renamed the API from "identity" to "keystone" to be consistent >> - added a sample of testing, mostly for my own sanity to check >> things were working >> >> *https://review.openstack.org/#/c/136676* >> >> >> >> Richard >> >> On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S < >> *travis.tripp at hp.com* > wrote: >> >> >> Hello all, >> >> Great discussion on the REST urls today! I think that we are on >> track to come to a common REST API usage pattern. To provide quick summary: >> >> We all agreed that going to a straight REST pattern rather than >> through tables was a good idea. We discussed using direct get / post in >> Django views like what Max originally used[1][2] and Thai also started[3] >> with the identity table rework or to go with djangorestframework [5] like >> what Richard was prototyping with[4]. 
>> >> The main things we would use from Django Rest Framework were >> built-in JSON serialization (avoid boilerplate), better exception handling, >> and some request wrapping. However, we all weren't sure about the need for >> a full new framework just for that. At the end of the conversation, we >> decided that it was a cleaner approach, but Richard would see if he could >> provide some utility code to do that much for us without requiring the full >> framework. David voiced that he doesn't want us building out a whole >> framework on our own either. >> >> So, Richard will do some investigation during his day today and >> get back to us. Whatever the case, we'll get a patch in horizon for the >> base dependency (framework or Richard's utilities) that both Thai's work >> and the launch instance work is dependent upon. We'll build REST style >> APIs using the same pattern. We will likely put the REST APIs in >> horizon/openstack_dashboard/api/rest/. >> >> [1] >> *https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/keypair.py* >> >> [2] >> *https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/launch.py* >> >> [3] >> *https://review.openstack.org/#/c/133767/8/openstack_dashboard/dashboards/identity/users/views.py* >> >> [4] >> *https://review.openstack.org/#/c/136676/4/openstack_dashboard/rest_api/identity.py* >> >> [5] *http://www.django-rest-framework.org/* >> >> Thanks, >> >> >> Travis_______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing listOpenStack-dev at
lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Regards, Tihomir Trifonov -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: From matthew.gilliard at gmail.com Tue Dec 9 10:46:12 2014 From: matthew.gilliard at gmail.com (Matthew Gilliard) Date: Tue, 9 Dec 2014 10:46:12 +0000 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) Message-ID: Sometimes, I want to ask the author of a patch about it on IRC. However, there doesn't seem to be a reliable way to find out someone's IRC handle. The potential for useful conversation is sometimes missed. Unless there's a better alternative which I didn't find, https://wiki.openstack.org/wiki/People seems to fulfill that purpose, but is neither complete nor accurate. What do people think about this? Should we put more effort into keeping the People wiki up-to-date? That's a(nother) manual process though - can we autogenerate it somehow? Matthew From nicolas.trangez at scality.com Tue Dec 9 10:53:05 2014 From: nicolas.trangez at scality.com (Nicolas Trangez) Date: Tue, 09 Dec 2014 11:53:05 +0100 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: References: Message-ID: <1418122385.31992.74.camel@linode2.nicolast.be> On Tue, 2014-12-09 at 10:46 +0000, Matthew Gilliard wrote: > can we autogenerate it somehow? Maybe some 'irc_nick' field in Stackalytics' default_data.json could be added and used to populate such page?
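A sketch of what that autogeneration could look like. The `irc_nick` field and the sample records below are hypothetical; the real Stackalytics `default_data.json` schema is not shown in this thread, and the MediaWiki markup target is likewise only illustrative.

```python
import json

# Hypothetical excerpt of a Stackalytics-style default_data.json in
# which an "irc_nick" field has been added, per the suggestion above.
# The field name and the user records are invented for this sketch.
SAMPLE = """
{
  "users": [
    {"user_name": "Jane Doe", "launchpad_id": "jdoe", "irc_nick": "janed"},
    {"user_name": "John Roe", "launchpad_id": "jroe"}
  ]
}
"""


def wiki_rows(raw):
    """Render MediaWiki-style table rows for everyone with a known nick.

    Users without the optional irc_nick field are simply skipped, so
    the generated page only ever lists verified entries.
    """
    users = json.loads(raw)["users"]
    return [
        "| %s || %s" % (u["user_name"], u["irc_nick"])
        for u in users
        if "irc_nick" in u
    ]
```

Run periodically (e.g. from a cron job), such a script could keep the People page current with no manual editing beyond the one-time addition of the field to the data file.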
Nicolas From dbailey at hp.com Tue Dec 9 11:13:13 2014 From: dbailey at hp.com (Bailey, Darragh) Date: Tue, 9 Dec 2014 11:13:13 +0000 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: References: Message-ID: <5486D947.4090209@hp.com> Hi Eduard, I would check the trigger settings in the job, particularly which "type" of pattern matching is being used for the branches. I've found it tends to be the spot that catches most people out when configuring jobs with the Gerrit Trigger plugin. If you're looking to trigger against all branches then you would want "Type: Path" and "Pattern: **" appearing in the UI. If you have sufficient access, using the 'Query and Trigger Gerrit Patches' page accessible from the main view will make it easier to confirm that your Jenkins instance can actually see changes in gerrit for the given project (which should mean that it can see the corresponding events as well). You can also use the same page to re-trigger for PatchsetCreated events to see if you've set the patterns on the job correctly. Regards, Darragh Bailey "Nothing is foolproof to a sufficiently talented fool" - Unknown On 08/12/14 14:33, Eduard Matei wrote: > Resending this to dev ML as it seems i get quicker response :) > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > Patchset Created", chose as server the configured Gerrit server that > was previously tested, then added the project openstack-dev/sandbox > and saved. > I made a change on dev sandbox repo but couldn't trigger my job. > > Any ideas? > > Thanks, > Eduard > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > wrote: > > Hello everyone, > > Thanks to the latest changes to the creation of service accounts > process we're one step closer to setting up our own CI platform > for Cinder.
> > So far we've got: > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > our storage solution) > - Service account configured and tested (can manually connect to > review.openstack.org and get events > and publish comments) > > Next step would be to set up a job to do the actual testing, this > is where we're stuck. > Can someone please point us to a clear example on how a job should > look like (preferably for testing Cinder on Kilo)? Most links > we've found are broken, or tools/scripts are no longer working. > Also, we cannot change the Jenkins master too much (it's owned by > Ops team and they need a list of tools/scripts to review before > installing/running so we're not allowed to experiment). > > Thanks, > Eduard > > -- > > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > > > > *CloudFounders, The Private Cloud Software Company* > > Disclaimer: > This email and any files transmitted with it are confidential and > intended solely for the use of the individual or entity to whom > they are addressed.If you are not the named addressee or an > employee or agent responsible for delivering this message to the > named addressee, you are hereby notified that you are not > authorized to read, print, retain, copy or disseminate this > message or any part of it. If you have received this email in > error we request you to notify us by reply e-mail and to delete > all electronic files of the message. If you are not the intended > recipient you are notified that disclosing, copying, distributing > or taking any action in reliance on the contents of this > information is strictly prohibited. E-mail transmission cannot be > guaranteed to be secure or error free as information could be > intercepted, corrupted, lost, destroyed, arrive late or > incomplete, or contain viruses. 
The sender therefore does not > accept liability for any errors or omissions in the content of > this message, and shall have no liability for any loss or damage > suffered by the user, which arise as a result of e-mail transmission. > > > > > -- > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > > > > *CloudFounders, The Private Cloud Software Company* > > Disclaimer: > This email and any files transmitted with it are confidential and > intended solely for the use of the individual or entity to whom they > are addressed.If you are not the named addressee or an employee or > agent responsible for delivering this message to the named addressee, > you are hereby notified that you are not authorized to read, print, > retain, copy or disseminate this message or any part of it. If you > have received this email in error we request you to notify us by reply > e-mail and to delete all electronic files of the message. If you are > not the intended recipient you are notified that disclosing, copying, > distributing or taking any action in reliance on the contents of this > information is strictly prohibited. E-mail transmission cannot be > guaranteed to be secure or error free as information could be > intercepted, corrupted, lost, destroyed, arrive late or incomplete, or > contain viruses. The sender therefore does not accept liability for > any errors or omissions in the content of this message, and shall have > no liability for any loss or damage suffered by the user, which arise > as a result of e-mail transmission. 
> > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra From sean at dague.net Tue Dec 9 11:39:43 2014 From: sean at dague.net (Sean Dague) Date: Tue, 09 Dec 2014 06:39:43 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 Message-ID: <5486DF7F.7080706@dague.net> I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. 1 - the entire H8* group. This doesn't function on python code, it functions on git commit message, which makes it tough to run locally. It also would be a reason to prevent us from not rerunning tests on commit message changes (something we could do after the next gerrit update). 2 - the entire H3* group - because of this - https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm A look at the H3* code shows that it's terribly complicated, and is often full of bugs (a few bit us last week). I'd rather just delete it and move on. 
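For context on what the H* checks are structurally: a hacking check is essentially a flake8 plugin operating on source lines, which is exactly why the H8* commit-message checks sit awkwardly in that model — flake8 never sees the git commit message. The sketch below is a toy local check in that style; the H999 code, the rule itself, and the entry-point registration (omitted here) are all invented for illustration and are not part of the real hacking package.

```python
import re

# A toy check in the shape hacking uses: a plain function that inspects
# one logical line and yields (offset, message) tuples.  Real checks are
# discovered by flake8 via entry points, which this sketch skips.
MUTABLE_DEFAULT = re.compile(r"def \w+\(.*=(\[\]|\{\})")


def hacking_no_mutable_defaults(logical_line):
    """H999: avoid mutable default arguments (invented example rule)."""
    match = MUTABLE_DEFAULT.search(logical_line)
    if match:
        yield match.start(1), "H999: mutable default argument"
```

Note that nothing in this interface gives a check access to commit-message metadata, which is the structural reason behind dropping the H8* group rather than trying to keep it in-tree.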
-Sean -- Sean Dague http://dague.net From eduard.matei at cloudfounders.com Tue Dec 9 11:59:15 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Tue, 9 Dec 2014 13:59:15 +0200 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: <5486D947.4090209@hp.com> References: <5486D947.4090209@hp.com> Message-ID: Hi Darragh, thanks for your input I double checked the job settings and fixed it: - build triggers is set to Gerrit event - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin and tested separately) - Trigger on: Patchset Created - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: Type: Path, Pattern: ** (was Type Plain on both) Now the job is triggered by commit on openstack-dev/sandbox :) Regarding the Query and Trigger Gerrit Patches, i found my patch using query: status:open project:openstack-dev/sandbox change:139585 and i can trigger it manually and it executes the job. But i still have the problem: what should the job do? It doesn't actually do anything, it doesn't run tests or comment on the patch. Do you have an example of job? Thanks, Eduard On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh wrote: > Hi Eduard, > > > I would check the trigger settings in the job, particularly which "type" > of pattern matching is being used for the branches. Found it tends to be > the spot that catches most people out when configuring jobs with the > Gerrit Trigger plugin. If you're looking to trigger against all branches > then you would want "Type: Path" and "Pattern: **" appearing in the UI. > > If you have sufficient access using the 'Query and Trigger Gerrit > Patches' page accessible from the main view will make it easier to > confirm that your Jenkins instance can actually see changes in gerrit > for the given project (which should mean that it can see the > corresponding events as well). 
Can also use the same page to re-trigger > for PatchsetCreated events to see if you've set the patterns on the job > correctly. > > Regards, > Darragh Bailey > > "Nothing is foolproof to a sufficiently talented fool" - Unknown > > On 08/12/14 14:33, Eduard Matei wrote: > > Resending this to dev ML as it seems i get quicker response :) > > > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > > Patchset Created", chose as server the configured Gerrit server that > > was previously tested, then added the project openstack-dev/sandbox > > and saved. > > I made a change on dev sandbox repo but couldn't trigger my job. > > > > Any ideas? > > > > Thanks, > > Eduard > > > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > > > wrote: > > > > Hello everyone, > > > > Thanks to the latest changes to the creation of service accounts > > process we're one step closer to setting up our own CI platform > > for Cinder. > > > > So far we've got: > > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > > our storage solution) > > - Service account configured and tested (can manually connect to > > review.openstack.org and get events > > and publish comments) > > > > Next step would be to set up a job to do the actual testing, this > > is where we're stuck. > > Can someone please point us to a clear example on how a job should > > look like (preferably for testing Cinder on Kilo)? Most links > > we've found are broken, or tools/scripts are no longer working. > > Also, we cannot change the Jenkins master too much (it's owned by > > Ops team and they need a list of tools/scripts to review before > > installing/running so we're not allowed to experiment). 
> > > > Thanks, > > Eduard > > > > -- > > > > *Eduard Biceri Matei, Senior Software Developer* > > www.cloudfounders.com > > | eduard.matei at cloudfounders.com > > > > > > > > > > *CloudFounders, The Private Cloud Software Company* > > > > Disclaimer: > > This email and any files transmitted with it are confidential and > > intended solely for the use of the individual or entity to whom > > they are addressed.If you are not the named addressee or an > > employee or agent responsible for delivering this message to the > > named addressee, you are hereby notified that you are not > > authorized to read, print, retain, copy or disseminate this > > message or any part of it. If you have received this email in > > error we request you to notify us by reply e-mail and to delete > > all electronic files of the message. If you are not the intended > > recipient you are notified that disclosing, copying, distributing > > or taking any action in reliance on the contents of this > > information is strictly prohibited. E-mail transmission cannot be > > guaranteed to be secure or error free as information could be > > intercepted, corrupted, lost, destroyed, arrive late or > > incomplete, or contain viruses. The sender therefore does not > > accept liability for any errors or omissions in the content of > > this message, and shall have no liability for any loss or damage > > suffered by the user, which arise as a result of e-mail transmission. 
> > > > > > > > > > -- > > *Eduard Biceri Matei, Senior Software Developer* > > www.cloudfounders.com > > | eduard.matei at cloudfounders.com > > > > > > > > > > *CloudFounders, The Private Cloud Software Company* > > > > Disclaimer: > > This email and any files transmitted with it are confidential and > > intended solely for the use of the individual or entity to whom they > > are addressed.If you are not the named addressee or an employee or > > agent responsible for delivering this message to the named addressee, > > you are hereby notified that you are not authorized to read, print, > > retain, copy or disseminate this message or any part of it. If you > > have received this email in error we request you to notify us by reply > > e-mail and to delete all electronic files of the message. If you are > > not the intended recipient you are notified that disclosing, copying, > > distributing or taking any action in reliance on the contents of this > > information is strictly prohibited. E-mail transmission cannot be > > guaranteed to be secure or error free as information could be > > intercepted, corrupted, lost, destroyed, arrive late or incomplete, or > > contain viruses. The sender therefore does not accept liability for > > any errors or omissions in the content of this message, and shall have > > no liability for any loss or damage suffered by the user, which arise > > as a result of e-mail transmission. > > > > > > _______________________________________________ > > OpenStack-Infra mailing list > > OpenStack-Infra at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra > > -- *Eduard Biceri Matei, Senior Software Developer* www.cloudfounders.com | eduard.matei at cloudfounders.com *CloudFounders, The Private Cloud Software Company* Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. 
If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vkozhukalov at mirantis.com Tue Dec 9 12:01:07 2014 From: vkozhukalov at mirantis.com (Vladimir Kozhukalov) Date: Tue, 9 Dec 2014 16:01:07 +0400 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> Message-ID: Just a short explanation of the Fuel use case. The Fuel use case is not a cloud. Fuel is a deployment tool. We install an OS on bare metal servers and on VMs and then configure this OS using Puppet. We have been using Cobbler as our OS provisioning tool since the beginning of Fuel. However, Cobbler assumes using native OS installers (Anaconda and Debian-installer). For several reasons we decided to switch to an image based approach for installing the OS.
One of Fuel's features is the ability to provide advanced partitioning schemes (including software RAIDs, LVM). Native installers are quite difficult to customize in the field of partitioning (that was one of the reasons to switch to an image based approach). Moreover, we'd like to implement an even more flexible user experience. We'd like to allow the user to choose which hard drives to use for the root FS, or for allocating a DB. We'd like the user to be able to put the root FS over an LV or MD device (including stripe, mirror, multipath). We'd like the user to be able to choose which hard drives are bootable (if any), and which options to use for mounting file systems. Many, many various cases are possible. If you ask why we'd like to support all those cases, the answer is simple: because our users want us to support all those cases. Obviously, many of those cases can not be implemented as image internals, and some cases can not be implemented at the configuration stage either (placing the root fs on an lvm device). As those use cases were rejected for implementation in terms of IPA, we implemented the so-called Fuel Agent. Important Fuel Agent features are: * It does not have a REST API * It has executable entry point[s] * It uses a local json file as its input * It is planned to implement the ability to download input data via HTTP (a kind of metadata service) * It is designed to be agnostic to input data format, not only the Fuel format (data drivers) * It is designed to be agnostic to image format (tar images, file system images, disk images; currently fs images) * It is designed to be agnostic to image compression algorithm (currently gzip) * It is designed to be agnostic to image downloading protocol (currently local file and HTTP link) So, it is clear that while motivated by Fuel, Fuel Agent is quite independent and generic. And we are open to new use cases. As for Fuel itself, our nearest plan is to get rid of Cobbler because in the case of an image based approach it is a huge overhead.
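As an illustration only — the thread does not show Fuel Agent's actual input format — data like the following hypothetical JSON is the kind of declarative partitioning input such an agent could consume. Every field name here is invented for the sketch.

```python
import json

# Purely hypothetical input describing the kind of advanced layout
# discussed above (per-disk choices, bootable flags, LVM over several
# devices).  This is NOT the real Fuel Agent format.
SCHEME = """
{
  "disks": [
    {"name": "sda", "bootable": true,
     "partitions": [{"mount": "/boot", "size_mb": 200}]},
    {"name": "sdb", "bootable": false, "partitions": []}
  ],
  "lvm": [
    {"vg": "os", "devices": ["sda2", "sdb1"],
     "volumes": [{"name": "root", "mount": "/", "size_mb": 10240}]}
  ]
}
"""


def bootable_disks(raw):
    """Return the names of the disks the scheme marks as bootable."""
    data = json.loads(raw)
    return [d["name"] for d in data["disks"] if d["bootable"]]
```

The point of a declarative input like this is exactly the agnosticism listed above: the agent interprets data it is handed (from a local file or a metadata URL) rather than exposing an API of its own.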
The question is which tool we can use instead of Cobbler. We need power management, we need TFTP management, we need DHCP management. That is exactly what Ironic is able to do. Frankly, we could implement a power/TFTP/DHCP management tool independently, but as Devananda said, we're all working on the same problems, so let's do it together. Power/TFTP/DHCP management is where we are working on the same problems, but IPA and Fuel Agent are about different use cases. This case is not just Fuel: any mature deployment case requires advanced partition/fs management. However, for me it is OK if it is easily possible to use Ironic with external drivers (not merged to Ironic and not tested on Ironic CI). AFAIU, this spec https://review.openstack.org/#/c/138115/ does not assume changing the Ironic API and core. Jim asked how Fuel Agent will know about the advanced disk partitioning scheme if the API is not changed. The answer is simple: Ironic is supposed to send a link to a metadata service (http or local file) from which Fuel Agent can download the input json data. As Roman said, we try to be pragmatic and suggest something which does not break anything. All changes are supposed to be encapsulated into a driver. No API and core changes. We have resources to support, test and improve this driver. This spec is just a zero step. Further steps are supposed to improve the driver so as to make it closer to Ironic abstractions. For Ironic that means widening use cases and the user community. But, as I already said, we are OK if Ironic does not need this feature. Vladimir Kozhukalov On Tue, Dec 9, 2014 at 1:09 PM, Roman Prykhodchenko < rprikhodchenko at mirantis.com> wrote: > It is true that IPA and FuelAgent share a lot of functionality in common. > However there is a major difference between them which is that they are > intended to be used to solve a different problem.
> > IPA is a solution for provision-use-destroy-use_by_different_user use-case > and is really great for using it for providing BM nodes for other OS > services or in services like Rackspace OnMetal. FuelAgent itself serves for > provision-use-use-...-use use-case like Fuel or TripleO have. > > Those two use-cases require concentration on different details in first > place. For instance for IPA proper decommissioning is more important than > advanced disk management, but for FuelAgent priorities are opposite because > of obvious reasons. > > Putting all functionality to a single driver and a single agent may cause > conflicts in priorities and make a lot of mess inside both the driver and > the agent. Actually previously changes to IPA were blocked right because of > this conflict of priorities. Therefore replacing FuelAgent by IPA where > FuelAgent is used currently does not seem like a good option because some > people (and I'm not talking about Mirantis) might lose required features > because of different priorities. > > Having two separate drivers along with two separate agents for those > different use-cases will allow us to have two independent teams that are > concentrated on what's really important for a specific use-case. I don't > see any problem in overlapping functionality if it's used differently. > > > P. S. > I realise that people may also be confused by the fact that FuelAgent is > actually called like that and is used only in Fuel atm. Our point is to > make it a simple, powerful and, what's more important, a generic tool for > provisioning. It is not bound to Fuel or Mirantis and if it will cause > confusion in the future we will even be happy to give it a different and > less confusing name. > > P. P. S. > Some of the points of this integration do not look generic enough or nice > enough. We look pragmatic on the stuff and are trying to implement what's > possible to implement as the first step.
For sure this is going to have a > lot more steps to make it better and more generic. > > > On 09 Dec 2014, at 01:46, Jim Rollenhagen wrote: > > > > On December 8, 2014 2:23:58 PM PST, Devananda van der Veen < > devananda.vdv at gmail.com> wrote: > > I'd like to raise this topic for a wider discussion outside of the > hallway > track and code reviews, where it has thus far mostly remained. > > In previous discussions, my understanding has been that the Fuel team > sought to use Ironic to manage "pets" rather than "cattle" - and doing > so > required extending the API and the project's functionality in ways that > no > one else on the core team agreed with. Perhaps that understanding was > wrong > (or perhaps not), but in any case, there is now a proposal to add a > FuelAgent driver to Ironic. The proposal claims this would meet that > teams' > needs without requiring changes to the core of Ironic. > > https://review.openstack.org/#/c/138115/ > > > I think it's clear from the review that I share the opinions expressed in > this email. > > That said (and hopefully without derailing the thread too much), I'm > curious how this driver could do software RAID or LVM without modifying > Ironic's API or data model. How would the agent know how these should be > built? How would an operator or user tell Ironic what the > disk/partition/volume layout would look like? > > And before it's said - no, I don't think vendor passthru API calls are an > appropriate answer here. > > // jim > > > The Problem Description section calls out four things, which have all > been > discussed previously (some are here [0]). I would like to address each > one, > invite discussion on whether or not these are, in fact, problems facing > Ironic (not whether they are problems for someone, somewhere), and then > ask > why these necessitate a new driver be added to the project. > > > They are, for reference: > > 1. limited partition support > > 2. no software RAID support > > 3. no LVM support > > 4. 
no support for hardware that lacks a BMC > > #1. > > When deploying a partition image (eg, QCOW format), Ironic's PXE deploy > driver performs only the minimal partitioning necessary to fulfill its > mission as an OpenStack service: respect the user's request for root, > swap, > and ephemeral partition sizes. When deploying a whole-disk image, > Ironic > does not perform any partitioning -- such is left up to the operator > who > created the disk image. > > Support for arbitrarily complex partition layouts is not required by, > nor > does it facilitate, the goal of provisioning physical servers via a > common > cloud API. Additionally, as with #3 below, nothing prevents a user from > creating more partitions in unallocated disk space once they have > access to > their instance. Therefore, I don't see how Ironic's minimal support for > partitioning is a problem for the project. > > #2. > > There is no support for defining a RAID in Ironic today, at all, > whether > software or hardware. Several proposals were floated last cycle; one is > under review right now for DRAC support [1], and there are multiple > call > outs for RAID building in the state machine mega-spec [2]. Any such > support > for hardware RAID will necessarily be abstract enough to support > multiple > hardware vendors' driver implementations and both in-band creation (via > IPA) and out-of-band creation (via vendor tools). > > Given the above, it may become possible to add software RAID support to > IPA > in the future, under the same abstraction. This would closely tie the > deploy agent to the images it deploys (the latter image's kernel would > be > dependent upon a software RAID built by the former), but this would > necessarily be true for the proposed FuelAgent as well. > > I don't see this as a compelling reason to add a new driver to the > project. > Instead, we should (plan to) add support for software RAID to the > deploy > agent which is already part of the project. > > #3.
> > LVM volumes can easily be added by a user (after provisioning) within > unallocated disk space for non-root partitions. I have not yet seen a > compelling argument for doing this within the provisioning phase. > > #4. > > There are already in-tree drivers [3] [4] [5] which do not require a > BMC. > One of these uses SSH to connect and run pre-determined commands. Like > the > spec proposal, which states at line 122, "Control via SSH access > feature > intended only for experiments in non-production environment," the > current > SSHPowerDriver is only meant for testing environments. We could > probably > extend this driver to do what the FuelAgent spec proposes, as far as > remote > power control for cheap always-on hardware in testing environments with > a > pre-shared key. > > (And if anyone wonders about a use case for Ironic without external > power > control ... I can only think of one situation where I would rationally > ever > want to have a control-plane agent running inside a user-instance: I am > both the operator and the only user of the cloud.) > > > ---------------- > > In summary, as far as I can tell, all of the problem statements upon > which > the FuelAgent proposal are based are solvable through incremental > changes > in existing drivers, or out of scope for the project entirely. As > another > software-based deploy agent, FuelAgent would duplicate the majority of > the > functionality which ironic-python-agent has today. > > Ironic's driver ecosystem benefits from a diversity of > hardware-enablement > drivers. Today, we have two divergent software deployment drivers which > approach image deployment differently: "agent" drivers use a local > agent to > prepare a system and download the image; "pxe" drivers use a remote > agent > and copy the image over iSCSI. 
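For a concrete picture of the "minimal partitioning" described in point #1, the arithmetic of honouring user-requested root/swap/ephemeral sizes can be sketched as below. This is not Ironic's deploy code; alignment, partition tables, and the iSCSI plumbing are all elided, and the function name and defaults are invented for the sketch.

```python
def layout(root_mb, swap_mb, ephemeral_mb, start_mb=1):
    """Compute (name, start, end) triples in MiB, laid out back to back.

    Only the sizes the user asked for are honoured; any partition whose
    requested size is zero is skipped, and whatever disk space remains
    after the last partition is left unallocated for the user.
    """
    parts = []
    cursor = start_mb
    for name, size in (("root", root_mb),
                       ("swap", swap_mb),
                       ("ephemeral", ephemeral_mb)):
        if size:
            parts.append((name, cursor, cursor + size))
            cursor += size
    return parts
```

The unallocated tail is precisely the space in which, per points #1 and #3, a user can later create extra partitions or LVM volumes from inside their instance.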
I don't understand how a second driver > which > duplicates the functionality we already have, and shares the same goals > as > the drivers we already have, is beneficial to the project. > > Doing the same thing twice just increases the burden on the team; we're > all > working on the same problems, so let's do it together. > > -Devananda > > > [0] > https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition > > [1] https://review.openstack.org/#/c/107981/ > > [2] > > https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst > > > [3] > > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py > > [4] > > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py > > [5] > > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py > > > ------------------------------------------------------------------------ > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From isviridov at mirantis.com Tue Dec 9 12:02:08 2014 From: isviridov at mirantis.com (isviridov) Date: Tue, 09 Dec 2014 14:02:08 +0200 Subject: [openstack-dev] [MagnetoDB][api] Hello from the API WG In-Reply-To: References: Message-ID: <5486E4C0.9030300@mirantis.com> Hello API WG, It is nice to see such warm welcoming! 
We have started using the flag in order to join the API WG initiative and get feedback about our current API, as well as guidance for future improvements. The current API is described at RTD [1] as text, but a structured form is always better for analysis. I'll take a look at Swagger; I think it is a great idea to use it. Please consider me the liaison from the MagnetoDB project in the API WG; I'm interested in attending the meeting at 1600 UTC. Thank you! With best regards, Ilya Sviridov [1] http://magnetodb.readthedocs.org/en/latest/api_reference.html 08.12.2014 16:58, Everett Toews wrote: > Hello MagnetoDB! > > During the latest meeting [1] of the API Working Group (WG) we noticed that MagnetoDB made use of the APIImpact flag [2]. That's excellent and exactly how we were hoping the use of the flag as a discovery mechanism would work! > > We were wondering if the MagnetoDB team would like to designate a cross-project liaison [3] for the API WG? > > We would communicate with that person a bit more closely and figure out how we can best help your project. Perhaps they could attend an API WG Meeting [4] to get started. > > One thing that came up during the meeting was my suggestion that, if MagnetoDB had an API definition (like Swagger [5]), we could review the API design independently of the source code that implements the API. There are many other benefits of an API definition for documentation, testing, validation, and client creation. > > Does an API definition exist for the MagnetoDB API or would you be interested in creating one? > > Either way we'd like to hear your thoughts on the subject. > > Cheers, > Everett > > P.S. Just to set expectations properly, please note that review of the API by the WG does not endorse the project in any way. We're just trying to help design better APIs that are consistent with the rest of the OpenStack APIs.
> > [1] http://eavesdrop.openstack.org/meetings/api_wg/2014/api_wg.2014-12-04-16.01.html > [2] https://review.openstack.org/#/c/138059/ > [3] https://wiki.openstack.org/wiki/CrossProjectLiaisons#API_Working_Group > [4] https://wiki.openstack.org/wiki/Meetings/API-WG > [5] http://swagger.io/ > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From yzveryanskyy at mirantis.com Tue Dec 9 12:05:03 2014 From: yzveryanskyy at mirantis.com (Yuriy Zveryanskyy) Date: Tue, 09 Dec 2014 14:05:03 +0200 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: References: Message-ID: <5486E56F.9060606@mirantis.com> Good day, Ironicers. I do not want to discuss questions like "Is feature X good for release Y?" or "Is feature Z in Ironic's scope or not?". I want an answer to this: Is Ironic a flexible, easily extendable, and user-oriented solution for deployment? Yes, I think it is. IPA is great software, but Fuel Agent proposes a different, alternative way of deploying. Devananda wrote about "pets" and "cattle", and maybe some want to manage "pets" rather than "cattle"? Let users make that choice. We do not plan to change any Ironic API for the driver, internal or external (as opposed to IPA, for which this was done). If no one remains to support the Fuel Agent driver, I think it should be removed from the Ironic tree (I hear this practice is used in the Linux kernel). On 12/09/2014 12:23 AM, Devananda van der Veen wrote: > > I'd like to raise this topic for a wider discussion outside of the > hallway track and code reviews, where it has thus far mostly remained.
> > > In previous discussions, my understanding has been that the Fuel team > sought to use Ironic to manage "pets" rather than "cattle" - and doing > so required extending the API and the project's functionality in ways > that no one else on the core team agreed with. Perhaps that > understanding was wrong (or perhaps not), but in any case, there is > now a proposal to add a FuelAgent driver to Ironic. The proposal > claims this would meet that teams' needs without requiring changes to > the core of Ironic. > > > https://review.openstack.org/#/c/138115/ > > > The Problem Description section calls out four things, which have all > been discussed previously (some are here [0]). I would like to address > each one, invite discussion on whether or not these are, in fact, > problems facing Ironic (not whether they are problems for someone, > somewhere), and then ask why these necessitate a new driver be added > to the project. > > > They are, for reference: > > > 1. limited partition support > > 2. no software RAID support > > 3. no LVM support > > 4. no support for hardware that lacks a BMC > > > #1. > > When deploying a partition image (eg, QCOW format), Ironic's PXE > deploy driver performs only the minimal partitioning necessary to > fulfill its mission as an OpenStack service: respect the user's > request for root, swap, and ephemeral partition sizes. When deploying > a whole-disk image, Ironic does not perform any partitioning -- such > is left up to the operator who created the disk image. > > > Support for arbitrarily complex partition layouts is not required by, > nor does it facilitate, the goal of provisioning physical servers via > a common cloud API. Additionally, as with #3 below, nothing prevents a > user from creating more partitions in unallocated disk space once they > have access to their instance. Therefor, I don't see how Ironic's > minimal support for partitioning is a problem for the project. > > > #2. 
> > There is no support for defining a RAID in Ironic today, at all, > whether software or hardware. Several proposals were floated last > cycle; one is under review right now for DRAC support [1], and there > are multiple call outs for RAID building in the state machine > mega-spec [2]. Any such support for hardware RAID will necessarily be > abstract enough to support multiple hardware vendor's driver > implementations and both in-band creation (via IPA) and out-of-band > creation (via vendor tools). > > > Given the above, it may become possible to add software RAID support > to IPA in the future, under the same abstraction. This would closely > tie the deploy agent to the images it deploys (the latter image's > kernel would be dependent upon a software RAID built by the former), > but this would necessarily be true for the proposed FuelAgent as well. > > > I don't see this as a compelling reason to add a new driver to the > project. Instead, we should (plan to) add support for software RAID to > the deploy agent which is already part of the project. > > > #3. > > LVM volumes can easily be added by a user (after provisioning) within > unallocated disk space for non-root partitions. I have not yet seen a > compelling argument for doing this within the provisioning phase. > > > #4. > > There are already in-tree drivers [3] [4] [5] which do not require a > BMC. One of these uses SSH to connect and run pre-determined commands. > Like the spec proposal, which states at line 122, "Control via SSH > access feature intended only for experiments in non-production > environment," the current SSHPowerDriver is only meant for testing > environments. We could probably extend this driver to do what the > FuelAgent spec proposes, as far as remote power control for cheap > always-on hardware in testing environments with a pre-shared key. > > > (And if anyone wonders about a use case for Ironic without external > power control ... 
I can only think of one situation where I would > rationally ever want to have a control-plane agent running inside a > user-instance: I am both the operator and the only user of the cloud.) > > > ---------------- > > > In summary, as far as I can tell, all of the problem statements upon > which the FuelAgent proposal are based are solvable through > incremental changes in existing drivers, or out of scope for the > project entirely. As another software-based deploy agent, FuelAgent > would duplicate the majority of the functionality which > ironic-python-agent has today. > > > Ironic's driver ecosystem benefits from a diversity of > hardware-enablement drivers. Today, we have two divergent software > deployment drivers which approach image deployment differently: > "agent" drivers use a local agent to prepare a system and download the > image; "pxe" drivers use a remote agent and copy the image over iSCSI. > I don't understand how a second driver which duplicates the > functionality we already have, and shares the same goals as the > drivers we already have, is beneficial to the project. > > > Doing the same thing twice just increases the burden on the team; > we're all working on the same problems, so let's do it together. 
> > > -Devananda > > > > [0] > https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition > > > [1] https://review.openstack.org/#/c/107981/ > > > [2] > https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst > > > [3] > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py > > [4] > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py > > [5] > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py > > > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From sahid.ferdjaoui at redhat.com Tue Dec 9 12:32:43 2014 From: sahid.ferdjaoui at redhat.com (Sahid Orentino Ferdjaoui) Date: Tue, 9 Dec 2014 13:32:43 +0100 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5486DF7F.7080706@dague.net> References: <5486DF7F.7080706@dague.net> Message-ID: <20141209123243.GA9099@redhat.redhat.com> On Tue, Dec 09, 2014 at 06:39:43AM -0500, Sean Dague wrote: > I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. > > 1 - the entire H8* group. This doesn't function on python code, it > functions on git commit message, which makes it tough to run locally. It > also would be a reason to prevent us from not rerunning tests on commit > message changes (something we could do after the next gerrit update). -1. We probably want to recommend a more strongly formatted git commit message, mainly regarding the first line, which is the most important.
It should reflect which part of the code the commit is intended to update; that gives contributors the ability to quickly see what the submission relates to. An example with Nova, which is quite big: api, compute, doc, scheduler, virt, vmware, libvirt, objects... We should use a prefix in the first line of the commit message. There is a large number of commits waiting for reviews; a prefix can help contributors with knowledge of a particular domain to quickly identify which ones to pick. > 2 - the entire H3* group - because of this - > https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm > > A look at the H3* code shows that it's terribly complicated, and is > often full of bugs (a few bit us last week). I'd rather just delete it > and move on. > > -Sean > > -- > Sean Dague > http://dague.net > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dtantsur at redhat.com Tue Dec 9 12:51:28 2014 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 09 Dec 2014 13:51:28 +0100 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> Message-ID: <5486F050.808@redhat.com> Hi folks, Thank you for the additional explanation; it does clarify things a bit. I'd like to note, however, that you talk a lot about how _different_ Fuel Agent is from what Ironic does now. I'd actually like to know how well it's going to fit into what Ironic does (in addition to your specific use cases). Hence my comments inline: On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote: > Just a short explanation of Fuel use case. > > Fuel use case is not a cloud. Fuel is a deployment tool. We install OS > on bare metal servers and on VMs > and then configure this OS using Puppet.
We have been using Cobbler as > our OS provisioning tool since the beginning of Fuel. > However, Cobbler assumes using native OS installers (Anaconda and > Debian-installer). For some reasons we decided to > switch to image based approach for installing OS. > > One of Fuel features is the ability to provide advanced partitioning > schemes (including software RAIDs, LVM). > Native installers are quite difficult to customize in the field of > partitioning > (that was one of the reasons to switch to image based approach). > Moreover, we'd like to implement even more > flexible user experience. We'd like to allow user to choose which hard > drives to use for root FS, for > allocating DB. We'd like user to be able to put root FS over LV or MD > device (including stripe, mirror, multipath). > We'd like user to be able to choose which hard drives are bootable (if > any), which options to use for mounting file systems. > Many many various cases are possible. If you ask why we'd like to > support all those cases, the answer is simple: > because our users want us to support all those cases. > Obviously, many of those cases can not be implemented as image > internals, some cases can not be also implemented on > configuration stage (placing root fs on lvm device). > > As far as those use cases were rejected to be implemented in terms of > IPA, we implemented the so-called Fuel Agent. > Important Fuel Agent features are: > > * It does not have REST API I would not call it a feature :-P Speaking seriously, if your agent is a long-running thing and it gets its configuration from e.g. a JSON file, how can Ironic notify it of any changes?
> * it has executable entry point[s] > * It uses a local json file as its input > * It is planned to implement the ability to download input data via HTTP > (kind of metadata service) > * It is designed to be agnostic to input data format, not only Fuel > format (data drivers) > * It is designed to be agnostic to image format (tar images, file system > images, disk images, currently fs images) > * It is designed to be agnostic to image compression algorithm > (currently gzip) > * It is designed to be agnostic to image downloading protocol (currently > local file and HTTP link) Does it support Glance? I understand it's HTTP, but it requires authentication. > > So, it is clear that being motivated by Fuel, Fuel Agent is quite > independent and generic. And we are open for > new use cases. My favorite use case is hardware introspection (aka getting data required for scheduling from a node automatically). Any ideas on this? (It's not a priority for this discussion, just curious). > > Regarding Fuel itself, our nearest plan is to get rid of Cobbler because > in the case of image based approach it is a huge overhead. The question is > which tool we can use instead of Cobbler. We need power management, > we need TFTP management, we need DHCP management. That is > exactly what Ironic is able to do. Frankly, we can implement power/TFTP/DHCP > management tool independently, but as Devananda said, we're all working > on the same problems, > so let's do it together. Power/TFTP/DHCP management is where we are > working on the same problems, > but IPA and Fuel Agent are about different use cases. This case is not > just Fuel, any mature > deployment case requires advanced partition/fs management. Taking into consideration that you're doing a generic OS installation tool... yeah, it starts to make some sense. For a cloud, advanced partitioning is definitely a "pet" case.
However, for > me it is OK, if it is easily possible > to use Ironic with external drivers (not merged to Ironic and not tested > on Ironic CI). > > AFAIU, this spec https://review.openstack.org/#/c/138115/ does not > assume changing Ironic API and core. > Jim asked about how Fuel Agent will know about advanced disk > partitioning scheme if API is not > changed. The answer is simple: Ironic is supposed to send a link to > metadata service (http or local file) > where Fuel Agent can download input json data. That's not about not changing Ironic. Changing Ironic is ok for reasonable use cases - we do a huge change right now to accommodate zapping, hardware introspection and RAID configuration. I actually have problems with this particular statement. It does not sound like Fuel Agent will integrate enough with Ironic. This JSON file: who is going to generate it? In the most popular use case we're driven by Nova. Will Nova generate this file? If the answer is "generate it manually for every node", it's too much a "pet" case for me personally. > > As Roman said, we try to be pragmatic and suggest something which does > not break anything. All changes > are supposed to be encapsulated into a driver. No API and core changes. > We have resources to support, test > and improve this driver. This spec is just a zero step. Further steps > are supposed to improve driver > so as to make it closer to Ironic abstractions. Honestly I think you should at least write a roadmap for it - see my comments above. About testing and support: are you providing a 3rdparty CI for it? It would be a big plus as to me: we already have troubles with drivers broken accidentally. > > For Ironic that means widening use cases and user community. But, as I > already said, > we are OK if Ironic does not need this feature. I don't think we should through away your hardware provision use case, but I personally would like to see how well Fuel Agent is going to play with how Ironic and Nova operate. 
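(Editorial note: to make the question about the "input json data" concrete, here is a minimal sketch of what consuming such data could look like. The schema -- a "partitioning" list with "device"/"size_mib" keys -- and the function names are invented for illustration; the spec under review does not pin down a concrete format.)

```python
import json
from urllib.request import urlopen


def load_provision_data(source):
    """Load deploy input data from a local file path or an HTTP(S) URL.

    The FuelAgent spec only says the agent downloads "input json data"
    from a metadata service or a local file; the schema used below is
    a hypothetical example, not a documented format.
    """
    if source.startswith(("http://", "https://")):
        with urlopen(source) as resp:
            return json.load(resp)
    with open(source) as f:
        return json.load(f)


def validate_partitioning(data):
    """Minimal sanity check: every partition spec names a device and a size."""
    errors = []
    for i, part in enumerate(data.get("partitioning", [])):
        for key in ("device", "size_mib"):
            if key not in part:
                errors.append("spec %d is missing %r" % (i, key))
    return errors
```

Even a sketch like this makes the open question visible: something (an operator, Fuel, or Nova) still has to produce that JSON per node, which is exactly the integration point being debated.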
> > Vladimir Kozhukalov > > On Tue, Dec 9, 2014 at 1:09 PM, Roman Prykhodchenko > > wrote: > > It is true that IPA and FuelAgent share a lot of functionality in > common. However there is a major difference between them which is > that they are intended to be used to solve a different problem. > > IPA is a solution for provision-use-destroy-use_by_different_user > use-case and is really great for using it for providing BM nodes for > other OS services or in services like Rackspace OnMetal. FuelAgent > itself serves for provision-use-use-?-use use-case like Fuel or > TripleO have. > > Those two use-cases require concentration on different details in > first place. For instance for IPA proper decommissioning is more > important than advanced disk management, but for FuelAgent > priorities are opposite because of obvious reasons. > > Putting all functionality to a single driver and a single agent may > cause conflicts in priorities and make a lot of mess inside both the > driver and the agent. Actually previously changes to IPA were > blocked right because of this conflict of priorities. Therefore > replacing FuelAgent by IPA in where FuelAgent is used currently does > not seem like a good option because come people (and I?m not talking > about Mirantis) might loose required features because of different > priorities. > > Having two separate drivers along with two separate agents for those > different use-cases will allow to have two independent teams that > are concentrated on what?s really important for a specific use-case. > I don?t see any problem in overlapping functionality if it?s used > differently. > > > P. S. > I realise that people may be also confused by the fact that > FuelAgent is actually called like that and is used only in Fuel atm. > Our point is to make it a simple, powerful and what?s more important > a generic tool for provisioning. 
It is not bound to Fuel or Mirantis > and if it causes confusion in the future we will even be happy > to give it a different and less confusing name. > > P. P. S. > Some of the points of this integration do not look generic enough or > nice enough. We take a pragmatic view of this and are trying to > implement what's possible to implement as the first step. For sure > this is going to have a lot more steps to make it better and more > generic. > > >> On 09 Dec 2014, at 01:46, Jim Rollenhagen > > wrote: >> >> >> >> On December 8, 2014 2:23:58 PM PST, Devananda van der Veen >> > wrote: >>> I'd like to raise this topic for a wider discussion outside of the >>> hallway >>> track and code reviews, where it has thus far mostly remained. >>> >>> In previous discussions, my understanding has been that the Fuel team >>> sought to use Ironic to manage "pets" rather than "cattle" - and >>> doing >>> so >>> required extending the API and the project's functionality in >>> ways that >>> no >>> one else on the core team agreed with. Perhaps that understanding was >>> wrong >>> (or perhaps not), but in any case, there is now a proposal to add a >>> FuelAgent driver to Ironic. The proposal claims this would meet that >>> teams' >>> needs without requiring changes to the core of Ironic. >>> >>> https://review.openstack.org/#/c/138115/ >> >> I think it's clear from the review that I share the opinions >> expressed in this email. >> >> That said (and hopefully without derailing the thread too much), >> I'm curious how this driver could do software RAID or LVM without >> modifying Ironic's API or data model. How would the agent know how >> these should be built? How would an operator or user tell Ironic >> what the disk/partition/volume layout would look like? >> >> And before it's said - no, I don't think vendor passthru API calls >> are an appropriate answer here.
>> >> // jim >> >>> >>> The Problem Description section calls out four things, which have all >>> been >>> discussed previously (some are here [0]). I would like to address >>> each >>> one, >>> invite discussion on whether or not these are, in fact, problems >>> facing >>> Ironic (not whether they are problems for someone, somewhere), >>> and then >>> ask >>> why these necessitate a new driver be added to the project. >>> >>> >>> They are, for reference: >>> >>> 1. limited partition support >>> >>> 2. no software RAID support >>> >>> 3. no LVM support >>> >>> 4. no support for hardware that lacks a BMC >>> >>> #1. >>> >>> When deploying a partition image (eg, QCOW format), Ironic's PXE >>> deploy >>> driver performs only the minimal partitioning necessary to >>> fulfill its >>> mission as an OpenStack service: respect the user's request for root, >>> swap, >>> and ephemeral partition sizes. When deploying a whole-disk image, >>> Ironic >>> does not perform any partitioning -- such is left up to the operator >>> who >>> created the disk image. >>> >>> Support for arbitrarily complex partition layouts is not required by, >>> nor >>> does it facilitate, the goal of provisioning physical servers via a >>> common >>> cloud API. Additionally, as with #3 below, nothing prevents a >>> user from >>> creating more partitions in unallocated disk space once they have >>> access to >>> their instance. Therefor, I don't see how Ironic's minimal >>> support for >>> partitioning is a problem for the project. >>> >>> #2. >>> >>> There is no support for defining a RAID in Ironic today, at all, >>> whether >>> software or hardware. Several proposals were floated last cycle; >>> one is >>> under review right now for DRAC support [1], and there are multiple >>> call >>> outs for RAID building in the state machine mega-spec [2]. 
Any such >>> support >>> for hardware RAID will necessarily be abstract enough to support >>> multiple >>> hardware vendor's driver implementations and both in-band >>> creation (via >>> IPA) and out-of-band creation (via vendor tools). >>> >>> Given the above, it may become possible to add software RAID >>> support to >>> IPA >>> in the future, under the same abstraction. This would closely tie the >>> deploy agent to the images it deploys (the latter image's kernel >>> would >>> be >>> dependent upon a software RAID built by the former), but this would >>> necessarily be true for the proposed FuelAgent as well. >>> >>> I don't see this as a compelling reason to add a new driver to the >>> project. >>> Instead, we should (plan to) add support for software RAID to the >>> deploy >>> agent which is already part of the project. >>> >>> #3. >>> >>> LVM volumes can easily be added by a user (after provisioning) within >>> unallocated disk space for non-root partitions. I have not yet seen a >>> compelling argument for doing this within the provisioning phase. >>> >>> #4. >>> >>> There are already in-tree drivers [3] [4] [5] which do not require a >>> BMC. >>> One of these uses SSH to connect and run pre-determined commands. >>> Like >>> the >>> spec proposal, which states at line 122, "Control via SSH access >>> feature >>> intended only for experiments in non-production environment," the >>> current >>> SSHPowerDriver is only meant for testing environments. We could >>> probably >>> extend this driver to do what the FuelAgent spec proposes, as far as >>> remote >>> power control for cheap always-on hardware in testing >>> environments with >>> a >>> pre-shared key. >>> >>> (And if anyone wonders about a use case for Ironic without external >>> power >>> control ... I can only think of one situation where I would >>> rationally >>> ever >>> want to have a control-plane agent running inside a >>> user-instance: I am >>> both the operator and the only user of the cloud.) 
>>> >>> >>> ---------------- >>> >>> In summary, as far as I can tell, all of the problem statements upon >>> which >>> the FuelAgent proposal are based are solvable through incremental >>> changes >>> in existing drivers, or out of scope for the project entirely. As >>> another >>> software-based deploy agent, FuelAgent would duplicate the >>> majority of >>> the >>> functionality which ironic-python-agent has today. >>> >>> Ironic's driver ecosystem benefits from a diversity of >>> hardware-enablement >>> drivers. Today, we have two divergent software deployment drivers >>> which >>> approach image deployment differently: "agent" drivers use a local >>> agent to >>> prepare a system and download the image; "pxe" drivers use a remote >>> agent >>> and copy the image over iSCSI. I don't understand how a second driver >>> which >>> duplicates the functionality we already have, and shares the same >>> goals >>> as >>> the drivers we already have, is beneficial to the project. >>> >>> Doing the same thing twice just increases the burden on the team; >>> we're >>> all >>> working on the same problems, so let's do it together. 
>>> >>> -Devananda >>> >>> >>> [0] >>> https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition >>> >>> [1] https://review.openstack.org/#/c/107981/ >>> >>> [2] >>> https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst >>> >>> >>> [3] >>> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py >>> >>> [4] >>> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py >>> >>> [5] >>> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py >>> >>> >>> ------------------------------------------------------------------------ >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From joehuang at huawei.com Tue Dec 9 12:57:53 2014 From: joehuang at huawei.com (joehuang) Date: Tue, 9 Dec 2014 12:57:53 +0000 Subject: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC In-Reply-To: <5486CFBA.8030204@openstack.org> References: <5486B51A.6080001@openstack.org> <5E7A3D1BF5FD014E86E5F971CF446EFF541FB758@szxema505-mbs.china.huawei.com>, <5486CFBA.8030204@openstack.org> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FC872@szxema505-mbs.china.huawei.com> Hello, Thierry, That sounds great. 
Best Regards Chaoyi Huang ( joehuang ) ________________________________________ From: Thierry Carrez [thierry at openstack.org] Sent: 09 December 2014 18:32 To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC joehuang wrote: > If time is available, how about adding one agenda to guide the direction for cascading to move forward? Thanks in advance. > > The topic is : " Need cross-program decision to run cascading as an incubated project mode or register BP separately in each involved project. CI for cascading is quite different from traditional test environment, at least 3 OpenStack instance required for cross OpenStack networking test cases. " Hi Joe, we close the agenda one day before the meeting to let people arrange their attendance based on the published agenda. I added your topic in the backlog for next week agenda: https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting Regards, -- Thierry Carrez (ttx) _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sahid.ferdjaoui at redhat.com Tue Dec 9 12:58:28 2014 From: sahid.ferdjaoui at redhat.com (Sahid Orentino Ferdjaoui) Date: Tue, 9 Dec 2014 13:58:28 +0100 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: References: Message-ID: <20141209125828.GA10149@redhat.redhat.com> On Tue, Dec 09, 2014 at 10:46:12AM +0000, Matthew Gilliard wrote: > Sometimes, I want to ask the author of a patch about it on IRC. > However, there doesn't seem to be a reliable way to find out someone's > IRC handle. The potential for useful conversation is sometimes > missed. Unless there's a better alternative which I didn't find, > https://wiki.openstack.org/wiki/People seems to fulfill that purpose, > but is neither complete nor accurate. > > What do people think about this? 
Should we put more effort into > keeping the People wiki up-to-date? That's a(nother) manual process > though - can we autogenerate it somehow? We probably don't want to maintain another wiki page. We could instead recommend in How To Contribute that people correctly fill in the IRC field on launchpad.net, since OpenStack is closely related to it. https://wiki.openstack.org/wiki/How_To_Contribute s. > Matthew > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean at dague.net Tue Dec 9 13:16:34 2014 From: sean at dague.net (Sean Dague) Date: Tue, 09 Dec 2014 08:16:34 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <20141209123243.GA9099@redhat.redhat.com> References: <5486DF7F.7080706@dague.net> <20141209123243.GA9099@redhat.redhat.com> Message-ID: <5486F632.2020706@dague.net> On 12/09/2014 07:32 AM, Sahid Orentino Ferdjaoui wrote: > On Tue, Dec 09, 2014 at 06:39:43AM -0500, Sean Dague wrote: >> I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. >> >> 1 - the entire H8* group. This doesn't function on python code, it >> functions on git commit message, which makes it tough to run locally. It >> also would be a reason to prevent us from not rerunning tests on commit >> message changes (something we could do after the next gerrit update). > > -1. We probably want to recommend a more strongly formatted git commit > message, mainly regarding the first line, which is the most important. It > should reflect which part of the code the commit is intended to update; > that gives contributors the ability to quickly see what the > submission relates to. > > An example with Nova, which is quite big: api, compute, > doc, scheduler, virt, vmware, libvirt, objects... > > We should use a prefix in the first line of the commit message.
There > is a large number of commits waiting for reviews, that can help > contributors with a knowledge in a particular domain to identify > quickly which one to pick. And how exactly do you expect a machine to decide if that's done correctly? -Sean > >> 2 - the entire H3* group - because of this - >> https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm >> >> A look at the H3* code shows that it's terribly complicated, and is >> often full of bugs (a few bit us last week). I'd rather just delete it >> and move on. >> >> -Sean >> >> -- >> Sean Dague >> http://dague.net >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Sean Dague http://dague.net From julien at danjou.info Tue Dec 9 14:02:58 2014 From: julien at danjou.info (Julien Danjou) Date: Tue, 09 Dec 2014 15:02:58 +0100 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5486DF7F.7080706@dague.net> (Sean Dague's message of "Tue, 09 Dec 2014 06:39:43 -0500") References: <5486DF7F.7080706@dague.net> Message-ID: On Tue, Dec 09 2014, Sean Dague wrote: > 1 - the entire H8* group. This doesn't function on python code, it > functions on git commit message, which makes it tough to run locally. It > also would be a reason to prevent us from not rerunning tests on commit > message changes (something we could do after the next gerrit update). +1 > 2 - the entire H3* group - because of this - > https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm > > A look at the H3* code shows that it's terribly complicated, and is > often full of bugs (a few bit us last week). I'd rather just delete it > and move on. 
-0 Not sure it's a good idea to drop it, but I don't have strong arguments for it. -- Julien Danjou /* Free Software hacker http://julien.danjou.info */ From fungi at yuggoth.org Tue Dec 9 14:04:07 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Dec 2014 14:04:07 +0000 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: <20141209125828.GA10149@redhat.redhat.com> References: <20141209125828.GA10149@redhat.redhat.com> Message-ID: <20141209140407.GO2497@yuggoth.org> On 2014-12-09 13:58:28 +0100 (+0100), Sahid Orentino Ferdjaoui wrote: > We probably don't want to maintain an other page of Wiki. Yes, the wiki is about low-overhead collaborative documentation. It is not suitable as a database. > We can recommend in how to contribute to fill correctly the IRC > field in launchpad.NET since OpenStack is closely related. [...] OpenStack is not really closely related to Launchpad. At the moment we use its OpenID provider and its bug tracker, both of which the community is actively in the process of moving off of (to openstackid.org and storyboard.openstack.org respectively). We already have a solution for tracking the contributor->IRC mapping--add it to your Foundation Member Profile. For example, mine is in there already: http://www.openstack.org/community/members/profile/5479 -- Jeremy Stanley From berrange at redhat.com Tue Dec 9 14:04:11 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Tue, 9 Dec 2014 14:04:11 +0000 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: References: Message-ID: <20141209140411.GI29167@redhat.com> On Tue, Dec 09, 2014 at 10:53:19AM +0100, Maxime Leroy wrote: > I have also proposed a blueprint to have a new plugin mechanism in > nova to load external vif driver.
(nova-specs: > https://review.openstack.org/#/c/136827/ and nova (rfc patch): > https://review.openstack.org/#/c/136857/) > > From my point of view as a developer, having a plugin framework for > internal/external vif drivers seems to be a good thing. > It makes the code more modular and introduces a clear api for vif driver classes. > > So far, it raises legitimate questions concerning API stability and > public API that call for a wider discussion on the ML (as asked for by > John Garbutt). > > I think having a plugin mechanism and a clear api for vif drivers is > not going against this policy: > http://docs.openstack.org/developer/nova/devref/policies.html#out-of-tree-support. > > There is no need to have a stable API. It is up to the owner of the > external VIF driver to ensure that its driver is supported by the > latest API, not up to the nova community to manage a stable API for this > external VIF driver. Does it make sense? Experience has shown that even if it is documented as unsupported, once the extension point exists, vendors & users will ignore the small print about support status. There will be complaints raised every time it gets broken until we end up being forced to maintain it as a stable API whether we want to or not. That's not a route we want to go down. > Considering the network V2 API, the L2/ML2 mechanism driver and the VIF driver > need to exchange information such as binding:vif_type and > binding:vif_details. > > From my understanding, 'binding:vif_type' and 'binding:vif_details' are > fields that are part of the public network api. There are no validation > constraints for these fields (see > http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html), > which means that any value is accepted by the API. So, the values set in > 'binding:vif_type' and 'binding:vif_details' are not part of the > public API. Is my understanding correct? The VIF parameters are mapped into the nova.network.model.VIF class, which does some crude validation.
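As a rough illustration of what such crude validation amounts to (the class and the value set below are simplified stand-ins, not Nova's actual nova.network.model.VIF code):

```python
# Illustrative only: Nova's real model lives in nova.network.model.VIF and
# its recognized vif types differ from this sample set.
KNOWN_VIF_TYPES = {"bridge", "ovs", "ivs", "802.1qbg", "802.1qbh", "vhostuser"}


class VIF(object):
    def __init__(self, id, vif_type, details=None):
        if vif_type not in KNOWN_VIF_TYPES:
            # An out-of-tree driver inventing a new binding:vif_type value
            # would fail here until the consumer learns about the new name.
            raise ValueError("unsupported binding:vif_type %r" % vif_type)
        self.id = id
        self.type = vif_type
        self.details = details or {}
```

A driver shipping a vif_type the model does not know about simply breaks against such a check, which is what makes these values functional data rather than free-form strings.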
I would anticipate that this validation will be increasing over time, because it is functional data flowing over the API and so needs to be carefully managed for upgrade reasons. Even if the Neutron impl is out of tree, I would still expect both Nova and Neutron core to sign off on any new VIF type name and its associated details (if any). > What other reasons am I missing to not have VIF driver classes as a > public extension point? Having to find & install VIF driver classes from countless different vendors, each hiding their code away in their own obscure website, will lead to an awful end user experience when deploying Nova. Users are better served by having it all provided when they deploy Nova, IMHO. If every vendor goes off & works in their own isolated world we also lose the scope to align the implementations, so that common concepts work the same way in all cases and allow us to minimize the number of new VIF types required. The proposed vhostuser VIF type is a good example of this - it allows a single Nova VIF driver to be capable of potentially supporting multiple different impls on the Neutron side. If every vendor worked in their own world, we would have ended up with multiple VIF drivers doing the same thing in Nova, each with their own set of bugs & quirks. I expect the quality of the code the operator receives will be lower if it is never reviewed by anyone except the vendor who writes it in the first place.
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From doug at doughellmann.com Tue Dec 9 14:07:43 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 9 Dec 2014 09:07:43 -0500 Subject: [openstack-dev] [oslo.messaging][devstack] ZeroMQ driver maintenance next steps In-Reply-To: <548679CD.4030205@gmail.com> References: <546A058C.7090800@ubuntu.com> <1C39F2D6-C600-4BA5-8F95-79E2E7DA7172@doughellmann.com> <6F5E7E0C-656F-4616-A816-BC2E340299AD@doughellmann.com> <37ecc12f660ddca2b5a76930c06c1ce6@sileht.net> <546B4DC7.9090309@ubuntu.com> <548679CD.4030205@gmail.com> Message-ID: <69DE76EB-8BCD-4D35-9099-13B71160AF80@doughellmann.com> On Dec 8, 2014, at 11:25 PM, Li Ma wrote: > Hi all, I tried to deploy zeromq with devstack and it definitely failed, with lots of problems: dependencies, topics, matchmaker setup, etc. I've already registered a blueprint for devstack-zeromq [1]. I added the [devstack] tag to the subject of this message so that team will see the thread. > > Besides, I suggest building a wiki page to track all the work items related to ZeroMQ. The general sections may be [Why ZeroMQ], [Current Bugs & Reviews], [Future Plan & Blueprints], [Discussions], [Resources], etc. Coordinating the work on this via a wiki page makes sense. Please post the link when you're ready. Doug > > Any comments? > > [1] https://blueprints.launchpad.net/devstack/+spec/zeromq > > cheers, > Li Ma > > On 2014/11/18 21:46, James Page wrote: >> On 18/11/14 00:55, Denis Makogon wrote: >>> >>> So if zmq driver support in devstack is fixed, we can easily add a >>> new job to run them in the same way. >>> >>> >>> Btw this is a good question. I will take a look at the current state of
I will take look at current state of >>> zmq in devstack. >> I don't think its that far off and its broken rather than missing - >> the rpc backend code needs updating to use oslo.messaging rather than >> project specific copies of the rpc common codebase (pre oslo). >> Devstack should be able to run with the local matchmaker in most >> scenarios but it looks like there was support for the redis matchmaker >> as well. >> >> If you could take some time to fixup that would be awesome! >> >> - -- James Page >> Ubuntu and Debian Developer >> james.page at ubuntu.com >> jamespage at debian.org >> -----BEGIN PGP SIGNATURE----- >> Version: GnuPG v1 >> >> iQIbBAEBCAAGBQJUa03HAAoJEL/srsug59jDdZQP+IeEvXAcfxNs2Tgvt5trnjgg >> cnTrJPLbr6i/uIXKjRvNDSkJEdv//EjL/IRVRIf0ld09FpRnyKzUDMPq1CzFJqdo >> 45RqFWwJ46NVA4ApLZVugJvKc4tlouZQvizqCBzDKA6yUsUoGmRpYFAQ3rN6Gs9h >> Q/8XSAmHQF1nyTarxvylZgnqhqWX0p8n1+fckQeq2y7s3D3WxfM71ftiLrmQCWir >> aPkH7/0qvW+XiOtBXVTXDb/7pocNZg+jtBkUcokORXbJCmiCN36DBXv9LPIYgfhe >> /cC/wQFH4RUSkoj7SYPAafX4J2lTMjAd+GwdV6ppKy4DbPZdNty8c9cbG29KUK40 >> TSCz8U3tUcaFGDQdBB5Kg85c1aYri6dmLxJlk7d8pOXLTb0bfnzdl+b6UsLkhXqB >> P4Uc+IaV9vxoqmYZAzuqyWm9QriYlcYeaIJ9Ma5fN+CqxnIaCS7UbSxHj0yzTaUb >> 4XgmcQBwHe22ouwBmk2RGzLc1Rv8EzMLbbrGhtTu459WnAZCrXOTPOCn54PoIgZD >> bK/Om+nmTxepWD1lExHIYk3BXyZObxPO00UJHdxvSAIh45ROlh8jW8hQA9lJ9QVu >> Cz775xVlh4DRYgenN34c2afOrhhdq4V1OmjYUBf5M4gS6iKa20LsMjp7NqT0jzzB >> tRDFb67u28jxnIXR16g= >> =+k0M >> -----END PGP SIGNATURE----- >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mestery at mestery.com Tue Dec 9 14:08:54 2014 From: mestery at mestery.com (Kyle Mestery) Date: Tue, 9 Dec 2014 08:08:54 -0600 Subject: 
[openstack-dev] [neutron] Spec reviews this week by the neutron-drivers team Message-ID: The neutron-drivers team has started the process of both accepting and rejecting specs for Kilo now. If you've submitted a spec, you will soon see it either approved or landed in the abandoned or -2 category. We're doing our best to leave helpful messages when we do abandon or -2 specs, but for more detail, see the neutron-drivers wiki page [1]. Also, you can find me on IRC with questions as well. Thanks! Kyle [1] https://wiki.openstack.org/wiki/Neutron-drivers From doug at doughellmann.com Tue Dec 9 14:11:29 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 9 Dec 2014 09:11:29 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5486DF7F.7080706@dague.net> References: <5486DF7F.7080706@dague.net> Message-ID: <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> On Dec 9, 2014, at 6:39 AM, Sean Dague wrote: > I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. > > 1 - the entire H8* group. This doesn't function on python code, it > functions on git commit message, which makes it tough to run locally. It > also would be a reason to prevent us from not rerunning tests on commit > message changes (something we could do after the next gerrit update). > > 2 - the entire H3* group - because of this - > https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm > > A look at the H3* code shows that it's terribly complicated, and is > often full of bugs (a few bit us last week). I'd rather just delete it > and move on. I don't have the hacking rules memorized. Could you describe them briefly?
Doug > > -Sean > > -- > Sean Dague > http://dague.net > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dzimine at stackstorm.com Tue Dec 9 14:35:06 2014 From: dzimine at stackstorm.com (Dmitri Zimine) Date: Tue, 9 Dec 2014 06:35:06 -0800 Subject: [openstack-dev] [Mistral] Message-ID: <2E47CCEA-109A-4FD3-9984-9457C565089F@stackstorm.com> Winson, thanks for filing the blueprint: https://blueprints.launchpad.net/mistral/+spec/mistral-global-context, some clarification questions: 1) how exactly would the user describe these global variables syntactically? In DSL? What can we use as syntax? In the initial workflow input? 2) what is the visibility scope: this and child workflows, or "truly global"? 3) What is a good default behavior? Let's detail it a bit more. DZ> From vkozhukalov at mirantis.com Tue Dec 9 14:40:06 2014 From: vkozhukalov at mirantis.com (Vladimir Kozhukalov) Date: Tue, 9 Dec 2014 18:40:06 +0400 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: <5486F050.808@redhat.com> References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> <5486F050.808@redhat.com> Message-ID: Vladimir Kozhukalov On Tue, Dec 9, 2014 at 3:51 PM, Dmitry Tantsur wrote: > Hi folks, > > Thank you for the additional explanation, it does clarify things a bit. I'd > like to note, however, that you talk a lot about how _different_ Fuel Agent > is from what Ironic does now. I'd actually like to know how well it's going > to fit into what Ironic does (in addition to your specific use cases). > Hence my comments inline: > > On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote: > >> Just a short explanation of the Fuel use case. >> >> The Fuel use case is not a cloud. Fuel is a deployment tool. We install OS >> on bare metal servers and on VMs
We install OS >> on bare metal servers and on VMs >> and then configure this OS using Puppet. We have been using Cobbler as >> our OS provisioning tool since the beginning of Fuel. >> However, Cobbler assumes using native OS installers (Anaconda and >> Debian-installer). For some reasons we decided to >> switch to image based approach for installing OS. >> >> One of Fuel features is the ability to provide advanced partitioning >> schemes (including software RAIDs, LVM). >> Native installers are quite difficult to customize in the field of >> partitioning >> (that was one of the reasons to switch to image based approach). >> Moreover, we'd like to implement even more >> flexible user experience. We'd like to allow user to choose which hard >> drives to use for root FS, for >> allocating DB. We'd like user to be able to put root FS over LV or MD >> device (including stripe, mirror, multipath). >> We'd like user to be able to choose which hard drives are bootable (if >> any), which options to use for mounting file systems. >> Many many various cases are possible. If you ask why we'd like to >> support all those cases, the answer is simple: >> because our users want us to support all those cases. >> Obviously, many of those cases can not be implemented as image >> internals, some cases can not be also implemented on >> configuration stage (placing root fs on lvm device). >> >> As far as those use cases were rejected to be implemented in term of >> IPA, we implemented so called Fuel Agent. >> Important Fuel Agent features are: >> >> * It does not have REST API >> > I would not call it a feature :-P > > Speaking seriously, if you agent is a long-running thing and it gets it's > configuration from e.g. JSON file, how can Ironic notify it of any changes? > > Fuel Agent is not long-running service. Currently there is no need to have REST API. If we deal with kind of keep alive stuff of inventory/discovery then we probably add API. Frankly, IPA REST API is not REST at all. 
However, that is not a reason not to call it a feature and throw it away. It is a reason to work on it and improve it. That is how I try to look at things (pragmatically). Fuel Agent has executable entry point[s] like /usr/bin/provision. You can run this entry point with options (oslo.config) and point it at where to find the input json data. The idea is that Ironic will use an ssh connection (currently in Fuel we use mcollective) to run it and wait for the exit code. If the exit code is equal to 0, provisioning is done. Extremely simple. > * it has executable entry point[s] >> * It uses a local json file as its input >> * It is planned to implement the ability to download input data via HTTP >> (kind of a metadata service) >> * It is designed to be agnostic to the input data format, not only the Fuel >> format (data drivers) >> * It is designed to be agnostic to the image format (tar images, file system >> images, disk images, currently fs images) >> * It is designed to be agnostic to the image compression algorithm >> (currently gzip) >> * It is designed to be agnostic to the image downloading protocol (currently >> local file and HTTP link) >> > Does it support Glance? I understand it's HTTP, but it requires > authentication. > > >> So, it is clear that while motivated by Fuel, Fuel Agent is quite >> independent and generic. And we are open to >> new use cases. >> > My favorite use case is hardware introspection (aka getting data required > for scheduling from a node automatically). Any ideas on this? (It's not a > priority for this discussion, just curious). That is exactly what we do in Fuel. Currently we use a so-called 'Default' pxelinux config, and all nodes being powered on are supposed to boot into a so-called 'Bootstrap' ramdisk where an Ohai based agent (not Fuel Agent) runs periodically and sends a hardware report to the Fuel master node. The user is then able to look at CPU, hard drive and network info and choose which nodes to use for controllers, which for computes, etc.
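The invocation model described above (run the agent's entry point over ssh, wait, and treat exit code 0 as success) can be sketched as follows. The flag name and the command shape are assumptions for illustration, not Fuel Agent's exact CLI:

```python
import subprocess


def run_provision(host, data_url, runner=subprocess.call):
    """Run a provisioning entry point on ``host`` and report success.

    ``runner`` is injectable so the sketch can be exercised without a real
    ssh target; by default it shells out via subprocess.
    """
    # Assumed flag name; the real entry point is driven by oslo.config and
    # its options may differ.
    cmd = ["ssh", host, "/usr/bin/provision",
           "--input_data_url", data_url]
    # Exit code 0 means provisioning is done; anything else is a failure.
    return runner(cmd) == 0
```

The orchestrator needs no long-lived connection to the agent: it fires the command, blocks on the exit status, and moves the node to the next state.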
That is what the nova scheduler is supposed to do (look at hardware info and choose a suitable node). Talking about the future, we are planning to re-implement the inventory/discovery stuff in terms of Fuel Agent (currently it is implemented as an Ohai based independent script). The estimate for that is March 2015. > > >> Regarding Fuel itself, our nearest plan is to get rid of Cobbler because >> in the case of the image based approach it is huge overhead. The question is >> which tool we can use instead of Cobbler. We need power management, >> we need TFTP management, we need DHCP management. That is >> exactly what Ironic is able to do. Frankly, we could implement a >> power/TFTP/DHCP >> management tool independently, but as Devananda said, we're all working >> on the same problems, >> so let's do it together. Power/TFTP/DHCP management is where we are >> working on the same problems, >> but IPA and Fuel Agent are about different use cases. This case is not >> just Fuel; any mature >> deployment case requires advanced partition/fs management. >> > Taking into consideration that you're doing a generic OS installation > tool... yeah, it starts to make some sense. For a cloud, advanced partitioning is > definitely a "pet" case. A generic image based OS installation tool. > > However, for >> me it is OK, if it is easily possible >> to use Ironic with external drivers (not merged to Ironic and not tested >> on Ironic CI). >> >> AFAIU, this spec https://review.openstack.org/#/c/138115/ does not >> assume changing the Ironic API and core. >> Jim asked how Fuel Agent will know about the advanced disk >> partitioning scheme if the API is not >> changed. The answer is simple: Ironic is supposed to send a link to a >> metadata service (http or local file) >> from which Fuel Agent can download the input json data. >> > That's not about not changing Ironic. Changing Ironic is ok for reasonable > use cases - we are doing a huge change right now to accommodate zapping, hardware > introspection and RAID configuration.
> > Minimal changes because we don't want to break anything. It is clear how difficult to convince people to do even minimal changes. Again it is just a pragmatic approach. We want to do things iteratively so as not to break Ironic as well as Fuel. We just can not change all at once. > I actually have problems with this particular statement. It does not sound > like Fuel Agent will integrate enough with Ironic. This JSON file: who is > going to generate it? In the most popular use case we're driven by Nova. > Will Nova generate this file? > > If the answer is "generate it manually for every node", it's too much a > "pet" case for me personally. > > That is how this provision data look like right now https://github.com/stackforge/fuel-web/blob/master/fuel_agent_ci/samples/provision.json Do you still think it is written manually? Currently Fuel Agent works as a part of Fuel ecosystem. We have a service which serializes provision data for us into json. Fuel Agent is agnostic to data format (data drivers). If someone wants to use another format, they are welcome to implement a driver. We assume next step will be to put provision data (disk partition scheme, maybe other data) into driver_info and make Fuel Agent driver able to serialize those data (special format) and implement a corresponding data driver in Fuel Agent for this format. Again very simple. Maybe it is time to think of having Ironic metadata service (just maybe). Another point is that currently Fuel stores hardware info in its own database but when it is possible to get those data from Ironic (when inventory stuff is implemented) we will be glad to use Ironic API for that. That is what I mean when I say 'to make Fuel stuff closer to Ironic abstractions' > >> As Roman said, we try to be pragmatic and suggest something which does >> not break anything. All changes >> are supposed to be encapsulated into a driver. No API and core changes. >> We have resources to support, test >> and improve this driver. 
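The "data driver" idea, where the agent stays agnostic to the input format and a per-format driver turns raw JSON into the internal partitioning scheme, might be sketched like this (the class and key names are hypothetical, not the actual Fuel Agent code):

```python
import json

# Registry of input-format drivers, keyed by format name.
DRIVERS = {}


def data_driver(name):
    # Register a parser class for one input data format.
    def register(cls):
        DRIVERS[name] = cls
        return cls
    return register


@data_driver("nailgun")
class NailgunDataDriver(object):
    """Parses provision data serialized by Fuel's nailgun service."""

    def __init__(self, data):
        self.data = data

    def partition_scheme(self):
        # Hypothetical key names; only the dispatch shape matters here.
        return self.data.get("ks_meta", {}).get("pm_data", {})


def parse(fmt, payload):
    """Pick the driver registered for ``fmt`` and hand it the decoded JSON."""
    return DRIVERS[fmt](json.loads(payload))
```

Supporting another orchestrator (Ironic's driver_info, for instance) would then mean registering one more driver class rather than touching the agent core.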
This spec is just a zero step. Further steps >> are supposed to improve driver >> so as to make it closer to Ironic abstractions. >> > Honestly I think you should at least write a roadmap for it - see my > comments above. > Honestly, I think writing roadmap right now is not very rational as far as I am not even sure people are interested in widening Ironic use cases. Some of the comments were not even constructive like "I don't understand what your use case is, please use IPA". > > About testing and support: are you providing a 3rdparty CI for it? It > would be a big plus as to me: we already have troubles with drivers broken > accidentally. We are flexible here but I'm not ready to answer this question right now. We will try to fit Ironic requirements wherever it is possible. > > >> For Ironic that means widening use cases and user community. But, as I >> already said, >> we are OK if Ironic does not need this feature. >> > I don't think we should through away your hardware provision use case, but > I personally would like to see how well Fuel Agent is going to play with > how Ironic and Nova operate. > Nova is not our case. Fuel is totally about deployment. There is some in common As I already explained, currently we need power/tftp/dhcp management Ironic capabilities. Again, it is not a problem to implement this stuff independently like it happened with Fuel Agent (because this use case was rejected several months ago). Our suggestion is not about "let's compete with IPA" it is totally about "let's work on the same problems together". > >> Vladimir Kozhukalov >> >> On Tue, Dec 9, 2014 at 1:09 PM, Roman Prykhodchenko >> > wrote: >> >> It is true that IPA and FuelAgent share a lot of functionality in >> common. However there is a major difference between them which is >> that they are intended to be used to solve a different problem. 
>> >> IPA is a solution for provision-use-destroy-use_by_different_user >> use-case and is really great for using it for providing BM nodes for >> other OS services or in services like Rackspace OnMetal. FuelAgent >> itself serves for provision-use-use-?-use use-case like Fuel or >> TripleO have. >> >> Those two use-cases require concentration on different details in >> first place. For instance for IPA proper decommissioning is more >> important than advanced disk management, but for FuelAgent >> priorities are opposite because of obvious reasons. >> >> Putting all functionality to a single driver and a single agent may >> cause conflicts in priorities and make a lot of mess inside both the >> driver and the agent. Actually previously changes to IPA were >> blocked right because of this conflict of priorities. Therefore >> replacing FuelAgent by IPA in where FuelAgent is used currently does >> not seem like a good option because come people (and I?m not talking >> about Mirantis) might loose required features because of different >> priorities. >> >> Having two separate drivers along with two separate agents for those >> different use-cases will allow to have two independent teams that >> are concentrated on what?s really important for a specific use-case. >> I don?t see any problem in overlapping functionality if it?s used >> differently. >> >> >> P. S. >> I realise that people may be also confused by the fact that >> FuelAgent is actually called like that and is used only in Fuel atm. >> Our point is to make it a simple, powerful and what?s more important >> a generic tool for provisioning. It is not bound to Fuel or Mirantis >> and if it will cause confusion in the future we will even be happy >> to give it a different and less confusing name. >> >> P. P. S. >> Some of the points of this integration do not look generic enough or >> nice enough. We look pragmatic on the stuff and are trying to >> implement what?s possible to implement as the first step. 
For sure >> this is going to have a lot more steps to make it better and more >> generic. >> >> >> On 09 Dec 2014, at 01:46, Jim Rollenhagen >> > wrote: >>> >>> >>> >>> On December 8, 2014 2:23:58 PM PST, Devananda van der Veen >>> > wrote: >>> >>>> I'd like to raise this topic for a wider discussion outside of the >>>> hallway >>>> track and code reviews, where it has thus far mostly remained. >>>> >>>> In previous discussions, my understanding has been that the Fuel >>>> team >>>> sought to use Ironic to manage "pets" rather than "cattle" - and >>>> doing >>>> so >>>> required extending the API and the project's functionality in >>>> ways that >>>> no >>>> one else on the core team agreed with. Perhaps that understanding >>>> was >>>> wrong >>>> (or perhaps not), but in any case, there is now a proposal to add a >>>> FuelAgent driver to Ironic. The proposal claims this would meet that >>>> teams' >>>> needs without requiring changes to the core of Ironic. >>>> >>>> https://review.openstack.org/#/c/138115/ >>>> >>> >>> I think it's clear from the review that I share the opinions >>> expressed in this email. >>> >>> That said (and hopefully without derailing the thread too much), >>> I'm curious how this driver could do software RAID or LVM without >>> modifying Ironic's API or data model. How would the agent know how >>> these should be built? How would an operator or user tell Ironic >>> what the disk/partition/volume layout would look like? >>> >>> And before it's said - no, I don't think vendor passthru API calls >>> are an appropriate answer here. >>> >>> // jim >>> >>> >>>> The Problem Description section calls out four things, which have >>>> all >>>> been >>>> discussed previously (some are here [0]). 
I would like to address >>>> each >>>> one, >>>> invite discussion on whether or not these are, in fact, problems >>>> facing >>>> Ironic (not whether they are problems for someone, somewhere), >>>> and then >>>> ask >>>> why these necessitate a new driver be added to the project. >>>> >>>> >>>> They are, for reference: >>>> >>>> 1. limited partition support >>>> >>>> 2. no software RAID support >>>> >>>> 3. no LVM support >>>> >>>> 4. no support for hardware that lacks a BMC >>>> >>>> #1. >>>> >>>> When deploying a partition image (eg, QCOW format), Ironic's PXE >>>> deploy >>>> driver performs only the minimal partitioning necessary to >>>> fulfill its >>>> mission as an OpenStack service: respect the user's request for >>>> root, >>>> swap, >>>> and ephemeral partition sizes. When deploying a whole-disk image, >>>> Ironic >>>> does not perform any partitioning -- such is left up to the operator >>>> who >>>> created the disk image. >>>> >>>> Support for arbitrarily complex partition layouts is not required >>>> by, >>>> nor >>>> does it facilitate, the goal of provisioning physical servers via a >>>> common >>>> cloud API. Additionally, as with #3 below, nothing prevents a >>>> user from >>>> creating more partitions in unallocated disk space once they have >>>> access to >>>> their instance. Therefor, I don't see how Ironic's minimal >>>> support for >>>> partitioning is a problem for the project. >>>> >>>> #2. >>>> >>>> There is no support for defining a RAID in Ironic today, at all, >>>> whether >>>> software or hardware. Several proposals were floated last cycle; >>>> one is >>>> under review right now for DRAC support [1], and there are multiple >>>> call >>>> outs for RAID building in the state machine mega-spec [2]. Any such >>>> support >>>> for hardware RAID will necessarily be abstract enough to support >>>> multiple >>>> hardware vendor's driver implementations and both in-band >>>> creation (via >>>> IPA) and out-of-band creation (via vendor tools). 
>>>> >>>> Given the above, it may become possible to add software RAID >>>> support to >>>> IPA >>>> in the future, under the same abstraction. This would closely tie >>>> the >>>> deploy agent to the images it deploys (the latter image's kernel >>>> would >>>> be >>>> dependent upon a software RAID built by the former), but this would >>>> necessarily be true for the proposed FuelAgent as well. >>>> >>>> I don't see this as a compelling reason to add a new driver to the >>>> project. >>>> Instead, we should (plan to) add support for software RAID to the >>>> deploy >>>> agent which is already part of the project. >>>> >>>> #3. >>>> >>>> LVM volumes can easily be added by a user (after provisioning) >>>> within >>>> unallocated disk space for non-root partitions. I have not yet seen >>>> a >>>> compelling argument for doing this within the provisioning phase. >>>> >>>> #4. >>>> >>>> There are already in-tree drivers [3] [4] [5] which do not require a >>>> BMC. >>>> One of these uses SSH to connect and run pre-determined commands. >>>> Like >>>> the >>>> spec proposal, which states at line 122, "Control via SSH access >>>> feature >>>> intended only for experiments in non-production environment," the >>>> current >>>> SSHPowerDriver is only meant for testing environments. We could >>>> probably >>>> extend this driver to do what the FuelAgent spec proposes, as far as >>>> remote >>>> power control for cheap always-on hardware in testing >>>> environments with >>>> a >>>> pre-shared key. >>>> >>>> (And if anyone wonders about a use case for Ironic without external >>>> power >>>> control ... I can only think of one situation where I would >>>> rationally >>>> ever >>>> want to have a control-plane agent running inside a >>>> user-instance: I am >>>> both the operator and the only user of the cloud.) 
>>>> ----------------
>>>>
>>>> In summary, as far as I can tell, all of the problem statements upon
>>>> which the FuelAgent proposal is based are solvable through
>>>> incremental changes in existing drivers, or out of scope for the
>>>> project entirely. As another software-based deploy agent, FuelAgent
>>>> would duplicate the majority of the functionality which
>>>> ironic-python-agent has today.
>>>>
>>>> Ironic's driver ecosystem benefits from a diversity of
>>>> hardware-enablement drivers. Today, we have two divergent software
>>>> deployment drivers which approach image deployment differently:
>>>> "agent" drivers use a local agent to prepare a system and download
>>>> the image; "pxe" drivers use a remote agent and copy the image over
>>>> iSCSI. I don't understand how a second driver which duplicates the
>>>> functionality we already have, and shares the same goals as the
>>>> drivers we already have, is beneficial to the project.
>>>>
>>>> Doing the same thing twice just increases the burden on the team;
>>>> we're all working on the same problems, so let's do it together.
>>>> -Devananda
>>>>
>>>> [0] https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition
>>>>
>>>> [1] https://review.openstack.org/#/c/107981/
>>>>
>>>> [2] https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst
>>>>
>>>> [3] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py
>>>>
>>>> [4] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py
>>>>
>>>> [5] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py
>>>>
>>>> ------------------------------------------------------------------------
>>>>
>>>> _______________________________________________
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vkozhukalov at mirantis.com  Tue Dec  9 14:45:19 2014
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Tue, 9 Dec 2014 18:45:19 +0400
Subject: [openstack-dev] [Ironic] Fuel agent proposal
In-Reply-To: 
References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> <5486F050.808@redhat.com>
Message-ID: 

s/through/throw/g

Vladimir Kozhukalov

On Tue, Dec 9, 2014 at 5:40 PM, Vladimir Kozhukalov <vkozhukalov at mirantis.com> wrote:

> Vladimir Kozhukalov
>
> On Tue, Dec 9, 2014 at 3:51 PM, Dmitry Tantsur wrote:
>
>> Hi folks,
>>
>> Thank you for the additional explanation, it does clarify things a bit.
>> I'd like to note, however, that you talk a lot about how _different_
>> Fuel Agent is from what Ironic does now. I'd actually like to know how
>> well it's going to fit into what Ironic does (in addition to your
>> specific use cases). Hence my comments inline:
>>
>> On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote:
>>
>>> Just a short explanation of the Fuel use case.
>>>
>>> The Fuel use case is not a cloud. Fuel is a deployment tool. We
>>> install an OS on bare metal servers and on VMs and then configure this
>>> OS using Puppet. We have been using Cobbler as our OS provisioning
>>> tool since the beginning of Fuel. However, Cobbler assumes using
>>> native OS installers (Anaconda and Debian-installer). For some reasons
>>> we decided to switch to an image based approach for installing the OS.
>>>
>>> One of Fuel's features is the ability to provide advanced partitioning
>>> schemes (including software RAIDs, LVM). Native installers are quite
>>> difficult to customize in the field of partitioning (that was one of
>>> the reasons to switch to the image based approach). Moreover, we'd
>>> like to implement an even more flexible user experience. We'd like to
>>> allow the user to choose which hard drives to use for the root FS, for
>>> allocating DB.
>>> We'd like the user to be able to put the root FS over an LV or MD
>>> device (including stripe, mirror, multipath). We'd like the user to be
>>> able to choose which hard drives are bootable (if any), and which
>>> options to use for mounting file systems. Many, many various cases are
>>> possible. If you ask why we'd like to support all those cases, the
>>> answer is simple: because our users want us to support all those
>>> cases. Obviously, many of those cases can not be implemented as image
>>> internals, and some cases also can not be implemented at the
>>> configuration stage (placing the root fs on an lvm device).
>>>
>>> As far as those use cases were rejected to be implemented in terms of
>>> IPA, we implemented the so called Fuel Agent. Important Fuel Agent
>>> features are:
>>>
>>> * It does not have REST API
>>
>> I would not call it a feature :-P
>>
>> Speaking seriously, if your agent is a long-running thing and it gets
>> its configuration from e.g. a JSON file, how can Ironic notify it of
>> any changes?
>>
> Fuel Agent is not a long-running service. Currently there is no need to
> have a REST API. If we deal with some kind of keep-alive stuff for
> inventory/discovery then we will probably add an API. Frankly, the IPA
> REST API is not REST at all. However, that is not a reason not to call
> it a feature and through it away. It is a reason to work on it and
> improve it. That is how I try to look at things (pragmatically).
>
> Fuel Agent has executable entry point[s] like /usr/bin/provision. You
> can run this entry point with options (oslo.config) and point out where
> to find the input json data. It is supposed that Ironic will use an ssh
> (currently in Fuel we use mcollective) connection, run this, and wait
> for the exit code. If the exit code is equal to 0, provisioning is done.
> Extremely simple.
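[As a toy sketch of the invocation flow just described (build the command, run it over SSH, treat exit code 0 as success): the flag name and the use of plain `ssh` here are assumptions for illustration only, not the real driver or agent interface.]

```python
import subprocess

def build_provision_cmd(host, data_path):
    # oslo.config-style option pointing at the input JSON; the exact
    # flag spelling is an assumption, not the real Fuel Agent option.
    return ["ssh", host, "/usr/bin/provision", "--input_data_file", data_path]

def provision_node(host, data_path):
    """Run the agent entry point on the node; exit code 0 means success."""
    return subprocess.run(build_provision_cmd(host, data_path)).returncode == 0
```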
>>> * it has executable entry point[s]
>>> * It uses a local json file as its input
>>> * It is planned to implement the ability to download input data via
>>>   HTTP (kind of metadata service)
>>> * It is designed to be agnostic to input data format, not only the
>>>   Fuel format (data drivers)
>>> * It is designed to be agnostic to image format (tar images, file
>>>   system images, disk images; currently fs images)
>>> * It is designed to be agnostic to image compression algorithm
>>>   (currently gzip)
>>> * It is designed to be agnostic to image downloading protocol
>>>   (currently local file and HTTP link)
>>
>> Does it support Glance? I understand it's HTTP, but it requires
>> authentication.
>>
>>> So, it is clear that being motivated by Fuel, Fuel Agent is quite
>>> independent and generic. And we are open for new use cases.
>>
>> My favorite use case is hardware introspection (aka getting the data
>> required for scheduling from a node automatically). Any ideas on this?
>> (It's not a priority for this discussion, just curious.)
>
> That is exactly what we do in Fuel. Currently we use a so called
> 'Default' pxelinux config, and all nodes being powered on are supposed
> to boot with a so called 'Bootstrap' ramdisk where an Ohai based agent
> (not Fuel Agent) runs periodically and sends a hardware report to the
> Fuel master node. The user is then able to look at CPU, hard drive and
> network info and choose which nodes to use for controllers, which for
> computes, etc. That is what the nova scheduler is supposed to do (look
> at hardware info and choose a suitable node).
>
> Talking about the future, we are planning to re-implement the
> inventory/discovery stuff in terms of Fuel Agent (currently, this stuff
> is implemented as an Ohai based independent script). The estimation for
> that is March 2015.
>
>>> Regarding Fuel itself, our nearest plan is to get rid of Cobbler
>>> because in the case of the image based approach it is a huge overhead.
>>> The question is which tool we can use instead of Cobbler. We need
>>> power management, we need TFTP management, we need DHCP management.
>>> That is exactly what Ironic is able to do. Frankly, we can implement a
>>> power/TFTP/DHCP management tool independently, but as Devananda said,
>>> we're all working on the same problems, so let's do it together.
>>> Power/TFTP/DHCP management is where we are working on the same
>>> problems, but IPA and Fuel Agent are about different use cases. This
>>> case is not just Fuel; any mature deployment case requires advanced
>>> partition/fs management.
>>
>> Taking into consideration that you're doing a generic OS installation
>> tool... yeah, it starts to make some sense. For a cloud, advanced
>> partitioning is definitely a "pet" case.
>
> A generic image based OS installation tool.
>
>>> However, for me it is OK, if it is easily possible to use Ironic with
>>> external drivers (not merged to Ironic and not tested on Ironic CI).
>>>
>>> AFAIU, this spec https://review.openstack.org/#/c/138115/ does not
>>> assume changing the Ironic API and core. Jim asked how Fuel Agent will
>>> know about the advanced disk partitioning scheme if the API is not
>>> changed. The answer is simple: Ironic is supposed to send a link to a
>>> metadata service (http or local file) where Fuel Agent can download
>>> the input json data.
>>
>> That's not about not changing Ironic. Changing Ironic is ok for
>> reasonable use cases - we do a huge change right now to accommodate
>> zapping, hardware introspection and RAID configuration.
>>
> Minimal changes because we don't want to break anything. It is clear how
> difficult it is to convince people to do even minimal changes. Again, it
> is just a pragmatic approach. We want to do things iteratively so as not
> to break Ironic as well as Fuel. We just can not change everything at
> once.
>
>> I actually have problems with this particular statement.
>> It does not sound like Fuel Agent will integrate enough with Ironic.
>> This JSON file: who is going to generate it? In the most popular use
>> case we're driven by Nova. Will Nova generate this file?
>>
>> If the answer is "generate it manually for every node", it's too much a
>> "pet" case for me personally.
>>
> This is what the provision data looks like right now:
> https://github.com/stackforge/fuel-web/blob/master/fuel_agent_ci/samples/provision.json
> Do you still think it is written manually? Currently Fuel Agent works as
> a part of the Fuel ecosystem. We have a service which serializes the
> provision data for us into json. Fuel Agent is agnostic to data format
> (data drivers). If someone wants to use another format, they are welcome
> to implement a driver.
>
> We assume the next step will be to put the provision data (disk
> partition scheme, maybe other data) into driver_info, make the Fuel
> Agent driver able to serialize those data (special format), and
> implement a corresponding data driver in Fuel Agent for this format.
> Again, very simple. Maybe it is time to think of having an Ironic
> metadata service (just maybe).
>
> Another point is that currently Fuel stores hardware info in its own
> database, but when it is possible to get those data from Ironic (when
> the inventory stuff is implemented) we will be glad to use the Ironic
> API for that. That is what I mean when I say 'to make Fuel stuff closer
> to Ironic abstractions'.
>
>>> As Roman said, we try to be pragmatic and suggest something which does
>>> not break anything. All changes are supposed to be encapsulated into a
>>> driver. No API and core changes. We have resources to support, test
>>> and improve this driver. This spec is just a zero step. Further steps
>>> are supposed to improve the driver so as to make it closer to Ironic
>>> abstractions.
>>
>> Honestly I think you should at least write a roadmap for it - see my
>> comments above.
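[For readers who do not follow the provision.json link above, the round trip being described (a serializer service emits JSON, the agent's data driver parses it on the node) can be sketched roughly as below. Every key name here is invented for illustration; the authoritative shape is the linked sample.]

```python
import json

# Illustrative only: key names are made up, not the real Fuel format.
provision_data = {
    "partitioning": {
        "disks": [
            {"device": "/dev/sda", "bootable": True,
             "partitions": [{"mount": "/boot", "size_mb": 200, "fs": "ext2"}]},
        ],
        "volume_groups": [
            {"name": "os",
             "volumes": [
                 {"name": "root", "mount": "/", "size_mb": 10240, "fs": "ext4"},
                 {"name": "swap", "size_mb": 2048, "fs": "swap"},
             ]},
        ],
    },
    "image": {"url": "http://example.com/ubuntu-1404.img.gz",
              "format": "ext4", "compression": "gzip"},
}

# The serializer service would emit this as JSON; the agent-side data
# driver deserializes it back into a structure it can act on.
serialized = json.dumps(provision_data)
parsed = json.loads(serialized)
```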
> Honestly, I think writing a roadmap right now is not very rational,
> since I am not even sure people are interested in widening Ironic use
> cases. Some of the comments were not even constructive, like "I don't
> understand what your use case is, please use IPA".
>
>> About testing and support: are you providing a 3rdparty CI for it? It
>> would be a big plus to me: we already have troubles with drivers broken
>> accidentally.
>
> We are flexible here but I'm not ready to answer this question right
> now. We will try to fit Ironic requirements wherever it is possible.
>
>>> For Ironic that means widening use cases and user community. But, as I
>>> already said, we are OK if Ironic does not need this feature.
>>
>> I don't think we should throw away your hardware provision use case,
>> but I personally would like to see how well Fuel Agent is going to play
>> with how Ironic and Nova operate.
>
> Nova is not our case. Fuel is totally about deployment. There is
> something in common.
>
> As I already explained, currently we need the power/tftp/dhcp management
> capabilities of Ironic. Again, it is not a problem to implement this
> stuff independently, like it happened with Fuel Agent (because this use
> case was rejected several months ago). Our suggestion is not about
> "let's compete with IPA"; it is totally about "let's work on the same
> problems together".
>
>>> Vladimir Kozhukalov
>>>
>>> On Tue, Dec 9, 2014 at 1:09 PM, Roman Prykhodchenko wrote:
>>>
>>> It is true that IPA and FuelAgent share a lot of functionality in
>>> common. However, there is a major difference between them, which is
>>> that they are intended to solve different problems.
>>>
>>> IPA is a solution for the provision-use-destroy-use_by_different_user
>>> use-case and is really great for providing BM nodes for other OS
>>> services or in services like Rackspace OnMetal.
>>> FuelAgent itself serves the provision-use-use-...-use use-case that
>>> Fuel or TripleO have.
>>>
>>> Those two use-cases require concentration on different details in the
>>> first place. For instance, for IPA proper decommissioning is more
>>> important than advanced disk management, but for FuelAgent the
>>> priorities are the opposite, for obvious reasons.
>>>
>>> Putting all functionality into a single driver and a single agent may
>>> cause conflicts in priorities and make a lot of mess inside both the
>>> driver and the agent. Actually, changes to IPA were previously blocked
>>> precisely because of this conflict of priorities. Therefore, replacing
>>> FuelAgent with IPA where FuelAgent is currently used does not seem
>>> like a good option, because some people (and I'm not talking about
>>> Mirantis) might lose required features because of different
>>> priorities.
>>>
>>> Having two separate drivers along with two separate agents for those
>>> different use-cases will allow two independent teams to concentrate on
>>> what's really important for a specific use-case. I don't see any
>>> problem in overlapping functionality if it's used differently.
>>>
>>> P. S.
>>> I realise that people may also be confused by the fact that FuelAgent
>>> is actually called like that and is used only in Fuel atm. Our point
>>> is to make it a simple, powerful and, what's more important, generic
>>> tool for provisioning. It is not bound to Fuel or Mirantis, and if it
>>> causes confusion in the future we will even be happy to give it a
>>> different and less confusing name.
>>>
>>> P. P. S.
>>> Some of the points of this integration do not look generic enough or
>>> nice enough. We look at this pragmatically and are trying to implement
>>> what's possible to implement as the first step. For sure this is going
>>> to have a lot more steps to make it better and more generic.
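[An aside on the "agnostic via data drivers" design from Vladimir's feature list earlier in the thread: the agnosticism axes (input format, compression, download protocol) boil down to small name-keyed registries. Everything below is a toy sketch with invented names; it is neither the real Fuel Agent nor the real IPA code.]

```python
import gzip
import json

# Each axis of agnosticism is a registry keyed by name; real code would
# likely discover implementations via entry points, but a dict shows the
# idea.
DATA_DRIVERS = {"json": json.loads}
DECOMPRESSORS = {"gzip": gzip.decompress, "none": lambda data: data}

def load_input(fmt, blob):
    """Parse provisioning input using the configured data driver."""
    return DATA_DRIVERS[fmt](blob)

def unpack(algorithm, data):
    """Decompress downloaded image bytes with the configured algorithm."""
    return DECOMPRESSORS[algorithm](data)
```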
>>>
>>> On 09 Dec 2014, at 01:46, Jim Rollenhagen wrote:
>>>>
>>>> On December 8, 2014 2:23:58 PM PST, Devananda van der Veen wrote:
>>>>
>>>>> [...]
>>>>
>>>> I think it's clear from the review that I share the opinions
>>>> expressed in this email.
>>>>
>>>> That said (and hopefully without derailing the thread too much), I'm
>>>> curious how this driver could do software RAID or LVM without
>>>> modifying Ironic's API or data model. How would the agent know how
>>>> these should be built? How would an operator or user tell Ironic what
>>>> the disk/partition/volume layout would look like?
>>>>
>>>> And before it's said - no, I don't think vendor passthru API calls
>>>> are an appropriate answer here.
>>>>
>>>> // jim
>>>
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From rprikhodchenko at mirantis.com Tue Dec 9 14:52:08 2014 From: rprikhodchenko at mirantis.com (Roman Prykhodchenko) Date: Tue, 9 Dec 2014 15:52:08 +0100 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> <5486F050.808@redhat.com> Message-ID: So regarding to the 3rd-party CI. There is FuelCI that runs tests for some stackforge projects. Attaching it to Ironic to run required tests that detect problems in FuelAgent driver is not a big deal at all and will be done as soon as it's necessary. On Tue, Dec 9, 2014 at 3:45 PM, Vladimir Kozhukalov < vkozhukalov at mirantis.com> wrote: > s/though/throw/g > > Vladimir Kozhukalov > > On Tue, Dec 9, 2014 at 5:40 PM, Vladimir Kozhukalov < > vkozhukalov at mirantis.com> wrote: > >> >> >> Vladimir Kozhukalov >> >> On Tue, Dec 9, 2014 at 3:51 PM, Dmitry Tantsur >> wrote: >> >>> Hi folks, >>> >>> Thank you for additional explanation, it does clarify things a bit. I'd >>> like to note, however, that you talk a lot about how _different_ Fuel Agent >>> is from what Ironic does now. I'd like actually to know how well it's going >>> to fit into what Ironic does (in additional to your specific use cases). >>> Hence my comments inline: >> >> >>> >>> On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote: >>> >>>> Just a short explanation of Fuel use case. >>>> >>>> Fuel use case is not a cloud. Fuel is a deployment tool. We install OS >>>> on bare metal servers and on VMs >>>> and then configure this OS using Puppet. We have been using Cobbler as >>>> our OS provisioning tool since the beginning of Fuel. >>>> However, Cobbler assumes using native OS installers (Anaconda and >>>> Debian-installer). For some reasons we decided to >>>> switch to image based approach for installing OS. 
>>>> >>>> One of Fuel features is the ability to provide advanced partitioning >>>> schemes (including software RAIDs, LVM). >>>> Native installers are quite difficult to customize in the field of >>>> partitioning >>>> (that was one of the reasons to switch to image based approach). >>>> Moreover, we'd like to implement even more >>>> flexible user experience. We'd like to allow user to choose which hard >>>> drives to use for root FS, for >>>> allocating DB. We'd like user to be able to put root FS over LV or MD >>>> device (including stripe, mirror, multipath). >>>> We'd like user to be able to choose which hard drives are bootable (if >>>> any), which options to use for mounting file systems. >>>> Many many various cases are possible. If you ask why we'd like to >>>> support all those cases, the answer is simple: >>>> because our users want us to support all those cases. >>>> Obviously, many of those cases can not be implemented as image >>>> internals, some cases can not be also implemented on >>>> configuration stage (placing root fs on lvm device). >>>> >>>> As far as those use cases were rejected to be implemented in term of >>>> IPA, we implemented so called Fuel Agent. >>>> Important Fuel Agent features are: >>>> >>>> * It does not have REST API >>>> >>> I would not call it a feature :-P >>> >>> Speaking seriously, if you agent is a long-running thing and it gets >>> it's configuration from e.g. JSON file, how can Ironic notify it of any >>> changes? >>> >>> Fuel Agent is not long-running service. Currently there is no need to >> have REST API. If we deal with kind of keep alive stuff of >> inventory/discovery then we probably add API. Frankly, IPA REST API is not >> REST at all. However that is not a reason to not to call it a feature and >> through it away. It is a reason to work on it and improve. That is how I >> try to look at things (pragmatically). >> >> Fuel Agent has executable entry point[s] like /usr/bin/provision. 
You can >> run this entry point with options (oslo.config) and point out where to find >> input json data. It is supposed Ironic will use ssh (currently in Fuel we >> use mcollective) connection and run this waiting for exit code. If exit >> code is equal to 0, provisioning is done. Extremely simple. >> >> >>> * it has executable entry point[s] >>>> * It uses local json file as it's input >>>> * It is planned to implement ability to download input data via HTTP >>>> (kind of metadata service) >>>> * It is designed to be agnostic to input data format, not only Fuel >>>> format (data drivers) >>>> * It is designed to be agnostic to image format (tar images, file system >>>> images, disk images, currently fs images) >>>> * It is designed to be agnostic to image compression algorithm >>>> (currently gzip) >>>> * It is designed to be agnostic to image downloading protocol (currently >>>> local file and HTTP link) >>>> >>> Does it support Glance? I understand it's HTTP, but it requires >>> authentication. >>> >>> >>>> So, it is clear that being motivated by Fuel, Fuel Agent is quite >>>> independent and generic. And we are open for >>>> new use cases. >>>> >>> My favorite use case is hardware introspection (aka getting data >>> required for scheduling from a node automatically). Any ideas on this? >>> (It's not a priority for this discussion, just curious). >> >> >> That is exactly what we do in Fuel. Currently we use so called 'Default' >> pxelinux config and all nodes being powered on are supposed to boot with so >> called 'Bootstrap' ramdisk where Ohai based agent (not Fuel Agent) runs >> periodically and sends hardware report to Fuel master node. >> User then is able to look at CPU, hard drive and network info and choose >> which nodes to use for controllers, which for computes, etc. That is what >> nova scheduler is supposed to do (look at hardware info and choose a >> suitable node). 
>> >> Talking about future, we are planning to re-implement inventory/discovery >> stuff in terms of Fuel Agent (currently, this stuff is implemented as Ohai >> based independent script). Estimation for that is March 2015. >> >>> >>> >>> >>>> According Fuel itself, our nearest plan is to get rid of Cobbler because >>>> in the case of image based approach it is huge overhead. The question is >>>> which tool we can use instead of Cobbler. We need power management, >>>> we need TFTP management, we need DHCP management. That is >>>> exactly what Ironic is able to do. Frankly, we can implement >>>> power/TFTP/DHCP >>>> management tool independently, but as Devananda said, we're all working >>>> on the same problems, >>>> so let's do it together. Power/TFTP/DHCP management is where we are >>>> working on the same problems, >>>> but IPA and Fuel Agent are about different use cases. This case is not >>>> just Fuel, any mature >>>> deployment case require advanced partition/fs management. >>>> >>> Taking into consideration that you're doing a generic OS installation >>> tool... yeah, it starts to make some sense. For cloud advanced partition is >>> definitely a "pet" case. >> >> >> Generic image based OS installation tool. >> >> >>> >>> However, for >>> >>>> me it is OK, if it is easily possible >>>> to use Ironic with external drivers (not merged to Ironic and not tested >>>> on Ironic CI). >>>> >>>> AFAIU, this spec https://review.openstack.org/#/c/138115/ does not >>>> assume changing Ironic API and core. >>>> Jim asked about how Fuel Agent will know about advanced disk >>>> partitioning scheme if API is not >>>> changed. The answer is simple: Ironic is supposed to send a link to >>>> metadata service (http or local file) >>>> where Fuel Agent can download input json data. >>>> >>> That's not about not changing Ironic. 
Changing Ironic is ok for >>> reasonable use cases - we do a huge change right now to accommodate >>> zapping, hardware introspection and RAID configuration. >>> >>> Minimal changes because we don't want to break anything. It is clear how >> difficult to convince people to do even minimal changes. Again it is just a >> pragmatic approach. We want to do things iteratively so as not to break >> Ironic as well as Fuel. We just can not change all at once. >> >> >>> I actually have problems with this particular statement. It does not >>> sound like Fuel Agent will integrate enough with Ironic. This JSON file: >>> who is going to generate it? In the most popular use case we're driven by >>> Nova. Will Nova generate this file? >>> >>> If the answer is "generate it manually for every node", it's too much a >>> "pet" case for me personally. >>> >>> That is how this provision data look like right now >> https://github.com/stackforge/fuel-web/blob/master/fuel_agent_ci/samples/provision.json >> Do you still think it is written manually? Currently Fuel Agent works as a >> part of Fuel ecosystem. We have a service which serializes provision data >> for us into json. Fuel Agent is agnostic to data format (data drivers). If >> someone wants to use another format, they are welcome to implement a >> driver. >> >> We assume next step will be to put provision data (disk partition scheme, >> maybe other data) into driver_info and make Fuel Agent driver able to >> serialize those data (special format) and implement a corresponding data >> driver in Fuel Agent for this format. Again very simple. Maybe it is time >> to think of having Ironic metadata service (just maybe). >> >> Another point is that currently Fuel stores hardware info in its own >> database but when it is possible to get those data from Ironic (when >> inventory stuff is implemented) we will be glad to use Ironic API for that. 
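[Editor's illustration] The "data drivers" idea mentioned above — Fuel Agent stays agnostic to the input format, and a per-format driver turns the serialized JSON into a neutral partitioning scheme — might look roughly like this sketch. The class names and the flat `disks`/`volumes` shape are invented for illustration; the real provision.json linked above is considerably richer.

```python
import abc


class DataDriver(abc.ABC):
    """Turn format-specific input data into a neutral partition scheme."""

    def __init__(self, raw_data):
        self.raw_data = raw_data

    @abc.abstractmethod
    def partition_scheme(self):
        """Return a list of {'device', 'mount', 'size_mb'} dicts."""


class SimpleJsonDriver(DataDriver):
    """Driver for a hypothetical, simplified input format."""

    def partition_scheme(self):
        scheme = []
        for disk in self.raw_data.get("disks", []):
            for volume in disk.get("volumes", []):
                scheme.append({"device": disk["id"],
                               "mount": volume["mount"],
                               "size_mb": volume["size_mb"]})
        return scheme
```

Supporting another input format (say, one serialized from Ironic's driver_info) would then only require registering another `DataDriver` subclass, leaving the partitioning code untouched.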
>> That is what I mean when I say 'to make Fuel stuff closer to Ironic >> abstractions' >> >> >> >>> >>>> As Roman said, we try to be pragmatic and suggest something which does >>>> not break anything. All changes >>>> are supposed to be encapsulated into a driver. No API and core changes. >>>> We have resources to support, test >>>> and improve this driver. This spec is just a zero step. Further steps >>>> are supposed to improve the driver >>>> so as to make it closer to Ironic abstractions. >>>> >>> Honestly I think you should at least write a roadmap for it - see my >>> comments above. >>> >> >> Honestly, I think writing a roadmap right now is not very rational as far >> as I am not even sure people are interested in widening Ironic use cases. >> Some of the comments were not even constructive, like "I don't understand >> what your use case is, please use IPA". >> >> >>> >>> About testing and support: are you providing a 3rdparty CI for it? It >>> would be a big plus to me: we already have troubles with drivers broken >>> accidentally. >> >> >> We are flexible here but I'm not ready to answer this question right now. >> We will try to fit Ironic requirements wherever it is possible. >> >> >>> >>> >>>> For Ironic that means widening use cases and user community. But, as I >>>> already said, >>>> we are OK if Ironic does not need this feature. >>>> >>> I don't think we should throw away your hardware provision use case, >>> but I personally would like to see how well Fuel Agent is going to play >>> with how Ironic and Nova operate. >>> >> >> Nova is not our case. Fuel is totally about deployment. There is some in >> common. >> >> As I already explained, currently we need power/tftp/dhcp management >> Ironic capabilities. Again, it is not a problem to implement this stuff >> independently like it happened with Fuel Agent (because this use case was >> rejected several months ago).
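[Editor's illustration] Since the Cobbler-replacement discussion above is largely about TFTP/DHCP management, here is a rough sketch of the per-node pxelinux.cfg handling such a service performs. The template fields and paths are placeholders; only the `pxelinux.cfg/01-<mac>` lookup convention is standard pxelinux behaviour.

```python
import os

# Minimal pxelinux config; real deploy configs carry many more
# kernel parameters (deploy image location, callback URLs, etc.).
PXE_TEMPLATE = """default deploy

label deploy
    kernel {kernel}
    append initrd={initrd} {kernel_params}
"""


def render_pxe_config(kernel, initrd, kernel_params=""):
    """Render a minimal per-node pxelinux.cfg entry."""
    return PXE_TEMPLATE.format(kernel=kernel, initrd=initrd,
                               kernel_params=kernel_params)


def pxe_config_path(tftp_root, mac):
    """pxelinux looks up per-node configs as pxelinux.cfg/01-aa-bb-cc-dd-ee-ff.

    The '01-' prefix is the ARP hardware type for Ethernet.
    """
    return os.path.join(tftp_root, "pxelinux.cfg",
                        "01-" + mac.lower().replace(":", "-"))
```

Writing such a file, plus DHCP next-server options and IPMI power cycling, is essentially the power/TFTP/DHCP role discussed in this thread.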
Our suggestion is not about "let's compete >> with IPA", it is totally about "let's work on the same problems together". >> >> >> >>> >>>> Vladimir Kozhukalov >>>> >>>> On Tue, Dec 9, 2014 at 1:09 PM, Roman Prykhodchenko >>>> > >>>> wrote: >>>> >>>> It is true that IPA and FuelAgent share a lot of functionality in >>>> common. However there is a major difference between them, which is >>>> that they are intended to be used to solve different problems. >>>> >>>> IPA is a solution for the provision-use-destroy-use_by_different_user >>>> use-case and is really great for providing BM nodes for >>>> other OS services or in services like Rackspace OnMetal. FuelAgent >>>> itself serves the provision-use-use-...-use use-case that Fuel or >>>> TripleO have. >>>> >>>> Those two use-cases require concentration on different details in >>>> the first place. For instance, for IPA proper decommissioning is more >>>> important than advanced disk management, but for FuelAgent the >>>> priorities are the opposite, for obvious reasons. >>>> >>>> Putting all functionality into a single driver and a single agent may >>>> cause conflicts in priorities and make a lot of mess inside both the >>>> driver and the agent. Actually, changes to IPA were previously >>>> blocked exactly because of this conflict of priorities. Therefore >>>> replacing FuelAgent with IPA where FuelAgent is currently used does >>>> not seem like a good option, because some people (and I'm not talking >>>> about Mirantis) might lose required features because of different >>>> priorities. >>>> >>>> Having two separate drivers along with two separate agents for those >>>> different use-cases will allow two independent teams that >>>> are concentrated on what's really important for a specific use-case. >>>> I don't see any problem in overlapping functionality if it's used >>>> differently. >>>> >>>> >>>> P. S.
>>>> I realise that people may also be confused by the fact that >>>> FuelAgent is actually called like that and is used only in Fuel atm. >>>> Our point is to make it a simple, powerful and, what's more important, >>>> a generic tool for provisioning. It is not bound to Fuel or Mirantis, >>>> and if the name causes confusion in the future we will even be happy >>>> to give it a different and less confusing one. >>>> >>>> P. P. S. >>>> Some of the points of this integration do not look generic enough or >>>> nice enough. We look at the stuff pragmatically and are trying to >>>> implement what's possible to implement as the first step. For sure >>>> this is going to take a lot more steps to make it better and more >>>> generic. >>>> >>>> >>>> On 09 Dec 2014, at 01:46, Jim Rollenhagen >>>> > wrote: >>>>> >>>>> >>>>> >>>>> On December 8, 2014 2:23:58 PM PST, Devananda van der Veen >>>>> > wrote: >>>>> >>>>>> I'd like to raise this topic for a wider discussion outside of the >>>>>> hallway >>>>>> track and code reviews, where it has thus far mostly remained. >>>>>> >>>>>> In previous discussions, my understanding has been that the Fuel >>>>>> team >>>>>> sought to use Ironic to manage "pets" rather than "cattle" - and >>>>>> doing >>>>>> so >>>>>> required extending the API and the project's functionality in >>>>>> ways that >>>>>> no >>>>>> one else on the core team agreed with. Perhaps that understanding >>>>>> was >>>>>> wrong >>>>>> (or perhaps not), but in any case, there is now a proposal to add >>>>>> a >>>>>> FuelAgent driver to Ironic. The proposal claims this would meet >>>>>> that >>>>>> team's >>>>>> needs without requiring changes to the core of Ironic. >>>>>> >>>>>> https://review.openstack.org/#/c/138115/ >>>>>> >>>>> >>>>> I think it's clear from the review that I share the opinions >>>>> expressed in this email.
>>>>> >>>>> That said (and hopefully without derailing the thread too much), >>>>> I'm curious how this driver could do software RAID or LVM without >>>>> modifying Ironic's API or data model. How would the agent know how >>>>> these should be built? How would an operator or user tell Ironic >>>>> what the disk/partition/volume layout would look like? >>>>> >>>>> And before it's said - no, I don't think vendor passthru API calls >>>>> are an appropriate answer here. >>>>> >>>>> // jim >>>>> >>>>> >>>>>> The Problem Description section calls out four things, which have >>>>>> all >>>>>> been >>>>>> discussed previously (some are here [0]). I would like to address >>>>>> each >>>>>> one, >>>>>> invite discussion on whether or not these are, in fact, problems >>>>>> facing >>>>>> Ironic (not whether they are problems for someone, somewhere), >>>>>> and then >>>>>> ask >>>>>> why these necessitate a new driver be added to the project. >>>>>> >>>>>> >>>>>> They are, for reference: >>>>>> >>>>>> 1. limited partition support >>>>>> >>>>>> 2. no software RAID support >>>>>> >>>>>> 3. no LVM support >>>>>> >>>>>> 4. no support for hardware that lacks a BMC >>>>>> >>>>>> #1. >>>>>> >>>>>> When deploying a partition image (eg, QCOW format), Ironic's PXE >>>>>> deploy >>>>>> driver performs only the minimal partitioning necessary to >>>>>> fulfill its >>>>>> mission as an OpenStack service: respect the user's request for >>>>>> root, >>>>>> swap, >>>>>> and ephemeral partition sizes. When deploying a whole-disk image, >>>>>> Ironic >>>>>> does not perform any partitioning -- such is left up to the >>>>>> operator >>>>>> who >>>>>> created the disk image. >>>>>> >>>>>> Support for arbitrarily complex partition layouts is not required >>>>>> by, >>>>>> nor >>>>>> does it facilitate, the goal of provisioning physical servers via >>>>>> a >>>>>> common >>>>>> cloud API. 
Additionally, as with #3 below, nothing prevents a >>>>>> user from >>>>>> creating more partitions in unallocated disk space once they have >>>>>> access to >>>>>> their instance. Therefore, I don't see how Ironic's minimal >>>>>> support for >>>>>> partitioning is a problem for the project. >>>>>> >>>>>> #2. >>>>>> >>>>>> There is no support for defining a RAID in Ironic today, at all, >>>>>> whether >>>>>> software or hardware. Several proposals were floated last cycle; >>>>>> one is >>>>>> under review right now for DRAC support [1], and there are >>>>>> multiple >>>>>> call >>>>>> outs for RAID building in the state machine mega-spec [2]. Any >>>>>> such >>>>>> support >>>>>> for hardware RAID will necessarily be abstract enough to support >>>>>> multiple >>>>>> hardware vendors' driver implementations and both in-band >>>>>> creation (via >>>>>> IPA) and out-of-band creation (via vendor tools). >>>>>> >>>>>> Given the above, it may become possible to add software RAID >>>>>> support to >>>>>> IPA >>>>>> in the future, under the same abstraction. This would closely tie >>>>>> the >>>>>> deploy agent to the images it deploys (the latter image's kernel >>>>>> would >>>>>> be >>>>>> dependent upon a software RAID built by the former), but this >>>>>> would >>>>>> necessarily be true for the proposed FuelAgent as well. >>>>>> >>>>>> I don't see this as a compelling reason to add a new driver to the >>>>>> project. >>>>>> Instead, we should (plan to) add support for software RAID to the >>>>>> deploy >>>>>> agent which is already part of the project. >>>>>> >>>>>> #3. >>>>>> >>>>>> LVM volumes can easily be added by a user (after provisioning) >>>>>> within >>>>>> unallocated disk space for non-root partitions. I have not yet >>>>>> seen a >>>>>> compelling argument for doing this within the provisioning phase. >>>>>> >>>>>> #4. >>>>>> >>>>>> There are already in-tree drivers [3] [4] [5] which do not >>>>>> require a >>>>>> BMC.
>>>>>> One of these uses SSH to connect and run pre-determined commands. >>>>>> Like >>>>>> the >>>>>> spec proposal, which states at line 122, "Control via SSH access >>>>>> feature >>>>>> intended only for experiments in non-production environment," the >>>>>> current >>>>>> SSHPowerDriver is only meant for testing environments. We could >>>>>> probably >>>>>> extend this driver to do what the FuelAgent spec proposes, as far >>>>>> as >>>>>> remote >>>>>> power control for cheap always-on hardware in testing >>>>>> environments with >>>>>> a >>>>>> pre-shared key. >>>>>> >>>>>> (And if anyone wonders about a use case for Ironic without >>>>>> external >>>>>> power >>>>>> control ... I can only think of one situation where I would >>>>>> rationally >>>>>> ever >>>>>> want to have a control-plane agent running inside a >>>>>> user-instance: I am >>>>>> both the operator and the only user of the cloud.) >>>>>> >>>>>> >>>>>> ---------------- >>>>>> >>>>>> In summary, as far as I can tell, all of the problem statements >>>>>> upon >>>>>> which >>>>>> the FuelAgent proposal are based are solvable through incremental >>>>>> changes >>>>>> in existing drivers, or out of scope for the project entirely. As >>>>>> another >>>>>> software-based deploy agent, FuelAgent would duplicate the >>>>>> majority of >>>>>> the >>>>>> functionality which ironic-python-agent has today. >>>>>> >>>>>> Ironic's driver ecosystem benefits from a diversity of >>>>>> hardware-enablement >>>>>> drivers. Today, we have two divergent software deployment drivers >>>>>> which >>>>>> approach image deployment differently: "agent" drivers use a local >>>>>> agent to >>>>>> prepare a system and download the image; "pxe" drivers use a >>>>>> remote >>>>>> agent >>>>>> and copy the image over iSCSI. 
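[Editor's illustration] As a concrete reading of point #1 above — the PXE deploy driver only honours the requested root, swap and ephemeral sizes and leaves the rest of the disk unallocated — the layout computation might be sketched like this (sizes in MiB; purely illustrative, not Ironic's actual deploy code):

```python
def minimal_layout(root_mb, swap_mb=0, ephemeral_mb=0, start_mb=1):
    """Lay out root/swap/ephemeral partitions back to back.

    Returns (label, start_mb, end_mb) tuples. Anything past the end
    of the last partition is left unallocated, which is why a user
    can still create extra partitions or LVM volumes post-deploy.
    """
    layout, cursor = [], start_mb
    for label, size in (("root", root_mb),
                        ("swap", swap_mb),
                        ("ephemeral", ephemeral_mb)):
        if size:
            layout.append((label, cursor, cursor + size))
            cursor += size
    return layout
```

The point of the argument above is that anything more elaborate than this belongs either in the image or in post-provisioning configuration.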
I don't understand how a second >>>>>> driver >>>>>> which >>>>>> duplicates the functionality we already have, and shares the same >>>>>> goals >>>>>> as >>>>>> the drivers we already have, is beneficial to the project. >>>>>> >>>>>> Doing the same thing twice just increases the burden on the team; >>>>>> we're >>>>>> all >>>>>> working on the same problems, so let's do it together. >>>>>> >>>>>> -Devananda >>>>>> >>>>>> >>>>>> [0] >>>>>> https://blueprints.launchpad.net/ironic/+spec/ironic- >>>>>> python-agent-partition >>>>>> >>>>>> [1] https://review.openstack.org/#/c/107981/ >>>>>> >>>>>> [2] >>>>>> https://review.openstack.org/#/c/133828/11/specs/kilo/new- >>>>>> ironic-state-machine.rst >>>>>> >>>>>> >>>>>> [3] >>>>>> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/ >>>>>> drivers/modules/snmp.py >>>>>> >>>>>> [4] >>>>>> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/ >>>>>> drivers/modules/iboot.py >>>>>> >>>>>> [5] >>>>>> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/ >>>>>> drivers/modules/ssh.py >>>>>> >>>>>> >>>>>> ------------------------------------------------------------ >>>>>> ------------ >>>>>> >>>>>> _______________________________________________ >>>>>> OpenStack-dev mailing list >>>>>> OpenStack-dev at lists.openstack.org >>>>>> >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> OpenStack-dev mailing list >>>>> OpenStack-dev at lists.openstack.org >>>>> >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ozamiatin at mirantis.com Tue Dec 9 14:52:42 2014 From: ozamiatin at mirantis.com (ozamiatin) Date: Tue, 09 Dec 2014 16:52:42 +0200 Subject: [openstack-dev] [oslo.messaging][devstack] ZeroMQ driver maintenance next steps In-Reply-To: <69DE76EB-8BCD-4D35-9099-13B71160AF80@doughellmann.com> References: <546A058C.7090800@ubuntu.com> <1C39F2D6-C600-4BA5-8F95-79E2E7DA7172@doughellmann.com> <6F5E7E0C-656F-4616-A816-BC2E340299AD@doughellmann.com> <37ecc12f660ddca2b5a76930c06c1ce6@sileht.net> <546B4DC7.9090309@ubuntu.com> <548679CD.4030205@gmail.com> <69DE76EB-8BCD-4D35-9099-13B71160AF80@doughellmann.com> Message-ID: <54870CBA.8000705@mirantis.com> +1 To wiki page. I also tried to deploy devstack with zmq, and met the same problems https://bugs.launchpad.net/devstack/+bug/1397999 https://bugs.launchpad.net/oslo.messaging/+bug/1395721 We also have some unimplemented closures in zmq driver: one of them: https://bugs.launchpad.net/oslo.messaging/+bug/1400323 On 09.12.14 16:07, Doug Hellmann wrote: > On Dec 8, 2014, at 11:25 PM, Li Ma wrote: > >> Hi all, I tried to deploy zeromq by devstack and it definitely failed with lots of problems, like dependencies, topics, matchmaker setup, etc. I've already registered a blueprint for devstack-zeromq [1]. > I added the [devstack] tag to the subject of this message so that team will see the thread. 
>> Besides, I suggest to build a wiki page in order to trace all the workitems related with ZeroMQ. The general sections may be [Why ZeroMQ], [Current Bugs & Reviews], [Future Plan & Blueprints], [Discussions], [Resources], etc. > Coordinating the work on this via a wiki page makes sense. Please post the link when you?re ready. > > Doug > >> Any comments? >> >> [1] https://blueprints.launchpad.net/devstack/+spec/zeromq >> >> cheers, >> Li Ma >> >> On 2014/11/18 21:46, James Page wrote: >>> -----BEGIN PGP SIGNED MESSAGE----- >>> Hash: SHA256 >>> >>> On 18/11/14 00:55, Denis Makogon wrote: >>>> So if zmq driver support in devstack is fixed, we can easily add a >>>> new job to run them in the same way. >>>> >>>> >>>> Btw this is a good question. I will take look at current state of >>>> zmq in devstack. >>> I don't think its that far off and its broken rather than missing - >>> the rpc backend code needs updating to use oslo.messaging rather than >>> project specific copies of the rpc common codebase (pre oslo). >>> Devstack should be able to run with the local matchmaker in most >>> scenarios but it looks like there was support for the redis matchmaker >>> as well. >>> >>> If you could take some time to fixup that would be awesome! 
>>> >>> - -- James Page >>> Ubuntu and Debian Developer >>> james.page at ubuntu.com >>> jamespage at debian.org >>> -----BEGIN PGP SIGNATURE----- >>> Version: GnuPG v1 >>> >>> iQIbBAEBCAAGBQJUa03HAAoJEL/srsug59jDdZQP+IeEvXAcfxNs2Tgvt5trnjgg >>> cnTrJPLbr6i/uIXKjRvNDSkJEdv//EjL/IRVRIf0ld09FpRnyKzUDMPq1CzFJqdo >>> 45RqFWwJ46NVA4ApLZVugJvKc4tlouZQvizqCBzDKA6yUsUoGmRpYFAQ3rN6Gs9h >>> Q/8XSAmHQF1nyTarxvylZgnqhqWX0p8n1+fckQeq2y7s3D3WxfM71ftiLrmQCWir >>> aPkH7/0qvW+XiOtBXVTXDb/7pocNZg+jtBkUcokORXbJCmiCN36DBXv9LPIYgfhe >>> /cC/wQFH4RUSkoj7SYPAafX4J2lTMjAd+GwdV6ppKy4DbPZdNty8c9cbG29KUK40 >>> TSCz8U3tUcaFGDQdBB5Kg85c1aYri6dmLxJlk7d8pOXLTb0bfnzdl+b6UsLkhXqB >>> P4Uc+IaV9vxoqmYZAzuqyWm9QriYlcYeaIJ9Ma5fN+CqxnIaCS7UbSxHj0yzTaUb >>> 4XgmcQBwHe22ouwBmk2RGzLc1Rv8EzMLbbrGhtTu459WnAZCrXOTPOCn54PoIgZD >>> bK/Om+nmTxepWD1lExHIYk3BXyZObxPO00UJHdxvSAIh45ROlh8jW8hQA9lJ9QVu >>> Cz775xVlh4DRYgenN34c2afOrhhdq4V1OmjYUBf5M4gS6iKa20LsMjp7NqT0jzzB >>> tRDFb67u28jxnIXR16g= >>> =+k0M >>> -----END PGP SIGNATURE----- >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sahid.ferdjaoui at redhat.com Tue Dec 9 14:56:55 2014 From: sahid.ferdjaoui at redhat.com (Sahid Orentino Ferdjaoui) Date: Tue, 9 Dec 2014 15:56:55 +0100 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5486F632.2020706@dague.net> References: <5486DF7F.7080706@dague.net> <20141209123243.GA9099@redhat.redhat.com> <5486F632.2020706@dague.net> Message-ID: 
<20141209145655.GA11620@redhat.redhat.com> On Tue, Dec 09, 2014 at 08:16:34AM -0500, Sean Dague wrote: > On 12/09/2014 07:32 AM, Sahid Orentino Ferdjaoui wrote: > > On Tue, Dec 09, 2014 at 06:39:43AM -0500, Sean Dague wrote: > >> I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. > >> > >> 1 - the entire H8* group. This doesn't function on python code, it > >> functions on the git commit message, which makes it tough to run locally. It > >> would also prevent us from skipping test reruns on commit > >> message changes (something we could do after the next gerrit update). > > > > -1, We probably want to recommend a more strictly formatted git commit > > message, mainly regarding the first line, which is the most important. It > > should reflect which part of the code the commit is intended to update, > > which gives contributors the ability to quickly see what the > > submission relates to; > > > > An example with Nova which is quite big: api, compute, > > doc, scheduler, virt, vmware, libvirt, objects... > > > > We should use a prefix in the first line of the commit message. There > > is a large number of commits waiting for review, and that can help > > contributors with knowledge of a particular domain to identify > > quickly which ones to pick. > > And how exactly do you expect a machine to decide if that's done correctly? Keep what we already have, then let the community move forward on how to make those rules better. Is it something we want, to turn machine validations into human validations? Contributors already have a ton of work, and I guess we agree the aim is not to remove validations just to have everything green in the dashboard. s. > -Sean > > > > >> 2 - the entire H3* group - because of this - > >> https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm > >> > >> A look at the H3* code shows that it's terribly complicated, and is > >> often full of bugs (a few bit us last week).
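[Editor's illustration] To Sean's question of how a machine would decide whether the commit-subject convention is followed: a check in the style Sahid suggests could be as small as the sketch below. The `H8xx` rule code and the prefix list are hypothetical per-project values, not an existing hacking rule.

```python
import re

# Hypothetical per-project area list; Nova's might include api,
# compute, doc, scheduler, virt, libvirt, vmware, objects, ...
KNOWN_PREFIXES = {"api", "compute", "doc", "scheduler",
                  "virt", "libvirt", "vmware", "objects"}

# A subject line like "virt: fix reboot race": lowercase area
# name, a colon, then a non-empty summary.
SUBJECT_RE = re.compile(r"^(?P<prefix>[a-z0-9_.-]+):\s+\S")


def check_commit_subject(subject):
    """Return an error string if the subject lacks a known 'area: ' prefix."""
    match = SUBJECT_RE.match(subject)
    if not match or match.group("prefix") not in KNOWN_PREFIXES:
        return ("H8xx: commit subject should start with one of: %s"
                % ", ".join(sorted(KNOWN_PREFIXES)))
    return None
```

Whether such a mechanical check is worth keeping in gate jobs is exactly the trade-off being debated in this thread.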
I'd rather just delete it > >> and move on. > >> > >> -Sean > >> > >> -- > >> Sean Dague > >> http://dague.net > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Sean Dague > http://dague.net > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dtantsur at redhat.com Tue Dec 9 14:58:46 2014 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 09 Dec 2014 15:58:46 +0100 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> <5486F050.808@redhat.com> Message-ID: <54870E26.6070809@redhat.com> On 12/09/2014 03:40 PM, Vladimir Kozhukalov wrote: > > > Vladimir Kozhukalov > > On Tue, Dec 9, 2014 at 3:51 PM, Dmitry Tantsur > wrote: > > Hi folks, > > Thank you for additional explanation, it does clarify things a bit. > I'd like to note, however, that you talk a lot about how _different_ > Fuel Agent is from what Ironic does now. I'd like actually to know > how well it's going to fit into what Ironic does (in additional to > your specific use cases). Hence my comments inline: > > > > On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote: > > Just a short explanation of Fuel use case. > > Fuel use case is not a cloud. Fuel is a deployment tool. We > install OS > on bare metal servers and on VMs > and then configure this OS using Puppet. We have been using > Cobbler as > our OS provisioning tool since the beginning of Fuel. 
> However, Cobbler assumes using native OS installers (Anaconda and > Debian-installer). For some reasons we decided to > switch to an image based approach for installing the OS. > > One of Fuel's features is the ability to provide advanced partitioning > schemes (including software RAIDs, LVM). > Native installers are quite difficult to customize in the field of > partitioning > (that was one of the reasons to switch to the image based approach). > Moreover, we'd like to implement an even more > flexible user experience. We'd like to allow the user to choose > which hard > drives to use for the root FS, for > allocating the DB. We'd like the user to be able to put the root FS over an LV > or MD > device (including stripe, mirror, multipath). > We'd like the user to be able to choose which hard drives are > bootable (if > any), which options to use for mounting file systems. > Many, many various cases are possible. If you ask why we'd like to > support all those cases, the answer is simple: > because our users want us to support all those cases. > Obviously, many of those cases can not be implemented as image > internals, and some cases can also not be implemented at the > configuration stage (placing the root fs on an lvm device). > > As far as those use cases were rejected to be implemented in terms of > IPA, we implemented the so called Fuel Agent. > Important Fuel Agent features are: > > * It does not have a REST API > > I would not call it a feature :-P > > Speaking seriously, if your agent is a long-running thing and it gets > its configuration from e.g. a JSON file, how can Ironic notify it of > any changes? > > Fuel Agent is not a long-running service. Currently there is no need to > have a REST API. If we deal with some kind of keep-alive stuff for > inventory/discovery then we will probably add an API. Frankly, the IPA REST API is > not REST at all. However, that is not a reason not to call it a > feature and to throw it away. It is a reason to work on it and improve it. > That is how I try to look at things (pragmatically).
> > Fuel Agent has executable entry point[s] like /usr/bin/provision. You > can run this entry point with options (oslo.config) and point out where > to find input json data. It is supposed Ironic will use ssh (currently > in Fuel we use mcollective) connection and run this waiting for exit > code. If exit code is equal to 0, provisioning is done. Extremely simple. > > * it has executable entry point[s] > * It uses local json file as it's input > * It is planned to implement ability to download input data via HTTP > (kind of metadata service) > * It is designed to be agnostic to input data format, not only Fuel > format (data drivers) > * It is designed to be agnostic to image format (tar images, > file system > images, disk images, currently fs images) > * It is designed to be agnostic to image compression algorithm > (currently gzip) > * It is designed to be agnostic to image downloading protocol > (currently > local file and HTTP link) > > Does it support Glance? I understand it's HTTP, but it requires > authentication. > > > So, it is clear that being motivated by Fuel, Fuel Agent is quite > independent and generic. And we are open for > new use cases. > > My favorite use case is hardware introspection (aka getting data > required for scheduling from a node automatically). Any ideas on > this? (It's not a priority for this discussion, just curious). > > > That is exactly what we do in Fuel. Currently we use so called 'Default' > pxelinux config and all nodes being powered on are supposed to boot with > so called 'Bootstrap' ramdisk where Ohai based agent (not Fuel Agent) > runs periodically and sends hardware report to Fuel master node. > User then is able to look at CPU, hard drive and network info and choose > which nodes to use for controllers, which for computes, etc. That is > what nova scheduler is supposed to do (look at hardware info and choose > a suitable node). 
> > Talking about future, we are planning to re-implement > inventory/discovery stuff in terms of Fuel Agent (currently, this stuff > is implemented as Ohai based independent script). Estimation for that > is March 2015. > > > > > According Fuel itself, our nearest plan is to get rid of Cobbler > because > in the case of image based approach it is huge overhead. The > question is > which tool we can use instead of Cobbler. We need power management, > we need TFTP management, we need DHCP management. That is > exactly what Ironic is able to do. Frankly, we can implement > power/TFTP/DHCP > management tool independently, but as Devananda said, we're all > working > on the same problems, > so let's do it together. Power/TFTP/DHCP management is where we are > working on the same problems, > but IPA and Fuel Agent are about different use cases. This case > is not > just Fuel, any mature > deployment case require advanced partition/fs management. > > Taking into consideration that you're doing a generic OS > installation tool... yeah, it starts to make some sense. For cloud > advanced partition is definitely a "pet" case. > > > Generic image based OS installation tool. > > > However, for > > me it is OK, if it is easily possible > to use Ironic with external drivers (not merged to Ironic and > not tested > on Ironic CI). > > AFAIU, this spec https://review.openstack.org/#__/c/138115/ > does not > assume changing Ironic API and core. > Jim asked about how Fuel Agent will know about advanced disk > partitioning scheme if API is not > changed. The answer is simple: Ironic is supposed to send a link to > metadata service (http or local file) > where Fuel Agent can download input json data. > > That's not about not changing Ironic. Changing Ironic is ok for > reasonable use cases - we do a huge change right now to accommodate > zapping, hardware introspection and RAID configuration. > > Minimal changes because we don't want to break anything. 
It is clear how > difficult to convince people to do even minimal changes. Again it is > just a pragmatic approach. We want to do things iteratively so as not > to break Ironic as well as Fuel. We just can not change all at once. > > I actually have problems with this particular statement. It does not > sound like Fuel Agent will integrate enough with Ironic. This JSON > file: who is going to generate it? In the most popular use case > we're driven by Nova. Will Nova generate this file? > > If the answer is "generate it manually for every node", it's too > much a "pet" case for me personally. > > That is how this provision data look like right now > https://github.com/stackforge/fuel-web/blob/master/fuel_agent_ci/samples/provision.json > Do you still think it is written manually? Currently Fuel Agent works > as a part of Fuel ecosystem. We have a service which serializes > provision data for us into json. Fuel Agent is agnostic to data format > (data drivers). If someone wants to use another format, they are welcome > to implement a driver. > > We assume next step will be to put provision data (disk partition > scheme, maybe other data) into driver_info and make Fuel Agent driver > able to serialize those data (special format) and implement a > corresponding data driver in Fuel Agent for this format. Again very > simple. Maybe it is time to think of having Ironic metadata service > (just maybe). I'm ok with the format, my question is: what and how is going to collect all the data and put into say driver_info? > > Another point is that currently Fuel stores hardware info in its own > database but when it is possible to get those data from Ironic (when > inventory stuff is implemented) we will be glad to use Ironic API for > that. That is what I mean when I say 'to make Fuel stuff closer to > Ironic abstractions' > > > As Roman said, we try to be pragmatic and suggest something > which does > not break anything. 
All changes > are supposed to be encapsulated into a driver. No API and core > changes. > We have resources to support, test > and improve this driver. This spec is just a zero step. Further > steps > are supposed to improve the driver > so as to make it closer to Ironic abstractions. > > Honestly I think you should at least write a roadmap for it - see my > comments above. > > > Honestly, I think writing a roadmap right now is not very rational, as > I am not even sure people are interested in widening Ironic use > cases. Some of the comments were not even constructive, like "I don't > understand what your use case is, please use IPA". Please don't be offended by this. We did put a lot of effort into IPA and it's reasonable to look for good use cases before having one more smart ramdisk. Nothing personal, just estimating cost vs value :) Also "why not use IPA" is a fair question for me, and the answer is about use cases (as you stated it before), not about missing features of IPA, right? > > > About testing and support: are you providing a 3rdparty CI for it? > It would be a big plus to me: we already have troubles with > drivers broken accidentally. > > > We are flexible here but I'm not ready to answer this question right > now. We will try to fit Ironic requirements wherever it is possible. > > > > For Ironic that means widening use cases and the user community. > But, as I > already said, > we are OK if Ironic does not need this feature. > > I don't think we should throw away your hardware provision use > case, but I personally would like to see how well Fuel Agent is > going to play with how Ironic and Nova operate. > > > Nova is not our case. Fuel is totally about deployment. There is something in > common. Here is where we have a difficult point. Major use case for Ironic is to be driven by Nova (and assisted by Neutron). Without these two it's hard to understand how Fuel Agent is going to fit into the infrastructure.
And hence my question above about where your json comes from. In the current Ironic world the same data is received partly from the Nova flavor, and partly managed completely by Neutron. I'm not saying it can't change - we do want to become more stand-alone. E.g. we can do without Neutron right now. I think specifying the source of input data for Fuel Agent in the Ironic infrastructure would help a lot to understand how well Ironic and Fuel Agent could play together. > > As I already explained, currently we need the power/tftp/dhcp management > capabilities of Ironic. Again, it is not a problem to implement this stuff > independently, like it happened with Fuel Agent (because this use case > was rejected several months ago). Our suggestion is not about "let's > compete with IPA", it is totally about "let's work on the same problems > together". > > > Vladimir Kozhukalov > > On Tue, Dec 9, 2014 at 1:09 PM, Roman Prykhodchenko wrote: > > It is true that IPA and FuelAgent share a lot of > functionality in > common. However there is a major difference between them, > which is > that they are intended to be used to solve different problems. > > IPA is a solution for the > provision-use-destroy-use_by_different_user > use-case and is really great for providing BM > nodes for > other OS services or in services like Rackspace OnMetal. > FuelAgent > itself serves the provision-use-use-?-use use-case that Fuel or > TripleO have. > > Those two use-cases require concentration on different > details in the > first place. For instance, for IPA proper decommissioning is > more > important than advanced disk management, but for FuelAgent > the priorities are the opposite, for obvious reasons. > > Putting all functionality into a single > driver and a single > agent may > cause conflicts in priorities and make a lot of mess inside > both the > driver and the agent. Actually, changes to IPA were previously > blocked right because of this conflict of priorities.
Therefore > replacing FuelAgent with IPA where FuelAgent is used > currently does > not seem like a good option, because some people (and I'm > not talking > about Mirantis) might lose required features because of > different > priorities. > > Having two separate drivers along with two separate agents > for those > different use-cases will allow us to have two independent > teams that > are concentrated on what's really important for a specific > use-case. > I don't see any problem in overlapping functionality if > it's used > differently. > > > P. S. > I realise that people may also be confused by the fact that > FuelAgent is actually called like that and is used only in > Fuel atm. > Our point is to make it a simple, powerful and, what's more > important, > a generic tool for provisioning. It is not bound to Fuel or > Mirantis, > and if it will cause confusion in the future we will even > be happy > to give it a different and less confusing name. > > P. P. S. > Some of the points of this integration do not look generic > enough or > nice enough. We look at the stuff pragmatically and are trying to > implement what's possible to implement as the first step. > For sure > this is going to have a lot more steps to make it better > and more > generic. > > > On 09 Dec 2014, at 01:46, Jim Rollenhagen wrote: > > > > On December 8, 2014 2:23:58 PM PST, Devananda van der Veen wrote: > > I'd like to raise this topic for a wider discussion > outside of the > hallway > track and code reviews, where it has thus far > mostly remained. > > In previous discussions, my understanding has been > that the Fuel team > sought to use Ironic to manage "pets" rather than > "cattle" - and > doing > so > required extending the API and the project's > functionality in > ways that > no > one else on the core team agreed with. Perhaps that > understanding was > wrong > (or perhaps not), but in any case, there is now a > proposal to add a > FuelAgent driver to Ironic.
The proposal claims > this would meet that > teams' > needs without requiring changes to the core of Ironic. > > https://review.openstack.org/#__/c/138115/ > > > > I think it's clear from the review that I share the > opinions > expressed in this email. > > That said (and hopefully without derailing the thread > too much), > I'm curious how this driver could do software RAID or > LVM without > modifying Ironic's API or data model. How would the > agent know how > these should be built? How would an operator or user > tell Ironic > what the disk/partition/volume layout would look like? > > And before it's said - no, I don't think vendor > passthru API calls > are an appropriate answer here. > > // jim > > > The Problem Description section calls out four > things, which have all > been > discussed previously (some are here [0]). I would > like to address > each > one, > invite discussion on whether or not these are, in > fact, problems > facing > Ironic (not whether they are problems for someone, > somewhere), > and then > ask > why these necessitate a new driver be added to the > project. > > > They are, for reference: > > 1. limited partition support > > 2. no software RAID support > > 3. no LVM support > > 4. no support for hardware that lacks a BMC > > #1. > > When deploying a partition image (eg, QCOW format), > Ironic's PXE > deploy > driver performs only the minimal partitioning > necessary to > fulfill its > mission as an OpenStack service: respect the user's > request for root, > swap, > and ephemeral partition sizes. When deploying a > whole-disk image, > Ironic > does not perform any partitioning -- such is left > up to the operator > who > created the disk image. > > Support for arbitrarily complex partition layouts > is not required by, > nor > does it facilitate, the goal of provisioning > physical servers via a > common > cloud API. 
Additionally, as with #3 below, nothing > prevents a > user from > creating more partitions in unallocated disk space > once they have > access to > their instance. Therefore, I don't see how Ironic's > minimal > support for > partitioning is a problem for the project. > > #2. > > There is no support for defining a RAID in Ironic > today, at all, > whether > software or hardware. Several proposals were > floated last cycle; > one is > under review right now for DRAC support [1], and > there are multiple > call > outs for RAID building in the state machine > mega-spec [2]. Any such > support > for hardware RAID will necessarily be abstract > enough to support > multiple > hardware vendors' driver implementations and both > in-band > creation (via > IPA) and out-of-band creation (via vendor tools). > > Given the above, it may become possible to add > software RAID > support to > IPA > in the future, under the same abstraction. This > would closely tie the > deploy agent to the images it deploys (the latter > image's kernel > would > be > dependent upon a software RAID built by the > former), but this would > necessarily be true for the proposed FuelAgent as well. > > I don't see this as a compelling reason to add a > new driver to the > project. > Instead, we should (plan to) add support for > software RAID to the > deploy > agent which is already part of the project. > > #3. > > LVM volumes can easily be added by a user (after > provisioning) within > unallocated disk space for non-root partitions. I > have not yet seen a > compelling argument for doing this within the > provisioning phase. > > #4. > > There are already in-tree drivers [3] [4] [5] which > do not require a > BMC. > One of these uses SSH to connect and run > pre-determined commands.
> Like > the > spec proposal, which states at line 122, "Control > via SSH access > feature > intended only for experiments in non-production > environment," the > current > SSHPowerDriver is only meant for testing > environments. We could > probably > extend this driver to do what the FuelAgent spec > proposes, as far as > remote > power control for cheap always-on hardware in testing > environments with > a > pre-shared key. > > (And if anyone wonders about a use case for Ironic > without external > power > control ... I can only think of one situation where > I would > rationally > ever > want to have a control-plane agent running inside a > user-instance: I am > both the operator and the only user of the cloud.) > > > ---------------- > > In summary, as far as I can tell, all of the > problem statements upon > which > the FuelAgent proposal are based are solvable > through incremental > changes > in existing drivers, or out of scope for the > project entirely. As > another > software-based deploy agent, FuelAgent would > duplicate the > majority of > the > functionality which ironic-python-agent has today. > > Ironic's driver ecosystem benefits from a diversity of > hardware-enablement > drivers. Today, we have two divergent software > deployment drivers > which > approach image deployment differently: "agent" > drivers use a local > agent to > prepare a system and download the image; "pxe" > drivers use a remote > agent > and copy the image over iSCSI. I don't understand > how a second driver > which > duplicates the functionality we already have, and > shares the same > goals > as > the drivers we already have, is beneficial to the > project. > > Doing the same thing twice just increases the > burden on the team; > we're > all > working on the same problems, so let's do it together. 
> > -Devananda > > > [0] > https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition > > > [1] https://review.openstack.org/#/c/107981/ > > > [2] > https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst > > > > [3] > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py > > > [4] > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py > > > [5] > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py > > > > > ------------------------------------------------------------------------ > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jim at jimrollenhagen.com Tue Dec 9 15:00:48 2014 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 9 Dec 2014
07:00:48 -0800 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> Message-ID: <20141209150048.GB5494@jimrollenhagen.com> On Tue, Dec 09, 2014 at 04:01:07PM +0400, Vladimir Kozhukalov wrote: > Just a short explanation of Fuel use case. > > Fuel use case is not a cloud. Fuel is a deployment tool. We install OS on > bare metal servers and on VMs > and then configure this OS using Puppet. We have been using Cobbler as our > OS provisioning tool since the beginning of Fuel. > However, Cobbler assumes using native OS installers (Anaconda and > Debian-installer). For some reasons we decided to > switch to image based approach for installing OS. > > One of Fuel features is the ability to provide advanced partitioning > schemes (including software RAIDs, LVM). > Native installers are quite difficult to customize in the field of > partitioning > (that was one of the reasons to switch to image based approach). Moreover, > we'd like to implement even more > flexible user experience. We'd like to allow user to choose which hard > drives to use for root FS, for > allocating DB. We'd like user to be able to put root FS over LV or MD > device (including stripe, mirror, multipath). > We'd like user to be able to choose which hard drives are bootable (if > any), which options to use for mounting file systems. > Many many various cases are possible. If you ask why we'd like to support > all those cases, the answer is simple: > because our users want us to support all those cases. > Obviously, many of those cases can not be implemented as image internals, > some cases can not be also implemented on > configuration stage (placing root fs on lvm device). > > As far as those use cases were rejected to be implemented in term of IPA, > we implemented so called Fuel Agent. This is *precisely* why I disagree with adding this driver. 
Nearly every feature that is listed here has been talked about before, within the Ironic community. Software RAID, LVM, the user choosing the partition layout. These were rejected from IPA because they do not fit in *Ironic*, not because they don't fit in IPA. If the Fuel team can convince enough people that Ironic should be managing pets, then I'm almost okay with adding this driver (though I still think adding those features to IPA is the right thing to do). // jim > Important Fuel Agent features are: > > * It does not have a REST API > * It has executable entry point[s] > * It uses a local json file as its input > * It is planned to implement the ability to download input data via HTTP (kind > of metadata service) > * It is designed to be agnostic to the input data format, not only the Fuel format > (data drivers) > * It is designed to be agnostic to the image format (tar images, file system > images, disk images, currently fs images) > * It is designed to be agnostic to the image compression algorithm (currently > gzip) > * It is designed to be agnostic to the image downloading protocol (currently > local file and HTTP link) > > So, it is clear that while motivated by Fuel, Fuel Agent is quite > independent and generic. And we are open to > new use cases. > > As for Fuel itself, our nearest plan is to get rid of Cobbler because > in the case of an image-based approach it is huge overhead. The question is > which tool we can use instead of Cobbler. We need power management, > we need TFTP management, we need DHCP management. That is > exactly what Ironic is able to do. Frankly, we can implement a power/TFTP/DHCP > management tool independently, but as Devananda said, we're all working on > the same problems, > so let's do it together. Power/TFTP/DHCP management is where we are > working on the same problems, > but IPA and Fuel Agent are about different use cases. This case is not just > Fuel, any mature > deployment case requires advanced partition/fs management.
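[Editor's note] The "data drivers" idea in the feature list above - an agent core that consumes one neutral partitioning model, with a pluggable parser per input format - can be sketched as a minimal illustration. All class names and JSON keys below are assumptions for illustration only, not the actual Fuel Agent code:

```python
import json


class PartitionScheme:
    """Neutral model the agent core consumes, whatever the input format."""

    def __init__(self, disks):
        self.disks = disks


class NailgunDataDriver:
    """Hypothetical parser for Fuel-style provision data."""

    def parse(self, raw):
        data = json.loads(raw)
        return PartitionScheme(data.get("ks_meta", {}).get("disks", []))


class DriverInfoDataDriver:
    """Hypothetical parser for data embedded in Ironic's driver_info."""

    def parse(self, raw):
        data = json.loads(raw)
        return PartitionScheme(data.get("driver_info", {}).get("disks", []))


DATA_DRIVERS = {
    "nailgun": NailgunDataDriver(),
    "driver_info": DriverInfoDataDriver(),
}


def load_scheme(fmt, raw):
    # Supporting a new input format means registering one more driver;
    # the partitioning logic consuming PartitionScheme never changes.
    return DATA_DRIVERS[fmt].parse(raw)


scheme = load_scheme("nailgun", '{"ks_meta": {"disks": [{"name": "sda"}]}}')
print(scheme.disks)  # -> [{'name': 'sda'}]
```

The point of the pattern is the one Vladimir makes: the provision data does not have to be written by hand, and its format is decoupled from the agent core.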
However, for me > it is OK, if it is easily possible > to use Ironic with external drivers (not merged to Ironic and not tested on > Ironic CI). > > AFAIU, this spec https://review.openstack.org/#/c/138115/ does not assume > changing Ironic API and core. > Jim asked about how Fuel Agent will know about advanced disk partitioning > scheme if API is not > changed. The answer is simple: Ironic is supposed to send a link to > metadata service (http or local file) > where Fuel Agent can download input json data. > > As Roman said, we try to be pragmatic and suggest something which does not > break anything. All changes > are supposed to be encapsulated into a driver. No API and core changes. We > have resources to support, test > and improve this driver. This spec is just a zero step. Further steps are > supposed to improve driver > so as to make it closer to Ironic abstractions. > > For Ironic that means widening use cases and user community. But, as I > already said, > we are OK if Ironic does not need this feature. > > Vladimir Kozhukalov > > On Tue, Dec 9, 2014 at 1:09 PM, Roman Prykhodchenko < > rprikhodchenko at mirantis.com> wrote: > > > It is true that IPA and FuelAgent share a lot of functionality in common. > > However there is a major difference between them which is that they are > > intended to be used to solve a different problem. > > > > IPA is a solution for provision-use-destroy-use_by_different_user use-case > > and is really great for using it for providing BM nodes for other OS > > services or in services like Rackspace OnMetal. FuelAgent itself serves for > > provision-use-use-?-use use-case like Fuel or TripleO have. > > > > Those two use-cases require concentration on different details in first > > place. For instance for IPA proper decommissioning is more important than > > advanced disk management, but for FuelAgent priorities are opposite because > > of obvious reasons. 
> > > > Putting all functionality into a single driver and a single agent may cause > > conflicts in priorities and make a lot of mess inside both the driver and > > the agent. Actually, changes to IPA were previously blocked right because of > > this conflict of priorities. Therefore replacing FuelAgent with IPA where > > FuelAgent is currently used does not seem like a good option, because some > > people (and I'm not talking about Mirantis) might lose required features > > because of different priorities. > > > > Having two separate drivers along with two separate agents for those > > different use-cases will allow us to have two independent teams that are > > concentrated on what's really important for a specific use-case. I don't > > see any problem in overlapping functionality if it's used differently. > > > > > > P. S. > > I realise that people may also be confused by the fact that FuelAgent is > > actually called like that and is used only in Fuel atm. Our point is to > > make it a simple, powerful and, what's more important, a generic tool for > > provisioning. It is not bound to Fuel or Mirantis, and if it will cause > > confusion in the future we will even be happy to give it a different and > > less confusing name. > > > > P. P. S. > > Some of the points of this integration do not look generic enough or nice > > enough. We look at the stuff pragmatically and are trying to implement what's > > possible to implement as the first step. For sure this is going to have a > > lot more steps to make it better and more generic. > > > > > > On 09 Dec 2014, at 01:46, Jim Rollenhagen wrote: > > > > > > > > On December 8, 2014 2:23:58 PM PST, Devananda van der Veen < > > devananda.vdv at gmail.com> wrote: > > > > I'd like to raise this topic for a wider discussion outside of the > > hallway > > track and code reviews, where it has thus far mostly remained.
> > > > In previous discussions, my understanding has been that the Fuel team > > sought to use Ironic to manage "pets" rather than "cattle" - and doing > > so > > required extending the API and the project's functionality in ways that > > no > > one else on the core team agreed with. Perhaps that understanding was > > wrong > > (or perhaps not), but in any case, there is now a proposal to add a > > FuelAgent driver to Ironic. The proposal claims this would meet that > > teams' > > needs without requiring changes to the core of Ironic. > > > > https://review.openstack.org/#/c/138115/ > > > > > > I think it's clear from the review that I share the opinions expressed in > > this email. > > > > That said (and hopefully without derailing the thread too much), I'm > > curious how this driver could do software RAID or LVM without modifying > > Ironic's API or data model. How would the agent know how these should be > > built? How would an operator or user tell Ironic what the > > disk/partition/volume layout would look like? > > > > And before it's said - no, I don't think vendor passthru API calls are an > > appropriate answer here. > > > > // jim > > > > > > The Problem Description section calls out four things, which have all > > been > > discussed previously (some are here [0]). I would like to address each > > one, > > invite discussion on whether or not these are, in fact, problems facing > > Ironic (not whether they are problems for someone, somewhere), and then > > ask > > why these necessitate a new driver be added to the project. > > > > > > They are, for reference: > > > > 1. limited partition support > > > > 2. no software RAID support > > > > 3. no LVM support > > > > 4. no support for hardware that lacks a BMC > > > > #1. 
> > When deploying a partition image (e.g., QCOW format), Ironic's PXE deploy > > driver performs only the minimal partitioning necessary to fulfill its > > mission as an OpenStack service: respect the user's request for root, > > swap, > > and ephemeral partition sizes. When deploying a whole-disk image, > > Ironic > > does not perform any partitioning -- such is left up to the operator > > who > > created the disk image. > > > > Support for arbitrarily complex partition layouts is not required by, > > nor > > does it facilitate, the goal of provisioning physical servers via a > > common > > cloud API. Additionally, as with #3 below, nothing prevents a user from > > creating more partitions in unallocated disk space once they have > > access to > > their instance. Therefore, I don't see how Ironic's minimal support for > > partitioning is a problem for the project. > > > > #2. > > > > There is no support for defining a RAID in Ironic today, at all, > > whether > > software or hardware. Several proposals were floated last cycle; one is > > under review right now for DRAC support [1], and there are multiple > > call > > outs for RAID building in the state machine mega-spec [2]. Any such > > support > > for hardware RAID will necessarily be abstract enough to support > > multiple > > hardware vendors' driver implementations and both in-band creation (via > > IPA) and out-of-band creation (via vendor tools). > > > > Given the above, it may become possible to add software RAID support to > > IPA > > in the future, under the same abstraction. This would closely tie the > > deploy agent to the images it deploys (the latter image's kernel would > > be > > dependent upon a software RAID built by the former), but this would > > necessarily be true for the proposed FuelAgent as well. > > > > I don't see this as a compelling reason to add a new driver to the > > project.
> > Instead, we should (plan to) add support for software RAID to the > > deploy > > agent which is already part of the project. > > > > #3. > > > > LVM volumes can easily be added by a user (after provisioning) within > > unallocated disk space for non-root partitions. I have not yet seen a > > compelling argument for doing this within the provisioning phase. > > > > #4. > > > > There are already in-tree drivers [3] [4] [5] which do not require a > > BMC. > > One of these uses SSH to connect and run pre-determined commands. Like > > the > > spec proposal, which states at line 122, "Control via SSH access > > feature > > intended only for experiments in non-production environment," the > > current > > SSHPowerDriver is only meant for testing environments. We could > > probably > > extend this driver to do what the FuelAgent spec proposes, as far as > > remote > > power control for cheap always-on hardware in testing environments with > > a > > pre-shared key. > > > > (And if anyone wonders about a use case for Ironic without external > > power > > control ... I can only think of one situation where I would rationally > > ever > > want to have a control-plane agent running inside a user-instance: I am > > both the operator and the only user of the cloud.) > > > > > > ---------------- > > > > In summary, as far as I can tell, all of the problem statements upon > > which > > the FuelAgent proposal are based are solvable through incremental > > changes > > in existing drivers, or out of scope for the project entirely. As > > another > > software-based deploy agent, FuelAgent would duplicate the majority of > > the > > functionality which ironic-python-agent has today. > > > > Ironic's driver ecosystem benefits from a diversity of > > hardware-enablement > > drivers. 
Today, we have two divergent software deployment drivers which > > approach image deployment differently: "agent" drivers use a local > > agent to > > prepare a system and download the image; "pxe" drivers use a remote > > agent > > and copy the image over iSCSI. I don't understand how a second driver > > which > > duplicates the functionality we already have, and shares the same goals > > as > > the drivers we already have, is beneficial to the project. > > > > Doing the same thing twice just increases the burden on the team; we're > > all > > working on the same problems, so let's do it together. > > > > -Devananda > > > > > > [0] > > https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition > > > > [1] https://review.openstack.org/#/c/107981/ > > > > [2] > > > > https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst > > > > > > [3] > > > > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py > > > > [4] > > > > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py > > > > [5] > > > > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at 
lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean at dague.net Tue Dec 9 15:05:48 2014 From: sean at dague.net (Sean Dague) Date: Tue, 09 Dec 2014 10:05:48 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> Message-ID: <54870FCC.3010006@dague.net> On 12/09/2014 09:11 AM, Doug Hellmann wrote: > > On Dec 9, 2014, at 6:39 AM, Sean Dague wrote: > >> I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. >> >> 1 - the entire H8* group. This doesn't function on python code, it >> functions on the git commit message, which makes it tough to run locally. It >> also would be a reason to prevent us from not rerunning tests on commit >> message changes (something we could do after the next gerrit update). >> >> 2 - the entire H3* group - because of this - >> https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm >> >> A look at the H3* code shows that it's terribly complicated, and is >> often full of bugs (a few bit us last week). I'd rather just delete it >> and move on. > > I don't have the hacking rules memorized. Could you describe them briefly? Sure, the H8* group is git commit messages. It's checking for line length in the commit message. - [H802] First, provide a brief summary of 50 characters or less. Summaries greater than 72 characters will be rejected by the gate. - [H801] The first line of the commit message should provide an accurate description of the change, not just a reference to a bug or blueprint. H802 is mechanically enforced (though not the 50 characters part, so the code isn't the same as the rule). H801 is enforced by a regex that looks to see if the first line is a Launchpad bug and fails on it. You can't mechanically enforce that English provides an accurate description.
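[Editor's note] The H802 check Sean describes amounts to roughly the following (an approximation for illustration, not hacking's actual implementation; note Sean says the code enforces 72 characters, not the 50 in the rule text):

```python
def check_commit_summary(message, limit=72):
    """Approximate an H802-style check: flag a first commit-message line
    that exceeds the length limit. Illustrative re-implementation only,
    not the real hacking code.
    """
    lines = message.splitlines()
    summary = lines[0] if lines else ""
    errors = []
    if len(summary) > limit:
        errors.append("H802: commit summary is %d characters (limit %d)"
                      % (len(summary), limit))
    return errors


# A local pre-commit hook could feed this the output of
# `git log -1 --format=%B`.
print(check_commit_summary("Add a widget\n\nLonger explanation here."))  # -> []
```

This also illustrates Sean's point about H801: a length limit is mechanically checkable, but "an accurate description" is not.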
H3* are all the module import rules: Imports ------- - [H302] Do not import objects, only modules (*) - [H301] Do not import more than one module per line (*) - [H303] Do not use wildcard ``*`` import (*) - [H304] Do not make relative imports - Order your imports by the full module path - [H305 H306 H307] Organize your imports according to the `Import order template`_ and `Real-world Import Order Examples`_ below. I think these remain reasonable guidelines, but H302 is exceptionally tricky to get right, and we keep not getting it right. H305-307 are actually impossible to get right. Things come in and out of stdlib in Python all the time. I think it's time to just decide to be reasonable humans and that these are guidelines. The H3* set of rules is also why you have to install *all* of requirements.txt and test-requirements.txt in your pep8 tox target, because H302 actually inspects sys.modules to attempt to figure out if things are correct. -Sean > > Doug >
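[Editor's note] The H3* rules Sean lists can be illustrated with a small before/after sketch (my own example, not taken from the hacking source):

```python
# Forms the H3* checks would flag, shown as comments so this file stays valid:
#   from os.path import *       # H303: wildcard import
#   import os, sys              # H301: two modules on one line
#   from os.path import join    # H302: imports an object, not a module
#   from . import utils         # H304: relative import

# Compliant equivalents: one module per line, modules only.
import os
import sys
from os import path  # os.path is itself a module, so H302 allows this

# Usage stays module-qualified, which is what H302 is after.
print(path.join("usr", "local"))
print(os.sep in path.join("usr", "local"))
print(sys.version_info >= (2, 6))
```

The subtlety Sean points at is exactly the H302 case: a checker cannot tell from the text alone whether `from os import path` names a module or an object, which is why hacking inspects sys.modules at runtime.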
> >> >> -Sean >> >> -- >> Sean Dague >> http://dague.net >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Sean Dague http://dague.net From chmouel at chmouel.com Tue Dec 9 15:20:42 2014 From: chmouel at chmouel.com (Chmouel Boudjnah) Date: Tue, 9 Dec 2014 16:20:42 +0100 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5486DF7F.7080706@dague.net> References: <5486DF7F.7080706@dague.net> Message-ID: On Tue, Dec 9, 2014 at 12:39 PM, Sean Dague wrote: > 1 - the entire H8* group. This doesn't function on python code, it > functions on git commit message, which makes it tough to run locally. > I do run them locally using git-review custom script features which would launch a flake8 before sending the review but I guess it's not a common usage. Chmouel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Tue Dec 9 15:29:31 2014 From: mordred at inaugust.com (Monty Taylor) Date: Tue, 09 Dec 2014 07:29:31 -0800 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5486DF7F.7080706@dague.net> References: <5486DF7F.7080706@dague.net> Message-ID: <5487155B.4060902@inaugust.com> On 12/09/2014 03:39 AM, Sean Dague wrote: > I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. > > 1 - the entire H8* group. This doesn't function on python code, it > functions on git commit message, which makes it tough to run locally. It > also would be a reason to prevent us from not rerunning tests on commit > message changes (something we could do after the next gerrit update). 
+1 I DO like something warning about commit subject length ... but maybe that should be a git-review function or something. > 2 - the entire H3* group - because of this - > https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm > > A look at the H3* code shows that it's terribly complicated, and is > often full of bugs (a few bit us last week). I'd rather just delete it > and move on. +1 flake8 does the important one now - "no wildcard imports" The others are ones where I find myself dancing to meet the style more often than not. From kurt.r.taylor at gmail.com Tue Dec 9 15:32:25 2014 From: kurt.r.taylor at gmail.com (Kurt Taylor) Date: Tue, 9 Dec 2014 09:32:25 -0600 Subject: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time Message-ID: All of the feedback so far has supported moving the existing IRC Third-party CI meeting to better fit a worldwide audience. The consensus is that we will have only 1 meeting per week at alternating times. You can see examples of other teams with alternating meeting times at: https://wiki.openstack.org/wiki/Meetings This way, one week we are good for one part of the world, the next week for the other. You will not need to attend both meetings, just the meeting time every other week that fits your schedule. Proposed times in UTC are being voted on here: https://www.google.com/moderator/#16/e=21b93c Please vote on the time that is best for you. I would like to finalize the new times this week. Thanks! Kurt Taylor (krtaylor) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dguryanov at parallels.com Tue Dec 9 15:33:47 2014 From: dguryanov at parallels.com (Dmitry Guryanov) Date: Tue, 09 Dec 2014 18:33:47 +0300 Subject: [openstack-dev] [Nova] question about "Get Guest Info" row in HypervisorSupportMatrix Message-ID: <7997383.8LhO9nnzxZ@dblinov.sw.ru> Hello! 
There is a feature in HypervisorSupportMatrix (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called "Get Guest Info". Does anybody know what it means? I haven't found anything like this in the nova API, in horizon, or in the nova command line. -- Thanks, Dmitry Guryanov From yzveryanskyy at mirantis.com Tue Dec 9 15:47:50 2014 From: yzveryanskyy at mirantis.com (Yuriy Zveryanskyy) Date: Tue, 09 Dec 2014 17:47:50 +0200 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: <20141209150048.GB5494@jimrollenhagen.com> References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> <20141209150048.GB5494@jimrollenhagen.com> Message-ID: <548719A6.9030303@mirantis.com> On 12/09/2014 05:00 PM, Jim Rollenhagen wrote: > On Tue, Dec 09, 2014 at 04:01:07PM +0400, Vladimir Kozhukalov wrote: >> Just a short explanation of Fuel use case. >> >> Fuel use case is not a cloud. Fuel is a deployment tool. We install OS on >> bare metal servers and on VMs >> and then configure this OS using Puppet. We have been using Cobbler as our >> OS provisioning tool since the beginning of Fuel. >> However, Cobbler assumes using native OS installers (Anaconda and >> Debian-installer). For some reasons we decided to >> switch to image based approach for installing OS. >> >> One of Fuel features is the ability to provide advanced partitioning >> schemes (including software RAIDs, LVM). >> Native installers are quite difficult to customize in the field of >> partitioning >> (that was one of the reasons to switch to image based approach). Moreover, >> we'd like to implement even more >> flexible user experience. We'd like to allow user to choose which hard >> drives to use for root FS, for >> allocating DB. We'd like user to be able to put root FS over LV or MD >> device (including stripe, mirror, multipath).
>> We'd like user to be able to choose which hard drives are bootable (if >> any), which options to use for mounting file systems. >> Many many various cases are possible. If you ask why we'd like to support >> all those cases, the answer is simple: >> because our users want us to support all those cases. >> Obviously, many of those cases can not be implemented as image internals, >> some cases can not be also implemented on >> configuration stage (placing root fs on lvm device). >> >> As far as those use cases were rejected to be implemented in terms of IPA, >> we implemented the so-called Fuel Agent. > This is *precisely* why I disagree with adding this driver. > > Nearly every feature that is listed here has been talked about before, > within the Ironic community. Software RAID, LVM, user choosing the > partition layout. These were rejected from IPA because they do not fit in > *Ironic*, not because they don't fit in IPA. Yes, they do not fit in Ironic *core*, but this is a *driver*. There is the iLO driver, for example. Is iLO management technology good or bad? I don't know. But it is an existing vendor's solution. I would have to buy or rent an HP server for tests or experiments with the iLO driver. Fuel is a widely used solution for deployment, and it is open-source. I think having a Fuel Agent driver in Ironic would be better than, for example, a driver for rare hardware XYZ. > If the Fuel team can convince enough people that Ironic should be > managing pets, then I'm almost okay with adding this driver (though I > still think adding those features to IPA is the right thing to do).
> > // jim > >> Important Fuel Agent features are: >> >> * It does not have a REST API >> * it has executable entry point[s] >> * It uses a local json file as its input >> * It is planned to implement ability to download input data via HTTP (kind >> of metadata service) >> * It is designed to be agnostic to input data format, not only Fuel format >> (data drivers) >> * It is designed to be agnostic to image format (tar images, file system >> images, disk images, currently fs images) >> * It is designed to be agnostic to image compression algorithm (currently >> gzip) >> * It is designed to be agnostic to image downloading protocol (currently >> local file and HTTP link) >> >> So, it is clear that being motivated by Fuel, Fuel Agent is quite >> independent and generic. And we are open for >> new use cases. >> >> Regarding Fuel itself, our nearest plan is to get rid of Cobbler because >> in the case of image based approach it is a huge overhead. The question is >> which tool we can use instead of Cobbler. We need power management, >> we need TFTP management, we need DHCP management. That is >> exactly what Ironic is able to do. Frankly, we can implement a power/TFTP/DHCP >> management tool independently, but as Devananda said, we're all working on >> the same problems, >> so let's do it together. Power/TFTP/DHCP management is where we are >> working on the same problems, >> but IPA and Fuel Agent are about different use cases. This case is not just >> Fuel, any mature >> deployment case requires advanced partition/fs management. However, for me >> it is OK, if it is easily possible >> to use Ironic with external drivers (not merged to Ironic and not tested on >> Ironic CI). >> >> AFAIU, this spec https://review.openstack.org/#/c/138115/ does not assume >> changing Ironic API and core. >> Jim asked about how Fuel Agent will know about advanced disk partitioning >> scheme if API is not >> changed.
The answer is simple: Ironic is supposed to send a link to >> metadata service (http or local file) >> where Fuel Agent can download input json data. >> >> As Roman said, we try to be pragmatic and suggest something which does not >> break anything. All changes >> are supposed to be encapsulated into a driver. No API and core changes. We >> have resources to support, test >> and improve this driver. This spec is just a zero step. Further steps are >> supposed to improve driver >> so as to make it closer to Ironic abstractions. >> >> For Ironic that means widening use cases and user community. But, as I >> already said, >> we are OK if Ironic does not need this feature. >> >> Vladimir Kozhukalov >> >> On Tue, Dec 9, 2014 at 1:09 PM, Roman Prykhodchenko < >> rprikhodchenko at mirantis.com> wrote: >> >>> It is true that IPA and FuelAgent share a lot of functionality in common. >>> However there is a major difference between them which is that they are >>> intended to be used to solve a different problem. >>> >>> IPA is a solution for provision-use-destroy-use_by_different_user use-case >>> and is really great for using it for providing BM nodes for other OS >>> services or in services like Rackspace OnMetal. FuelAgent itself serves for >>> provision-use-use-?-use use-case like Fuel or TripleO have. >>> >>> Those two use-cases require concentration on different details in first >>> place. For instance for IPA proper decommissioning is more important than >>> advanced disk management, but for FuelAgent priorities are opposite because >>> of obvious reasons. >>> >>> Putting all functionality to a single driver and a single agent may cause >>> conflicts in priorities and make a lot of mess inside both the driver and >>> the agent. Actually previously changes to IPA were blocked right because of >>> this conflict of priorities. 
Therefore replacing FuelAgent by IPA where >>> FuelAgent is used currently does not seem like a good option, because some >>> people (and I'm not talking about Mirantis) might lose required features >>> because of different priorities. >>> >>> Having two separate drivers along with two separate agents for those >>> different use-cases will allow to have two independent teams that are >>> concentrated on what's really important for a specific use-case. I don't >>> see any problem in overlapping functionality if it's used differently. >>> >>> >>> P. S. >>> I realise that people may be also confused by the fact that FuelAgent is >>> actually called like that and is used only in Fuel atm. Our point is to >>> make it a simple, powerful and, what's more important, a generic tool for >>> provisioning. It is not bound to Fuel or Mirantis and if it will cause >>> confusion in the future we will even be happy to give it a different and >>> less confusing name. >>> >>> P. P. S. >>> Some of the points of this integration do not look generic enough or nice >>> enough. We look pragmatic on the stuff and are trying to implement what's >>> possible to implement as the first step. For sure this is going to have a >>> lot more steps to make it better and more generic. >>> >>> >>> On 09 Dec 2014, at 01:46, Jim Rollenhagen wrote: >>> >>> >>> >>> On December 8, 2014 2:23:58 PM PST, Devananda van der Veen < >>> devananda.vdv at gmail.com> wrote: >>> >>> I'd like to raise this topic for a wider discussion outside of the >>> hallway >>> track and code reviews, where it has thus far mostly remained. >>> >>> In previous discussions, my understanding has been that the Fuel team >>> sought to use Ironic to manage "pets" rather than "cattle" - and doing >>> so >>> required extending the API and the project's functionality in ways that >>> no >>> one else on the core team agreed with.
Perhaps that understanding was >>> wrong >>> (or perhaps not), but in any case, there is now a proposal to add a >>> FuelAgent driver to Ironic. The proposal claims this would meet that >>> teams' >>> needs without requiring changes to the core of Ironic. >>> >>> https://review.openstack.org/#/c/138115/ >>> >>> >>> I think it's clear from the review that I share the opinions expressed in >>> this email. >>> >>> That said (and hopefully without derailing the thread too much), I'm >>> curious how this driver could do software RAID or LVM without modifying >>> Ironic's API or data model. How would the agent know how these should be >>> built? How would an operator or user tell Ironic what the >>> disk/partition/volume layout would look like? >>> >>> And before it's said - no, I don't think vendor passthru API calls are an >>> appropriate answer here. >>> >>> // jim >>> >>> >>> The Problem Description section calls out four things, which have all >>> been >>> discussed previously (some are here [0]). I would like to address each >>> one, >>> invite discussion on whether or not these are, in fact, problems facing >>> Ironic (not whether they are problems for someone, somewhere), and then >>> ask >>> why these necessitate a new driver be added to the project. >>> >>> >>> They are, for reference: >>> >>> 1. limited partition support >>> >>> 2. no software RAID support >>> >>> 3. no LVM support >>> >>> 4. no support for hardware that lacks a BMC >>> >>> #1. >>> >>> When deploying a partition image (eg, QCOW format), Ironic's PXE deploy >>> driver performs only the minimal partitioning necessary to fulfill its >>> mission as an OpenStack service: respect the user's request for root, >>> swap, >>> and ephemeral partition sizes. When deploying a whole-disk image, >>> Ironic >>> does not perform any partitioning -- such is left up to the operator >>> who >>> created the disk image. 
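The "minimal partitioning" in point #1 — honour the requested root, swap, and ephemeral sizes and leave the rest of the disk alone — can be pictured with a toy calculation (purely illustrative; this is not Ironic's deploy code, and every name below is invented):

```python
def minimal_layout(disk_mb, root_mb, swap_mb=0, ephemeral_mb=0):
    """Toy sketch of minimal partitioning: satisfy the requested
    root/swap/ephemeral sizes and leave remaining space untouched."""
    requested = root_mb + swap_mb + ephemeral_mb
    if requested > disk_mb:
        raise ValueError("requested partitions exceed disk size")
    layout = []
    if ephemeral_mb:
        layout.append(("ephemeral", ephemeral_mb))
    if swap_mb:
        layout.append(("swap", swap_mb))
    layout.append(("root", root_mb))
    # Whatever is left is unallocated -- nothing stops the user from
    # partitioning it (or putting LVM on it) after provisioning.
    layout.append(("unallocated", disk_mb - requested))
    return layout
```

The point of the argument is exactly the last entry: anything beyond the requested sizes is left for the user, not decided at provisioning time.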
>>> >>> Support for arbitrarily complex partition layouts is not required by, >>> nor >>> does it facilitate, the goal of provisioning physical servers via a >>> common >>> cloud API. Additionally, as with #3 below, nothing prevents a user from >>> creating more partitions in unallocated disk space once they have >>> access to >>> their instance. Therefore, I don't see how Ironic's minimal support for >>> partitioning is a problem for the project. >>> >>> #2. >>> >>> There is no support for defining a RAID in Ironic today, at all, >>> whether >>> software or hardware. Several proposals were floated last cycle; one is >>> under review right now for DRAC support [1], and there are multiple >>> call >>> outs for RAID building in the state machine mega-spec [2]. Any such >>> support >>> for hardware RAID will necessarily be abstract enough to support >>> multiple >>> hardware vendors' driver implementations and both in-band creation (via >>> IPA) and out-of-band creation (via vendor tools). >>> >>> Given the above, it may become possible to add software RAID support to >>> IPA >>> in the future, under the same abstraction. This would closely tie the >>> deploy agent to the images it deploys (the latter image's kernel would >>> be >>> dependent upon a software RAID built by the former), but this would >>> necessarily be true for the proposed FuelAgent as well. >>> >>> I don't see this as a compelling reason to add a new driver to the >>> project. >>> Instead, we should (plan to) add support for software RAID to the >>> deploy >>> agent which is already part of the project. >>> >>> #3. >>> >>> LVM volumes can easily be added by a user (after provisioning) within >>> unallocated disk space for non-root partitions. I have not yet seen a >>> compelling argument for doing this within the provisioning phase. >>> >>> #4. >>> >>> There are already in-tree drivers [3] [4] [5] which do not require a >>> BMC. >>> One of these uses SSH to connect and run pre-determined commands.
Like >>> the >>> spec proposal, which states at line 122, "Control via SSH access >>> feature >>> intended only for experiments in non-production environment," the >>> current >>> SSHPowerDriver is only meant for testing environments. We could >>> probably >>> extend this driver to do what the FuelAgent spec proposes, as far as >>> remote >>> power control for cheap always-on hardware in testing environments with >>> a >>> pre-shared key. >>> >>> (And if anyone wonders about a use case for Ironic without external >>> power >>> control ... I can only think of one situation where I would rationally >>> ever >>> want to have a control-plane agent running inside a user-instance: I am >>> both the operator and the only user of the cloud.) >>> >>> >>> ---------------- >>> >>> In summary, as far as I can tell, all of the problem statements upon >>> which >>> the FuelAgent proposal are based are solvable through incremental >>> changes >>> in existing drivers, or out of scope for the project entirely. As >>> another >>> software-based deploy agent, FuelAgent would duplicate the majority of >>> the >>> functionality which ironic-python-agent has today. >>> >>> Ironic's driver ecosystem benefits from a diversity of >>> hardware-enablement >>> drivers. Today, we have two divergent software deployment drivers which >>> approach image deployment differently: "agent" drivers use a local >>> agent to >>> prepare a system and download the image; "pxe" drivers use a remote >>> agent >>> and copy the image over iSCSI. I don't understand how a second driver >>> which >>> duplicates the functionality we already have, and shares the same goals >>> as >>> the drivers we already have, is beneficial to the project. >>> >>> Doing the same thing twice just increases the burden on the team; we're >>> all >>> working on the same problems, so let's do it together. 
>>> >>> -Devananda >>> >>> >>> [0] >>> https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition >>> >>> [1] https://review.openstack.org/#/c/107981/ >>> >>> [2] >>> >>> https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst >>> >>> >>> [3] >>> >>> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py >>> >>> [4] >>> >>> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py >>> >>> [5] >>> >>> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py >>> >>> >>> ------------------------------------------------------------------------ >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jruzicka at redhat.com Tue Dec 9 15:50:48 2014 From: jruzicka at redhat.com (Jakub Ruzicka) Date: Tue, 09 Dec 2014 16:50:48 +0100 Subject: [openstack-dev] [all] Deprecating exceptions In-Reply-To: <54203F31.6080108@redhat.com> References: <54203B18.3040809@sheep.art.pl> <54203F31.6080108@redhat.com> Message-ID: 
<54871A58.6090309@redhat.com> On 22.9.2014 17:24, Ihar Hrachyshka wrote: > On 22/09/14 17:07, Radomir Dopieralski wrote: >> Horizon's tests were recently broken by a change in the Ceilometer >> client that removed a deprecated exception. The exception was >> deprecated for a while already, but as it often is, nobody did the >> work of removing all references to it from Horizon before it was >> too late. Sure, in theory we should all be reading the release >> notes of all versions of all dependencies and acting upon things >> like this. In practice, if there is no warning generated in the >> unit tests, nobody is going to do anything about it. > >> So I sat down and started thinking about how to best generate a >> warning when someone is trying to catch a deprecated exception. I >> came up with this code: > >> http://paste.openstack.org/show/114170/ > >> It's not pretty -- it is based on the fact that the `except` >> statement has to do a subclass check on the exceptions it is >> catching. It requires a metaclass and a class decorator to work, >> and it uses a global variable. I'm sure it would be possible to do >> it in a little bit cleaner way. But at least it gives us the >> warning (sure, only if an exception is actually being thrown, but >> that's test coverage problem). > >> I propose to do exception deprecating in this way in the future. > > > Aren't clients supposed to be backwards compatible? Isn't it the exact > reason why we don't maintain stable branches for client modules? Supposed, yes. However, it's not ensured/enforced any way, so it's as good as an empty promise. > So, another reason to actually start maintaining those stable branches > for clients. We already do it in RDO (Red Hat backed Openstack > distribution) anyway. We kinda do that in RHOSP as well, no? Each rhos-X.Y-rhel-Z branch was forked from RDO stable client branch (redhat-openstack)... 
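The mechanism Radomir describes — warning consumers away from a deprecated exception class — can be sketched in a simpler form. His paste hooks the subclass check so that *catching* the old name warns; whether `except` actually consults `__subclasscheck__` differs across interpreter versions, so the version-independent sketch below warns when the old name is *raised* instead. All names here are invented for illustration:

```python
import warnings


class NewAPIError(Exception):
    """The exception client code should catch going forward."""


def deprecated_exception(new_cls, old_name):
    """Build a deprecated alias of new_cls that emits a
    DeprecationWarning whenever it is instantiated (i.e. raised)."""
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "%s is deprecated, use %s instead"
            % (old_name, new_cls.__name__),
            DeprecationWarning, stacklevel=2)
        new_cls.__init__(self, *args, **kwargs)
    # the alias subclasses the new exception, so handlers that already
    # catch the new name keep working during the transition
    return type(old_name, (new_cls,), {"__init__": __init__})


OldAPIError = deprecated_exception(NewAPIError, "OldAPIError")
```

Because `OldAPIError` subclasses `NewAPIError`, code still raising the old name remains catchable via `except NewAPIError`; warning on the catch side, as the paste does, needs the metaclass trick the email mentions.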
Cheers, Jakub From ppouliot at microsoft.com Tue Dec 9 15:55:18 2014 From: ppouliot at microsoft.com (Peter Pouliot) Date: Tue, 9 Dec 2014 15:55:18 +0000 Subject: [openstack-dev] Hyper-V Meeting Canceled for Today Message-ID: Hi All, I'm canceling the Hyper-V meeting today due to conflicting schedules; we will resume next week. Peter J. Pouliot CISSP Microsoft Enterprise Cloud Solutions C:\OpenStack New England Research & Development Center 1 Memorial Drive Cambridge, MA 02142 P: 1.(857).4536436 E: ppouliot at microsoft.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Dec 9 15:55:59 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 9 Dec 2014 10:55:59 -0500 Subject: [openstack-dev] new library oslo.context released Message-ID: <078CB4DB-14D8-42AB-82F5-CDCCBB7E36F2@doughellmann.com> The Oslo team is pleased to announce the release of oslo.context 0.1.0. This is the first version of oslo.context, the library containing the base class for the request context object. This is a relatively small module, but we've placed it in a separate library so oslo.messaging and oslo.log can both influence its API without "owning" it or causing circular dependencies. This initial release includes enough support for projects to start adopting the oslo_context.RequestContext as the base class for existing request context implementations. The documentation [1] covers the basic API that is present now. We expect to find a few holes as we start integrating the library into existing projects, so liaisons please work with us to identify and plug the holes by reporting bugs or raising issues in #openstack-oslo. There are a few more API additions planned for the library, but those will come later in the release cycle. Thanks!
Doug [1] http://docs.openstack.org/developer/oslo.context/ [2] http://specs.openstack.org/openstack/oslo-specs/specs/kilo/graduate-oslo-context.html From btopol at us.ibm.com Tue Dec 9 15:57:37 2014 From: btopol at us.ibm.com (Brad Topol) Date: Tue, 9 Dec 2014 10:57:37 -0500 Subject: [openstack-dev] [Keystone] OSAAA-Policy In-Reply-To: References: <54862B9C.8060809@redhat.com> Message-ID: +1! Makes sense. --Brad Brad Topol, Ph.D. IBM Distinguished Engineer OpenStack (919) 543-0646 Internet: btopol at us.ibm.com Assistant: Kendra Witherspoon (919) 254-0680 From: Morgan Fainberg To: Adam Young , "OpenStack Development Mailing List (not for usage questions)" Date: 12/08/2014 06:07 PM Subject: Re: [openstack-dev] [Keystone] OSAAA-Policy I agree that this library should not have "Keystone" in the name. This is more along the lines of pycadf, something that is housed under the OpenStack Identity Program but is more interesting for the general use-case than something tied exclusively to Keystone. Cheers, Morgan -- Morgan Fainberg On December 8, 2014 at 4:55:20 PM, Adam Young (ayoung at redhat.com) wrote: The Policy library has been nominated for promotion from Oslo incubator. The Keystone team was formerly known as the Identity Program, but now is Authentication, Authorization, and Audit, or AAA. Does the prefix OSAAA for the library make sense? It should not be Keystone-policy. _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From michal.dulko at intel.com Tue Dec 9 16:10:19 2014 From: michal.dulko at intel.com (Dulko, Michal) Date: Tue, 9 Dec 2014 16:10:19 +0000 Subject: [openstack-dev] [cinder] HA issues In-Reply-To: References: <3895CB36EABD4E49B816E6081F3B00171F601C36@IRSMSX108.ger.corp.intel.com> Message-ID: <3895CB36EABD4E49B816E6081F3B00171F6025EE@IRSMSX108.ger.corp.intel.com> And what about no recovery in case of failure mid-task? I can see that there's some TaskFlow integration done. This lib seems to address these issues (if used with the taskflow.persistence submodule, which Cinder isn't using). Any plans for further integration with TaskFlow? -----Original Message----- From: John Griffith [mailto:john.griffith8 at gmail.com] Sent: Monday, December 8, 2014 11:28 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [cinder] HA issues On Mon, Dec 8, 2014 at 8:18 AM, Dulko, Michal wrote: > Hi all! > > > > At the summit during crossproject HA session there were multiple > Cinder issues mentioned. These can be found in this etherpad: > https://etherpad.openstack.org/p/kilo-crossproject-ha-integration > > > > Is there any ongoing effort to fix these issues? Is there an idea how > to approach any of them? > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Thanks for the nudge on this; personally I hadn't seen this. So the items are pretty vague, there are def plans to try and address a number of race conditions etc. I'm not aware of any specific plans to focus on HA from this perspective, or anybody stepping up to work on it, but it certainly would be great for somebody to dig in and start fleshing this out.
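The recovery-after-mid-task-failure that Michal asks about is essentially checkpointing: persist which steps of a flow completed so a restarted service resumes instead of starting over. A toy, library-free sketch of the idea — TaskFlow's real persistence backends are far more involved, and everything below is illustrative:

```python
import json


class ResumableFlow:
    """Toy checkpointing sketch: remember which steps finished so a
    restarted run resumes rather than redoing work (the idea behind
    running TaskFlow with a persistence backend)."""

    def __init__(self, steps, state=None):
        self.steps = steps                   # list of (name, callable)
        self.state = state or {"done": []}   # reloaded after a crash
        self.last_snapshot = None

    def run(self):
        for name, fn in self.steps:
            if name in self.state["done"]:
                continue                     # finished before the crash
            fn()
            self.state["done"].append(name)
            # checkpoint after every step; JSON stands in for a real store
            self.last_snapshot = json.dumps(self.state)
        return self.state["done"]
```

Feeding a previously persisted `state` back into a fresh `ResumableFlow` skips the steps that already ran, which is exactly what mid-task crash recovery needs.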
_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From brian at python.org Tue Dec 9 16:15:34 2014 From: brian at python.org (Brian Curtin) Date: Tue, 9 Dec 2014 10:15:34 -0600 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <54870FCC.3010006@dague.net> References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> Message-ID: On Tue, Dec 9, 2014 at 9:05 AM, Sean Dague wrote: > - [H305 H306 H307] Organize your imports according to the `Import order > template`_ and `Real-world Import Order Examples`_ below. > > I think these remain reasonable guidelines, but H302 is exceptionally > tricky to get right, and we keep not getting it right. > > H305-307 are actually impossible to get right. Things come in and out of > stdlib in python all the time. Do you have concrete examples of where this has been an issue? Modules are only added roughly every 18 months and only on the 3.x line as of the middle of 2010 when 2.7.0 was released. Nothing should have left the 2.x line within that time as well, and I don't recall anything having completed a deprecation cycle on the 3.x side. 
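The question Brian raises is exactly what makes H305-307 unstable: "is this module stdlib?" can only be answered by asking the interpreter you happen to be running, and the answer changes between versions (argparse, for instance, was a third-party package on 2.6 and stdlib from 2.7). A small illustration of that interpreter-dependence, using today's importlib; the helper name is invented:

```python
import importlib.util


def import_origin(name):
    """Ask *this* interpreter where a module would come from.

    The answer is version-dependent, which is why import-order style
    checks built on it can't give stable results across interpreters.
    """
    spec = importlib.util.find_spec(name)
    if spec is None:
        return "not importable here"
    return spec.origin or "namespace package"
```

Run under different Python versions, the same module name can report a site-packages path, a stdlib path, "built-in", or nothing at all — so any check that classifies imports this way is only consistent per interpreter.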
From vkozhukalov at mirantis.com Tue Dec 9 16:24:43 2014 From: vkozhukalov at mirantis.com (Vladimir Kozhukalov) Date: Tue, 9 Dec 2014 20:24:43 +0400 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: <20141209150048.GB5494@jimrollenhagen.com> References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> <20141209150048.GB5494@jimrollenhagen.com> Message-ID: We assume the next step will be to put provision data (disk partition > scheme, maybe other data) into driver_info and make the Fuel Agent driver > able to serialize those data (special format) and implement a > corresponding data driver in Fuel Agent for this format. Again, very > simple. Maybe it is time to think of having an Ironic metadata service > (just maybe). I'm ok with the format, my question is: what and how is going to collect > all the data and put into say driver_info? Fuel has a web service which stores node info in its database. When the user clicks the "Deploy" button, this web service serializes the deployment task and puts it into the task runner (another Fuel component). The task runner then parses the task and adds a node to Ironic via the REST API (including driver_info). Then it calls the Ironic deploy method and Ironic uses the Fuel Agent driver to deploy the node. The corresponding Fuel spec is here: https://review.openstack.org/#/c/138301/. Again, it is a zero-step implementation. Honestly, I think writing a roadmap right now is not very rational, since > I am not even sure people are interested in widening Ironic use cases. Some > of the comments were not even constructive, like "I don't understand what > your use case is, please use IPA".
Nothing personal, just estimating cost vs value :) > Also "why not use IPA" is a fair question for me and the answer is about > use cases (as you stated it before), not about missing features of IPA, > right? You are right it is a fair question, and answer is exactly about *missing features*. Nova is not our case. Fuel is totally about deployment. There is some in > common Here when we have a difficult point. Major use case for Ironic is to be > driven by Nova (and assisted by Neutron). Without these two it's hard to > understand how Fuel Agent is going to fit into the infrastructure. And > hence my question above about where your json comes from. In the current > Ironic world the same data is received partly from Nova flavor, partly > managed by Neutron completely. > I'm not saying it can't change - we do want to become more stand-alone. > E.g. we can do without Neutron right now. I think specifying the source of > input data for Fuel Agent in the Ironic infrastructure would help a lot > understand, how well Ironic and Fuel Agent could play together. According to the information I have, correct me if I'm wrong, Ironic currently is on the stage of becoming stand-alone service. That is the reason why this spec has been brought up. Again we need something to manage power/tftp/dhcp to substitute Cobbler. Ironic looks like a suitable tool, but we need this driver. We are not going to break anything. We have resources to test and support this driver. And I can not use IPA *right now* because it does not have features I need. I can not wait for next half a year for these features to be implemented. Why can't we add this (Fuel Agent) driver and then if IPA implements what we need we can switch to IPA. The only alternative for me right now is to implement my own power/tftp/dhcp management solution like I did with Fuel Agent when I did not get approve for including advanced disk partitioning. Questions are: Is Ironic interested in this use case or not? 
Is Ironic interested in getting more development resources? The only case in which it's rational for us to spend our resources to develop Ironic is when we get something back. We are totally pragmatic; we just address our users' wishes and issues. It is ok for us to use any tool which provides what we need (IPA, Fuel Agent, any other). We need advanced disk partitioning and power/tftp/dhcp management by March 2015. Is it possible to get this from Ironic + IPA? I doubt it. Is it possible to get this from Ironic + Fuel Agent? Yes it is. Is it possible to get this from Fuel power/tftp/dhcp management + Fuel Agent? Yes it is. So, I have two options right now: Ironic + Fuel Agent or Fuel power/tftp/dhcp management + Fuel Agent. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean at dague.net Tue Dec 9 16:27:29 2014 From: sean at dague.net (Sean Dague) Date: Tue, 09 Dec 2014 11:27:29 -0500 Subject: Re: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> Message-ID: <548722F1.9020700@dague.net>
argparse - which is stdlib in 2.7, not in 2.6. So hacking on 2.6 would give different results from 2.7. Less of an issue now that 2.6 support in OpenStack has been dropped for most projects, but it's a very concrete example. This check should run on any version of python and give the same results. It does not, because it queries python to know what's in stdlib vs. not. Having a deprecation cycle isn't the concern here; it's the checks working the same on python 2.7, 3.3, 3.4, 3.5. -Sean -- Sean Dague http://dague.net From fungi at yuggoth.org Tue Dec 9 16:28:08 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Dec 2014 16:28:08 +0000 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5487155B.4060902@inaugust.com> References: <5486DF7F.7080706@dague.net> <5487155B.4060902@inaugust.com> Message-ID: <20141209162806.GQ2497@yuggoth.org> On 2014-12-09 07:29:31 -0800 (-0800), Monty Taylor wrote: > I DO like something warning about commit subject length ... but maybe > that should be a git-review function or something. [...] How about a hook in Gerrit to refuse commits based on some simple (maybe even project-specific) rules? -- Jeremy Stanley From duncan.thomas at gmail.com Tue Dec 9 16:39:46 2014 From: duncan.thomas at gmail.com (Duncan Thomas) Date: Tue, 9 Dec 2014 16:39:46 +0000 Subject: [openstack-dev] [cinder] HA issues In-Reply-To: <3895CB36EABD4E49B816E6081F3B00171F6025EE@IRSMSX108.ger.corp.intel.com> References: <3895CB36EABD4E49B816E6081F3B00171F601C36@IRSMSX108.ger.corp.intel.com> <3895CB36EABD4E49B816E6081F3B00171F6025EE@IRSMSX108.ger.corp.intel.com> Message-ID: There are some significant limitations to the pure taskflow approach; however, some combination of atomic micro-state management and taskflow persistence is being looked at. Duncan Thomas On Dec 9, 2014 6:24 PM, "Dulko, Michal" wrote: > And what about no recovery in case of failure mid-task? I can see that > there's some TaskFlow integration done.
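The "atomic micro-state management plus persistence" Duncan mentions comes down to durably recording each completed step, so that a service restarted after a mid-task failure can resume instead of starting over. A toy sketch of the idea (illustrative only; this is neither Cinder's nor TaskFlow's actual code):

```python
import json
import os
import tempfile

# Toy record-and-resume runner: after each successful step, the set of
# completed step names is written out atomically, so a process killed
# mid-flow can be re-run and will skip the work it already finished.

def run_with_checkpoints(steps, state_path):
    """Run (name, callable) steps, persisting progress after each one."""
    done = set()
    if os.path.exists(state_path):
        with open(state_path) as f:
            done = set(json.load(f))
    for name, func in steps:
        if name in done:
            continue  # completed in a previous run
        func()
        done.add(name)
        # atomic write: temp file plus rename, so a crash mid-write
        # never leaves a corrupt checkpoint behind
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(state_path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(sorted(done), f)
        os.replace(tmp, state_path)
```

A run that dies between steps leaves the checkpoint behind; re-running the same flow picks up where it stopped. TaskFlow's persistence layer is a production-grade take on this record-and-resume idea.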
This lib seems to address these > issues (if used with the taskflow.persistence submodule, which Cinder isn't > using). Any plans for further integration with TaskFlow? > > -----Original Message----- > From: John Griffith [mailto:john.griffith8 at gmail.com] > Sent: Monday, December 8, 2014 11:28 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [cinder] HA issues > > On Mon, Dec 8, 2014 at 8:18 AM, Dulko, Michal > wrote: > > Hi all! > > > > At the summit during the crossproject HA session there were multiple > > Cinder issues mentioned. These can be found in this etherpad: > > https://etherpad.openstack.org/p/kilo-crossproject-ha-integration > > > > Is there any ongoing effort to fix these issues? Is there an idea how > > to approach any of them? > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > Thanks for the nudge on this; personally I hadn't seen it. So the items are pretty vague; there are definite plans to try and address a number of race conditions etc. I'm not aware of any specific plans to focus on HA from this perspective, or anybody stepping up to work on it, but it certainly would be great for somebody to dig in and start fleshing this out. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From johannes at erdfelt.com Tue Dec 9 16:39:49 2014 From: johannes at erdfelt.com (Johannes Erdfelt) Date: Tue, 9 Dec 2014 08:39:49 -0800 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <548722F1.9020700@dague.net> References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> <548722F1.9020700@dague.net> Message-ID: <20141209163949.GJ26706@sventech.com> On Tue, Dec 09, 2014, Sean Dague wrote: > This check should run on any version of python and give the same > results. It does not, because it queries python to know what's in stdlib > vs. not. Just to underscore that it's difficult to get right, I found out recently that hacking doesn't do a great job of figuring out what is a standard library. I've installed some libraries in 'develop' mode and recent hacking thinks they are standard libraries and complains about the order. JE From ayoung at redhat.com Tue Dec 9 16:39:51 2014 From: ayoung at redhat.com (Adam Young) Date: Tue, 09 Dec 2014 11:39:51 -0500 Subject: [openstack-dev] [Keystone] OSAAA-Policy In-Reply-To: References: <54862B9C.8060809@redhat.com> Message-ID: <548725D7.1070800@redhat.com> On 12/09/2014 10:57 AM, Brad Topol wrote: > +1! Makes sense. > > --Brad > > > Brad Topol, Ph.D. > IBM Distinguished Engineer > OpenStack > (919) 543-0646 > Internet: btopol at us.ibm.com > Assistant: Kendra Witherspoon (919) 254-0680 > > > > From: Morgan Fainberg > To: Adam Young , "OpenStack Development Mailing > List (not for usage questions)" > Date: 12/08/2014 06:07 PM > Subject: Re: [openstack-dev] [Keystone] OSAAA-Policy > ------------------------------------------------------------------------ > > > > I agree that this library should not have "Keystone" in the name.
This > is more along the lines of pycadf, something that is housed under the > OpenStack Identity Program but it is more interesting for general > use cases than exclusively something that is tied to Keystone specifically. openstack-policy? osid-policy? It really should not position itself as a standard. pycadf is more general purpose, but we are not looking to replace all of the rules languages out there. > > Cheers, > Morgan > > -- > Morgan Fainberg > > On December 8, 2014 at 4:55:20 PM, Adam Young (_ayoung at redhat.com_ > ) wrote: > > The Policy library has been nominated for promotion from Oslo > incubator. The Keystone team was formerly known as the Identity > Program, but now is Authentication, Authorization, and Audit, or AAA. > > Does the prefix OSAAA for the library make sense? It should not be > Keystone-policy. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtreinish at kortar.org Tue Dec 9 16:41:06 2014 From: mtreinish at kortar.org (Matthew Treinish) Date: Tue, 9 Dec 2014 11:41:06 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> Message-ID: <20141209164106.GA1012@Sazabi.treinish> On Tue, Dec 09, 2014 at 10:15:34AM -0600, Brian Curtin wrote: > On Tue, Dec 9, 2014 at 9:05 AM, Sean Dague wrote: > > - [H305 H306 H307] Organize your imports according to the `Import order > > template`_ and `Real-world Import Order Examples`_ below.
> > > > I think these remain reasonable guidelines, but H302 is exceptionally > > tricky to get right, and we keep not getting it right. > > > > H305-307 are actually impossible to get right. Things come in and out of > > stdlib in python all the time. > > Do you have concrete examples of where this has been an issue? Modules > are only added roughly every 18 months and only on the 3.x line as of > the middle of 2010 when 2.7.0 was released. Nothing should have left > the 2.x line within that time as well, and I don't recall anything > having completed a deprecation cycle on the 3.x side. > I don't have any examples of stdlib removals (and there may not be any) but that isn't the only issue with the import grouping rules. The reverse will also cause issues: adding a library to stdlib which was previously a third-party module. The best example I've found is pathlib, which was added to stdlib in 3.4: https://docs.python.org/3/library/pathlib.html but a third-party module on all the previous releases: https://pypi.python.org/pypi/pathlib So, the hacking rule will behave differently depending on which version of python you're running with. There really isn't a way around that; if the rule can't behave consistently and enforce the same behavior between releases, we shouldn't be using it. Especially as things are trying to migrate to use python 3 where possible. I've seen proposals to hard code the list of stdlib in the rule to a specific python version, which would make the behavior consistent, but I am very much opposed to that because it means we're not actually enforcing the correct thing, which I think is just as big an issue. We don't want the hacking checks to error out and say that pathlib is a 3rd party module even if we're running it on python 3.4; that would just be very confusing. The middle ground I proposed was to not differentiate the third-party and stdlib import groups and just check the local project's import grouping against the others.
This would make the behavior consistent between python versions and still provide some useful feedback. But if the consensus is to just remove the rules I'm fine with that too. -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From johannes at erdfelt.com Tue Dec 9 16:41:18 2014 From: johannes at erdfelt.com (Johannes Erdfelt) Date: Tue, 9 Dec 2014 08:41:18 -0800 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5486DF7F.7080706@dague.net> References: <5486DF7F.7080706@dague.net> Message-ID: <20141209164118.GK26706@sventech.com> On Tue, Dec 09, 2014, Sean Dague wrote: > I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. > > 1 - the entire H8* group. This doesn't function on python code, it > functions on git commit message, which makes it tough to run locally. It > also would be a reason to prevent us from not rerunning tests on commit > message changes (something we could do after the next gerrit update). One of the problems with the H8* tests is that it can reject a commit message generated by git itself. I had a 'git revert' rejected because the first line was too long :( JE From sorlando at nicira.com Tue Dec 9 16:41:43 2014 From: sorlando at nicira.com (Salvatore Orlando) Date: Tue, 9 Dec 2014 16:41:43 +0000 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: I would like to chime into this discussion wearing my plugin developer hat. We (the VMware team) have looked very carefully at the current proposal for splitting off drivers and plugins from the main source code tree. Therefore the concerns you've heard from Gary are not just ramblings but are the results of careful examination of this proposal. 
While we agree with the final goal, the feeling is that for many plugin maintainers this process change might be too much for what can be accomplished in a single release cycle. As a member of the drivers team, I am still very supportive of the split; I just want to make sure that it's made in a sustainable way. I also understand that "sustainability" has been one of the requirements of the current proposal, and therefore we should all be on the same page on this aspect. However, we did a simple exercise trying to assess the amount of work needed to achieve something which might be acceptable to satisfy the process. Without going into too many details, this requires efforts for: - refactoring the code to achieve a plugin module simple and thin enough to satisfy the requirements (unfortunately a radical approach like the one in [1], with a reference to an external library, is not pursuable for us) - maintaining code repositories outside of the neutron scope and the necessary infrastructure - reinforcing our CI infrastructure, and improving our error detection and log analysis capabilities to improve reaction times upon failures triggered by upstream changes. As you know, even if the plugin interface is solid-ish, the dependency on the db base class increases the chances of upstream changes breaking 3rd party plugins. The feedback from our engineering team is that satisfying the requirements of this new process might not be feasible in the Kilo timeframe, both for existing plugins and for new plugins and drivers that should be upstreamed (there are a few proposed on neutron-specs at the moment, which are all in -2 status considering the impending approval of the split out). The questions I would like to bring to the wider community are therefore the following: 1 - Is there a possibility of making a further concession on the current proposal, where maintainers are encouraged to experiment with the plugin split in Kilo, but will only actually be required to do it in the next release?
2 - What could be considered acceptable as a new plugin? I understand that they would be accepted only as "thin integration modules", which ideally should just be a pointer to code living somewhere else. I'm not questioning the validity of this approach, but it has been brought to my attention that this will actually be troubling for teams which have made an investment in the previous release cycles to upstream plugins following the "old" process. 3 - Regarding the above discussion on "ML2 or not ML2": the point on co-gating is well taken. Eventually we'd like to remove this binding, because I believe the ML2 subteam would also like to have more freedom on their plugin. Do we already have an idea about how to do that without completely moving away from the db_base class approach? Thanks for your attention and for reading through this. Salvatore [1] http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/vmware/plugin.py#n22 On 8 December 2014 at 21:51, Maru Newby wrote: > > On Dec 7, 2014, at 10:51 AM, Gary Kotton wrote: > > Hi Kyle, > I am not missing the point. I understand the proposal. I just think that > it has some shortcomings (unless I misunderstand, which will certainly not > be the first time and most definitely not the last). The thinning out is to > have a shim in place. I understand this and this will be the entry point > for the plugin. I do not have a concern for this. My concern is that we are > not doing this with ML2 off the bat. That should lead by example as it > is our reference architecture. Let's not kid anyone: we are going to > hit some problems with the decomposition. I would prefer that it be done > with the default implementation. Why? > > The proposal is to move vendor-specific logic out of the tree to increase > vendor control over such code while decreasing load on reviewers.
ML2 > doesn't contain vendor-specific logic - that's the province of ML2 drivers > - so it is not a good target for the proposed decomposition by itself. > > - Because we will fix them quicker, as it is something that prevents > Neutron from moving forward > > - We will just need to fix in one place first and not in N (where > N is the number of vendor plugins) > > - This is a community effort, so we will have a lot more eyes on > it > > - It will provide a reference architecture for all new plugins > that want to be added to the tree > > - It will provide a working example for plugins that are already in > tree and are to be replaced by the shim > > If we really want to do this, we can say freeze all development (which > is just approvals for patches) for a few days so that we can just > focus on this. I stated what I think should be the process on the review. > For those who do not feel like finding the link: > > 1. Create a stackforge project for ML2 > > 2. Create the shim in Neutron > > 3. Update devstack to use the two repos and the shim > > When #3 is up and running we switch to that as the gate. Then we > start a stopwatch on all other plugins. > > As was pointed out on the spec (see Miguel's comment on r15), the ML2 > plugin and the OVS mechanism driver need to remain in the main Neutron repo > for now. Neutron gates on ML2+OVS, and landing a breaking change in the > Neutron repo along with its corresponding fix to a separate ML2 repo would > be all but impossible under the current integrated gating scheme. > Plugins/drivers that do not gate Neutron have no such constraint. > > > Maru > > > > Sure, I'll catch you on IRC tomorrow. I guess that you guys will bash > out the details at the meetup. Sadly I will not be able to attend, so you > will have to delay on the tar and feathers.
> > Thanks > > Gary > > > > From: "mestery at mestery.com" > > Reply-To: OpenStack List > > Date: Sunday, December 7, 2014 at 7:19 PM > > To: OpenStack List > > Cc: "openstack at lists.openstack.org" > > Subject: Re: [openstack-dev] [Neutron] Core/Vendor code decomposition > > > > Gary, you are still missing the point of this proposal. Please see my comments in review. We are not forcing things out of tree, we are thinning them. The text you quoted in the review makes that clear. We will look at further decomposing ML2 post Kilo, but we have to be realistic about what we can accomplish during Kilo. > > > > Find me on IRC Monday morning and we can discuss further if you still have questions and concerns. > > > > Thanks! > > Kyle > > > > On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton wrote: > >> Hi, > >> I have raised my concerns on the proposal. I think that all plugins should be treated on an equal footing. My main concern is that having the ML2 plugin in tree whilst the others are moved out of tree will be problematic. I think that the model will be complete if ML2 were also out of tree. This would help crystallize the idea and make sure that the model works correctly. > >> Thanks > >> Gary > >> > >> From: "Armando M." > >> Reply-To: OpenStack List > >> Date: Saturday, December 6, 2014 at 1:04 AM > >> To: OpenStack List , " openstack at lists.openstack.org" > >> Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition > >> > >> Hi folks, > >> > >> For a few weeks now the Neutron team has worked tirelessly on [1]. > >> > >> This initiative stems from the fact that as the project matures, evolution of processes and contribution guidelines need to evolve with it. This is to ensure that the project can keep on thriving in order to meet the needs of an ever-growing community.
> >> The effort of documenting intentions, and fleshing out the various > details of the proposal is about to reach an end, and we'll soon kick the > tires to put the proposal into practice. Since the spec has grown pretty > big, I'll try to capture the tl;dr below. > >> > >> If you have any comments, please do not hesitate to raise them here > and/or reach out to us. > >> > >> tl;dr >>> > >> > >> From the Kilo release, we'll initiate a set of steps to change the > following areas: > >> - Code structure: every plugin or driver that exists or wants to > exist as part of the Neutron project is decomposed into a slim vendor > integration (which lives in the Neutron repo), plus a bulkier vendor > library (which lives in an independent publicly available repo); > >> - Contribution process: this extends to the following aspects: > >> - Design and Development: the process is largely unchanged > for the part that pertains to the vendor integration; the maintainer team is > fully auto-governed for the design and development of the vendor library; > >> - Testing and Continuous Integration: maintainers will be > required to support their vendor integration with 3rd party CI testing; the > requirements for 3rd party CI testing are largely unchanged; > >> - Defect management: the process is largely unchanged; > issues affecting the vendor library can be tracked with whichever > tool/process the maintainer sees fit. In cases where vendor library fixes > need to be reflected in the vendor integration, the usual OpenStack defect > management processes apply. > >> - Documentation: there will be some changes to the way > plugins and drivers are documented, with the intention of promoting > discoverability of the integrated solutions. > >> - Adoption and transition plan: we strongly advise maintainers to > stay abreast of the developments of this effort, as their code, their CI, > etc. will be affected. The core team will provide guidelines and support > throughout this cycle to ensure a smooth transition.
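To picture the "slim vendor integration plus bulkier vendor library" split described in the first bullet above, a hypothetical in-tree shim might look roughly like this (all names here are invented for illustration; this is not any real plugin's code):

```python
# Hypothetical thin "vendor integration" shim of the kind the proposal
# describes: the bulky driver logic lives in an externally published
# vendor library, and the in-tree module only wires up an entry point.
try:
    from acme_neutron_lib import plugin as _vendor  # out-of-tree library
except ImportError:
    _vendor = None


class AcmePlugin(object):
    """In-tree entry point that delegates everything to the vendor lib."""

    def __init__(self):
        if _vendor is None:
            raise RuntimeError(
                "acme-neutron-lib is not installed; the in-tree shim "
                "carries no driver logic of its own")
        self._impl = _vendor.Plugin()

    def create_network(self, context, network):
        # pure pass-through: fixes land in the vendor repo, not here
        return self._impl.create_network(context, network)
```

Under this structure, almost every change a vendor makes lands in their own repo under their own review process, while the in-tree shim stays small enough that upstream reviewers rarely need to touch it.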
> >> To learn more, please refer to [1]. > >> > >> Many thanks, > >> Armando > >> > >> [1] https://review.openstack.org/#/c/134680 > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berrange at redhat.com Tue Dec 9 15:39:35 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Tue, 9 Dec 2014 15:39:35 +0000 Subject: [openstack-dev] [Nova] question about "Get Guest Info" row in HypervisorSupportMatrix In-Reply-To: <7997383.8LhO9nnzxZ@dblinov.sw.ru> References: <7997383.8LhO9nnzxZ@dblinov.sw.ru> Message-ID: <20141209153935.GM29167@redhat.com> On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote: > Hello! > > There is a feature in HypervisorSupportMatrix > (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called "Get Guest > Info". Does anybody know what it means? I haven't found anything like > it in the nova API, in horizon, or in the nova command line. I've pretty much no idea what the intention was for that field.
I've been working on formally documenting all those things, but draw a blank for that FYI: https://review.openstack.org/#/c/136380/1/doc/hypervisor-support.ini Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From anteaya at anteaya.info Tue Dec 9 16:55:08 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Tue, 09 Dec 2014 09:55:08 -0700 Subject: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time In-Reply-To: References: Message-ID: <5487296C.5010406@anteaya.info> On 12/09/2014 08:32 AM, Kurt Taylor wrote: > All of the feedback so far has supported moving the existing IRC > Third-party CI meeting to better fit a worldwide audience. > > The consensus is that we will have only 1 meeting per week at alternating > times. You can see examples of other teams with alternating meeting times > at: https://wiki.openstack.org/wiki/Meetings > > This way, one week we are good for one part of the world, the next week for > the other. You will not need to attend both meetings, just the meeting time > every other week that fits your schedule. > > Proposed times in UTC are being voted on here: > https://www.google.com/moderator/#16/e=21b93c > > Please vote on the time that is best for you. I would like to finalize the > new times this week. > > Thanks! > Kurt Taylor (krtaylor) > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Note that Kurt is welcome to do as he pleases with his own time. I will be having meetings in the irc channel for the times that I have booked. Thanks, Anita. 
From sean at dague.net Tue Dec 9 16:56:54 2014 From: sean at dague.net (Sean Dague) Date: Tue, 09 Dec 2014 11:56:54 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <20141209162806.GQ2497@yuggoth.org> References: <5486DF7F.7080706@dague.net> <5487155B.4060902@inaugust.com> <20141209162806.GQ2497@yuggoth.org> Message-ID: <548729D6.1030805@dague.net> On 12/09/2014 11:28 AM, Jeremy Stanley wrote: > On 2014-12-09 07:29:31 -0800 (-0800), Monty Taylor wrote: >> I DO like something warning about commit subject length ... but maybe >> that should be a git-review function or something. > [...] > > How about a hook in Gerrit to refuse commits based on some simple > (maybe even project-specific) rules? > Honestly, any hard rejection ends up problematic. For instance, it means it's impossible to include actual urls in commit messages to reference things without a url shortener much of the time. -Sean -- Sean Dague http://dague.net From sean at dague.net Tue Dec 9 16:57:23 2014 From: sean at dague.net (Sean Dague) Date: Tue, 09 Dec 2014 11:57:23 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <20141209164118.GK26706@sventech.com> References: <5486DF7F.7080706@dague.net> <20141209164118.GK26706@sventech.com> Message-ID: <548729F3.5050805@dague.net> On 12/09/2014 11:41 AM, Johannes Erdfelt wrote: > On Tue, Dec 09, 2014, Sean Dague wrote: >> I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. >> >> 1 - the entire H8* group. This doesn't function on python code, it >> functions on git commit message, which makes it tough to run locally. It >> also would be a reason to prevent us from not rerunning tests on commit >> message changes (something we could do after the next gerrit update). > > One of the problems with the H8* tests is that it can reject a commit > message generated by git itself. 
> > I had a 'git revert' rejected because the first line was too long :( +1 I've had the gerrit revert button reject me for the same reason. -Sean -- Sean Dague http://dague.net From kevin.mitchell at rackspace.com Tue Dec 9 16:58:01 2014 From: kevin.mitchell at rackspace.com (Kevin L. Mitchell) Date: Tue, 9 Dec 2014 10:58:01 -0600 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <54870FCC.3010006@dague.net> References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> Message-ID: <1418144281.4233.5.camel@einstein.kev> On Tue, 2014-12-09 at 10:05 -0500, Sean Dague wrote: > Sure, the H8* group is git commit messages. It's checking for line > length in the commit message. I agree the H8* group should be dropped. It would be appropriate to create a new gate check job that validated that, but it should not be part of hacking. > H3* are all the module import rules: > > Imports > ------- > - [H302] Do not import objects, only modules (*) > - [H301] Do not import more than one module per line (*) > - [H303] Do not use wildcard ``*`` import (*) > - [H304] Do not make relative imports > - Order your imports by the full module path > - [H305 H306 H307] Organize your imports according to the `Import order > template`_ and `Real-world Import Order Examples`_ below. > > I think these remain reasonable guidelines, but H302 is exceptionally > tricky to get right, and we keep not getting it right. > > H305-307 are actually impossible to get right. Things come in and out of > stdlib in python all the time. > > > I think it's time to just decide to be reasonable Humans and that these > are guidelines. > > The H3* set of rules is also why you have to install *all* of > requirements.txt and test-requirements.txt in your pep8 tox target, > because H302 actually inspects the sys.modules to attempt to figure out > if things are correct. 
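For readers skimming the thread, the patterns the quoted H3* rules are about can be shown in a few lines (hand-written illustrations, not output from hacking itself):

```python
# Hand-picked illustrations of the import patterns the quoted rules
# cover; the commented-out lines are the shapes the checks would flag.

# H301: no more than one module per line
# import os, sys                  # flagged
import os                         # one module per line
import sys

# H303: no wildcard imports, which hide where names come from
# from os.path import *           # flagged
from os import path               # imports a module instead

# H302-style guidance: import modules rather than objects, so every
# name stays qualified at its point of use:
joined = path.join("tmp", "example")
print(sys.version_info[0], os.sep, joined)
```

The rules that compare file paths or probe the interpreter (H302, H305-H307) are the contentious ones; the purely syntactic checks above are comparatively easy to enforce.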
I agree that dropping H302 and the grouping checks makes sense. I think > we should keep the H301, H303, H304, and the basic ordering checks, > however; it doesn't seem to me that these would be that difficult to > implement or maintain. -- Kevin L. Mitchell Rackspace From morgan.fainberg at gmail.com Tue Dec 9 16:58:17 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Tue, 9 Dec 2014 10:58:17 -0600 Subject: Re: [openstack-dev] [Keystone] OSAAA-Policy In-Reply-To: <548725D7.1070800@redhat.com> References: <54862B9C.8060809@redhat.com> <548725D7.1070800@redhat.com> Message-ID: On December 9, 2014 at 10:43:51 AM, Adam Young (ayoung at redhat.com) wrote: On 12/09/2014 10:57 AM, Brad Topol wrote: +1! Makes sense. --Brad Brad Topol, Ph.D. IBM Distinguished Engineer OpenStack (919) 543-0646 Internet: btopol at us.ibm.com Assistant: Kendra Witherspoon (919) 254-0680 From: Morgan Fainberg To: Adam Young, "OpenStack Development Mailing List (not for usage questions)" Date: 12/08/2014 06:07 PM Subject: Re: [openstack-dev] [Keystone] OSAAA-Policy I agree that this library should not have "Keystone" in the name. This is more along the lines of pycadf, something that is housed under the OpenStack Identity Program but it is more interesting for general use cases than exclusively something that is tied to Keystone specifically. openstack-policy? osid-policy? It really should not position itself as a standard. pycadf is more general purpose, but we are not looking to replace all of the rules languages out there. Just keep in mind we're a policy rules enforcement library (with whatever name we end up with). This is obviously one of the hard computer science issues (naming things). I wasn't clear: I didn't mean to imply usage would be like pycadf (a more global standard), but just that it was not exclusive (at least within the OpenStack world) to be used with Keystone.
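For context on what the "policy rules enforcement library" above does: it maps an action to a rule expression (things like role:admin) and evaluates that rule against the caller's credentials. A toy evaluator conveying the idea, using an invented subset of the syntax rather than the real module's API:

```python
# Toy illustration of the rule-language idea behind the policy library
# under discussion: look up the rule for an action, then evaluate it
# against the caller's credentials. Invented subset, not the real API.

def check(rule, creds):
    """Evaluate a tiny rule grammar: 'role:X' clauses joined by ' or '."""
    for clause in rule.split(" or "):
        kind, _, value = clause.partition(":")
        if kind == "role" and value in creds.get("roles", ()):
            return True
    return False


# Hypothetical policy file contents mapping actions to rules.
POLICY = {
    "volume:delete": "role:admin or role:volume_admin",
    "volume:get": "role:member",
}


def enforce(action, creds):
    return check(POLICY[action], creds)
```

The real rule language also supports negation, attribute matches against the request, and references to other named rules, which is why it is worth shipping as a shared library rather than reimplementing per project.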
Cheers, Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Dec 9 17:00:19 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Dec 2014 17:00:19 +0000 Subject: [openstack-dev] [all] Deprecating exceptions In-Reply-To: <54871A58.6090309@redhat.com> References: <54203B18.3040809@sheep.art.pl> <54203F31.6080108@redhat.com> <54871A58.6090309@redhat.com> Message-ID: <20141209170018.GR2497@yuggoth.org> On 2014-12-09 16:50:48 +0100 (+0100), Jakub Ruzicka wrote: > On 22.9.2014 17:24, Ihar Hrachyshka wrote: [...] > > Aren't clients supposed to be backwards compatible? Isn't it the exact > > reason why we don't maintain stable branches for client modules? > > Supposed, yes. However, it's not ensured/enforced any way, so it's as > good as an empty promise. [...] We do test changes to the client libraries against currently supported stable branches. If distros want to perform similar regression testing against their supported releases this is also welcome. See for example, this python-novaclient change last week being tested against a stable/icehouse branch DevStack environment: That is a voting job. If it hadn't succeeded the change couldn't have been approved to merge. -- Jeremy Stanley From sean at dague.net Tue Dec 9 17:05:32 2014 From: sean at dague.net (Sean Dague) Date: Tue, 09 Dec 2014 12:05:32 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <1418144281.4233.5.camel@einstein.kev> References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> <1418144281.4233.5.camel@einstein.kev> Message-ID: <54872BDC.5070407@dague.net> On 12/09/2014 11:58 AM, Kevin L. Mitchell wrote: > On Tue, 2014-12-09 at 10:05 -0500, Sean Dague wrote: >> Sure, the H8* group is git commit messages. It's checking for line >> length in the commit message. > > I agree the H8* group should be dropped. 
It would be appropriate to > create a new gate check job that validated that, but it should not be > part of hacking. > >> H3* are all the module import rules: >> >> Imports >> ------- >> - [H302] Do not import objects, only modules (*) >> - [H301] Do not import more than one module per line (*) >> - [H303] Do not use wildcard ``*`` import (*) >> - [H304] Do not make relative imports >> - Order your imports by the full module path >> - [H305 H306 H307] Organize your imports according to the `Import order >> template`_ and `Real-world Import Order Examples`_ below. >> >> I think these remain reasonable guidelines, but H302 is exceptionally >> tricky to get right, and we keep not getting it right. >> >> H305-307 are actually impossible to get right. Things come in and out of >> stdlib in python all the time. >> >> >> I think it's time to just decide to be reasonable Humans and that these >> are guidelines. >> >> The H3* set of rules is also why you have to install *all* of >> requirements.txt and test-requirements.txt in your pep8 tox target, >> because H302 actually inspects the sys.modules to attempt to figure out >> if things are correct. > > I agree that dropping H302 and the grouping checks makes sense. I think > we should keep the H301, H303, H304, and the basic ordering checks, > however; it doesn't seem to me that these would be that difficult to > implement or maintain. 
Well, be careful what you think is easy - https://github.com/openstack-dev/hacking/blob/master/hacking/checks/imports.py :) -Sean -- Sean Dague http://dague.net From fungi at yuggoth.org Tue Dec 9 17:07:30 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Dec 2014 17:07:30 +0000 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <548729D6.1030805@dague.net> References: <5486DF7F.7080706@dague.net> <5487155B.4060902@inaugust.com> <20141209162806.GQ2497@yuggoth.org> <548729D6.1030805@dague.net> Message-ID: <20141209170730.GS2497@yuggoth.org> On 2014-12-09 11:56:54 -0500 (-0500), Sean Dague wrote: > Honestly, any hard rejection ends up problematic. For instance, it > means it's impossible to include actual urls in commit messages to > reference things without a url shortener much of the time. Fair enough. I think this makes it a human problem which we're not going to solve by applying more technology. Drop all of H8XX, make Gerrit preserve votes on commit-message-only patchset updates, decree no more commit message -1s from reviewers, and make it socially acceptable to just edit commit messages of changes you review to bring them up to acceptable standards. -- Jeremy Stanley From mzoeller at de.ibm.com Tue Dec 9 17:15:01 2014 From: mzoeller at de.ibm.com (Markus Zoeller) Date: Tue, 9 Dec 2014 18:15:01 +0100 Subject: [openstack-dev] [Nova] question about "Get Guest Info" row in HypervisorSupportMatrix Message-ID: > > On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote: > > > > Hello! > > > > There is a feature in HypervisorSupportMatrix > > (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called "Get Guest > > Info". Does anybody know, what does it mean? I haven't found anything like > > this neither in nova api nor in horizon and nova command line. 
I think this maps to the nova driver function "get_info": https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4054 I believe (and didn't double-check) that this is used e.g. by the Nova CLI via the `nova show [--minimal] ` command. I tried to map the features of the hypervisor support matrix to specific nova driver functions on this wiki page: https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DriverAPI > On Tue Dec 9 15:39:35 UTC 2014, Daniel P. Berrange wrote: > I've pretty much no idea what the intention was for that field. I've > been working on formally documenting all those things, but draw a blank > for that > > FYI: > > https://review.openstack.org/#/c/136380/1/doc/hypervisor-support.ini > > Regards, Daniel Nice! I will keep an eye on that :) Regards, Markus Zoeller IRC: markus_z Launchpad: mzoeller From morgan.fainberg at gmail.com Tue Dec 9 17:18:48 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Tue, 9 Dec 2014 11:18:48 -0600 Subject: [openstack-dev] [oslo][Keystone] Policy graduation Message-ID: I would like to propose that we keep the policy library under the oslo program. As with other graduated projects we will maintain a core team (which, while including the oslo-core team, will be comprised of the expected individuals from the Identity and other security related teams). The change in direction is due to the policy library being more generic and not exactly a clean fit with the OpenStack Identity program. This is the policy rules engine, which is currently used by all (or almost all) OpenStack projects. Based on the continued conversation, it doesn't make sense to take it out of the "common" namespace. If there are no concerns with this change of direction we will update the spec[1] to reflect this proposal and continue with the plans to graduate as soon as possible. [1] https://review.openstack.org/#/c/140161/ -- Morgan Fainberg -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kevin.mitchell at rackspace.com Tue Dec 9 17:20:53 2014 From: kevin.mitchell at rackspace.com (Kevin L. Mitchell) Date: Tue, 9 Dec 2014 11:20:53 -0600 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <54872BDC.5070407@dague.net> References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> <1418144281.4233.5.camel@einstein.kev> <54872BDC.5070407@dague.net> Message-ID: <1418145653.4233.14.camel@einstein.kev> On Tue, 2014-12-09 at 12:05 -0500, Sean Dague wrote: > > I agree that dropping H302 and the grouping checks makes sense. I > think > > we should keep the H301, H303, H304, and the basic ordering checks, > > however; it doesn't seem to me that these would be that difficult to > > implement or maintain. > > Well, be careful what you think is easy - > https://github.com/openstack-dev/hacking/blob/master/hacking/checks/imports.py > :) So, hacking_import_rules() is very complex. However, it implements H302 as well as H301, H303, and H304. I feel it can be simplified to just a textual match rule if we remove the H302 implementation: H301 just needs to exclude imports with ',', H303 needs to exclude imports with '*', and H304 is already implemented as a regular expression match. It looks like the basic ordering check I was referring to is H306, which isn't all that complicated. It seems like the rest of the code is related to the checks which I just agreed should be dropped :) Am I missing anything? -- Kevin L. Mitchell Rackspace From ayoung at redhat.com Tue Dec 9 17:26:23 2014 From: ayoung at redhat.com (Adam Young) Date: Tue, 09 Dec 2014 12:26:23 -0500 Subject: [openstack-dev] [oslo][Keystone] Policy graduation In-Reply-To: References: Message-ID: <548730BF.6060200@redhat.com> On 12/09/2014 12:18 PM, Morgan Fainberg wrote: > I would like to propose that we keep the policy library under the oslo > program. 
As with other graduated projects we will maintain a core team > (which, while including the oslo-core team, will be comprised of the > expected individuals from the Identity and other security related teams). > > The change in direction is due to the policy library being more > generic and not exactly a clean fit with the OpenStack Identity > program. This is the policy rules engine, which is currently used by > all (or almost all) OpenStack projects. Based on the continued > conversation, it doesn't make sense to take it out of the "common" > namespace. Agreed. I think the original design was quite clean, and could easily be used by projects not even in OpenStack. While we don't want to challenge any of the Python-based replacements for Prolog, the name should reflect its general purpose. Is the Congress program still planning on using the same rules engine? > > If there are no concerns with this change of direction we will update > the spec[1] to reflect this proposal and continue with the plans to > graduate as soon as possible. > > [1] https://review.openstack.org/#/c/140161/ > > -- > Morgan Fainberg > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From armamig at gmail.com Tue Dec 9 17:32:52 2014 From: armamig at gmail.com (Armando M.) Date: Tue, 9 Dec 2014 10:32:52 -0700 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: On 9 December 2014 at 09:41, Salvatore Orlando wrote: > I would like to chime in on this discussion wearing my plugin developer hat. > > We (the VMware team) have looked very carefully at the current proposal > for splitting off drivers and plugins from the main source code tree.
> Therefore the concerns you've heard from Gary are not just ramblings but > are the results of careful examination of this proposal. > > While we agree with the final goal, the feeling is that for many plugin > maintainers this process change might be too much for what can be > accomplished in a single release cycle. > We actually gave a lot more than a cycle: https://review.openstack.org/#/c/134680/16/specs/kilo/core-vendor-decomposition.rst LINE 416 And in all honesty, I can only say that getting this done by such an experienced team as the Neutron team @VMware shouldn't take that long. By the way, if Kyle can do it in the teeny tiny time that he has left after his PTL duties, then anyone can do it! :) https://review.openstack.org/#/c/140191/ > As a member of the drivers team, I am still very supportive of the split, > I just want to make sure that it's made in a sustainable way; I also > understand that "sustainability" has been one of the requirements of the > current proposal, and therefore we should all be on the same page on this > aspect. > > However, we did a simple exercise trying to assess the amount of work > needed to achieve something which might be acceptable to satisfy the > process. Without going into too many details, this requires efforts for: > > - refactoring the code to achieve a plugin module simple and thin enough to > satisfy the requirements. Unfortunately a radical approach like the one in > [1] with a reference to an external library is not pursuable for us > > - maintaining code repositories outside of the neutron scope and the > necessary infrastructure > > - reinforcing our CI infrastructure, and improving our error detection and > log analysis capabilities to improve reaction times upon failures triggered > by upstream changes. As you know, even if the plugin interface is > solid-ish, the dependency on the db base class increases the chances of > upstream changes breaking 3rd party plugins.
> No-one is advocating the approach laid out in [1], but a lot of code can be moved elsewhere (like the nsxlib) without too much effort. Don't forget that not so long ago I was the maintainer of this plugin and the one who built the VMware NSX CI; I know very well what it takes to scope this effort, and I can support you in the process. > The feedback from our engineering team is that satisfying the requirements > of this new process might not be feasible in the Kilo timeframe, both for > existing plugins and for new plugins and drivers that should be upstreamed > (there are a few proposed on neutron-specs at the moment, which are all in > -2 status considering the impending approval of the split out). > No new plugins can or will be accepted if they do not adopt the proposed model; let's be very clear about this. > The questions I would like to bring to the wider community are therefore > the following: > > 1 - Is there a possibility of making a further concession on the current > proposal, where maintainers are encouraged to experiment with the plugin > split in Kilo, but will actually be required to do it in the next release? > This is exactly what the spec is proposing: get started now, and it does not matter if you don't finish in time. > 2 - What could be considered acceptable as a new plugin? I understand > that they would be accepted only as "thin integration modules", which > ideally should just be a pointer to code living somewhere else. I'm not > questioning the validity of this approach, but it has been brought to my > attention that this will actually be troubling for teams which have made an > investment in the previous release cycles to upstream plugins following the > "old" process > You are not alone. Other efforts went through the same process [1, 2, 3]. Adjusting is a way of life. No-one is advocating throwing away existing investment. This proposal actually promotes new and pre-existing investment.
[1] https://review.openstack.org/#/c/104452/ [2] https://review.openstack.org/#/c/103728/ [3] https://review.openstack.org/#/c/136091/ > 3 - Regarding the above discussion on "ML2 or not ML2". The point on > co-gating is well taken. Eventually we'd like to remove this binding - > because I believe the ML2 subteam would also like to have more freedom on > their plugin. Do we already have an idea about how to do that without > completely moving away from the db_base class approach? > Sure, if you'd like to participate in the process, we can only welcome you! > Thanks for your attention and for reading through this > > Salvatore > > [1] > http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/vmware/plugin.py#n22 > > On 8 December 2014 at 21:51, Maru Newby wrote: >> >> On Dec 7, 2014, at 10:51 AM, Gary Kotton wrote: >> >> > Hi Kyle, >> > I am not missing the point. I understand the proposal. I just think >> that it has some shortcomings (unless I misunderstand, which will certainly >> not be the first time and most definitely not the last). The thinning out >> is to have a shim in place. I understand this and this will be the entry >> point for the plugin. I do not have a concern for this. My concern is that >> we are not doing this with ML2 off the bat. That should lead by example >> as it is our reference architecture. Let's not kid anyone: we are going >> to hit some problems with the decomposition. I would prefer that it be done >> with the default implementation. Why? >> >> The proposal is to move vendor-specific logic out of the tree to increase >> vendor control over such code while decreasing load on reviewers. ML2 >> doesn't contain vendor-specific logic - that's the province of ML2 drivers >> - so it is not a good target for the proposed decomposition by itself. >> >> >> > - Because we will fix problems quicker, as they are something that prevents >> Neutron from moving forwards >> > -
We will just need to fix things in one place first and not in N (where >> N is the number of vendor plugins) >> > - This is a community effort, so we will have a lot more eyes on >> it >> > - It will provide a reference architecture for all new plugins >> that want to be added to the tree >> > - It will provide a working example for plugins that are already >> in tree and are to be replaced by the shim >> > If we really want to do this, we can say freeze all development (which >> is just approvals for patches) for a few days so that we can just >> focus on this. I stated what I think should be the process on the review. >> For those who do not feel like finding the link: >> > - Create a StackForge project for ML2 >> > - Create the shim in Neutron >> > - Update devstack to use the two repos and the shim >> > When #3 is up and running we switch to that as the gate. Then we >> start a stopwatch on all other plugins. >> >> As was pointed out on the spec (see Miguel's comment on r15), the ML2 >> plugin and the OVS mechanism driver need to remain in the main Neutron repo >> for now. Neutron gates on ML2+OVS, and landing a breaking change in the >> Neutron repo along with its corresponding fix to a separate ML2 repo would >> be all but impossible under the current integrated gating scheme. >> Plugins/drivers that do not gate Neutron have no such constraint. >> >> >> Maru >> >> >> > Sure, I'll catch you on IRC tomorrow. I guess that you guys will bash >> out the details at the meetup. Sadly I will not be able to attend, so you >> will have to delay on the tar and feathers. >> > Thanks >> > Gary >> > >> > >> > From: "mestery at mestery.com" >> > Reply-To: OpenStack List >> > Date: Sunday, December 7, 2014 at 7:19 PM >> > To: OpenStack List >> > Cc: "openstack at lists.openstack.org" >> > Subject: Re: [openstack-dev] [Neutron] Core/Vendor code decomposition >> > >> > Gary, you are still missing the point of this proposal. Please see my >> comments in review.
We are not forcing things out of tree, we are thinning >> them. The text you quoted in the review makes that clear. We will look at >> further decomposing ML2 post Kilo, but we have to be realistic with what we >> can accomplish during Kilo. >> > >> > Find me on IRC Monday morning and we can discuss further if you still >> have questions and concerns. >> > >> > Thanks! >> > Kyle >> > >> > On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton wrote: >> >> Hi, >> >> I have raised my concerns on the proposal. I think that all plugins >> should be treated on an equal footing. My main concern is having the ML2 >> plugin in tree whilst the others will be moved out of tree will be >> problematic. I think that the model will be complete if the ML2 was also >> out of tree. This will help crystalize the idea and make sure that the >> model works correctly. >> >> Thanks >> >> Gary >> >> >> >> From: "Armando M." >> >> Reply-To: OpenStack List >> >> Date: Saturday, December 6, 2014 at 1:04 AM >> >> To: OpenStack List , " >> openstack at lists.openstack.org" >> >> Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition >> >> >> >> Hi folks, >> >> >> >> For a few weeks now the Neutron team has worked tirelessly on [1]. >> >> >> >> This initiative stems from the fact that as the project matures, >> evolution of processes and contribution guidelines need to evolve with it. >> This is to ensure that the project can keep on thriving in order to meet >> the needs of an ever growing community. >> >> >> >> The effort of documenting intentions, and fleshing out the various >> details of the proposal is about to reach an end, and we'll soon kick the >> tires to put the proposal into practice. Since the spec has grown pretty >> big, I'll try to capture the tl;dr below. >> >> >> >> If you have any comment please do not hesitate to raise them here >> and/or reach out to us. 
>> >> >> >> tl;dr >>> >> >> >> >> From the Kilo release, we'll initiate a set of steps to change the >> following areas: >> >> - Code structure: every plugin or driver that exists or wants to >> exist as part of the Neutron project is decomposed into a slim vendor >> integration (which lives in the Neutron repo), plus a bulkier vendor >> library (which lives in an independent publicly available repo); >> >> - Contribution process: this extends to the following aspects: >> >> - Design and Development: the process is largely >> unchanged for the part that pertains to the vendor integration; the maintainer >> team is fully self-governed for the design and development of the vendor >> library; >> >> - Testing and Continuous Integration: maintainers will be >> required to support their vendor integration with 3rd party CI testing; the >> requirements for 3rd party CI testing are largely unchanged; >> >> - Defect management: the process is largely unchanged; >> issues affecting the vendor library can be tracked with whichever >> tool/process the maintainer sees fit. In cases where vendor library fixes >> need to be reflected in the vendor integration, the usual OpenStack defect >> management applies. >> >> - Documentation: there will be some changes to the way >> plugins and drivers are documented, with the intention of promoting >> discoverability of the integrated solutions. >> >> - Adoption and transition plan: we strongly advise maintainers to >> stay abreast of the developments of this effort, as their code, their CI, >> etc. will be affected. The core team will provide guidelines and support >> throughout this cycle to ensure a smooth transition. >> >> To learn more, please refer to [1].
>> >> >> >> Many thanks, >> >> Armando >> >> >> >> [1] https://review.openstack.org/#/c/134680 >> >> >> >> _______________________________________________ >> >> OpenStack-dev mailing list >> >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> > >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Dec 9 17:35:27 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 9 Dec 2014 12:35:27 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <54870FCC.3010006@dague.net> References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> Message-ID: <2D0B8BDD-472E-4E80-8815-F6FD4CA392DC@doughellmann.com> On Dec 9, 2014, at 10:05 AM, Sean Dague wrote: > On 12/09/2014 09:11 AM, Doug Hellmann wrote: >> >> On Dec 9, 2014, at 6:39 AM, Sean Dague wrote: >> >>> I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. >>> >>> 1 - the entire H8* group. This doesn't function on python code, it >>> functions on git commit message, which makes it tough to run locally. It >>> also would be a reason to prevent us from not rerunning tests on commit >>> message changes (something we could do after the next gerrit update). 
>>> >>> 2 - the entire H3* group - because of this - >>> https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm >>> >>> A look at the H3* code shows that it's terribly complicated, and is >>> often full of bugs (a few bit us last week). I'd rather just delete it >>> and move on. >> >> I don't have the hacking rules memorized. Could you describe them briefly? > > Sure, the H8* group is git commit messages. It's checking for line > length in the commit message. > > - [H802] First, provide a brief summary of 50 characters or less. Summaries > of greater than 72 characters will be rejected by the gate. > > - [H801] The first line of the commit message should provide an accurate > description of the change, not just a reference to a bug or > blueprint. > > > H802 is mechanically enforced (though not the 50 characters part, so the > code isn't the same as the rule). > > H801 is enforced by a regex that looks to see if the first line is a > Launchpad bug reference and fails on it. You can't mechanically enforce that > English provides an accurate description. Those all seem like things it would be reasonable to drop, especially for the reason you gave that they are frequently not tested locally anyway. > > > H3* are all the module import rules: > > Imports > ------- > - [H302] Do not import objects, only modules (*) > - [H301] Do not import more than one module per line (*) > - [H303] Do not use wildcard ``*`` import (*) > - [H304] Do not make relative imports > - Order your imports by the full module path > - [H305 H306 H307] Organize your imports according to the `Import order > template`_ and `Real-world Import Order Examples`_ below. > > I think these remain reasonable guidelines, but H302 is exceptionally > tricky to get right, and we keep not getting it right. I definitely agree with that. I thought we had it right now, but maybe there's still a case where it's broken?
In any case, I'd like to be able to make the Oslo namespace changes API compatible without worrying about whether they are hacking-rule-compatible. That does get pretty ugly. > > H305-307 are actually impossible to get right. Things come in and out of > stdlib in python all the time. +1 > > > I think it's time to just decide to be reasonable Humans and that these > are guidelines. I assume we have the guidelines written down in the review instructions somewhere already, if they are implemented in hacking? > > The H3* set of rules is also why you have to install *all* of > requirements.txt and test-requirements.txt in your pep8 tox target, > because H302 actually inspects the sys.modules to attempt to figure out > if things are correct. Yeah, that's pretty gross. Doug > > -Sean > >> >> Doug
> >> >>> >>> -Sean >>> -- >>> Sean Dague >>> http://dague.net >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Sean Dague > http://dague.net > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From yzveryanskyy at mirantis.com Tue Dec 9 17:35:35 2014 From: yzveryanskyy at mirantis.com (Yuriy Zveryanskyy) Date: Tue, 09 Dec 2014 19:35:35 +0200 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> <20141209150048.GB5494@jimrollenhagen.com> Message-ID: <548732E7.6080903@mirantis.com> Vladimir, IMO there is a more "global" problem. Anyone who wants to use a bare-metal deploy service has to resolve problems with power management, PXE/iPXE support, DHCP, etc. Or he/she can use Ironic. A user has his own vision of the deploy workflow and the features needed for it. He hears from Ironic people: "Feature X should come only after release Y" or "This doesn't fit in Ironic at all". Fuel Agent + driver is the answer. I see Fuel Agent + driver as a solution for anyone who wants custom features. On 12/09/2014 06:24 PM, Vladimir Kozhukalov wrote: > > We assume the next step will be to put provision data (disk partition > scheme, maybe other data) into driver_info and make the Fuel Agent > driver > able to serialize those data (special format) and implement a > corresponding data driver in Fuel Agent for this format. Again > very > simple.
Maybe it is time to think of having an Ironic metadata > service > (just maybe). > > > I'm OK with the format; my question is: what is going to > collect all the data and put it into, say, driver_info, and how? > > > Fuel has a web service which stores node info in its database. When > the user clicks the "Deploy" button, this web service serializes the deployment > task and puts it into the task runner (another Fuel component). > The task runner then parses the task and adds a node into Ironic via the REST > API (including driver_info). Then it calls the Ironic deploy method and > Ironic uses the Fuel Agent driver to deploy the node. The corresponding Fuel > spec is here: https://review.openstack.org/#/c/138301/. Again, it is > a zero-step implementation. > > > Honestly, I think writing a roadmap right now is not very > rational, since I am not even sure people are interested in > widening Ironic's use cases. Some of the comments were not even > constructive, like "I don't understand what your use case is, > please use IPA". > > Please don't be offended by this. We did put a lot of effort into > IPA and it's reasonable to look for good use cases before having > one more smart ramdisk. Nothing personal, just estimating cost vs > value :) > Also "why not use IPA" is a fair question for me, and the answer is > about use cases (as you stated it before), not about missing > features of IPA, right? > > You are right, it is a fair question, and the answer is exactly about > *missing features*. > > Nova is not our case. Fuel is totally about deployment. There > is some in > common > > > Here we have a difficult point. The major use case for Ironic is > to be driven by Nova (and assisted by Neutron). Without these two > it's hard to understand how Fuel Agent is going to fit into the > infrastructure. And hence my question above about where your json > comes from. In the current Ironic world the same data is received > partly from the Nova flavor, and partly managed by Neutron completely.
> I'm not saying it can't change - we do want to become more > stand-alone. E.g. we can do without Neutron right now. I think > specifying the source of input data for Fuel Agent in the Ironic > infrastructure would help a lot to understand how well Ironic and > Fuel Agent could play together. > > > According to the information I have (correct me if I'm wrong), Ironic > is currently at the stage of becoming a stand-alone service. That is the > reason why this spec has been brought up. Again, we need something to > manage power/tftp/dhcp to substitute for Cobbler. Ironic looks like a > suitable tool, but we need this driver. We are not going to break > anything. We have the resources to test and support this driver. And I > cannot use IPA *right now* because it does not have the features I need. I > cannot wait another half a year for these features to be > implemented. Why can't we add this (Fuel Agent) driver and then, if IPA > implements what we need, switch to IPA? The only alternative for > me right now is to implement my own power/tftp/dhcp management > solution, as I did with Fuel Agent when I did not get approval for > including advanced disk partitioning. > > The questions are: Is Ironic interested in this use case or not? Is Ironic > interested in getting more development resources? The only case in which it is > rational for us to spend our resources developing Ironic is when we > get something back. We are totally pragmatic; we just address our > users' wishes and issues. It is OK for us to use any tool which > provides what we need (IPA, Fuel Agent, any other). > > We need advanced disk partitioning and power/tftp/dhcp management by > March 2015. Is it possible to get this from Ironic + IPA? I doubt it. > Is it possible to get this from Ironic + Fuel Agent? Yes, it is. Is it > possible to get this from Fuel power/tftp/dhcp management + Fuel > Agent? Yes, it is. So, I have two options right now: Ironic + Fuel > Agent or Fuel power/tftp/dhcp management + Fuel Agent.
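(As an aside, to make the "driver_info carrying a partition scheme" idea concrete, here is a sketch of the kind of node-enrollment payload such a task runner could build and send to Ironic's node-create API. The "fuel_agent" driver name and the partition_scheme format are hypothetical, purely illustrative; neither is an existing Ironic or Fuel interface.)

```python
import json

# Hypothetical enrollment payload a task runner could POST to /v1/nodes.
# Driver name and partition-scheme layout are invented for illustration:
# two disks, each contributing a member to a RAID-1 ("md0") root volume.
node = {
    'driver': 'fuel_agent',  # hypothetical driver name
    'driver_info': {
        'deploy_kernel': 'glance://deploy-kernel-uuid',
        'deploy_ramdisk': 'glance://fuel-agent-ramdisk-uuid',
        'partition_scheme': {
            'disks': [
                {'device': '/dev/sda', 'partitions': [
                    {'mount': '/boot', 'size_mb': 200},
                    {'md': 'md0', 'size_mb': 40000},  # RAID-1 member
                ]},
                {'device': '/dev/sdb', 'partitions': [
                    {'md': 'md0', 'size_mb': 40000},  # RAID-1 member
                ]},
            ],
            'mds': [{'name': 'md0', 'level': 1, 'mount': '/'}],
        },
    },
}

body = json.dumps(node)  # what would go over the wire
print(len(json.loads(body)['driver_info']['partition_scheme']['disks']))  # 2
```

The agent on the ramdisk would then deserialize this and drive the actual partitioning, which is exactly the "data driver" piece mentioned above.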
> > > > > > _______________________________________________ OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Tue Dec 9 17:41:45 2014 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 9 Dec 2014 17:41:45 +0000 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: <20141209150048.GB5494@jimrollenhagen.com> References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> , <20141209150048.GB5494@jimrollenhagen.com> Message-ID: <1A3C52DFCD06494D8528644858247BF0178162E7@EX10MBOX03.pnnl.gov> We've been interested in Ironic as a replacement for Cobbler for some of our systems and have been kicking the tires a bit recently. While initially I thought this thread was probably another "Fuel not playing well with the community" kind of thing, I'm not thinking that any more. It's deeper than that. Cloud provisioning is great. I really REALLY like it. But one of the things that makes it great is the nice, pretty, cute, uniform, standard "hardware" the VM gives the user. Ideally, the physical hardware would behave the same. But, "No Battle Plan Survives Contact With the Enemy". The sad reality is, most pieces of hardware are different from each other. Different drivers, different firmware, different different different. One way the cloud enables this isolation is by forcing the cloud admins to install things and deal with the grungy hardware to make the interface nice and clean for the user. For example, if you want a greater mean time between failures of nova-compute nodes, you probably use a RAID 1. Sure, it's kind of a pet thing to do, but it's up to the cloud admin to decide what's "better": buying more hardware, or paying for more admin/user time. Extra hard drives are dirt cheap...
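(The RAID 1 trade-off above can be made concrete with back-of-the-envelope arithmetic. The numbers are purely illustrative, and the calculation assumes independent disk failures, which disks in the same chassis only approximate.)

```python
# Illustrative only: availability of one disk vs. a RAID-1 mirror pair,
# assuming independent failures. The mirror is lost only when both
# members are down at the same time.
single = 0.99                   # one disk up 99% of the time: "two nines"
mirror = 1 - (1 - single) ** 2  # both must fail for the mirror to fail
print(round(mirror, 6))         # 0.9999 -> "four nines" for cheap disks
```

Which is the whole argument: a second dirt-cheap disk buys roughly two extra nines, at the cost of the admin having to care about partition layout.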
So, in reality Ironic is playing in a space somewhere between "I want to use cloud tools to deploy hardware, yay!" and "ewww.., physical hardware's nasty. You have to know all these extra things and do all these extra things that you don't have to do with a vm"... I believe Ironic's going to need to be able to deal with this messiness in as clean a way as possible. But that's my opinion. If the team feels it's not a valid use case, then we'll just have to use something else for our needs. I really really want to be able to use heat to deploy whole physical distributed systems though. Today, we're using software raid over two disks to deploy our nova compute. Why? We have some very old disks we recovered for one of our clouds and they fail often. nova-compute is pet enough to benefit somewhat from being able to swap out a disk without much effort. If we were to use Ironic to provision the compute nodes, we need to support a way to do the same. We're looking into ways of building an image that has a software raid pre-setup, and expands it on boot. This requires each image to be customized for this case though. I can see Fuel not wanting to provide two different sets of images, "hardware raid" and "software raid", that have the same contents in them, with just different partitioning layouts... If we want users to not have to care about partition layout, this is also not ideal... Assuming Ironic can be convinced that these features really would be needed, perhaps the solution is a middle ground between the pxe driver and the agent? Associate partition information at the flavor level. The admin can decide the best partitioning layout for given hardware... The user doesn't have to care any more. Two flavors for the same hardware could be "4 9's" or "5 9's" or something that way. Modify the agent to support a pxe style image in addition to full layout, and have the agent partition/setup raid and lay down the image into it.
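That flavor-level idea could look something like the following sketch. The `capabilities:disk_layout` key, the layout table, and the flavor names are all hypothetical; nothing like this exists in Ironic's data model today, which is exactly what is being debated:

```python
# Hypothetical sketch: a disk layout chosen from flavor metadata rather
# than baked into the image. None of these keys or layouts exist in
# Ironic; they illustrate the "admin decides per flavor" idea only.

LAYOUTS = {
    "software-raid1": [
        ["mdadm", "--create", "/dev/md0", "--level=1",
         "--raid-devices=2", "/dev/sda1", "/dev/sdb1"],
        ["mkfs.ext4", "/dev/md0"],
    ],
    "plain": [
        ["mkfs.ext4", "/dev/sda1"],
    ],
}

def commands_for_flavor(extra_specs):
    """Pick the commands a deploy agent would run, based on flavor
    metadata instead of on what is inside the image."""
    name = extra_specs.get("capabilities:disk_layout", "plain")
    return LAYOUTS[name]

if __name__ == "__main__":
    # Two flavors for the same hardware, differing only in resilience:
    five_nines = {"capabilities:disk_layout": "software-raid1"}
    four_nines = {}
    for cmd in commands_for_flavor(five_nines):
        print(" ".join(cmd))
```

With something like this, one set of images could serve both the "hardware raid" and "software raid" cases, since the layout decision moves out of the image entirely.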
Modify the agent to support running grub2 at the end of deployment. Or at least make the agent pluggable to support adding these options. This does seem a bit backwards from the way the agent has been going. The pxe driver was kind of linux specific. The agent is not... So maybe that does imply a 3rd driver may be beneficial... But it would be nice to have one driver, the agent, in the end that supports everything. Anyway, some things to think over. Thanks, Kevin ________________________________________ From: Jim Rollenhagen [jim at jimrollenhagen.com] Sent: Tuesday, December 09, 2014 7:00 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Ironic] Fuel agent proposal On Tue, Dec 09, 2014 at 04:01:07PM +0400, Vladimir Kozhukalov wrote: > Just a short explanation of Fuel use case. > > Fuel use case is not a cloud. Fuel is a deployment tool. We install OS on > bare metal servers and on VMs > and then configure this OS using Puppet. We have been using Cobbler as our > OS provisioning tool since the beginning of Fuel. > However, Cobbler assumes using native OS installers (Anaconda and > Debian-installer). For some reasons we decided to > switch to an image-based approach for installing OS. > > One of Fuel features is the ability to provide advanced partitioning > schemes (including software RAIDs, LVM). > Native installers are quite difficult to customize in the field of > partitioning > (that was one of the reasons to switch to an image-based approach). Moreover, > we'd like to implement an even more > flexible user experience. We'd like to allow the user to choose which hard > drives to use for root FS, for > allocating DB. We'd like the user to be able to put root FS over LV or MD > device (including stripe, mirror, multipath). > We'd like the user to be able to choose which hard drives are bootable (if > any), which options to use for mounting file systems. > Many many various cases are possible.
If you ask why we'd like to support > all those cases, the answer is simple: > because our users want us to support all those cases. > Obviously, many of those cases can not be implemented as image internals, > and some cases also can not be implemented at the > configuration stage (placing root fs on lvm device). > > As those use cases were rejected for implementation in terms of IPA, > we implemented the so-called Fuel Agent. This is *precisely* why I disagree with adding this driver. Nearly every feature that is listed here has been talked about before, within the Ironic community. Software RAID, LVM, user choosing the partition layout. These were rejected from IPA because they do not fit in *Ironic*, not because they don't fit in IPA. If the Fuel team can convince enough people that Ironic should be managing pets, then I'm almost okay with adding this driver (though I still think adding those features to IPA is the right thing to do). // jim > Important Fuel Agent features are: > > * It does not have a REST API > * It has executable entry point[s] > * It uses a local json file as its input > * It is planned to implement the ability to download input data via HTTP (kind > of metadata service) > * It is designed to be agnostic to input data format, not only Fuel format > (data drivers) > * It is designed to be agnostic to image format (tar images, file system > images, disk images, currently fs images) > * It is designed to be agnostic to image compression algorithm (currently > gzip) > * It is designed to be agnostic to image downloading protocol (currently > local file and HTTP link) > > So, it is clear that being motivated by Fuel, Fuel Agent is quite > independent and generic. And we are open for > new use cases. > > Regarding Fuel itself, our nearest plan is to get rid of Cobbler because > in the case of the image-based approach it is a huge overhead. The question is > which tool we can use instead of Cobbler.
We need power management, > we need TFTP management, we need DHCP management. That is > exactly what Ironic is able to do. Frankly, we can implement a power/TFTP/DHCP > management tool independently, but as Devananda said, we're all working on > the same problems, > so let's do it together. Power/TFTP/DHCP management is where we are > working on the same problems, > but IPA and Fuel Agent are about different use cases. This case is not just > Fuel; any mature > deployment case requires advanced partition/fs management. However, for me > it is OK if it is easily possible > to use Ironic with external drivers (not merged to Ironic and not tested on > Ironic CI). > > AFAIU, this spec https://review.openstack.org/#/c/138115/ does not assume > changing the Ironic API and core. > Jim asked about how Fuel Agent will know about an advanced disk partitioning > scheme if the API is not > changed. The answer is simple: Ironic is supposed to send a link to > a metadata service (http or local file) > where Fuel Agent can download input json data. > > As Roman said, we try to be pragmatic and suggest something which does not > break anything. All changes > are supposed to be encapsulated into a driver. No API and core changes. We > have resources to support, test > and improve this driver. This spec is just a zero step. Further steps are > supposed to improve the driver > so as to make it closer to Ironic abstractions. > > For Ironic that means widening use cases and user community. But, as I > already said, > we are OK if Ironic does not need this feature. > > Vladimir Kozhukalov > > On Tue, Dec 9, 2014 at 1:09 PM, Roman Prykhodchenko < > rprikhodchenko at mirantis.com> wrote: > > It is true that IPA and FuelAgent share a lot of functionality in common. > > However there is a major difference between them which is that they are > > intended to be used to solve a different problem.
> > > > IPA is a solution for the provision-use-destroy-use_by_different_user use-case > > and is really great for providing BM nodes for other OS > > services or in services like Rackspace OnMetal. FuelAgent itself serves the > > provision-use-use-...-use use-case that Fuel or TripleO have. > > > > Those two use-cases require concentration on different details in the first > > place. For instance, for IPA proper decommissioning is more important than > > advanced disk management, but for FuelAgent the priorities are the opposite, for > > obvious reasons. > > > > Putting all functionality into a single driver and a single agent may cause > > conflicts in priorities and make a lot of mess inside both the driver and > > the agent. Actually, previous changes to IPA were blocked precisely because of > > this conflict of priorities. Therefore replacing FuelAgent with IPA where > > FuelAgent is currently used does not seem like a good option, because some > > people (and I'm not talking about Mirantis) might lose required features > > because of different priorities. > > > > Having two separate drivers along with two separate agents for those > > different use-cases will allow two independent teams to > > concentrate on what's really important for a specific use-case. I don't > > see any problem in overlapping functionality if it's used differently. > > > > > > P. S. > > I realise that people may also be confused by the fact that FuelAgent is > > actually called like that and is used only in Fuel atm. Our point is to > > make it a simple, powerful and, what's more important, a generic tool for > > provisioning. It is not bound to Fuel or Mirantis, and if it causes > > confusion in the future we will even be happy to give it a different and > > less confusing name. > > > > P. P. S. > > Some of the points of this integration do not look generic enough or nice > > enough.
We are pragmatic about this stuff and are trying to implement what's > > possible to implement as the first step. For sure this is going to have a > > lot more steps to make it better and more generic. > > > > > > On 09 Dec 2014, at 01:46, Jim Rollenhagen wrote: > > > > > > > > On December 8, 2014 2:23:58 PM PST, Devananda van der Veen < > > devananda.vdv at gmail.com> wrote: > > > > I'd like to raise this topic for a wider discussion outside of the > > hallway > > track and code reviews, where it has thus far mostly remained. > > > > In previous discussions, my understanding has been that the Fuel team > > sought to use Ironic to manage "pets" rather than "cattle" - and doing > > so > > required extending the API and the project's functionality in ways that > > no > > one else on the core team agreed with. Perhaps that understanding was > > wrong > > (or perhaps not), but in any case, there is now a proposal to add a > > FuelAgent driver to Ironic. The proposal claims this would meet that > > team's > > needs without requiring changes to the core of Ironic. > > > > https://review.openstack.org/#/c/138115/ > > > > > > I think it's clear from the review that I share the opinions expressed in > > this email. > > > > That said (and hopefully without derailing the thread too much), I'm > > curious how this driver could do software RAID or LVM without modifying > > Ironic's API or data model. How would the agent know how these should be > > built? How would an operator or user tell Ironic what the > > disk/partition/volume layout would look like? > > > > And before it's said - no, I don't think vendor passthru API calls are an > > appropriate answer here. > > > > // jim > > > > > > The Problem Description section calls out four things, which have all > > been > > discussed previously (some are here [0]).
I would like to address each > > one, > > invite discussion on whether or not these are, in fact, problems facing > > Ironic (not whether they are problems for someone, somewhere), and then > > ask > > why these necessitate a new driver be added to the project. > > > > > > They are, for reference: > > > > 1. limited partition support > > > > 2. no software RAID support > > > > 3. no LVM support > > > > 4. no support for hardware that lacks a BMC > > > > #1. > > > > When deploying a partition image (e.g., QCOW format), Ironic's PXE deploy > > driver performs only the minimal partitioning necessary to fulfill its > > mission as an OpenStack service: respect the user's request for root, > > swap, > > and ephemeral partition sizes. When deploying a whole-disk image, > > Ironic > > does not perform any partitioning -- such is left up to the operator > > who > > created the disk image. > > > > Support for arbitrarily complex partition layouts is not required by, > > nor > > does it facilitate, the goal of provisioning physical servers via a > > common > > cloud API. Additionally, as with #3 below, nothing prevents a user from > > creating more partitions in unallocated disk space once they have > > access to > > their instance. Therefore, I don't see how Ironic's minimal support for > > partitioning is a problem for the project. > > > > #2. > > > > There is no support for defining a RAID in Ironic today, at all, > > whether > > software or hardware. Several proposals were floated last cycle; one is > > under review right now for DRAC support [1], and there are multiple > > call > > outs for RAID building in the state machine mega-spec [2]. Any such > > support > > for hardware RAID will necessarily be abstract enough to support > > multiple > > hardware vendors' driver implementations and both in-band creation (via > > IPA) and out-of-band creation (via vendor tools).
> > > > Given the above, it may become possible to add software RAID support to > > IPA > > in the future, under the same abstraction. This would closely tie the > > deploy agent to the images it deploys (the latter image's kernel would > > be > > dependent upon a software RAID built by the former), but this would > > necessarily be true for the proposed FuelAgent as well. > > > > I don't see this as a compelling reason to add a new driver to the > > project. > > Instead, we should (plan to) add support for software RAID to the > > deploy > > agent which is already part of the project. > > > > #3. > > > > LVM volumes can easily be added by a user (after provisioning) within > > unallocated disk space for non-root partitions. I have not yet seen a > > compelling argument for doing this within the provisioning phase. > > > > #4. > > > > There are already in-tree drivers [3] [4] [5] which do not require a > > BMC. > > One of these uses SSH to connect and run pre-determined commands. Like > > the > > spec proposal, which states at line 122, "Control via SSH access > > feature > > intended only for experiments in non-production environment," the > > current > > SSHPowerDriver is only meant for testing environments. We could > > probably > > extend this driver to do what the FuelAgent spec proposes, as far as > > remote > > power control for cheap always-on hardware in testing environments with > > a > > pre-shared key. > > > > (And if anyone wonders about a use case for Ironic without external > > power > > control ... I can only think of one situation where I would rationally > > ever > > want to have a control-plane agent running inside a user-instance: I am > > both the operator and the only user of the cloud.) 
> > > > > > ---------------- > > > > In summary, as far as I can tell, all of the problem statements upon > > which > > the FuelAgent proposal are based are solvable through incremental > > changes > > in existing drivers, or out of scope for the project entirely. As > > another > > software-based deploy agent, FuelAgent would duplicate the majority of > > the > > functionality which ironic-python-agent has today. > > > > Ironic's driver ecosystem benefits from a diversity of > > hardware-enablement > > drivers. Today, we have two divergent software deployment drivers which > > approach image deployment differently: "agent" drivers use a local > > agent to > > prepare a system and download the image; "pxe" drivers use a remote > > agent > > and copy the image over iSCSI. I don't understand how a second driver > > which > > duplicates the functionality we already have, and shares the same goals > > as > > the drivers we already have, is beneficial to the project. > > > > Doing the same thing twice just increases the burden on the team; we're > > all > > working on the same problems, so let's do it together. 
> > -Devananda > > > > [0] > > https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition > > > > [1] https://review.openstack.org/#/c/107981/ > > > > [2] > > > > https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst > > > > > > [3] > > > > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py > > > > [4] > > > > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py > > > > [5] > > > > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sorlando at nicira.com Tue Dec 9 17:53:48 2014 From: sorlando at nicira.com (Salvatore Orlando) Date: Tue, 9 Dec 2014 17:53:48 +0000 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: On 9 December 2014 at 17:32, Armando M.
wrote: > > > On 9 December 2014 at 09:41, Salvatore Orlando > wrote: > >> I would like to chime into this discussion wearing my plugin developer >> hat. >> >> We (the VMware team) have looked very carefully at the current proposal >> for splitting off drivers and plugins from the main source code tree. >> Therefore the concerns you've heard from Gary are not just ramblings but >> are the results of careful examination of this proposal. >> >> While we agree with the final goal, the feeling is that for many plugin >> maintainers this process change might be too much for what can be >> accomplished in a single release cycle. >> > We actually gave a lot more than a cycle: > > > https://review.openstack.org/#/c/134680/16/specs/kilo/core-vendor-decomposition.rst > LINE 416 > > And in all honesty, I can only say that getting this done by such an > experienced team like the Neutron team @VMware shouldn't take that long. > We are probably not experienced enough. We always love to learn new things. > > By the way, if Kyle can do it in his teeny tiny time that he has left > after his PTL duties, then anyone can do it! :) > > https://review.openstack.org/#/c/140191/ > I think I should be able to use mv & git push as well - I think however there's a bit more to it than that. > > As a member of the drivers team, I am still very supportive of the split, >> I just want to make sure that it's made in a sustainable way; I also >> understand that "sustainability" has been one of the requirements of the >> current proposal, and therefore we should all be on the same page on this >> aspect. >> >> However, we did a simple exercise trying to assess the amount of work >> needed to achieve something which might be acceptable to satisfy the >> process. Without going into too many details, this requires efforts for: >> >> - refactoring the code to achieve a plugin module simple and thin enough to >> satisfy the requirements.
Unfortunately a radical approach like the one in >> [1] with a reference to an external library is not pursuable for us >> >> - maintaining code repositories outside of the neutron scope and the >> necessary infrastructure >> >> - reinforcing our CI infrastructure, and improve our error detection and >> log analysis capabilities to improve reaction times upon failures triggered >> by upstream changes. As you know, even if the plugin interface is >> solid-ish, the dependency on the db base class increases the chances of >> upstream changes breaking 3rd party plugins. >> > > No-one is advocating for approach laid out in [1], but a lot of code can > be moved elsewhere (like the nsxlib) without too much effort. Don't forget > that not so long ago I was the maintainer of this plugin and the one who > built the VMware NSX CI; I know very well what it takes to scope this > effort, and I can support you in the process. > Thanks for this clarification. I was sure that you guys were not advocating for a ninja-split thing, but I wanted just to be sure of that. I'm also pretty sure our engineering team values your support. > The feedback from our engineering team is that satisfying the requirements >> of this new process might not be feasible in the Kilo timeframe, both for >> existing plugins and for new plugins and drivers that should be upstreamed >> (there are a few proposed on neutron-specs at the moment, which are all in >> -2 status considering the impending approval of the split out). >> > No new plugins can and will be accepted if they do not adopt the proposed > model, let's be very clear about this. > This is also what I gathered from the proposal. It seems that you're however stating that there might be some flexibility in defining how much a plugin complies with the new model. I will need to go back to the drawing board with the rest of my team and see in which way this can work for us. 
> The questions I would like to bring to the wider community are therefore >> the following: >> >> 1 - Is there a possibility of making a further concession on the current >> proposal, where maintainers are encouraged to experiment with the plugin >> split in Kilo, but will actually be required to do it in the next release? >> > This is exactly what the spec is proposing: get started now, and it does > not matter if you don't finish in time. > I think the deprecation note at line 416 still scares people off a bit. To me your word is enough, no change is needed. > 2 - What could be considered acceptable as a new plugin? I understand >> that they would be accepted only as "thin integration modules", which >> ideally should just be a pointer to code living somewhere else. I'm not >> questioning the validity of this approach, but it has been brought to my >> attention that this will actually be troubling for teams which have made an >> investment in the previous release cycles to upstream plugins following the >> "old" process >> > You are not alone. Other efforts went through the same process [1, 2, 3]. > Adjusting is a way of life. No-one is advocating for throwing away existing > investment. This proposal actually promotes new and pre-existing investment. > > [1] https://review.openstack.org/#/c/104452/ > [2] https://review.openstack.org/#/c/103728/ > [3] https://review.openstack.org/#/c/136091/ > 3 - Regarding the above discussion on "ML2 or not ML2". The point on >> co-gating is well taken. Eventually we'd like to remove this binding - >> because I believe the ML2 subteam would also like to have more freedom on >> their plugin. Do we already have an idea about how to do that without >> completely moving away from the db_base class approach? >> > Sure, if you like to participate in the process, we can only welcome you! > I actually asked you if you already had an idea... should I take that as a no?
> Thanks for your attention and for reading through this >> >> Salvatore >> >> [1] >> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/vmware/plugin.py#n22 >> >> On 8 December 2014 at 21:51, Maru Newby wrote: >>> >>> On Dec 7, 2014, at 10:51 AM, Gary Kotton wrote: >>> >>> > Hi Kyle, >>> > I am not missing the point. I understand the proposal. I just think >>> that it has some shortcomings (unless I misunderstand, which will certainly >>> not be the first time and most definitely not the last). The thinning out >>> is to have a shim in place. I understand this and this will be the entry >>> point for the plugin. I do not have a concern for this. My concern is that >>> we are not doing this with the ML2 off the bat. That should lead by example >>> as it is our reference architecture. Let's not kid anyone: we are going >>> to hit some problems with the decomposition. I would prefer that it be done >>> with the default implementation. Why? >>> >>> The proposal is to move vendor-specific logic out of the tree to >>> increase vendor control over such code while decreasing load on reviewers. >>> ML2 doesn't contain vendor-specific logic - that's the province of ML2 >>> drivers - so it is not a good target for the proposed decomposition by >>> itself. >>> >>> >>> > - Cause we will fix them quicker, as it is something that prevents >>> Neutron from moving forwards >>> > - We will just need to fix in one place first and not in N >>> (where N is the number of vendor plugins) >>> > - This is a community effort - so we will have a lot more eyes >>> on it >>> > - It will provide a reference architecture for all new plugins >>> that want to be added to the tree >>> > - It will provide a working example for plugins that are already >>> in tree and are to be replaced by the shim >>> > If we really want to do this, we can say freeze all development (which >>> is just approvals for patches) for a few days so that we can just >>> focus on this.
I stated what I think should be the process on the review. >>> For those who do not feel like finding the link: >>> > - Create a stackforge project for ML2 >>> > - Create the shim in Neutron >>> > - Update devstack to use the two repos and the shim >>> > When #3 is up and running we switch that to be the gate. Then we >>> start a stopwatch on all other plugins. >>> >>> As was pointed out on the spec (see Miguel's comment on r15), the ML2 >>> plugin and the OVS mechanism driver need to remain in the main Neutron repo >>> for now. Neutron gates on ML2+OVS, and landing a breaking change in the >>> Neutron repo along with its corresponding fix to a separate ML2 repo would >>> be all but impossible under the current integrated gating scheme. >>> Plugins/drivers that do not gate Neutron have no such constraint. >>> >>> >>> Maru >>> >>> >>> > Sure, I'll catch you on IRC tomorrow. I guess that you guys will bash >>> out the details at the meetup. Sadly I will not be able to attend - so you >>> will have to delay on the tar and feathers. >>> > Thanks >>> > Gary >>> > >>> > >>> > From: "mestery at mestery.com" >>> > Reply-To: OpenStack List >>> > Date: Sunday, December 7, 2014 at 7:19 PM >>> > To: OpenStack List >>> > Cc: "openstack at lists.openstack.org" >>> > Subject: Re: [openstack-dev] [Neutron] Core/Vendor code decomposition >>> > >>> > Gary, you are still missing the point of this proposal. Please see my >>> comments in review. We are not forcing things out of tree, we are thinning >>> them. The text you quoted in the review makes that clear. We will look at >>> further decomposing ML2 post Kilo, but we have to be realistic with what we >>> can accomplish during Kilo. >>> > >>> > Find me on IRC Monday morning and we can discuss further if you still >>> have questions and concerns. >>> > >>> > Thanks! >>> > Kyle >>> > >>> > On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton >>> wrote: >>> >> Hi, >>> >> I have raised my concerns on the proposal. I think that all plugins >>> should be treated on an equal footing. My main concern is that having the ML2 >>> plugin in tree whilst the others are moved out of tree will be >>> problematic. I think that the model will be complete if ML2 was also >>> out of tree. This will help crystallize the idea and make sure that the >>> model works correctly. >>> >> Thanks >>> >> Gary >>> >> >>> >> From: "Armando M." >>> >> Reply-To: OpenStack List >>> >> Date: Saturday, December 6, 2014 at 1:04 AM >>> >> To: OpenStack List , " openstack at lists.openstack.org" >>> >> Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition >>> >> >>> >> Hi folks, >>> >> >>> >> For a few weeks now the Neutron team has worked tirelessly on [1]. >>> >> >>> >> This initiative stems from the fact that as the project matures, >>> evolution of processes and contribution guidelines need to evolve with it. >>> This is to ensure that the project can keep on thriving in order to meet >>> the needs of an ever growing community. >>> >> >>> >> The effort of documenting intentions, and fleshing out the various >>> details of the proposal is about to reach an end, and we'll soon kick the >>> tires to put the proposal into practice. Since the spec has grown pretty >>> big, I'll try to capture the tl;dr below. >>> >> >>> >> If you have any comments please do not hesitate to raise them here >>> and/or reach out to us. >>> >> >>> >> tl;dr >>> >>> >> >>> >> From the Kilo release, we'll initiate a set of steps to change the >>> following areas: >>> >> - Code structure: every plugin or driver that exists or wants to >>> exist as part of the Neutron project is decomposed into a slim vendor >>> integration (which lives in the Neutron repo), plus a bulkier vendor >>> library (which lives in an independent publicly available repo); >>> >> - Contribution process: this extends to the following aspects: >>> >> - 
Design and Development: the process is largely >>> unchanged for the part that pertains to the vendor integration; the maintainer >>> team is fully self-governed for the design and development of the vendor >>> library; >>> >> - Testing and Continuous Integration: maintainers will >>> be required to support their vendor integration with 3rd party CI testing; the >>> requirements for 3rd party CI testing are largely unchanged; >>> >> - Defect management: the process is largely unchanged; >>> issues affecting the vendor library can be tracked with whichever >>> tool/process the maintainer sees fit. In cases where vendor library fixes >>> need to be reflected in the vendor integration, the usual OpenStack defect >>> management processes apply. >>> >> - Documentation: there will be some changes to the way >>> plugins and drivers are documented, with the intention of promoting >>> discoverability of the integrated solutions. >>> >> - Adoption and transition plan: we strongly advise maintainers >>> to stay abreast of the developments of this effort, as their code, their >>> CI, etc. will be affected. The core team will provide guidelines and support >>> throughout this cycle to ensure a smooth transition. >>> >> To learn more, please refer to [1].
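For readers trying to picture the "slim vendor integration plus bulkier vendor library" split described above, here is a toy sketch. All class and method names are made up for illustration; a real integration would implement Neutron's actual plugin interface, and the library would live in its own repo (something like the nsxlib mentioned earlier in the thread):

```python
# Toy illustration of the decomposition model: the in-tree "vendor
# integration" is a thin shim that delegates to an out-of-tree "vendor
# library". All names here are hypothetical.

class FakeVendorLib:
    """Stand-in for the bulkier vendor library in its own repo."""

    def __init__(self):
        self.backend = {}  # pretend backend state

    def ensure_network(self, net_id, name):
        self.backend[net_id] = {"name": name}
        return self.backend[net_id]

    def drop_network(self, net_id):
        self.backend.pop(net_id, None)


class ThinShimPlugin:
    """The slim in-tree integration: no backend logic, just delegation."""

    def __init__(self, lib):
        self.lib = lib

    def create_network(self, context, network):
        # Persisting to the Neutron DB would happen here; the backend
        # work is entirely the library's job.
        return self.lib.ensure_network(network["id"], network["name"])

    def delete_network(self, context, net_id):
        self.lib.drop_network(net_id)


if __name__ == "__main__":
    plugin = ThinShimPlugin(FakeVendorLib())
    print(plugin.create_network(None, {"id": "n1", "name": "demo"}))
```

The point of the model is that reviews of the shim stay cheap for the core team, while everything behind the library boundary evolves at the vendor's own pace.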
>>> >> >>> >> Many thanks, >>> >> Armando >>> >> >>> >> [1] https://review.openstack.org/#/c/134680 _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From vkozhukalov at mirantis.com Tue Dec 9 18:11:57 2014 From: vkozhukalov at mirantis.com (Vladimir Kozhukalov) Date: Tue, 9 Dec 2014 22:11:57 +0400 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: <1A3C52DFCD06494D8528644858247BF0178162E7@EX10MBOX03.pnnl.gov> References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> <20141209150048.GB5494@jimrollenhagen.com> <1A3C52DFCD06494D8528644858247BF0178162E7@EX10MBOX03.pnnl.gov> Message-ID: Kevin, Just to make sure everyone understands what Fuel Agent is about. Fuel Agent is agnostic to image format. There are 3 possibilities for image format 1) DISK IMAGE contains GPT/MBR table and all partitions and metadata in case of md or lvm.
That is just something like what you get when you run 'dd if=/dev/sda of=disk_image.raw' 2) FS IMAGE contains a file system. A disk contains some partitions, which in turn can be used to create an md device, or a volume group containing logical volumes. We can then put a file system on a plain partition, an md device, or a logical volume. This type of image is what you get when you run 'dd if=/dev/sdaN of=fs_image.raw' 3) TAR IMAGE contains files. It is what you get when you run 'tar cf tar_image.tar /' Currently in Fuel we use FS images. Fuel Agent creates partitions, md and lvm devices, and then downloads FS images and puts them on partition devices (/dev/sdaN), on an lvm device (/dev/mapper/vgname/lvname), or on an md device (/dev/md0). Fuel Agent is also able to install and configure grub. Here is the code of Fuel Agent https://github.com/stackforge/fuel-web/tree/master/fuel_agent Vladimir Kozhukalov On Tue, Dec 9, 2014 at 8:41 PM, Fox, Kevin M wrote: > We've been interested in Ironic as a replacement for Cobbler for some of > our systems and have been kicking the tires a bit recently. > > While initially I thought this thread was probably another "Fuel not > playing well with the community" kind of thing, I'm not thinking that any > more. It's deeper than that. > > Cloud provisioning is great. I really REALLY like it. But one of the > things that makes it great is the nice, pretty, cute, uniform, standard > "hardware" the VM gives the user. Ideally, the physical hardware would > behave the same. But, > "No battle plan survives contact with the enemy." The sad reality is, > most hardware is different from each other. Different drivers, different > firmware, different different different. > > One way the cloud enables this isolation is by forcing the cloud admins > to install things and deal with the grungy hardware to make the interface > nice and clean for the user. For example, if you want greater mean time > between failures of nova compute nodes, you probably use a raid 1.
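The three image kinds Vladimir describes can usually be told apart by magic bytes: a raw disk image carries the MBR boot signature 0x55AA at offset 510, a tar archive has "ustar" at offset 257, and a gzip-compressed image starts with 0x1f 0x8b. A rough sketch of such detection (this is not Fuel Agent's actual code; a bare filesystem image has no single universal magic here, so it falls through to "unknown"):

```python
def guess_image_kind(header: bytes) -> str:
    """Classify an image by magic bytes in its first 512 bytes."""
    if header[:2] == b"\x1f\x8b":
        # gzip magic: the image is compressed and must be unpacked first
        return "gzip-compressed"
    if len(header) >= 262 and header[257:262] == b"ustar":
        # POSIX ustar magic field in the first tar member header
        return "tar"
    if len(header) >= 512 and header[510:512] == b"\x55\xaa":
        # MBR boot signature: a whole-disk image with its own partition table
        return "disk (MBR)"
    return "unknown (possibly a bare filesystem image)"
```

In practice an agent would also check filesystem superblock magics (e.g. ext4's 0xEF53 at offset 0x438) before declaring an image unknown.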
Sure, > it's kind of a pet kind of thing to do, but it's up to the cloud admin to > decide what's "better": buying more hardware, or paying for more admin/user > time. Extra hard drives are dirt cheap... > > So, in reality Ironic is playing in a space somewhere between "I want to > use cloud tools to deploy hardware, yay!" and "ewww.., physical hardware's > nasty. You have to know all these extra things and do all these extra > things that you don't have to do with a vm"... I believe Ironic's going to > need to be able to deal with this messiness in as clean a way as possible. > But that's my opinion. If the team feels it's not a valid use case, then > we'll just have to use something else for our needs. I really really want > to be able to use Heat to deploy whole physical distributed systems though. > > Today, we're using software raid over two disks to deploy our nova > compute. Why? We have some very old disks we recovered for one of our > clouds and they fail often. nova-compute is pet enough to benefit somewhat > from being able to swap out a disk without much effort. If we were to use > Ironic to provision the compute nodes, we need to support a way to do the > same. > > We're looking into ways of building an image that has a software raid > pre-set up, and expanding it on boot. This requires each image to be customized > for this case though. I can see Fuel not wanting to provide two different > sets of images, "hardware raid" and "software raid", that have the same > contents in them, with just different partitioning layouts... If we want > users to not have to care about partition layout, this is also not ideal... > > Assuming Ironic can be convinced that these features really would be > needed, perhaps the solution is a middle ground between the pxe driver and > the agent? > > Associate partition information at the flavor level. The admin can decide > the best partitioning layout for given hardware... The user doesn't have > to care any more. Two flavors for the same hardware could be "4 9's" or "5
Two flavors for the same hardware could be "4 9's" or "5 > 9's" or something that way. > Modify the agent to support a pxe style image in addition to full layout, > and have the agent partition/setup raid and lay down the image into it. > Modify the agent to support running grub2 at the end of deployment. > > Or at least make the agent plugable to support adding these options. > > This does seem a bit backwards from the way the agent has been going. the > pxe driver was kind of linux specific. the agent is not... So maybe that > does imply a 3rd driver may be beneficial... But it would be nice to have > one driver, the agent, in the end that supports everything. > > Anyway, some things to think over. > > Thanks, > Kevin > ________________________________________ > From: Jim Rollenhagen [jim at jimrollenhagen.com] > Sent: Tuesday, December 09, 2014 7:00 AM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [Ironic] Fuel agent proposal > > On Tue, Dec 09, 2014 at 04:01:07PM +0400, Vladimir Kozhukalov wrote: > > Just a short explanation of Fuel use case. > > > > Fuel use case is not a cloud. Fuel is a deployment tool. We install OS on > > bare metal servers and on VMs > > and then configure this OS using Puppet. We have been using Cobbler as > our > > OS provisioning tool since the beginning of Fuel. > > However, Cobbler assumes using native OS installers (Anaconda and > > Debian-installer). For some reasons we decided to > > switch to image based approach for installing OS. > > > > One of Fuel features is the ability to provide advanced partitioning > > schemes (including software RAIDs, LVM). > > Native installers are quite difficult to customize in the field of > > partitioning > > (that was one of the reasons to switch to image based approach). > Moreover, > > we'd like to implement even more > > flexible user experience. 
We'd like to allow the user to choose which hard > > drives to use for the root FS, or for > > allocating a DB. We'd like the user to be able to put the root FS on an LV or MD > > device (including stripe, mirror, multipath). > > We'd like the user to be able to choose which hard drives are bootable (if > > any), and which options to use for mounting file systems. > > Many, many different cases are possible. If you ask why we'd like to support > > all those cases, the answer is simple: > > because our users want us to support all those cases. > > Obviously, many of those cases cannot be implemented as image internals, > > and some cases also cannot be implemented at the > > configuration stage (placing the root fs on an lvm device). > > > > Since those use cases were rejected for implementation in terms of IPA, > > we implemented the so-called Fuel Agent. > > This is *precisely* why I disagree with adding this driver. > > Nearly every feature that is listed here has been talked about before, > within the Ironic community. Software RAID, LVM, user choosing the > partition layout. These were rejected from IPA because they do not fit in > *Ironic*, not because they don't fit in IPA. > > If the Fuel team can convince enough people that Ironic should be > managing pets, then I'm almost okay with adding this driver (though I > still think adding those features to IPA is the right thing to do).
> > // jim > > > Important Fuel Agent features are: > > > > * It does not have a REST API > > * It has executable entry point[s] > > * It uses a local JSON file as its input > > * It is planned to implement the ability to download input data via HTTP (a kind > > of metadata service) > > * It is designed to be agnostic to input data format, not only the Fuel format > > (data drivers) > > * It is designed to be agnostic to image format (tar images, file system > > images, disk images; currently fs images) > > * It is designed to be agnostic to the image compression algorithm (currently > > gzip) > > * It is designed to be agnostic to the image download protocol (currently > > local file and HTTP link) > > > > So, it is clear that despite being motivated by Fuel, Fuel Agent is quite > > independent and generic. And we are open to > > new use cases. > > > > Regarding Fuel itself, our immediate plan is to get rid of Cobbler because > > in the case of the image-based approach it is huge overhead. The question is > > which tool we can use instead of Cobbler. We need power management, > > we need TFTP management, we need DHCP management. That is > > exactly what Ironic is able to do. Frankly, we could implement a power/TFTP/DHCP > > management tool independently, but as Devananda said, we're all working on > > the same problems, > > so let's do it together. Power/TFTP/DHCP management is where we are > > working on the same problems, > > but IPA and Fuel Agent are about different use cases. This case is not just > > Fuel; any mature > > deployment case requires advanced partition/fs management. However, for me > > it is OK if it is easily possible > > to use Ironic with external drivers (not merged to Ironic and not tested on > > Ironic CI). > > > > AFAIU, this spec https://review.openstack.org/#/c/138115/ does not assume > > changing the Ironic API or core. > > Jim asked how Fuel Agent will know about the advanced disk partitioning > > scheme if the API is not > > changed.
The answer is simple: Ironic is supposed to send a link to > > a metadata service (http or local file) > > where Fuel Agent can download its input JSON data. > > > > As Roman said, we try to be pragmatic and suggest something which does not > > break anything. All changes > > are supposed to be encapsulated into a driver. No API or core changes. We > > have resources to support, test > > and improve this driver. This spec is just step zero. Further steps are > > supposed to improve the driver > > so as to bring it closer to Ironic abstractions. > > > > For Ironic that means widening the set of use cases and the user community. But, as I > > already said, > > we are OK if Ironic does not need this feature. > > > > Vladimir Kozhukalov > > > > On Tue, Dec 9, 2014 at 1:09 PM, Roman Prykhodchenko < > > rprikhodchenko at mirantis.com> wrote: > > > > > It is true that IPA and FuelAgent share a lot of functionality. > > > However there is a major difference between them, which is that they are > > > intended to be used to solve different problems. > > > > > > IPA is a solution for the provision-use-destroy-use_by_different_user use case > > > and is really great for providing BM nodes for other OS > > > services or in services like Rackspace OnMetal. FuelAgent itself serves the > > > provision-use-use-...-use use case that Fuel or TripleO have. > > > > > > Those two use cases require concentration on different details in the first > > > place. For instance, for IPA proper decommissioning is more important than > > > advanced disk management, but for FuelAgent the priorities are the opposite, for > > > obvious reasons. > > > > > > Putting all functionality into a single driver and a single agent may cause > > > conflicts in priorities and make a lot of mess inside both the driver and > > > the agent. Actually, changes to IPA were previously blocked precisely because of > > > this conflict of priorities.
Therefore replacing FuelAgent with IPA > where > > > FuelAgent is currently used does not seem like a good option, because > some > > > people (and I'm not talking about Mirantis) might lose required > features > > > because of the different priorities. > > > > > > Having two separate drivers along with two separate agents for those > > > different use cases will allow two independent teams to > > > concentrate on what's really important for a specific use case. I > don't > > > see any problem in overlapping functionality if it's used differently. > > > > > > > > > P. S. > > > I realise that people may also be confused by the fact that FuelAgent > is > > > actually named that way and is used only in Fuel atm. Our point is to > > > make it a simple, powerful and, what's more important, a generic tool for > > > provisioning. It is not bound to Fuel or Mirantis, and if it causes > > > confusion in the future we will be happy to give it a different > and > > > less confusing name. > > > > > > P. P. S. > > > Some of the points of this integration do not look generic enough or > nice > > > enough. We take a pragmatic view and are trying to implement > what's > > > possible to implement as the first step. For sure this is going to > have a > > > lot more steps to make it better and more generic. > > > > > > > > > On 09 Dec 2014, at 01:46, Jim Rollenhagen > wrote: > > > > > > > > > > > > On December 8, 2014 2:23:58 PM PST, Devananda van der Veen < > > > devananda.vdv at gmail.com> wrote: > > > > > > I'd like to raise this topic for a wider discussion outside of the > > > hallway > > > track and code reviews, where it has thus far mostly remained. > > > > > > In previous discussions, my understanding has been that the Fuel team > > > sought to use Ironic to manage "pets" rather than "cattle" - and doing > > > so > > > required extending the API and the project's functionality in ways that > > > no > > > one else on the core team agreed with.
Perhaps that understanding was > > > wrong > > > (or perhaps not), but in any case, there is now a proposal to add a > > > FuelAgent driver to Ironic. The proposal claims this would meet that > > > team's > > > needs without requiring changes to the core of Ironic. > > > > > > https://review.openstack.org/#/c/138115/ > > > > > > > > > I think it's clear from the review that I share the opinions expressed > in > > > this email. > > > > > > That said (and hopefully without derailing the thread too much), I'm > > > curious how this driver could do software RAID or LVM without modifying > > > Ironic's API or data model. How would the agent know how these should > be > > > built? How would an operator or user tell Ironic what the > > > disk/partition/volume layout should look like? > > > > > > And before it's said - no, I don't think vendor passthru API calls are > an > > > appropriate answer here. > > > > > > // jim > > > > > > > > > The Problem Description section calls out four things, which have all > > > been > > > discussed previously (some are here [0]). I would like to address each > > > one, > > > invite discussion on whether or not these are, in fact, problems facing > > > Ironic (not whether they are problems for someone, somewhere), and then > > > ask > > > why these necessitate adding a new driver to the project. > > > > > > > > > They are, for reference: > > > > > > 1. limited partition support > > > > > > 2. no software RAID support > > > > > > 3. no LVM support > > > > > > 4. no support for hardware that lacks a BMC > > > > > > #1. > > > > > > When deploying a partition image (e.g., QCOW format), Ironic's PXE deploy > > > driver performs only the minimal partitioning necessary to fulfill its > > > mission as an OpenStack service: respect the user's request for root, > > > swap, > > > and ephemeral partition sizes.
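That minimal partitioning — honor the requested root, swap, and ephemeral sizes, and touch nothing else — can be pictured with a small sketch (illustrative only, not Ironic's implementation; sizes in MiB, layout purely sequential):

```python
def partition_plan(disk_mb, root_mb, swap_mb, ephemeral_mb):
    """Lay out root/swap/ephemeral sequentially; leave the rest alone."""
    plan, offset = [], 1  # start at 1 MiB for alignment
    for name, size in (("root", root_mb), ("swap", swap_mb),
                       ("ephemeral", ephemeral_mb)):
        if size:
            plan.append({"name": name, "start_mb": offset, "size_mb": size})
            offset += size
    if offset > disk_mb:
        raise ValueError("requested partitions exceed disk size")
    # Whatever is left stays unallocated, for the user to carve up later.
    plan.append({"name": "unallocated", "start_mb": offset,
                 "size_mb": disk_mb - offset})
    return plan
```

Anything richer than this — RAID members, LVM, per-disk choices — is exactly what the thread is arguing does or does not belong in the provisioning step.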
When deploying a whole-disk image, > > > Ironic > > > does not perform any partitioning -- such is left up to the operator > > > who > > > created the disk image. > > > > > > Support for arbitrarily complex partition layouts is not required by, > > > nor > > > does it facilitate, the goal of provisioning physical servers via a > > > common > > > cloud API. Additionally, as with #3 below, nothing prevents a user from > > > creating more partitions in unallocated disk space once they have > > > access to > > > their instance. Therefore, I don't see how Ironic's minimal support for > > > partitioning is a problem for the project. > > > > > > #2. > > > > > > There is no support for defining a RAID in Ironic today, at all, > > > whether > > > software or hardware. Several proposals were floated last cycle; one is > > > under review right now for DRAC support [1], and there are multiple > > > call > > > outs for RAID building in the state machine mega-spec [2]. Any such > > > support > > > for hardware RAID will necessarily be abstract enough to support > > > multiple > > > hardware vendors' driver implementations and both in-band creation (via > > > IPA) and out-of-band creation (via vendor tools). > > > > > > Given the above, it may become possible to add software RAID support to > > > IPA > > > in the future, under the same abstraction. This would closely tie the > > > deploy agent to the images it deploys (the latter image's kernel would > > > be > > > dependent upon a software RAID built by the former), but this would > > > necessarily be true for the proposed FuelAgent as well. > > > > > > I don't see this as a compelling reason to add a new driver to the > > > project. > > > Instead, we should (plan to) add support for software RAID to the > > > deploy > > > agent which is already part of the project. > > > > > > #3. > > > > > > LVM volumes can easily be added by a user (after provisioning) within > > > unallocated disk space for non-root partitions.
I have not yet seen a > > > compelling argument for doing this within the provisioning phase. > > > > > > #4. > > > > > > There are already in-tree drivers [3] [4] [5] which do not require a > > > BMC. > > > One of these uses SSH to connect and run pre-determined commands. Like > > > the > > > spec proposal, which states at line 122, "Control via SSH access > > > feature > > > intended only for experiments in non-production environment," the > > > current > > > SSHPowerDriver is only meant for testing environments. We could > > > probably > > > extend this driver to do what the FuelAgent spec proposes, as far as > > > remote > > > power control for cheap always-on hardware in testing environments with > > > a > > > pre-shared key. > > > > > > (And if anyone wonders about a use case for Ironic without external > > > power > > > control ... I can only think of one situation where I would rationally > > > ever > > > want to have a control-plane agent running inside a user-instance: I am > > > both the operator and the only user of the cloud.) > > > > > > > > > ---------------- > > > > > > In summary, as far as I can tell, all of the problem statements upon > > > which > > > the FuelAgent proposal is based are solvable through incremental > > > changes > > > in existing drivers, or are out of scope for the project entirely. As > > > another > > > software-based deploy agent, FuelAgent would duplicate the majority of > > > the > > > functionality which ironic-python-agent has today. > > > > > > Ironic's driver ecosystem benefits from a diversity of > > > hardware-enablement > > > drivers. Today, we have two divergent software deployment drivers which > > > approach image deployment differently: "agent" drivers use a local > > > agent to > > > prepare a system and download the image; "pxe" drivers use a remote > > > agent > > > and copy the image over iSCSI.
I don't understand how a second driver > > > which > > > duplicates the functionality we already have, and shares the same goals > > > as > > > the drivers we already have, is beneficial to the project. > > > > > > Doing the same thing twice just increases the burden on the team; we're > > > all > > > working on the same problems, so let's do it together. > > > > > > -Devananda > > > > > > > > > [0] > > > > https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition > > > > > > [1] https://review.openstack.org/#/c/107981/ > > > > > > [2] > > > > > > > https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst > > > > > > > > > [3] > > > > > > > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py > > > > > > [4] > > > > > > > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py > > > > > > [5] > > > > > > > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py
From morgan.fainberg at gmail.com Tue Dec 9 18:13:41 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Tue, 9 Dec 2014 12:13:41 -0600 Subject: [openstack-dev] [Keystone] No Meeting Today Message-ID: This is a quick note that the Keystone team will not be holding a meeting today. Based upon last week's meeting, the goals for today are to review open specs[1] and blocking code reviews for the k1 milestone[2] instead. We will continue with the normal meeting schedule next week. [1] https://review.openstack.org/#/q/status:open+project:openstack/keystone-specs,n,z [2] https://gist.github.com/dolph/651c6a1748f69637abd0 -- Morgan Fainberg From nmakhotkin at mirantis.com Tue Dec 9 18:17:15 2014 From: nmakhotkin at mirantis.com (Nikolay Makhotkin) Date: Tue, 9 Dec 2014 21:17:15 +0300 Subject: [openstack-dev] [Mistral] In-Reply-To: <2E47CCEA-109A-4FD3-9984-9457C565089F@stackstorm.com> References: <2E47CCEA-109A-4FD3-9984-9457C565089F@stackstorm.com> Message-ID: Guys, Maybe I misunderstood something here, but what is the difference between this one and https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment ? On Tue, Dec 9, 2014 at 5:35 PM, Dmitri Zimine wrote: > Winson, > > thanks for filing the blueprint: > https://blueprints.launchpad.net/mistral/+spec/mistral-global-context, > > some clarification questions: > 1) how exactly would the user describe these global variables > syntactically? In DSL?
What can we use as syntax? In the initial workflow > input? > 2) what is the visibility scope: this and child workflows, or "truly > global"? > 3) What is a good default behavior? > > Let's detail it a bit more. > > DZ> -- Best Regards, Nikolay From thingee at gmail.com Tue Dec 9 18:19:04 2014 From: thingee at gmail.com (Mike Perez) Date: Tue, 9 Dec 2014 10:19:04 -0800 Subject: [openstack-dev] [cinder] Code pointer for processing cinder backend config In-Reply-To: References: Message-ID: <20141209181904.GA6923@gmail.com> On 09:36 Sat 06 Dec , Pradip Mukhopadhyay wrote: > Where this config info is getting parsed out in the cinder code? https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/netapp/options.py https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/netapp/common.py#L76 -- Mike Perez From anil.rao at gigamon.com Tue Dec 9 18:26:48 2014 From: anil.rao at gigamon.com (Anil Rao) Date: Tue, 9 Dec 2014 13:26:48 -0500 Subject: [openstack-dev] [neutron] Tap-aaS Spec for Kilo Message-ID: Hi, The latest version of the Tap-aaS spec is available at: https://review.openstack.org/#/c/140292/ It was uploaded last night and we are hoping that it will be considered as a candidate for the Kilo release. Thanks, Anil From marun at redhat.com Tue Dec 9 18:33:13 2014 From: marun at redhat.com (Maru Newby) Date: Tue, 9 Dec 2014 11:33:13 -0700 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver References: Message-ID: On Dec 9, 2014, at 7:04 AM, Daniel P.
Berrange wrote: > On Tue, Dec 09, 2014 at 10:53:19AM +0100, Maxime Leroy wrote: >> I have also proposed a blueprint to have a new plugin mechanism in >> nova to load external VIF drivers. (nova-specs: >> https://review.openstack.org/#/c/136827/ and nova (rfc patch): >> https://review.openstack.org/#/c/136857/) >> >> From my point of view as a developer, having a plugin framework for >> internal/external VIF drivers seems to be a good thing. >> It makes the code more modular and introduces a clear API for VIF driver classes. >> >> So far, it raises legitimate questions concerning API stability and >> public API that require a wider discussion on the ML (as requested by >> John Garbutt). >> >> I think having a plugin mechanism and a clear API for VIF drivers is >> not going against this policy: >> http://docs.openstack.org/developer/nova/devref/policies.html#out-of-tree-support. >> >> There is no need to have a stable API. It is up to the owner of the >> external VIF driver to ensure that its driver is supported by the >> latest API, and not up to the nova community to manage a stable API for this >> external VIF driver. Does it make sense? > > Experience has shown that even if it is documented as unsupported, once > the extension point exists, vendors & users will ignore the small print > about support status. There will be complaints raised every time it gets > broken until we end up being forced to maintain it as a stable API whether > we want to or not. That's not a route we want to go down. Does the support contract for a given API have to be binary - "supported" vs "unsupported"? The stability requirements for REST APIs that end users and all kinds of tooling consume are arguably different from those of an internal API, and recognizing this difference could be useful. > >> Considering the network V2 API, the L2/ML2 mechanism driver and the VIF driver >> need to exchange information such as binding:vif_type and >> binding:vif_details.
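For concreteness, the two binding attributes being discussed look roughly like this on a port returned by the Neutron v2 API. The values below are illustrative, typical of the Open vSwitch mechanism driver; other drivers report other vif_type strings and other details keys, and the port id is made up:

```python
port = {
    "id": "d80b1a3b-4fc1-49f3-952e-1e2ab7081d8b",  # made-up UUID
    "binding:vif_type": "ovs",
    "binding:vif_details": {
        "port_filter": True,      # backend enforces security groups itself
        "ovs_hybrid_plug": True,  # consumer should use hybrid plugging
    },
}

def requires_hybrid_plug(port):
    """Sketch of how a consumer (e.g. a Nova VIF driver) keys its plugging
    behaviour off the binding information rather than hard-coding it."""
    details = port.get("binding:vif_details") or {}
    return port["binding:vif_type"] == "ovs" and details.get("ovs_hybrid_plug", False)
```

The crux of the thread is whether out-of-tree code may introduce new vif_type values like this, and who signs off on them.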
>> >> From my understanding, 'binding:vif_type' and 'binding:vif_details' are >> fields that are part of the public network API. There are no validation >> constraints for these fields (see >> http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html), >> meaning that any value is accepted by the API. So, the values set in >> 'binding:vif_type' and 'binding:vif_details' are not part of the >> public API. Is my understanding correct? > > The VIF parameters are mapped into the nova.network.model.VIF class, > which is doing some crude validation. I would anticipate that this > validation will be increasing over time, because any functional data > flowing over the API needs to be carefully managed for upgrade > reasons. > > Even if the Neutron impl is out of tree, I would still expect both > Nova and Neutron core to sign off on any new VIF type name and its > associated details (if any). > >> What other reasons am I missing to not have VIF driver classes as a >> public extension point? > > Having to find & install VIF driver classes from countless different > vendors, each hiding their code away in their own obscure website, > will lead to an awful end user experience when deploying Nova. Users are > better served by having it all provided when they deploy Nova IMHO Shipping drivers in-tree makes sense for a purely open source solution for the reasons you mention. The logic doesn't necessarily extend to deployment of a proprietary solution, though. If a given OpenStack deployment is intended to integrate with such a solution, it is likely that the distro/operator/deployer will have a direct relationship with the solution provider and the required software (including VIF driver(s), if necessary) is likely to have a well-defined distribution channel.
> > If every vendor goes off & works in their own isolated world we also > lose the scope to align the implementations, so that common concepts > work the same way in all cases and allow us to minimize the number of > new VIF types required. The proposed vhostuser VIF type is a good > example of this - it allows a single Nova VIF driver to be capable of > potentially supporting multiple different impls on the Neutron side. > If every vendor worked in their own world, we would have ended up with > multiple VIF drivers doing the same thing in Nova, each with their own > set of bugs & quirks. I'm not sure the suggestion is that every vendor go off and do their own thing. Rather, the option for out-of-tree drivers could be made available to those that are pursuing initiatives that aren't found to be in keeping with Nova's current priorities. I believe that allowing out-of-tree extensions is essential to ensuring the long-term viability of OpenStack. There is only so much experimental work that is going to be acceptable in core OpenStack projects, if only to ensure stability. Yes, there is the potential for duplicative effort with results of varying quality, but that's the price of competitive innovation, whether in the field of ideas or open source software, and it's arguably a price worth paying. There is always the option of pulling a winning option into the tree and stabilizing it, after all. > > I expect the quality of the code the operator receives will be lower > if it is never reviewed by anyone except the vendor who writes it in > the first place. Code review by itself can't guarantee quality, especially where the code in question cannot be executed without proprietary software or hardware. The Neutron tree includes an awful lot of vendor code that non-vendor contributors cannot provide effective oversight for.
We rely on Tempest scenario testing and 3rd party CI to vet the quality of such efforts, so whether it lives in or out of the tree doesn't really matter from a quality perspective. I'm not sure why this would be different for Nova. Maru From clint at fewbar.com Tue Dec 9 18:34:24 2014 From: clint at fewbar.com (Clint Byrum) Date: Tue, 09 Dec 2014 10:34:24 -0800 Subject: Re: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: <5486E56F.9060606@mirantis.com> References: <5486E56F.9060606@mirantis.com> Message-ID: <1418149486-sup-3128@fewbar.com> Excerpts from Yuriy Zveryanskyy's message of 2014-12-09 04:05:03 -0800: > Good day Ironicers. > > I do not want to discuss questions like "Is feature X good for release > Y?" or "Is feature Z in Ironic scope or not?". > I want to get an answer for this: Is Ironic a flexible, easily extendable > and user-oriented solution for deployment? I surely hope it is. > Yes, it is I think. IPA is great software, but Fuel Agent proposes a > different, alternative way of deploying. It's not fundamentally different, it is just capable of other things. > Devananda wrote about "pets" and "cattle", and maybe some want to manage > "pets" rather than "cattle"? Let > users make a choice. IMO this is too high-level of a discussion for Ironic to get bogged down in. Disks can have partitions and be hosted in RAID controllers, and these things _MUST_ come before an OS is put on the disks, but after power control happens. Since Ironic does put OS's on disks, and control power, I believe it is obligated to provide an interface for rich disk configuration. There are valid use cases for _both_ of those things in "cattle", which is a higher level problem that should not cloud the low level interface discussion. So IMO, Ironic needs to provide an interface for agents to richly configure disks, whether IPA supports it or not. Would I like to see these things in IPA so that there isn't a mismatch of features? Yes. Does that matter _now_? Not really.
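To make the disagreement concrete: the kind of rich disk-configuration input being argued about might look something like the following. Every field name here is invented for illustration — this is neither Fuel Agent's actual input format nor a proposed Ironic API:

```python
# Hypothetical disk layout document an agent might fetch from a metadata
# link: two disks, each contributing a partition to a software RAID-1
# that becomes the root filesystem.
layout = {
    "disks": [
        {"device": "/dev/sda", "partitions": [
            {"id": "sda1", "size_mb": 200, "role": "boot"},
            {"id": "sda2", "size_mb": -1, "role": "raid-member"},  # -1: rest of disk
        ]},
        {"device": "/dev/sdb", "partitions": [
            {"id": "sdb1", "size_mb": -1, "role": "raid-member"},
        ]},
    ],
    "mds": [
        {"name": "/dev/md0", "level": 1, "members": ["sda2", "sdb1"],
         "fs": "ext4", "mount": "/"},
    ],
}

def raid_members(layout):
    """Collect the partition ids flagged as software-RAID members."""
    return [p["id"] for d in layout["disks"] for p in d["partitions"]
            if p["role"] == "raid-member"]
```

An agent consuming such a document would build the md device from the flagged members before laying down the image — exactly the step that must happen between power-on and OS installation.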
The FuelAgent can prove out the interface while the features migrate into IPA. > We do not plan to change any Ironic API for the driver, internal or > external (as opposed to IPA, this was done for it). > If there will be no one for Fuel Agent's driver support I think this > driver should be removed from Ironic tree (I heard > this practice is used in Linux kernel). > We have a _hyperv_ driver in Nova.. I think we can have a "something we're not entirely 100% on board with" in Ironic. All of that said, I would admonish FuelAgent developers to work to commit to combine their agent with IPA long term. I would admonish Ironic developers to be receptive to things that users want. It doesn't always mean taking responsibility for implementations, but you _do_ need to consider the pain of not providing interfaces and of forcing people to remain out of tree (remember when Ironic's driver wasn't in Nova's tree?) From sean at dague.net Tue Dec 9 18:46:36 2014 From: sean at dague.net (Sean Dague) Date: Tue, 09 Dec 2014 13:46:36 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <1418145653.4233.14.camel@einstein.kev> References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> <1418144281.4233.5.camel@einstein.kev> <54872BDC.5070407@dague.net> <1418145653.4233.14.camel@einstein.kev> Message-ID: <5487438C.2010504@dague.net> On 12/09/2014 12:20 PM, Kevin L. Mitchell wrote: > On Tue, 2014-12-09 at 12:05 -0500, Sean Dague wrote: >>> I agree that dropping H302 and the grouping checks makes sense. I >> think >>> we should keep the H301, H303, H304, and the basic ordering checks, >>> however; it doesn't seem to me that these would be that difficult to >>> implement or maintain. >> >> Well, be careful what you think is easy - >> https://github.com/openstack-dev/hacking/blob/master/hacking/checks/imports.py >> :) > > So, hacking_import_rules() is very complex. 
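To make the "simple" claim concrete up front, here is a rough, hypothetical sketch (not the actual hacking code, just an illustration of the idea) of what purely textual versions of such checks could look like:

```python
import re

# Hypothetical, simplified sketches of purely textual import checks --
# an illustration only, NOT the real hacking implementations.


def check_one_import_per_line(logical_line):
    """H301-style: reject 'import foo, bar' (one module per import)."""
    if re.match(r'^\s*import\s+\S+\s*,', logical_line):
        yield 0, "H301: one import per line"


def check_no_wildcard_import(logical_line):
    """H303-style: reject 'from foo import *'."""
    if re.match(r'^\s*from\s+\S+\s+import\s+\*', logical_line):
        yield 0, "H303: no wildcard (*) import"


def check_no_relative_import(logical_line):
    """H304-style: reject 'from . import foo' and friends."""
    if re.match(r'^\s*from\s+\.', logical_line):
        yield 0, "H304: no relative imports"
```

Each function yields (offset, message) pairs the way flake8 extensions do; the point is just that none of this needs an import graph or name normalization.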
However, it implements H302 > as well as H301, H303, and H304. I feel it can be simplified to just a > textual match rule if we remove the H302 implementation: H301 just needs > to exclude imports with ',', H303 needs to exclude imports with '*', and > H304 is already implemented as a regular expression match. It looks > like the basic ordering check I was referring to is H306, which isn't > all that complicated. It seems like the rest of the code is related to > the checks which I just agreed should be dropped :) Am I missing > anything? Yes, the following fails H305 and H306. nova/tests/fixtures.py """Fixtures for Nova tests.""" from __future__ import absolute_import import gettext import logging import os import uuid import fixtures from oslo.config import cfg from nova.db import migration from nova.db.sqlalchemy import api as session from nova import service Because name normalization is hard (fixtures is normalized to nova.tests.fixtures so H305 thinks it should be in group 3, and H306 thinks it should be after oslo.config import cfg). To sort things you have to normalize them. -Sean -- Sean Dague http://dague.net From sean at dague.net Tue Dec 9 18:49:00 2014 From: sean at dague.net (Sean Dague) Date: Tue, 09 Dec 2014 13:49:00 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <20141209170730.GS2497@yuggoth.org> References: <5486DF7F.7080706@dague.net> <5487155B.4060902@inaugust.com> <20141209162806.GQ2497@yuggoth.org> <548729D6.1030805@dague.net> <20141209170730.GS2497@yuggoth.org> Message-ID: <5487441C.5020506@dague.net> On 12/09/2014 12:07 PM, Jeremy Stanley wrote: > On 2014-12-09 11:56:54 -0500 (-0500), Sean Dague wrote: >> Honestly, any hard rejection ends up problematic. For instance, it >> means it's impossible to include actual urls in commit messages to >> reference things without a url shortener much of the time. > > Fair enough. 
I think this makes it a human problem which we're not > going to solve by applying more technology. Drop all of H8XX, make > Gerrit preserve votes on commit-message-only patchset updates, > decree no more commit message -1s from reviewers, and make it > socially acceptable to just edit commit messages of changes you > review to bring them up to acceptable standards. I still think -1 for commit message is fine, but it's a human thing, not a computer thing. Because the consumers of the commit messages are humans. And I also think that if a commit message change doesn't retrigger all the tests, people will be a lot happier updating them. -Sean -- Sean Dague http://dague.net From suro.patz at gmail.com Tue Dec 9 19:05:12 2014 From: suro.patz at gmail.com (Surojit Pathak) Date: Tue, 09 Dec 2014 11:05:12 -0800 Subject: [openstack-dev] [hacking] hacking package upgrade dependency with setuptools Message-ID: <548747E8.2040105@gmail.com> Hi all, On a RHEL system, when I upgrade the hacking package from 0.8.0 to 0.9.5, flake8 stops working. Upgrading setuptools resolves the issue, yet the reported versions of pep8 and setuptools do not change with the setuptools upgrade. Any issue in packaging? Any explanation of this behavior? Snippet - [suro at poweredsoured ~]$ pip list | grep hacking hacking (0.8.0) [suro at poweredsoured ~]$ [suro at poweredsoured app]$ sudo pip install hacking==0.9.5 ... [suro at poweredsoured app]$ flake8 neutron/ ... File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 546, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: pep8>=1.4.6 [suro at poweredsoured app]$ pip list | grep pep8 pep8 (1.5.6) [suro at poweredsoured app]$ pip list | grep setuptools setuptools (0.6c11) [suro at poweredsoured app]$ sudo pip install -U setuptools ... Successfully installed setuptools Cleaning up...
[suro at poweredsoured app]$ pip list | grep pep8 pep8 (1.5.6) [suro at poweredsoured app]$ pip list | grep setuptools setuptools (0.6c11) [suro at poweredsoured app]$ flake8 neutron/ [suro at poweredsoured app]$ -- Regards, Surojit Pathak From carl at ecbaldwin.net Tue Dec 9 19:09:12 2014 From: carl at ecbaldwin.net (Carl Baldwin) Date: Tue, 9 Dec 2014 12:09:12 -0700 Subject: [openstack-dev] [neutron] mid-cycle "hot reviews" In-Reply-To: <7A64F4A9F9054721A45DB25C9E5A181B@redhat.com> References: <7A64F4A9F9054721A45DB25C9E5A181B@redhat.com> Message-ID: On Tue, Dec 9, 2014 at 3:33 AM, Miguel Ángel Ajo wrote: > > Hi all! > > It would be great if you could use this thread to post hot reviews on > stuff > that is being worked on during the mid-cycle, where others from different > timezones could participate. I think we've used the etherpad [1] in the past to put hot reviews. I've added some reviews. I don't know if others here are doing the same. Carl [1] https://etherpad.openstack.org/p/neutron-mid-cycle-sprint-dec-2014 From stefano at openstack.org Tue Dec 9 19:11:47 2014 From: stefano at openstack.org (Stefano Maffulli) Date: Tue, 09 Dec 2014 11:11:47 -0800 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: <20141209140407.GO2497@yuggoth.org> References: <20141209125828.GA10149@redhat.redhat.com> <20141209140407.GO2497@yuggoth.org> Message-ID: <54874973.5060903@openstack.org> On 12/09/2014 06:04 AM, Jeremy Stanley wrote: > We already have a solution for tracking the contributor->IRC > mapping--add it to your Foundation Member Profile. For example, mine > is in there already: > > http://www.openstack.org/community/members/profile/5479 I recommend updating your openstack.org member profile and adding your IRC nickname there (and while you're there, update your affiliation history).
There is also a search engine on: http://www.openstack.org/community/members/ /stef From doug at doughellmann.com Tue Dec 9 19:28:13 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 9 Dec 2014 14:28:13 -0500 Subject: [openstack-dev] [oslo][Keystone] Policy graduation In-Reply-To: References: Message-ID: On Dec 9, 2014, at 12:18 PM, Morgan Fainberg wrote: > I would like to propose that we keep the policy library under the oslo program. As with other graduated projects we will maintain a core team that, while including the oslo-core team, will be comprised of the expected individuals from the Identity and other security related teams. > > The change in direction is due to the policy library being more generic and not exactly a clean fit with the OpenStack Identity program. This is the policy rules engine, which is currently used by all (or almost all) OpenStack projects. Based on the continued conversation, it doesn't make sense to take it out of the "common" namespace. > > If there are no concerns with this change of direction we will update the spec [1] to reflect this proposal and continue with the plans to graduate as soon as possible. I know a few of the Oslo cores are already offline, but I think it's safe to update the spec and we can hash out details there. Please make sure to document in the spec why we changed the plan we discussed at the summit. Doug > > [1] https://review.openstack.org/#/c/140161/ > > -- > Morgan Fainberg > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fungi at yuggoth.org Tue Dec 9 19:46:45 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 9 Dec 2014 19:46:45 +0000 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5487441C.5020506@dague.net> References: <5486DF7F.7080706@dague.net> <5487155B.4060902@inaugust.com> <20141209162806.GQ2497@yuggoth.org> <548729D6.1030805@dague.net> <20141209170730.GS2497@yuggoth.org> <5487441C.5020506@dague.net> Message-ID: <20141209194645.GT2497@yuggoth.org> On 2014-12-09 13:49:00 -0500 (-0500), Sean Dague wrote: [...] > And I also think that if a commit message change doesn't retrigger all > the tests, people will be a lot happier updating them. Agreed--though this will need a newer Gerrit plus a new feature in Zuul so it recognizes the difference in the stream. -- Jeremy Stanley From sean at dague.net Tue Dec 9 20:00:48 2014 From: sean at dague.net (Sean Dague) Date: Tue, 09 Dec 2014 15:00:48 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <20141209194645.GT2497@yuggoth.org> References: <5486DF7F.7080706@dague.net> <5487155B.4060902@inaugust.com> <20141209162806.GQ2497@yuggoth.org> <548729D6.1030805@dague.net> <20141209170730.GS2497@yuggoth.org> <5487441C.5020506@dague.net> <20141209194645.GT2497@yuggoth.org> Message-ID: <548754F0.7060507@dague.net> On 12/09/2014 02:46 PM, Jeremy Stanley wrote: > On 2014-12-09 13:49:00 -0500 (-0500), Sean Dague wrote: > [...] >> And I also think that if a commit message change doesn't retrigger all >> the tests, people will be a lot happier updating them. > > Agreed--though this will need a newer Gerrit plus a new feature in > Zuul so it recognizes the difference in the stream. > Yes, it's not a tomorrow thing (I know we need Gerrit 2.9 first). But I think it's the way we should evolve the system. 
-Sean -- Sean Dague http://dague.net From doug at doughellmann.com Tue Dec 9 20:16:56 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 9 Dec 2014 15:16:56 -0500 Subject: [openstack-dev] [oslo] updated instructions for creating a library repository Message-ID: <84B38C96-A935-4462-972F-5495A63584F7@doughellmann.com> Now that the infra manual includes the "Project Creator's Guide", I have updated our wiki page to refer to it. I could use a sanity check to make sure I don't have things in a bad order. If you have a few minutes to help with that, please look over https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary and http://docs.openstack.org/infra/manual/creators.html together. Thanks! Doug From devananda.vdv at gmail.com Tue Dec 9 20:19:09 2014 From: devananda.vdv at gmail.com (Devananda van der Veen) Date: Tue, 09 Dec 2014 20:19:09 +0000 Subject: [openstack-dev] [Ironic] Fuel agent proposal References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> Message-ID: Thank you for explaining in detail what Fuel's use case is. I was lacking this information, and taking the FuelAgent proposal in isolation. Allow me to respond to several points inline... On Tue Dec 09 2014 at 4:08:45 AM Vladimir Kozhukalov < vkozhukalov at mirantis.com> wrote: > Just a short explanation of Fuel use case. > > Fuel use case is not a cloud. > This is a fairly key point, and thank you for bringing it up. Ironic's primary aim is to better OpenStack, and as such, to be part of an "Open Source Cloud Computing platform." [0] Meeting a non-cloud use case has not been a priority for the project as a whole. It is from that perspective that my initial email was written, and I stand by what I said there -- FuelAgent does not appear to be significantly different from IPA when used within a "cloudy" use case.
But, as you've pointed out, that's not your use case :) Enabling use outside of OpenStack has been generally accepted by the team, though I don't believe anyone on the core team has put a lot of effort into developing that yet. As I read this thread, I'm pleased to see more details about Fuel's architecture and goals -- I think there is a potential fit for Ironic here, though several points need further discussion. > Fuel is a deployment tool. We install OS on bare metal servers and on VMs > and then configure this OS using Puppet. We have been using Cobbler as our > OS provisioning tool since the beginning of Fuel. > However, Cobbler assumes using native OS installers (Anaconda and > Debian-installer). For several reasons we decided to > switch to an image based approach for installing OS. > > One of Fuel's features is the ability to provide advanced partitioning > schemes (including software RAIDs, LVM). > Native installers are quite difficult to customize in the field of > partitioning > (that was one of the reasons to switch to an image based approach). Moreover, > we'd like to implement an even more > flexible user experience. > The degree of customization and flexibility which you describe is very understandable within traditional IT shops. Don't get me wrong -- there's nothing inherently bad about wanting to give such flexibility to your users. However, infinite flexibility is counter-productive to two of the primary benefits of cloud computing: repeatability, and consistency. [snip] Regarding Fuel itself, our nearest plan is to get rid of Cobbler because > in the case of an image based approach it is a huge overhead. The question is > which tool we can use instead of Cobbler. We need power management, > we need TFTP management, we need DHCP management. That is > exactly what Ironic is able to do. >
Ironic provides a vendor-neutral abstraction for power management and image deployment, but Ironic does not implement any DHCP management - Neutron is responsible for that, and Ironic calls out to Neutron's API only to adjust dhcpboot parameters. At no point is Ironic responsible for IP or DNS assignment. This same view is echoed in the spec [1] which I have left comments on: > Cobbler manages DHCP, DNS, TFTP services ... > OpenStack has Ironic in its core which is capable to do the same ... > Ironic can manage DHCP and it is planned to implement dnsmasq plugin. To reiterate, Ironic does not manage DHCP or DNS, it never has, and such is not on the roadmap for Kilo [2]. Two specs related to this were proposed last month [3] -- but a spec proposal does not equal project plans. One of the specs has been abandoned, and I am still waiting for the author to rewrite the other one. Neither are approved nor targeted to Kilo. In summary, if I understand correctly, it seems as though you're trying to fit Ironic into Cobbler's way of doing things, rather than recognize that Ironic approaches provisioning in a fundamentally different way. 
Your use case:
* is not cloud-like
* does not include Nova or Neutron, but will duplicate functionality of both (you need a scheduler and all the logic within nova.virt.ironic, and something to manage DHCP and DNS assignment)
* would use Ironic to manage diverse hardware, which naturally requires some operator-driven customization, but still exposes the messy configuration bits^D^Dchoices to users at deploy time
* duplicates some of the functionality already available in other drivers
There are certain aspects of the proposal which I like, though:
* using SSH rather than HTTP for remote access to the deploy agent
* support for putting the root partition on a software RAID
* integration with another provisioning system, without any API changes
Regards, -Devananda [0] https://wiki.openstack.org/wiki/Main_Page [1] https://review.openstack.org/#/c/138301/8/specs/6.1/substitution-cobbler-with-openstack-ironic.rst [2] https://launchpad.net/ironic/kilo [3] https://review.openstack.org/#/c/132511/ and https://review.openstack.org/#/c/132744/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue Dec 9 20:20:15 2014 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 09 Dec 2014 15:20:15 -0500 Subject: [openstack-dev] [Mistral] Query on creating multiple resources In-Reply-To: <8CC8FE04-056A-4574-926B-BAF5B7749C84@mirantis.com> References: <1418048204426.95510@persistent.com> <1418048476946.19042@persistent.com> <1418050120681.46696@persistent.com> <5485C560.9060605@redhat.com> <8CC8FE04-056A-4574-926B-BAF5B7749C84@mirantis.com> Message-ID: <5487597F.9030500@redhat.com> On 09/12/14 03:48, Renat Akhmerov wrote: > Hey, > > I think it's a question of what the final goal is. For just creating security groups as a resource I think Georgy and Zane are right, just use Heat. If the goal is to try Mistral or have this simple workflow as part of something more complex then it's totally fine to use Mistral.
Sorry, I'm probably biased because Mistral is our baby :). Anyway, Nikolay has already answered the question technically, and this "for-each" feature will be available officially in about 2 weeks. :) They're not mutually exclusive, of course, and to clarify I wasn't suggesting replacing Mistral with Heat; I was suggesting replacing a bunch of 'create security group' steps in a larger workflow with a single 'create stack' step. In general, though:
- When you are just trying to get to a particular end state and it doesn't matter how you get there, Heat is a good solution.
- When you need to carry out a particular series of steps, and it is the steps that are well-defined, not the end state, then Mistral is a good solution.
- When you have a well-defined end state but some steps need to be done in a particular way that isn't supported by Heat, then Mistral can be a solution (it's not a _good_ solution, but that isn't a criticism because it isn't Mistral's job to make up for deficiencies in Heat).
- Both services are _highly_ complementary.
For example, let's say you have a batch job to run regularly: you want to provision a server, do some work on it, and then remove the server when the work is complete. (An example that a lot of people will be doing pretty regularly might be building a custom VM image and uploading it to Glance.) This is a classic example of a workflow, and you should use Mistral to implement it. Now let's say that rather than just a single server you have a complex group of resources that need to be set up prior to running the job. You could encode all of the steps required to correctly set up and tear down all of those resources in the Mistral workflow, but that would be a mistake. While the overall process is still a workflow, the desired state after creating all of the resources but before running the job is known, and it doesn't matter how you get there.
Therefore it's better to define the resources in a Heat template: unless you are doing something really weird it will Just Work(TM) for creating them all in the right order with optimal parallelisation, it knows how to delete them afterwards too without having to write it again backwards, and you can easily test it in isolation from the rest of the workflow. So you would replace the steps in the workflow that create and delete the server with steps that create and delete a stack. >> Create VM workflow was a demo example. Mistral potentially can be used by Heat or other orchestration tools to do actual interaction with the API, but for users it might be easier to use Heat functionality. > > I kind of disagree with that statement. Mistral can be used by whoever finds it useful for their needs. The standard "create_instance" workflow (which is in "resources/workflows/create_instance.yaml") is not just a demo example either. It does a lot of good stuff you may really need in your case (e.g. retry policies). Even though it's true that it has some limitations we're aware of. For example, when it comes to configuring a network for a newly created instance it's now missing network related parameters to be able to alter behavior. I agree that it's unlikely that Heat should replace Mistral in many of the Mistral demo scenarios. I do think you could make a strong argument that Heat should replace *Nova* in many of those scenarios though. cheers, Zane. From berrange at redhat.com Tue Dec 9 17:20:42 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Tue, 9 Dec 2014 17:20:42 +0000 Subject: [openstack-dev] [Nova] question about "Get Guest Info" row in HypervisorSupportMatrix In-Reply-To: References: Message-ID: <20141209172042.GQ29167@redhat.com> On Tue, Dec 09, 2014 at 06:15:01PM +0100, Markus Zoeller wrote: > > > On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote: > > > > > > Hello!
> > > > > > There is a feature in HypervisorSupportMatrix > > > (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called "Get > Guest > > > Info". Does anybody know what it means? I haven't found anything > like > > > this in the nova api, in horizon, or in the nova command line. > > I think this maps to the nova driver function "get_info": > https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4054 > > I believe (and didn't double-check) that this is used e.g. by the > Nova CLI via the `nova show [--minimal] ` command. Ah yes, that would make sense. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From kevin.mitchell at rackspace.com Tue Dec 9 20:23:18 2014 From: kevin.mitchell at rackspace.com (Kevin L. Mitchell) Date: Tue, 9 Dec 2014 14:23:18 -0600 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5487438C.2010504@dague.net> References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> <1418144281.4233.5.camel@einstein.kev> <54872BDC.5070407@dague.net> <1418145653.4233.14.camel@einstein.kev> <5487438C.2010504@dague.net> Message-ID: <1418156598.4233.21.camel@einstein.kev> On Tue, 2014-12-09 at 13:46 -0500, Sean Dague wrote: > Yes, the following fails H305 and H306.
> > nova/tests/fixtures.py > > """Fixtures for Nova tests.""" > from __future__ import absolute_import > > import gettext > import logging > import os > import uuid > > import fixtures > from oslo.config import cfg > > from nova.db import migration > from nova.db.sqlalchemy import api as session > from nova import service > > > Because name normalization is hard (fixtures is normalized to > nova.tests.fixtures so H305 thinks it should be in group 3, and H306 > thinks it should be after oslo.config import cfg). > > To sort things you have to normalize them. I agree you have to normalize imports to sort them, but to my mind the appropriate normalization here is purely textual; we shouldn't be expecting any relative imports (and should raise an error if there are any). Still, that does show that some work needs to be done to the simpler H306 test (probably involving changes to the core import normalization)? -- Kevin L. Mitchell Rackspace From devananda.vdv at gmail.com Tue Dec 9 20:54:39 2014 From: devananda.vdv at gmail.com (Devananda van der Veen) Date: Tue, 09 Dec 2014 20:54:39 +0000 Subject: [openstack-dev] [Ironic] Fuel agent proposal References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> <20141209150048.GB5494@jimrollenhagen.com> <548719A6.9030303@mirantis.com> Message-ID: On Tue Dec 09 2014 at 7:49:32 AM Yuriy Zveryanskyy < yzveryanskyy at mirantis.com> wrote: > On 12/09/2014 05:00 PM, Jim Rollenhagen wrote: > > On Tue, Dec 09, 2014 at 04:01:07PM +0400, Vladimir Kozhukalov wrote: > > >> Many many various cases are possible. If you ask why we'd like to > support > >> all those cases, the answer is simple: > >> because our users want us to support all those cases. > >> Obviously, many of those cases can not be implemented as image > internals, > >> some cases can not be also implemented on > >> configuration stage (placing root fs on lvm device). 
> >> > >> As far as those use cases were rejected to be implemented in term of > IPA, > >> we implemented so called Fuel Agent. > > This is *precisely* why I disagree with adding this driver. > > > > Nearly every feature that is listed here has been talked about before, > > within the Ironic community. Software RAID, LVM, user choosing the > > partition layout. These were reected from IPA because they do not fit in > > *Ironic*, not because they don't fit in IPA. > > Yes, they do not fit in Ironic *core* but this is a *driver*. > There is iLO driver for example. Good or bad is iLO management technology? > I don't know. But it is an existing vendor's solution. I should buy or rent > HP server for tests or experiments with iLO driver. Fuel is widely used > solution for deployment, and it is open-source. I think to have Fuel Agent > driver in Ironic will be better than driver for rare hardware XYZ for > example. > > This argument is completely hollow. Fuel is not a vendor-specific hardware-enablement driver. It *is* an open-source deployment driver providing much the same functionality as another open-source deployment driver which is already integrated with the project. To make my point another way, could I use Fuel with HP iLO driver? (the answer should be "yes" because they fill different roles within Ironic). But, on the other hand, could I use Fuel with the IPA driver? (nope - definitely not - they do the same thing.) -Deva -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivarlazzaro at gmail.com Tue Dec 9 20:59:13 2014 From: ivarlazzaro at gmail.com (Ivar Lazzaro) Date: Tue, 9 Dec 2014 12:59:13 -0800 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: I agree with Salvatore that the split is not an easy thing to achieve for vendors, and I would like to bring up my case to see if there are ways to make this at least a bit simpler. 
At some point I had the need to backport vendor code from Juno to Icehouse (see the first attempt here [0]). The approach in [0] was a weird one that put unnecessary burden on infra, neutron cores and even packagers, so I decided to move to a more decoupled approach that basically meant completely splitting my code from Neutron. You can find the result here [1]. The focal points of this approach are:
* **all** the vendor code is removed;
* Neutron is used as a dependency, pulled directly from github for UTs (see test-requirements [2]) and explicitly required when installing the plugin;
* the database and schema are the same as Neutron's;
* a migration script exists for this driver, which uses a different (and unique) version_table (see env.py [3]);
* entry points are properly used in setup.cfg [4] in order to provide migration scripts and Driver/Plugin shortcuts for Neutron;
* UTs are run by including Neutron in the venv [2];
* the boilerplate is taken care of by cookiecutter [5].
The advantage of the above approach is that it's very simple to pull off (the only things you need are cookiecutter and a repo, and then you can just replicate the same tree structure that existed in Neutron for your own vendor code). It also has the advantage of removing all the vendor code from Neutron (did I say that already?). As far as the CI is concerned, it just needs to "learn" how to install the new plugin, which will require Neutron to be pre-existent. The typical installation workflow would be:
- install Neutron normally;
- pull the vendor driver from PyPI;
- run the vendor db migration script;
- do everything else (configuration and execution) just like it was done before.
Note that this same satellite approach is used by GBP (I know this is a bad word that once brought hundreds of ML replies, but that's just an example :) ) for the Juno timeframe [6]. This shows that the very same thing can be easily done for services.
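To make the entry-point part concrete, a hypothetical setup.cfg fragment (the package, script, and class names here are invented placeholders; the real one is in [4] above) that registers both a migration CLI and an ML2 mechanism driver would look roughly like this:

```ini
# Hypothetical setup.cfg fragment for an out-of-tree ML2 mechanism
# driver; 'example_ml2' and the class/script names are placeholders.
[entry_points]
console_scripts =
    example-db-manage = example_ml2.db.migration.cli:main
neutron.ml2.mechanism_drivers =
    example_mech = example_ml2.drivers.mech_example:ExampleMechanismDriver
```

With an entry point registered under neutron.ml2.mechanism_drivers, the driver can be enabled in ml2_conf.ini via mechanism_drivers = example_mech just like an in-tree one, which is what makes this satellite packaging transparent to operators.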
As far as ML2 is concerned, I think we should split it as well in order to treat all the plugins equally, but with the following caveats:
* ML2 will be in an openstack repo under the networking program (kind of obvious);
* the drivers can decide whether to stay in tree with ML2 or not (for a better community effort, but they will definitely evolve more slowly);
* don't care about the governance: Neutron will be in charge of this repo and will have the ability to promote whoever they want when needed.
As far as co-gating is concerned, I think that using the above approach the breakage will exist just as long as the infra job doesn't understand how to install the ML2 driver from its own repo. I don't see it as a big issue, but maybe it's just me, and my fabulous world where stuff works for no good reason. We could at least ask the infra team to see if it's feasible. Moreover, this is work that we may need to do anyway! So it's better to just start it now, thus creating an example for all the vendors that have to go through the split (back on Gary's point). Appreciate your feedback, Ivar. [0] https://review.openstack.org/#/c/123596/ [1] https://github.com/noironetworks/apic-ml2-driver/tree/icehouse [2] https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/test-requirements.txt [3] https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/apic_ml2/neutron/db/migration/alembic_migrations/env.py [4] https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/setup.cfg [5] https://github.com/openstack-dev/cookiecutter [6] https://github.com/stackforge/group-based-policy On Tue, Dec 9, 2014 at 9:53 AM, Salvatore Orlando wrote: > > On 9 December 2014 at 17:32, Armando M. wrote: >> >> >> On 9 December 2014 at 09:41, Salvatore Orlando >> wrote: >>> I would like to chime in on this discussion wearing my plugin developer >>> hat.
>>> We (the VMware team) have looked very carefully at the current proposal >>> for splitting off drivers and plugins from the main source code tree. >>> Therefore the concerns you've heard from Gary are not just ramblings but >>> are the results of careful examination of this proposal. >>> >>> While we agree with the final goal, the feeling is that for many plugin >>> maintainers this process change might be too much for what can be >>> accomplished in a single release cycle. >>> >> We actually gave a lot more than a cycle: >> >> >> https://review.openstack.org/#/c/134680/16/specs/kilo/core-vendor-decomposition.rst >> LINE 416 >> >> And in all honesty, I can only say that getting this done by such an >> experienced team as the Neutron team @VMware shouldn't take that long. >> > > We are probably not experienced enough. We always love to learn new things. > > >> >> By the way, if Kyle can do it in the teeny tiny time that he has left >> after his PTL duties, then anyone can do it! :) >> >> https://review.openstack.org/#/c/140191/ >> > > I think I should be able to use mv & git push as well - I think however > there's a bit more to it than that. > > >> >> As a member of the drivers team, I am still very supportive of the split; >>> I just want to make sure that it's made in a sustainable way. I also >>> understand that "sustainability" has been one of the requirements of the >>> current proposal, and therefore we should all be on the same page on this >>> aspect. >>> >>> However, we did a simple exercise trying to assess the amount of work >>> needed to achieve something which might be acceptable to satisfy the >>> process. Without going into too many details, this requires efforts for: >>> >>> - refactoring the code to achieve a plugin module simple and thin enough to >>> satisfy the requirements.
Unfortunately a radical approach like the one in >>> [1] with a reference to an external library is not pursuable for us >>> >>> - maintaining code repositories outside of the neutron scope and the >>> necessary infrastructure >>> >>> - reinforcing our CI infrastructure, and improve our error detection and >>> log analysis capabilities to improve reaction times upon failures triggered >>> by upstream changes. As you know, even if the plugin interface is >>> solid-ish, the dependency on the db base class increases the chances of >>> upstream changes breaking 3rd party plugins. >>> >> >> No-one is advocating for approach laid out in [1], but a lot of code can >> be moved elsewhere (like the nsxlib) without too much effort. Don't forget >> that not so long ago I was the maintainer of this plugin and the one who >> built the VMware NSX CI; I know very well what it takes to scope this >> effort, and I can support you in the process. >> > > Thanks for this clarification. I was sure that you guys were not > advocating for a ninja-split thing, but I wanted just to be sure of that. > I'm also pretty sure our engineering team values your support. > >> The feedback from our engineering team is that satisfying the >>> requirements of this new process might not be feasible in the Kilo >>> timeframe, both for existing plugins and for new plugins and drivers that >>> should be upstreamed (there are a few proposed on neutron-specs at the >>> moment, which are all in -2 status considering the impending approval of >>> the split out). >>> >> No new plugins can and will be accepted if they do not adopt the proposed >> model, let's be very clear about this. >> > > This is also what I gathered from the proposal. It seems that you're > however stating that there might be some flexibility in defining how much a > plugin complies with the new model. I will need to go back to the drawing > board with the rest of my team and see in which way this can work for us. 
> > >> The questions I would like to bring to the wider community are therefore >>> the following: >>> >>> 1 - Is there a possibility of making a further concession on the current >>> proposal, where maintainers are encouraged to experiment with the plugin >>> split in Kilo, but will actually be required to do it in the next release? >>> >> This is exactly what the spec is proposing: get started now, and it does >> not matter if you don't finish in time. >> > > I think the deprecation note at line 416 still scares people off a bit. To > me your word is enough, no change is needed. > >> 2 - What could be considered as acceptable for a new plugin? I >>> understand that they would be accepted only as "thin integration modules", >>> which ideally should just be a pointer to code living somewhere else. I'm >>> not questioning the validity of this approach, but it has been brought to >>> my attention that this will actually be troubling for teams which have made >>> an investment in the previous release cycles to upstream plugins following >>> the "old" process >>> >> You are not alone. Other efforts went through the same process [1, 2, 3]. >> Adjusting is a way of life. No-one is advocating for throwing away existing >> investment. This proposal actually promotes new and pre-existing investment. >> >> [1] https://review.openstack.org/#/c/104452/ >> [2] https://review.openstack.org/#/c/103728/ >> [3] https://review.openstack.org/#/c/136091/ >> > 3 - Regarding the above discussion on "ML2 or not ML2". The point on >>> co-gating is well taken. Eventually we'd like to remove this binding - >>> because I believe the ML2 subteam would also like to have more freedom on >>> their plugin. Do we already have an idea about how to do that without >>> completely moving away from the db_base class approach? >>> >> Sure, if you'd like to participate in the process, we can only welcome you! >> > > I actually asked you if you already had an idea... should I take that as a > no?
> >> Thanks for your attention and for reading through this >>> >>> Salvatore >>> >>> [1] >>> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/vmware/plugin.py#n22 >>> >>> On 8 December 2014 at 21:51, Maru Newby wrote: >>> >>>> >>>> On Dec 7, 2014, at 10:51 AM, Gary Kotton wrote: >>>> >>>> > Hi Kyle, >>>> > I am not missing the point. I understand the proposal. I just think >>>> that it has some shortcomings (unless I misunderstand, which will certainly >>>> not be the first time and most definitely not the last). The thinning out >>>> is to have a shim in place. I understand this and this will be the entry >>>> point for the plugin. I do not have a concern for this. My concern is that >>>> we are not doing this with the ML2 off the bat. That should lead by example >>>> as it is our reference architecture. Let's not kid anyone, but we are going >>>> to hit some problems with the decomposition. I would prefer that it be done >>>> with the default implementation. Why? >>>> >>>> The proposal is to move vendor-specific logic out of the tree to >>>> increase vendor control over such code while decreasing load on reviewers. >>>> ML2 doesn't contain vendor-specific logic - that's the province of ML2 >>>> drivers - so it is not a good target for the proposed decomposition by >>>> itself. >>>> >>>> >>>> > - Because we will fix them quicker as it is something that >>>> prevents Neutron from moving forwards >>>> > - We will just need to fix in one place first and not in N >>>> (where N is the vendor plugins) >>>> > - This is a community effort - so we will have a lot more eyes >>>> on it >>>> > - It will provide a reference architecture for all new plugins >>>> that want to be added to the tree >>>> > -
It will provide a working example for plugins that are already >>>> in tree and are to be replaced by the shim >>>> > If we really want to do this, we can say freeze all development >>>> (which is just approvals for patches) for a few days so that we can >>>> just focus on this. I stated what I think should be the process on the >>>> review. For those who do not feel like finding the link: >>>> > - Create a stackforge project for ML2 >>>> > - Create the shim in Neutron >>>> > - Update devstack to use the two repos and the shim >>>> > When #3 is up and running we switch to that as the gate. Then we >>>> start a stopwatch on all other plugins. >>>> >>>> As was pointed out on the spec (see Miguel's comment on r15), the ML2 >>>> plugin and the OVS mechanism driver need to remain in the main Neutron repo >>>> for now. Neutron gates on ML2+OVS and landing a breaking change in the >>>> Neutron repo along with its corresponding fix to a separate ML2 repo would >>>> be all but impossible under the current integrated gating scheme. >>>> Plugins/drivers that do not gate Neutron have no such constraint. >>>> >>>> >>>> Maru >>>> >>>> >>>> > Sure, I'll catch you on IRC tomorrow. I guess that you guys will bash >>>> out the details at the meetup. Sadly I will not be able to attend - so you >>>> will have to delay on the tar and feathers. >>>> > Thanks >>>> > Gary >>>> > >>>> > >>>> > From: "mestery at mestery.com" >>>> > Reply-To: OpenStack List >>>> > Date: Sunday, December 7, 2014 at 7:19 PM >>>> > To: OpenStack List >>>> > Cc: "openstack at lists.openstack.org" >>>> > Subject: Re: [openstack-dev] [Neutron] Core/Vendor code decomposition >>>> > >>>> > Gary, you are still missing the point of this proposal. Please see my >>>> comments in review. We are not forcing things out of tree, we are thinning >>>> them. The text you quoted in the review makes that clear.
We will look at >>>> further decomposing ML2 post Kilo, but we have to be realistic with what we >>>> can accomplish during Kilo. >>>> > >>>> > Find me on IRC Monday morning and we can discuss further if you still >>>> have questions and concerns. >>>> > >>>> > Thanks! >>>> > Kyle >>>> > >>>> > On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton >>>> wrote: >>>> >> Hi, >>>> >> I have raised my concerns on the proposal. I think that all plugins >>>> should be treated on an equal footing. My main concern is having the ML2 >>>> plugin in tree whilst the others will be moved out of tree will be >>>> problematic. I think that the model will be complete if the ML2 was also >>>> out of tree. This will help crystalize the idea and make sure that the >>>> model works correctly. >>>> >> Thanks >>>> >> Gary >>>> >> >>>> >> From: "Armando M." >>>> >> Reply-To: OpenStack List >>>> >> Date: Saturday, December 6, 2014 at 1:04 AM >>>> >> To: OpenStack List , " >>>> openstack at lists.openstack.org" >>>> >> Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition >>>> >> >>>> >> Hi folks, >>>> >> >>>> >> For a few weeks now the Neutron team has worked tirelessly on [1]. >>>> >> >>>> >> This initiative stems from the fact that as the project matures, >>>> evolution of processes and contribution guidelines need to evolve with it. >>>> This is to ensure that the project can keep on thriving in order to meet >>>> the needs of an ever growing community. >>>> >> >>>> >> The effort of documenting intentions, and fleshing out the various >>>> details of the proposal is about to reach an end, and we'll soon kick the >>>> tires to put the proposal into practice. Since the spec has grown pretty >>>> big, I'll try to capture the tl;dr below. >>>> >> >>>> >> If you have any comment please do not hesitate to raise them here >>>> and/or reach out to us. >>>> >> >>>> >> tl;dr >>> >>>> >> >>>> >> From the Kilo release, we'll initiate a set of steps to change the >>>> following areas: >>>> >> ? 
Code structure: every plugin or driver that exists or wants >>>> to exist as part of the Neutron project is decomposed into a slim vendor >>>> integration (which lives in the Neutron repo), plus a bulkier vendor >>>> library (which lives in an independent publicly available repo); >>>> >> - Contribution process: this extends to the following aspects: >>>> >> - Design and Development: the process is largely >>>> unchanged for the part that pertains to the vendor integration; the maintainer >>>> team is fully self-governed for the design and development of the vendor >>>> library; >>>> >> - Testing and Continuous Integration: maintainers will >>>> be required to support their vendor integration with 3rd party CI testing; the >>>> requirements for 3rd party CI testing are largely unchanged; >>>> >> - Defect management: the process is largely unchanged, >>>> issues affecting the vendor library can be tracked with whichever >>>> tool/process the maintainers see fit. In cases where vendor library fixes >>>> need to be reflected in the vendor integration, the usual OpenStack defect >>>> management applies. >>>> >> - Documentation: there will be some changes to the way >>>> plugins and drivers are documented with the intention of promoting >>>> discoverability of the integrated solutions. >>>> >> - Adoption and transition plan: we strongly advise maintainers >>>> to stay abreast of the developments of this effort, as their code, their >>>> CI, etc. will be affected. The core team will provide guidelines and support >>>> throughout this cycle to ensure a smooth transition. >>>> >> To learn more, please refer to [1].
>>>> >> >>>> >> Many thanks, >>>> >> Armando >>>> >> >>>> >> [1] https://review.openstack.org/#/c/134680 >>>> >> >>>> >> _______________________________________________ >>>> >> OpenStack-dev mailing list >>>> >> OpenStack-dev at lists.openstack.org >>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From devananda.vdv at gmail.com Tue Dec 9 21:03:29 2014 From: devananda.vdv at gmail.com (Devananda van der Veen) Date: Tue, 09 Dec 2014 21:03:29 +0000 Subject: [openstack-dev] [Ironic] Fuel agent proposal References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> <20141209150048.GB5494@jimrollenhagen.com> <1A3C52DFCD06494D8528644858247BF0178162E7@EX10MBOX03.pnnl.gov> Message-ID: On Tue Dec 09 2014 at 10:13:52 AM Vladimir Kozhukalov < vkozhukalov at mirantis.com> wrote: > Kevin, > > Just to make sure everyone understands what Fuel Agent is about. Fuel > Agent is agnostic to image format. There are 3 possibilities for image > format > 1) DISK IMAGE contains GPT/MBR table and all partitions and metadata in > case of md or lvm. That is just something like what you get when you run 'dd > if=/dev/sda of=disk_image.raw' > This is what IPA driver does today. > 2) FS IMAGE contains fs. Disk contains some partitions which then could be > used to create an md device, or a volume group containing logical volumes. We then > can put a file system over a plain partition or md device or logical volume. > This type of image is what you get when you run 'dd if=/dev/sdaN > of=fs_image.raw' > This is what PXE driver does today, but it does so over a remote iSCSI connection. Work is being done to add support for this to IPA [0] > 3) TAR IMAGE contains files. It is when you run 'tar cf tar_image.tar /' > > Currently in Fuel we use FS images. Fuel Agent creates partitions, md and > lvm devices and then downloads FS images and puts them on partition devices > (/dev/sdaN) or on an lvm device (/dev/mapper/vgname/lvname) or md device > (/dev/md0) > > I believe the IPA team would welcome contributions that add support for software RAID for the root partition. > Fuel Agent is also able to install and configure grub. > Again, I think this would be welcomed by the IPA team...
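The three image formats Vladimir describes can be illustrated with a small file-backed sketch (ordinary files stand in for real block devices like /dev/sda, so this runs without root; the comments show the real-device equivalents):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# 1) DISK IMAGE: byte-for-byte copy of a whole disk, partition table included.
#    Real-device form: dd if=/dev/sda of=disk_image.raw
dd if=/dev/zero of=fake_whole_disk bs=1M count=4 2>/dev/null
dd if=fake_whole_disk of=disk_image.raw 2>/dev/null

# 2) FS IMAGE: copy of a single partition (one filesystem), no partition table.
#    Real-device form: dd if=/dev/sdaN of=fs_image.raw
dd if=/dev/zero of=fake_partition bs=1M count=1 2>/dev/null
dd if=fake_partition of=fs_image.raw 2>/dev/null

# 3) TAR IMAGE: just the files; partitioning/md/lvm must be recreated at
#    deploy time. Real form: tar cf tar_image.tar /
mkdir -p rootfs/etc && echo hello > rootfs/etc/motd
tar cf tar_image.tar -C rootfs .

ls disk_image.raw fs_image.raw tar_image.tar
```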
If this is what FuelAgent is about, why is there so much resistance to contributing that functionality to the component which is already integrated with Ironic? Why complicate matters for both users and developers by adding *another* deploy agent that does (or will soon do) the same things? -Deva [0] https://blueprints.launchpad.net/ironic/+spec/partition-image-support-for-agent-driver https://review.openstack.org/137363 -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe.gordon0 at gmail.com Tue Dec 9 21:24:50 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Tue, 9 Dec 2014 13:24:50 -0800 Subject: [openstack-dev] [hacking] hacking package upgrade dependency with setuptools In-Reply-To: <548747E8.2040105@gmail.com> References: <548747E8.2040105@gmail.com> Message-ID: On Tue, Dec 9, 2014 at 11:05 AM, Surojit Pathak wrote: > Hi all, > > On a RHEL system, as I upgrade hacking package from 0.8.0 to 0.9.5, I see > flake8 stops working. Upgrading setuptools resolves the issue. But I do not > see a change in version for pep8 or setuptools, with the upgrade in > setuptools. > > Any issue in packaging? Any explanation of this behavior? > > Snippet - > [suro at poweredsoured ~]$ pip list | grep hacking > hacking (0.8.0) > [suro at poweredsoured ~]$ > [suro at poweredsoured app]$ sudo pip install hacking==0.9.5 > ... > [suro at poweredsoured app]$ flake8 neutron/ > ... > File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 546, in > resolve > raise DistributionNotFound(req) > pkg_resources.DistributionNotFound: pep8>=1.4.6 > [suro at poweredsoured app]$ pip list | grep pep8 > pep8 (1.5.6) > [suro at poweredsoured app]$ pip list | grep setuptools > setuptools (0.6c11) > [suro at poweredsoured app]$ sudo pip install -U setuptools > ... > Successfully installed setuptools > Cleaning up... 
> [suro at poweredsoured app]$ pip list | grep pep8 > pep8 (1.5.6) > [suro at poweredsoured app]$ pip list | grep setuptools > setuptools (0.6c11) > [suro at poweredsoured app]$ flake8 neutron/ > [suro at poweredsoured app]$ Could this be pbr related? -pbr>=0.5.21,<1.0 +pbr>=0.6,!=0.7,<1.0 > > > -- > Regards, > Surojit Pathak > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe.gordon0 at gmail.com Tue Dec 9 21:32:05 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Tue, 9 Dec 2014 13:32:05 -0800 Subject: [openstack-dev] [qa] How to delete a VM which is in ERROR state? In-Reply-To: References: Message-ID: On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) wrote: > Hi, > > I have a VM which is in ERROR state. > > > +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ > > | ID | Name > | Status | Task State | Power State | Networks | > > > +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ > > | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | > cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR | - | > NOSTATE | | > > I tried in both CLI "nova delete" and Horizon "terminate instance". > Both accepted the delete command without any error. > However, the VM never got deleted. > > Is there a way to remove the VM? > What version of nova are you using? This is definitely a serious bug, you should be able to delete an instance in error state. Can you file a bug that includes steps on how to reproduce the bug along with all relevant logs.
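For what it's worth, a workaround that is often suggested while such a bug is investigated (it requires admin credentials against the live cloud, and does not replace filing the bug) is to reset the instance's state and then retry the delete:

```shell
# Workaround sketch: clear the ERROR state, then delete again.
nova reset-state --active 1cb5bf96-619c-4174-baae-dd0d8c3d40c5
nova delete 1cb5bf96-619c-4174-baae-dd0d8c3d40c5

# If the delete still hangs, force-delete bypasses soft-delete handling:
nova force-delete 1cb5bf96-619c-4174-baae-dd0d8c3d40c5
```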
bugs.launchpad.net/nova > > Thanks, > Danny > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From devananda.vdv at gmail.com Tue Dec 9 22:06:33 2014 From: devananda.vdv at gmail.com (Devananda van der Veen) Date: Tue, 09 Dec 2014 22:06:33 +0000 Subject: [openstack-dev] [Ironic] Fuel agent proposal References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> <20141209150048.GB5494@jimrollenhagen.com> <1A3C52DFCD06494D8528644858247BF0178162E7@EX10MBOX03.pnnl.gov> Message-ID: On Tue Dec 09 2014 at 9:45:51 AM Fox, Kevin M wrote: > We've been interested in Ironic as a replacement for Cobbler for some of > our systems and have been kicking the tires a bit recently. > > While initially I thought this thread was probably another "Fuel not > playing well with the community" kind of thing, I'm not thinking that any > more. It's deeper than that. > There are aspects to both conversations here, and you raise many valid points. Cloud provisioning is great. I really REALLY like it. But one of the things > that makes it great is the nice, pretty, cute, uniform, standard "hardware" > the vm gives the user. Ideally, the physical hardware would behave the > same. But, > "No Battle Plan Survives Contact With the Enemy". The sad reality is, > most hardware is different from each other. Different drivers, different > firmware, different different different. > Indeed, hardware is different. And no matter how homogeneous you *think* it is, at some point, some hardware is going to fail^D^D^Dbehave differently than some other piece of hardware. One of the primary goals of Ironic is to provide a common *abstraction* to all the vendor differences, driver differences, and hardware differences.
There's no magic in that -- underneath the covers, each driver is going to have to deal with the unpleasant realities of actual hardware that is actually different. > One way the cloud enables this isolation is by forcing the cloud admins > to install things and deal with the grungy hardware to make the interface > nice and clean for the user. For example, if you want greater mean time > between failures of nova compute nodes, you probably use a raid 1. Sure, > it's kind of a pet thing to do, but it's up to the cloud admin to > decide what's "better", buying more hardware, or paying for more admin/user > time. Extra hard drives are dirt cheap... > > So, in reality Ironic is playing in a space somewhere between "I want to > use cloud tools to deploy hardware, yay!" and "ewww.., physical hardware's > nasty. You have to know all these extra things and do all these extra > things that you don't have to do with a vm"... I believe Ironic's going to > need to be able to deal with this messiness in as clean a way as possible. If by "clean" you mean, expose a common abstraction on top of all those messy differences -- then we're on the same page. I would welcome any feedback as to where that abstraction leaks today, and on both spec and code reviews that would degrade or violate that abstraction layer. I think it is one of, if not *the*, defining characteristics of the project. > But that's my opinion. If the team feels it's not a valid use case, then > we'll just have to use something else for our needs. I really really want > to be able to use heat to deploy whole physical distributed systems though. > > Today, we're using software raid over two disks to deploy our nova > compute. Why? We have some very old disks we recovered for one of our > clouds and they fail often. nova-compute is pet enough to benefit somewhat > from being able to swap out a disk without much effort.
If we were to use > Ironic to provision the compute nodes, we need to support a way to do the > same. > I have made the (apparently incorrect) assumption that anyone running anything sensitive to disk failures in production would naturally have a hardware RAID, and that, therefore, Ironic should be capable of setting up that RAID in accordance with a description in the Nova flavor metadata -- but did not need to be concerned with software RAIDs. Clearly, there are several folks who have the same use-case in mind, but do not have hardware RAID cards in their servers, so my initial assumption was incorrect :) I'm fairly sure that the IPA team would welcome contributions to this effect. We're looking into ways of building an image that has a software raid > pre-set up, and expands it on boot. Awesome! I hope that work will make its way into diskimage-builder ;) (As an aside, I suggested this to the Fuel team back in Atlanta...) > This requires each image to be customized for this case though. I can see > Fuel not wanting to provide two different sets of images, "hardware raid" > and "software raid", that have the same contents in them, with just > different partitioning layouts... If we want users to not have to care > about partition layout, this is also not ideal... > End-users are probably not generating their own images for bare metal (unless user == operator, in which case, it should be fine). > Assuming Ironic can be convinced that these features really would be > needed, perhaps the solution is a middle ground between the pxe driver and > the agent? > I've been rallying for a convergence between the feature sets of these drivers -- specifically, that the agent should support partition-based images, and also support copy-over-iscsi as a deployment model. In parallel, Lucas had started working on splitting the deploy interface into both boot and deploy, at which point we may be able to deprecate the current family of pxe_* drivers. But I'm birdwalking...
> Associate partition information at the flavor level. The admin can decide > the best partitioning layout for given hardware... The user doesn't have > to care any more. Two flavors for the same hardware could be "4 9's" or "5 > 9's" or something that way. > Bingo. This is the approach we've been discussing over the past two years - nova flavors could include metadata which get passed down to Ironic and applied at deploy-time - but it hasn't been as high a priority as other things. Though not specifically covering partitions, there are specs up for Nova [0] and Ironic [1] for this workflow. > Modify the agent to support a pxe style image in addition to full layout, > and have the agent partition/setup raid and lay down the image into it. > Modify the agent to support running grub2 at the end of deployment. > Or at least make the agent pluggable to support adding these options. > > This does seem a bit backwards from the way the agent has been going. The > pxe driver was kind of linux specific. The agent is not... So maybe that > does imply a 3rd driver may be beneficial... But it would be nice to have > one driver, the agent, in the end that supports everything. > We'll always need different drivers to handle different kinds of hardware. And we have two modes of deployment today (copy-image-over-iscsi, agent-downloads-locally) and could have more in the future (bittorrent, multicast, ...?). That said, I don't know why a single agent couldn't support multiple modes of deployment. -Devananda [0] https://review.openstack.org/#/c/136104/ [1] http://specs.openstack.org/openstack/ironic-specs/specs/backlog/driver-capabilities.html and https://review.openstack.org/#/c/137363/ -------------- next part -------------- An HTML attachment was scrubbed...
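A flavor-level sketch of what Kevin and Devananda describe above might look like the following; the capability keys are purely hypothetical (the specs in [0] and [1] propose the real interface), so this is an illustration of the idea, not an implemented workflow:

```shell
# Hypothetical only: flavor extra specs that a deploy driver *could* consume
# to pick a partitioning/RAID layout; these keys are not a real Ironic API.
nova flavor-create bm.raid1.four9s auto 16384 500 8
nova flavor-key bm.raid1.four9s set capabilities:raid_level=1
nova flavor-key bm.raid1.four9s set capabilities:root_gb=50
```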
URL: From mbayer at redhat.com Tue Dec 9 22:20:57 2014 From: mbayer at redhat.com (Mike Bayer) Date: Tue, 9 Dec 2014 17:20:57 -0500 Subject: [openstack-dev] [oslo.db] engine facade status, should reader transactions COMMIT or ROLLBACK? Message-ID: <1975EC2E-5793-4A5B-862F-A99DF53D98CE@redhat.com> Hi folks - Just a reminder that the majority of the enginefacade implementation is up for review, see that at: https://review.openstack.org/#/c/138215/. Needs a lot more people looking at it. Matthew Booth raised a good point which I also came across, which is of transactions that are "read only". What is the opinion of openstack-dev for a transaction that is marked as "reader" and emits only SELECT statements, do we prefer that it COMMIT at the end or just ROLLBACK? Doesn't matter much to me. In my own work I tend to use ROLLBACK, but the current design of most openstack database code I see is doing simple "with session.begin()", which means it's currently all COMMIT. Thanks for your attention! - mike From armamig at gmail.com Tue Dec 9 22:21:10 2014 From: armamig at gmail.com (Armando M.) Date: Tue, 9 Dec 2014 15:21:10 -0700 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: On 9 December 2014 at 13:59, Ivar Lazzaro wrote: > I agree with Salvatore that the split is not an easy thing to achieve for > vendors, and I would like to bring up my case to see if there are ways to > make this at least a bit simpler. > > At some point I had the need to backport vendor code from Juno to Icehouse > (see first attempt here [0]). That in [0] was some weird approach that put > unnecessary burden on infra, neutron cores and even packagers, so I decided > to move to a more decoupled approach that was basically completely > splitting my code from Neutron. You can find the result here [1].
> The focal points of this approach are: > > * **all** the vendor code is removed; > * Neutron is used as a dependency, pulled directly from github for UTs > (see test-requirements [2]) and explicitly required when installing the > plugin; > * The Database and Schema is the same as Neutron's; > * A migration script exists for this driver, which uses a different (and > unique) version_table (see env.py [3]); > * Entry points are properly used in setup.cfg [4] in order to provide > migration scripts and Driver/Plugin shortcuts for Neutron; > * UTs are run by including Neutron in the venv [2]. > * The boilerplate is taken care of by cookiecutter [5]. > > The advantage of the above approach is that it's very simple to pull off > (only thing you need is cookiecutter, a repo, and then you can just > replicate the same tree structure that existed in Neutron for your own > vendor code). Also it has the advantage to remove all the vendor code from > Neutron (did I say that already?). As far as the CI is concerned, it just > needs to "learn" how to install the new plugin, which will require Neutron > to be pre-existent. > > The typical installation workflow would be: > - Install Neutron normally; > - pull from pypi the vendor driver; > - run the vendor db migration script; > - Do everything else (configuration and execution) just like it was done > before. > > Note that this same satellite approach is used by GBP (I know this is a > bad word that once brought hundreds of ML replies, but that's just an > example :) ) for the Juno timeframe [6]. This shows that the very same > thing can be easily done for services. > Okay, so if I understand you correctly, you're saying that it was easier for you to go entirely out of tree, and that you have done so already. Okay, good for you, problem solved!
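As a concrete sketch, the installation workflow Ivar lists above might look like the following for a hypothetical out-of-tree driver; the package name "acme-ml2-driver" and its migration command are made up for illustration and are not a real project:

```shell
# Hypothetical workflow for a satellite ML2 driver (names are illustrative).
pip install neutron                  # 1) install Neutron normally
pip install acme-ml2-driver          # 2) pull the vendor driver from PyPI
acme-db-manage --config-file /etc/neutron/neutron.conf upgrade head
                                     # 3) run the vendor db migration script
                                     #    (its own version_table, as in env.py [3])
# 4) configure and run Neutron exactly as before, adding the driver to the
#    ml2 mechanism_drivers list in the plugin configuration.
```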
One point that should be clear here is that, if someone is completely comfortable with being entirely out of tree, and he/she has done so already (I know of a few other examples besides the apic driver), then this proposal does not apply to them. They are way ahead of us, and kudos to them! As far as you're concerned, Ivar, if you want to promote this model for new plugins/drivers contributions, by all means, I encourage you to document this in a blog or the wiki and disseminate your findings so that people can adopt your model if they wanted to. > > As far as ML2 is concerned, I think we should split it as well in order to > treat all the plugins equally, but with the following caveats: > > * ML2 will be in an openstack repo under the networking program (kind of > obvious); > * The drivers can decide whether to stay in tree with ML2 or not (for a > better community effort, but they will definitely evolve slower); > * Don't care about the governance, Neutron will be in charge of this repo > and will have the ability to promote whoever they want when needed. > We discussed this point before, and the decision is that we don't want to be prescriptive about this. If people interested in ML2 drivers want to stay close to each other or not, we should not mandate one or the other. > As far as cogating is concerned, I think that using the above approach the > breakage will exist only as long as it takes the infra job to understand how to > install the ML2 driver from its own repo. I don't see it as a big issue, > but maybe it's just me, and my fabulous world where stuff works for no good > reason. We could at least ask the infra team to understand if it's feasible. > Well, you live in a nice wonderland, I envy you! This co-gating decision is the result of a discussion with infra folks. > Moreover, this is work that we may need to do anyway! So it's better to > just start it now, thus creating an example for all the vendors that have to > go through the split (back on Gary's point).
> Plenty of examples to go by, as mentioned time and time again. > > Appreciate your feedback, > Ivar. > > [0] https://review.openstack.org/#/c/123596/ > [1] https://github.com/noironetworks/apic-ml2-driver/tree/icehouse > [2] > https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/test-requirements.txt > [3] > https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/apic_ml2/neutron/db/migration/alembic_migrations/env.py > [4] > https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/setup.cfg > [5] https://github.com/openstack-dev/cookiecutter > [6] https://github.com/stackforge/group-based-policy > > On Tue, Dec 9, 2014 at 9:53 AM, Salvatore Orlando > wrote: > >> >> >> On 9 December 2014 at 17:32, Armando M. wrote: >> >>> >>> >>> On 9 December 2014 at 09:41, Salvatore Orlando >>> wrote: >>> >>>> I would like to chime into this discussion wearing my plugin developer >>>> hat. >>>> >>>> We (the VMware team) have looked very carefully at the current proposal >>>> for splitting off drivers and plugins from the main source code tree. >>>> Therefore the concerns you've heard from Gary are not just ramblings but >>>> are the results of careful examination of this proposal. >>>> >>>> While we agree with the final goal, the feeling is that for many plugin >>>> maintainers this process change might be too much for what can be >>>> accomplished in a single release cycle. >>>> >>> We actually gave a lot more than a cycle: >>> >>> >>> https://review.openstack.org/#/c/134680/16/specs/kilo/core-vendor-decomposition.rst >>> LINE 416 >>> >>> And in all honestly, I can only tell that getting this done by such an >>> experienced team like the Neutron team @VMware shouldn't take that long. >>> >> >> We are probably not experienced enough. We always love to learn new >> things. >> >> >>> >>> By the way, if Kyle can do it in his teeny tiny time that he has left >>> after his PTL duties, then anyone can do it! 
:) >>> >>> https://review.openstack.org/#/c/140191/ >>> >> >> I think I should be able to use mv & git push as well - I think however >> there's a bit more than that to it. >> >> >>> >>> As a member of the drivers team, I am still very supportive of the >>>> split, I just want to make sure that it's made in a sustainable way; I also >>>> understand that "sustainability" has been one of the requirements of the >>>> current proposal, and therefore we should all be on the same page on this >>>> aspect. >>>> >>>> However, we did a simple exercise trying to assess the amount of work >>>> needed to achieve something which might be acceptable to satisfy the >>>> process. Without going into too many details, this requires efforts for: >>>> >>>> - refactoring the code to achieve a plugin module simple and thin enough >>>> to satisfy the requirements. Unfortunately a radical approach like the one >>>> in [1] with a reference to an external library is not viable for us >>>> >>>> - maintaining code repositories outside of the neutron scope and the >>>> necessary infrastructure >>>> >>>> - reinforcing our CI infrastructure, and improving our error detection >>>> and log analysis capabilities to improve reaction times upon failures >>>> triggered by upstream changes. As you know, even if the plugin interface is >>>> solid-ish, the dependency on the db base class increases the chances of >>>> upstream changes breaking 3rd party plugins. >>>> >>> >>> No-one is advocating for the approach laid out in [1], but a lot of code can >>> be moved elsewhere (like the nsxlib) without too much effort. Don't forget >>> that not so long ago I was the maintainer of this plugin and the one who >>> built the VMware NSX CI; I know very well what it takes to scope this >>> effort, and I can support you in the process. >>> >> >> Thanks for this clarification. I was sure that you guys were not >> advocating for a ninja-split thing, but I just wanted to be sure of that. 
>> I'm also pretty sure our engineering team values your support. >> >>> The feedback from our engineering team is that satisfying the >>>> requirements of this new process might not be feasible in the Kilo >>>> timeframe, both for existing plugins and for new plugins and drivers that >>>> should be upstreamed (there are a few proposed on neutron-specs at the >>>> moment, which are all in -2 status considering the impending approval of >>>> the split out). >>>> >>> No new plugins can or will be accepted if they do not adopt the >>> proposed model, let's be very clear about this. >>> >> >> This is also what I gathered from the proposal. It seems that you're >> however stating that there might be some flexibility in defining how much a >> plugin complies with the new model. I will need to go back to the drawing >> board with the rest of my team and see in which way this can work for us. >> >> >>> The questions I would like to bring to the wider community are therefore >>>> the following: >>>> >>>> 1 - Is there a possibility of making a further concession on the >>>> current proposal, where maintainers are encouraged to experiment with the >>>> plugin split in Kilo, but will actually be required to do it in the next >>>> release? >>>> >>> This is exactly what the spec is proposing: get started now, and it does >>> not matter if you don't finish in time. >>> >> >> I think the deprecation note at line 416 still scares people off a bit. >> To me your word is enough, no change is needed. >> >>> 2 - What could be considered acceptable as a new plugin? I >>>> understand that they would be accepted only as "thin integration modules", >>>> which ideally should just be a pointer to code living somewhere else. I'm >>>> not questioning the validity of this approach, but it has been brought to >>>> my attention that this will actually be troubling for teams which have made >>>> an investment in the previous release cycles to upstream plugins following >>>> the "old" 
process >>>> >>> You are not alone. Other efforts went through the same process [1, 2, >>> 3]. Adjusting is a way of life. No-one is advocating for throwing away >>> existing investment. This proposal actually promotes new and pre-existing >>> investment. >>> >>> [1] https://review.openstack.org/#/c/104452/ >>> [2] https://review.openstack.org/#/c/103728/ >>> [3] https://review.openstack.org/#/c/136091/ >>> >> 3 - Regarding the above discussion on "ML2 or not ML2". The point on >>>> co-gating is well taken. Eventually we'd like to remove this binding - >>>> because I believe the ML2 subteam would also like to have more freedom on >>>> their plugin. Do we already have an idea about how to do that without >>>> completely moving away from the db_base class approach? >>>> >>> Sure, if you'd like to participate in the process, we can only welcome >>> you! >>> >> >> I actually asked you if you already had an idea... should I take that as >> a no? >>> Thanks for your attention and for reading through this >>>> >>>> Salvatore >>>> >>>> [1] >>>> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/vmware/plugin.py#n22 >>>> >>>> On 8 December 2014 at 21:51, Maru Newby wrote: >>>> >>>>> >>>>> On Dec 7, 2014, at 10:51 AM, Gary Kotton wrote: >>>>> >>>>> > Hi Kyle, >>>>> > I am not missing the point. I understand the proposal. I just think >>>>> that it has some shortcomings (unless I misunderstand, which will certainly >>>>> not be the first time and most definitely not the last). The thinning out >>>>> is to have a shim in place. I understand this and this will be the entry >>>>> point for the plugin. I do not have a concern for this. My concern is that >>>>> we are not doing this with ML2 off the bat. That should lead by example >>>>> as it is our reference architecture. Let's not kid anyone: we are going >>>>> to hit some problems with the decomposition. I would prefer that it be done >>>>> with the default implementation. Why? 
>>>>> >>>>> The proposal is to move vendor-specific logic out of the tree to >>>>> increase vendor control over such code while decreasing load on reviewers. >>>>> ML2 doesn't contain vendor-specific logic - that's the province of ML2 >>>>> drivers - so it is not a good target for the proposed decomposition by >>>>> itself. >>>>> >>>>> >>>>> > * Because we will fix them more quickly, as it is something that >>>>> prevents Neutron from moving forward >>>>> > * We will just need to fix it in one place first and not in N >>>>> (where N is the number of vendor plugins) >>>>> > * This is a community effort - so we will have a lot more eyes >>>>> on it >>>>> > * It will provide a reference architecture for all new plugins >>>>> that want to be added to the tree >>>>> > * It will provide a working example for plugins that are >>>>> already in tree and are to be replaced by the shim >>>>> > If we really want to do this, we can say freeze all development >>>>> (which is just approvals for patches) for a few days so that we can >>>>> just focus on this. I stated what I think should be the process on the >>>>> review. For those who do not feel like finding the link: >>>>> > * Create a StackForge project for ML2 >>>>> > * Create the shim in Neutron >>>>> > * Update devstack to use the two repos and the shim >>>>> > When #3 is up and running we switch to that as the gate. Then we >>>>> start a stopwatch on all other plugins. >>>>> >>>>> As was pointed out on the spec (see Miguel's comment on r15), the ML2 >>>>> plugin and the OVS mechanism driver need to remain in the main Neutron repo >>>>> for now. Neutron gates on ML2+OVS and landing a breaking change in the >>>>> Neutron repo along with its corresponding fix to a separate ML2 repo would >>>>> be all but impossible under the current integrated gating scheme. >>>>> Plugins/drivers that do not gate Neutron have no such constraint. >>>>> >>>>> >>>>> Maru >>>>> >>>>> >>>>> > Sure, I'll catch you on IRC tomorrow. 
I guess that you guys will >>>>> bash out the details at the meetup. Sadly I will not be able to attend - so >>>>> you will have to hold off on the tar and feathers. >>>>> > Thanks >>>>> > Gary >>>>> > >>>>> > >>>>> > From: "mestery at mestery.com" >>>>> > Reply-To: OpenStack List >>>>> > Date: Sunday, December 7, 2014 at 7:19 PM >>>>> > To: OpenStack List >>>>> > Cc: "openstack at lists.openstack.org" >>>>> > Subject: Re: [openstack-dev] [Neutron] Core/Vendor code decomposition >>>>> > >>>>> > Gary, you are still missing the point of this proposal. Please see my >>>>> comments in review. We are not forcing things out of tree, we are thinning >>>>> them. The text you quoted in the review makes that clear. We will look at >>>>> further decomposing ML2 post Kilo, but we have to be realistic with what we >>>>> can accomplish during Kilo. >>>>> > >>>>> > Find me on IRC Monday morning and we can discuss further if you >>>>> still have questions and concerns. >>>>> > >>>>> > Thanks! >>>>> > Kyle >>>>> > >>>>> > On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton >>>>> wrote: >>>>> >> Hi, >>>>> >> I have raised my concerns on the proposal. I think that all plugins >>>>> should be treated on an equal footing. My main concern is that keeping the ML2 >>>>> plugin in tree whilst the others are moved out of tree will be >>>>> problematic. I think that the model will be complete if ML2 were also >>>>> out of tree. This will help crystallize the idea and make sure that the >>>>> model works correctly. >>>>> >> Thanks >>>>> >> Gary >>>>> >> >>>>> >> From: "Armando M." >>>>> >> Reply-To: OpenStack List >>>>> >> Date: Saturday, December 6, 2014 at 1:04 AM >>>>> >> To: OpenStack List , " >>>>> openstack at lists.openstack.org" >>>>> >> Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition >>>>> >> >>>>> >> Hi folks, >>>>> >> >>>>> >> For a few weeks now the Neutron team has worked tirelessly on [1]. 
>>>>> >> >>>>> >> This initiative stems from the fact that as the project matures, >>>>> its processes and contribution guidelines need to evolve with it. This is to ensure that the project can keep on thriving in order to meet >>>>> the needs of an ever-growing community. >>>>> >> >>>>> >> The effort of documenting intentions, and fleshing out the various >>>>> details of the proposal is about to reach an end, and we'll soon kick the >>>>> tires to put the proposal into practice. Since the spec has grown pretty >>>>> big, I'll try to capture the tl;dr below. >>>>> >> >>>>> >> If you have any comments please do not hesitate to raise them here >>>>> and/or reach out to us. >>>>> >> >>>>> >> tl;dr >>> >>>>> >> >>>>> >> From the Kilo release, we'll initiate a set of steps to change the >>>>> following areas: >>>>> >> * Code structure: every plugin or driver that exists or wants >>>>> to exist as part of the Neutron project is decomposed into a slim vendor >>>>> integration (which lives in the Neutron repo), plus a bulkier vendor >>>>> library (which lives in an independent publicly available repo); >>>>> >> * Contribution process: this extends to the following aspects: >>>>> >> * Design and Development: the process is largely >>>>> unchanged for the part that pertains to the vendor integration; the maintainer >>>>> team is fully self-governed for the design and development of the vendor >>>>> library; >>>>> >> * Testing and Continuous Integration: maintainers will >>>>> be required to support their vendor integration with 3rd party CI testing; the >>>>> requirements for 3rd party CI testing are largely unchanged; >>>>> >> * Defect management: the process is largely unchanged, >>>>> issues affecting the vendor library can be tracked with whichever >>>>> tool/process the maintainer sees fit. In cases where vendor library fixes >>>>> need to be reflected in the vendor integration, the usual OpenStack defect >>>>> management applies. >>>>> >> * 
Documentation: there will be some changes to the way >>>>> plugins and drivers are documented with the intention of promoting >>>>> discoverability of the integrated solutions. >>>>> >> * Adoption and transition plan: we strongly advise maintainers >>>>> to stay abreast of the developments of this effort, as their code, their >>>>> CI, etc. will be affected. The core team will provide guidelines and support >>>>> throughout this cycle to ensure a smooth transition. >>>>> >> To learn more, please refer to [1]. >>>>> >> >>>>> >> Many thanks, >>>>> >> Armando >>>>> >> >>>>> >> [1] https://review.openstack.org/#/c/134680 >>>>> >> >>>>> >> _______________________________________________ >>>>> >> OpenStack-dev mailing list >>>>> >> OpenStack-dev at lists.openstack.org >>>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >> >>>>> > >>>>> > _______________________________________________ >>>>> > OpenStack-dev mailing list >>>>> > OpenStack-dev at lists.openstack.org >>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>>> _______________________________________________ >>>>> OpenStack-dev mailing list >>>>> OpenStack-dev at lists.openstack.org >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list 
> OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mscherbakov at mirantis.com Tue Dec 9 22:43:06 2014 From: mscherbakov at mirantis.com (Mike Scherbakov) Date: Tue, 9 Dec 2014 14:43:06 -0800 Subject: [openstack-dev] [Fuel] Hard Code Freeze for 6.0 Message-ID: Hi all, I'm glad to announce that we've reached the Hard Code Freeze (HCF) [1] criteria for the 6.0 milestone. stable/6.0 branches for our repos were created. Bug reporters, please do not forget to target both the 6.1 (master) and 6.0 (stable/6.0) milestones from now on. If the fix is merged to master, it has to be backported to stable/6.0 to make it available in 6.0. Please ensure that you do NOT merge changes to the stable branch first. It always has to be a backport with the same Change-ID. Please see more on this at [2]. I hope the Fuel DevOps team can quickly update the nightly builds [3] to reflect the changes. [1] https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze [2] https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series [3] https://fuel-jenkins.mirantis.com/view/ISO/ Thanks, -- Mike Scherbakov #mihgen -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe.gordon0 at gmail.com Tue Dec 9 22:43:04 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Tue, 9 Dec 2014 14:43:04 -0800 Subject: [openstack-dev] [keystone][all] Max Complexity Check Considered Harmful In-Reply-To: References: Message-ID: On Mon, Dec 8, 2014 at 5:03 PM, Brant Knudson wrote: > > Not too long ago projects added a maximum complexity check to tox.ini, for > example keystone has "max-complexity=24". Seemed like a good idea at the > time, but in a recent attempt to lower the maximum complexity check in > keystone[1][2], I found that the maximum complexity check can actually lead > to less understandable code. 
This is because the check includes an embedded > function's "complexity" in the function that it's in. > This behavior is expected. Nested functions cannot be unit tested on there own. Part of the issue is that nested functions can access variables scoped to the outer function, so the following function is valid: def outer(): var = "outer" def inner(): print var inner() Because nested functions cannot easily be unit tested, and can be harder to reason about since they can access variables that are part of the outer function, I don't think they are easier to understand (there are still cases where a nested function makes sense though). > The way I would have lowered the complexity of the function in keystone is > to extract the complex part into a new function. This can make the existing > function much easier to understand for all the reasons that one defines a > function for code. Since this new function is obviously only called from > the function it's currently in, it makes sense to keep the new function > inside the existing function. It's simpler to think about an embedded > function because then you know it's only called from one place. The problem > is, because of the existing complexity check behavior, this doesn't lower > the "complexity" according to the complexity check, so you wind up putting > the function as a new top-level, and now a reader is has to assume that the > function could be called from anywhere and has to be much more cautious > about changes to the function. > > Since the complexity check can lead to code that's harder to understand, > it must be considered harmful and should be removed, at least until the > incorrect behavior is corrected. > Why do you think the max complexity check is harmful? because it prevents large amounts of nested functions? 
> > [1] https://review.openstack.org/#/c/139835/ > [2] https://review.openstack.org/#/c/139836/ > [3] https://review.openstack.org/#/c/140188/ > > - Brant > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe.gordon0 at gmail.com Tue Dec 9 23:06:32 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Tue, 9 Dec 2014 15:06:32 -0800 Subject: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches In-Reply-To: <54820ad4d5c3d_2da2df3e8cb9@ovo.mains.priv.notmuch> References: <5480D2CF.8050301@linux.vnet.ibm.com> <5480E275.8020005@linux.vnet.ibm.com> <5480E534.2050601@dague.net> <54820ad4d5c3d_2da2df3e8cb9@ovo.mains.priv.notmuch> Message-ID: On Fri, Dec 5, 2014 at 11:43 AM, Ian Main wrote: > Sean Dague wrote: > > On 12/04/2014 05:38 PM, Matt Riedemann wrote: > > > > > > > > > On 12/4/2014 4:06 PM, Michael Still wrote: > > >> +Eric and Ian > > >> > > >> On Fri, Dec 5, 2014 at 8:31 AM, Matt Riedemann > > >> wrote: > > >>> This came up in the nova meeting today, I've opened a bug [1] for it. > > >>> Since > > >>> this isn't maintained by infra we don't have log indexing so I can't > use > > >>> logstash to see how pervasive it us, but multiple people are > > >>> reporting the > > >>> same thing in IRC. > > >>> > > >>> Who is maintaining the nova-docker CI and can look at this? > > >>> > > >>> It also looks like the log format for the nova-docker CI is a bit > > >>> weird, can > > >>> that be cleaned up to be more consistent with other CI log results? 
> > >>> > > >>> [1] https://bugs.launchpad.net/nova-docker/+bug/1399443 > > >>> > > >>> -- > > >>> > > >>> Thanks, > > >>> > > >>> Matt Riedemann > > >>> > > >>> > > >>> _______________________________________________ > > >>> OpenStack-dev mailing list > > >>> OpenStack-dev at lists.openstack.org > > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >> > > >> > > >> > > > > > > Also, according to the 3rd party CI requirements [1] I should see > > > nova-docker CI in the third party wiki page [2] so I can get details on > > > who to contact when this fails but that's not done. > > > > > > [1] http://ci.openstack.org/third_party.html#requirements > > > [2] https://wiki.openstack.org/wiki/ThirdPartySystems > > > > It's not the 3rd party CI job we are talking about, it's the one in the > > check queue which is run by infra. > > > > But, more importantly, jobs in those queues need shepards that will fix > > them. Otherwise they will get deleted. > > > > Clarkb provided the fix for the log structure right now - > > https://review.openstack.org/#/c/139237/1 so at least it will look > > vaguely sane on failures > > > > -Sean > > This is one of the reasons we might like to have this in nova core. > Otherwise > we will just keep addressing issues as they come up. We would likely be > involved doing this if it were part of nova core anyway. > While gating on nova-docker will prevent patches that cause nova-docker to break 100% to land, it won't do a lot to prevent transient failures. To fix those we need people dedicated to making sure nova-docker is working. 
> > Ian > > > -- > > Sean Dague > > http://dague.net > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From asalkeld at mirantis.com Tue Dec 9 23:16:40 2014 From: asalkeld at mirantis.com (Angus Salkeld) Date: Wed, 10 Dec 2014 09:16:40 +1000 Subject: [openstack-dev] [keystone][all] Max Complexity Check Considered Harmful In-Reply-To: References: Message-ID: On Wed, Dec 10, 2014 at 8:43 AM, Joe Gordon wrote: > > > On Mon, Dec 8, 2014 at 5:03 PM, Brant Knudson wrote: > >> >> Not too long ago projects added a maximum complexity check to tox.ini, >> for example keystone has "max-complexity=24". Seemed like a good idea at >> the time, but in a recent attempt to lower the maximum complexity check in >> keystone[1][2], I found that the maximum complexity check can actually lead >> to less understandable code. This is because the check includes an embedded >> function's "complexity" in the function that it's in. >> > > This behavior is expected. > > Nested functions cannot be unit tested on there own. Part of the issue is > that nested functions can access variables scoped to the outer function, so > the following function is valid: > > def outer(): > var = "outer" > def inner(): > print var > inner() > > > Because nested functions cannot easily be unit tested, and can be harder > to reason about since they can access variables that are part of the outer > function, I don't think they are easier to understand (there are still > cases where a nested function makes sense though). 
> I think the improvement in ease of unit testing is a huge plus from my point of view (when splitting the function to the same level). This seems in the balance to be far more helpful than harmful. -Angus > >> The way I would have lowered the complexity of the function in keystone >> is to extract the complex part into a new function. This can make the >> existing function much easier to understand for all the reasons that one >> defines a function for code. Since this new function is obviously only >> called from the function it's currently in, it makes sense to keep the new >> function inside the existing function. It's simpler to think about an >> embedded function because then you know it's only called from one place. >> The problem is, because of the existing complexity check behavior, this >> doesn't lower the "complexity" according to the complexity check, so you >> wind up putting the function as a new top-level, and now a reader is has to >> assume that the function could be called from anywhere and has to be much >> more cautious about changes to the function. >> > >> Since the complexity check can lead to code that's harder to understand, >> it must be considered harmful and should be removed, at least until the >> incorrect behavior is corrected. >> > > Why do you think the max complexity check is harmful? because it prevents > large amounts of nested functions? 
> > > >> >> [1] https://review.openstack.org/#/c/139835/ >> [2] https://review.openstack.org/#/c/139836/ >> [3] https://review.openstack.org/#/c/140188/ >> >> - Brant >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at windisch.us Tue Dec 9 23:18:24 2014 From: eric at windisch.us (Eric Windisch) Date: Tue, 9 Dec 2014 18:18:24 -0500 Subject: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches In-Reply-To: References: <5480D2CF.8050301@linux.vnet.ibm.com> <5480E275.8020005@linux.vnet.ibm.com> <5480E534.2050601@dague.net> <54820ad4d5c3d_2da2df3e8cb9@ovo.mains.priv.notmuch> Message-ID: > > > While gating on nova-docker will prevent patches that cause nova-docker to > break 100% to land, it won't do a lot to prevent transient failures. To fix > those we need people dedicated to making sure nova-docker is working. > > What would be helpful for me is a way to know that our tests are breaking without manually checking Kibana, such as an email. Regards, Eric Windisch -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From joe.gordon0 at gmail.com Tue Dec 9 23:20:07 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Tue, 9 Dec 2014 15:20:07 -0800 Subject: [openstack-dev] [NFV][Telco] pxe-boot In-Reply-To: <547ED500.1070008@dektech.com.au> References: <20141125223355.Horde.8UWcgskXSBVvU-RJrCgbtw7@mail.dektech.com.au> <693765041.19659732.1417044729676.JavaMail.zimbra@redhat.com> <547ED500.1070008@dektech.com.au> Message-ID: On Wed, Dec 3, 2014 at 1:16 AM, Pasquale Porreca < pasquale.porreca at dektech.com.au> wrote: > The use case we were thinking about is a Network Function (e.g. IMS Nodes) > implementation in which the high availability is based on OpenSAF. In this > scenario there is an Active/Standby cluster of 2 System Controllers (SC) > plus several Payloads (PL) that boot from the network, controlled by the SC. > The logic of which service to deploy on each payload is inside the SC. > > In OpenStack both SCs and PLs will be instances running in the cloud; > nevertheless, the PLs should still boot from the network under the control of the SC. > In fact, to use Glance to store the image for the PLs and keep control > of the PLs in the SC, the SC should trigger the boot of the PLs with > requests to Nova/Glance, but an application running inside an instance > should not directly interact with a cloud infrastructure service like > Glance or Nova. > Why not? This is a fairly common practice. > > We know that it is still possible to achieve network booting in OpenStack > using an image stored in Glance that acts like a pxe client, but this > workaround has some drawbacks, mainly due to the fact that it won't be possible > to choose the specific virtual NIC on which the network boot will happen, > causing DHCP requests to flow on networks where they don't belong, and > possible delays in the boot of the instances. 
> > On 11/27/14 00:32, Steve Gordon wrote: >> ----- Original Message ----- >> >>> From: "Angelo Matarazzo" >>> To: "OpenStack Development Mailing" , >>> openstack-operators at lists.openstack.org >>> >>> >>> Hi all, >>> my team and I are working on a pxe boot feature very similar to the >>> "Discless VM" one in the Active blueprint list [1] >>> The blueprint [2] is no longer active and we created a new spec [3][4]. >>> >>> Nova core reviewers commented on our spec, and the first and most >>> important objection is that there is not a compelling reason to >>> provide this kind of feature: booting from the network. >>> >>> Aside from the specific implementation, I think that some members of >>> the TelcoWorkingGroup could be interested and provide a use case. >>> I would also like to add this item to the agenda of the next meeting >>> >>> Any thoughts? >>> >> We did discuss this today, and granted it is listed as a blueprint >> someone in the group had expressed interest in at a point in time - though >> I don't believe any further work was done. The general feeling was that >> there isn't anything really NFV or Telco specific about this over and above >> the more generic use case of legacy applications. Are you able to further >> elaborate on the reason it's NFV or Telco specific other than because of >> who is requesting it in this instance? >> >> Thanks! >> >> -Steve >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -- > Pasquale Porreca > > DEK Technologies > Via dei Castelli Romani, 22 > 00040 Pomezia (Roma) > > Mobile +39 3394823805 > Skype paskporr > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From asalkeld at mirantis.com Tue Dec 9 23:25:59 2014 From: asalkeld at mirantis.com (Angus Salkeld) Date: Wed, 10 Dec 2014 09:25:59 +1000 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: <54874973.5060903@openstack.org> References: <20141209125828.GA10149@redhat.redhat.com> <20141209140407.GO2497@yuggoth.org> <54874973.5060903@openstack.org> Message-ID: On Wed, Dec 10, 2014 at 5:11 AM, Stefano Maffulli wrote: > On 12/09/2014 06:04 AM, Jeremy Stanley wrote: > > We already have a solution for tracking the contributor->IRC > > mapping--add it to your Foundation Member Profile. For example, mine > > is in there already: > > > > http://www.openstack.org/community/members/profile/5479 > > I recommend updating the openstack.org member profile and add IRC > nickname there (and while you're there, update your affiliation history). > > There is also a search engine on: > > http://www.openstack.org/community/members/ > > Except that info doesn't appear nicely in review. Some people put their nick in their "Full Name" in gerrit. Hopefully Clint doesn't mind: https://review.openstack.org/#/q/owner:%22Clint+%27SpamapS%27+Byrum%22+status:open,n,z I *think* that's done here: https://review.openstack.org/#/settings/contact At least with that it is really obvious without having to go to another site what your nick is. -Angus > /stef > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From joe.gordon0 at gmail.com Tue Dec 9 23:29:04 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Tue, 9 Dec 2014 15:29:04 -0800 Subject: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches In-Reply-To: References: <5480D2CF.8050301@linux.vnet.ibm.com> <5480E275.8020005@linux.vnet.ibm.com> <5480E534.2050601@dague.net> <54820ad4d5c3d_2da2df3e8cb9@ovo.mains.priv.notmuch> Message-ID: On Tue, Dec 9, 2014 at 3:18 PM, Eric Windisch wrote: > >> While gating on nova-docker will prevent patches that cause nova-docker >> to break 100% to land, it won't do a lot to prevent transient failures. To >> fix those we need people dedicated to making sure nova-docker is working. >> >> > > What would be helpful for me is a way to know that our tests are breaking > without manually checking Kibana, such as an email. > > There is also graphite [0], but since the docker-job is running on the check queue the data we are producing is very dirty. Since check jobs often run on broken patches. [0] http://graphite.openstack.org/render/?from=-10days&height=500&until=now&width=1200&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.check-tempest-dsvm-docker.FAILURE,sum(stats.zuul.pipeline.check.job.check-tempest-dsvm-docker.{SUCCESS,FAILURE})),%2736hours%27),%20%27check-tempest-dsvm-docker%27),%27orange%27)&title=Docker%20Failure%20Rates%20(10%20days)&_t=0.3702208176255226 > Regards, > Eric Windisch > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smelikyan at mirantis.com Tue Dec 9 23:36:24 2014 From: smelikyan at mirantis.com (Serg Melikyan) Date: Tue, 9 Dec 2014 15:36:24 -0800 Subject: [openstack-dev] [murano] Review Day Announcement Message-ID: I would like to announce Review Day for Murano, which is scheduled for this Friday, Dec 12. We will start at 10:00 UTC and will continue until 10:00 of the next day to span across the globe. We encourage anyone who is interested in Murano to join our review day, not only core reviewers. Rules: * review as much as possible * stay in channel and respond to questions and comments promptly -- Serg Melikyan, Senior Software Engineer at Mirantis, Inc. http://mirantis.com | smelikyan at mirantis.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe.gordon0 at gmail.com Tue Dec 9 23:38:23 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Tue, 9 Dec 2014 15:38:23 -0800 Subject: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra. In-Reply-To: <415171050.18479930.1416952866863.JavaMail.zimbra@redhat.com> References: <624382760.7330435.1415819475865.JavaMail.zimbra@redhat.com> <5463C7AF.4080709@redhat.com> <20141113092703.GA31809@redhat.com> <5464EA21.1080802@danplanet.com> <20141113173242.GH31809@redhat.com> <5464EC12.1070201@danplanet.com> <20141113174314.GI31809@redhat.com> <20141113180035.GJ31809@redhat.com> <415171050.18479930.1416952866863.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Nov 25, 2014 at 2:01 PM, Steve Gordon wrote: > ----- Original Message ----- > > From: "Daniel P. Berrange" > > To: "Dan Smith" > > > > On Thu, Nov 13, 2014 at 05:43:14PM +0000, Daniel P.
Berrange wrote: > > > On Thu, Nov 13, 2014 at 09:36:18AM -0800, Dan Smith wrote: > > > > > That sounds like something worth exploring at least, I didn't know > > > > > about that kernel build option until now :-) It sounds like it > ought > > > > > to be enough to let us test the NUMA topology handling, CPU pinning > > > > > and probably huge pages too. > > > > > > > > Okay. I've been vaguely referring to this as a potential test vector, > > > > but only just now looked up the details. That's my bad :) > > > > > > > > > The main gap I'd see is NUMA aware PCI device assignment since the > > > > > PCI <-> NUMA node mapping data comes from the BIOS and it does not > > > > > look like this is fakeable as is. > > > > > > > > Yeah, although I'd expect that the data is parsed and returned by a > > > > library or utility that may be a hook for fakeification. However, it > may > > > > very well be more trouble than it's worth. > > > > > > > > I still feel like we should be able to test generic PCI in a similar > way > > > > (passing something like a USB controller through to the guest, etc). > > > > However, I'm willing to believe that the intersection of PCI and > NUMA is > > > > a higher order complication :) > > > > > > Oh I forgot to mention with PCI device assignment (as well as having a > > > bunch of PCI devices available[1]), the key requirement is an IOMMU. > > > AFAIK, neither Xen or KVM provide any IOMMU emulation, so I think we're > > > out of luck for even basic PCI assignment testing inside VMs. > > > > Ok, turns out that wasn't entirely accurate in general. > > > > KVM *can* emulate an IOMMU, but it requires that the guest be booted > > with the q35 machine type, instead of the ancient PIIX4 machine type, > > and also QEMU must be launched with "-machine iommu=on". 
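[Editor's note: for reference, the q35 + emulated-IOMMU combination described above corresponds to a QEMU invocation along these lines — illustrative only; the exact option syntax varied across QEMU releases, and the memory/disk arguments are placeholders:]

```
# Boot a guest on the q35 chipset with QEMU's emulated IOMMU enabled;
# the default PIIX4-era machine types cannot expose an IOMMU at all.
qemu-system-x86_64 \
    -machine q35,iommu=on \
    -m 2048 \
    -drive file=guest.qcow2,if=virtio
```

[Nova of this era had no way to request this machine-type/flag combination, which is the limitation being described, so it remained a manual experiment rather than something the gate could drive.]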
We can't do > > this in Nova, so although it is theoretically possible, it is not > > doable for us in reality :-( > > > > Regards, > > Daniel > > Is it worth still pursuing virtual testing of the NUMA awareness work you, > nikola, and others have been doing? It seems to me it would still be > preferable to do this virtually (and ideally in the gate) wherever possible? > The more we can test in the gate and without real hardware the better. > > Thanks, > > Steve > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ken1ohmichi at gmail.com Tue Dec 9 23:54:07 2014 From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi) Date: Wed, 10 Dec 2014 08:54:07 +0900 Subject: [openstack-dev] [qa] How to delete a VM which is in ERROR state? In-Reply-To: References: Message-ID: Hi, This case is always tested by Tempest on the gate. https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_delete_server.py#L152 So I guess this problem wouldn't happen on the latest version at least. Thanks Ken'ichi Ohmichi --- 2014-12-10 6:32 GMT+09:00 Joe Gordon : > > > On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) > wrote: >> >> Hi, >> >> I have a VM which is in ERROR state. >> >> >> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ >> >> | ID | Name >> | Status | Task State | Power State | Networks | >> >> >> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ >> >> | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | >> cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR | - | NOSTATE >> | | >> >> >> I tried in both CLI 'nova delete' and Horizon 'terminate instance'.
>> Both accepted the delete command without any error. >> However, the VM never got deleted. >> >> Is there a way to remove the VM? > > > What version of nova are you using? This is definitely a serious bug, you > should be able to delete an instance in error state. Can you file a bug that > includes steps on how to reproduce the bug along with all relevant logs. > > bugs.launchpad.net/nova > >> >> >> Thanks, >> Danny >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From clint at fewbar.com Tue Dec 9 23:56:28 2014 From: clint at fewbar.com (Clint Byrum) Date: Tue, 09 Dec 2014 15:56:28 -0800 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: References: <20141209125828.GA10149@redhat.redhat.com> <20141209140407.GO2497@yuggoth.org> <54874973.5060903@openstack.org> Message-ID: <1418169303-sup-199@fewbar.com> Excerpts from Angus Salkeld's message of 2014-12-09 15:25:59 -0800: > On Wed, Dec 10, 2014 at 5:11 AM, Stefano Maffulli > wrote: > > > On 12/09/2014 06:04 AM, Jeremy Stanley wrote: > > > We already have a solution for tracking the contributor->IRC > > > mapping--add it to your Foundation Member Profile. For example, mine > > > is in there already: > > > > > > http://www.openstack.org/community/members/profile/5479 > > > > I recommend updating the openstack.org member profile and add IRC > > nickname there (and while you're there, update your affiliation history). > > > > There is also a search engine on: > > > > http://www.openstack.org/community/members/ > > > > > Except that info doesn't appear nicely in review. Some people put their > nick in their "Full Name" in > gerrit. 
Hopefully Clint doesn't mind: > > https://review.openstack.org/#/q/owner:%22Clint+%27SpamapS%27+Byrum%22+status:open,n,z > Indeed, I really didn't like that I'd be reviewing somebody's change, and talking to them on IRC, and not know if they knew who I was. It also has the odd side effect that gerritbot triggers my IRC filters when I 'git review'. From vadivel.openstack at gmail.com Wed Dec 10 00:11:42 2014 From: vadivel.openstack at gmail.com (Vadivel Poonathan) Date: Tue, 9 Dec 2014 16:11:42 -0800 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: @Armando, Okay, so if I understand you correctly, you're saying that was easier for you to go entirely out of tree, and that you have done so already. Okay, good for you, problem solved! One point that should be clear here is that, if someone is completely comfortable with being entirely out of tree, and he/she has done so already (I know of a few other examples besides the apic driver), then this proposal does not apply to them. >>[vad] how about the documentation in this case? It needs some place to document (a short description and a link to the vendor page) or list these kinds of out-of-tree plugins/drivers, just to make the user aware of the availability of such plugins/drivers that are compatible with a given OpenStack release. I checked with the documentation team, and according to them, only the following plugins/drivers will get documented: 1) in-tree plugins/drivers (full documentation) 2) third-party plugins/drivers (i.e., ones that implement and follow this new proposal, a.k.a partially-in-tree due to the integration module/code)... *** no listing/mention of such completely out-of-tree plugins/drivers *** In my case, I have a fully working/deployed out-of-tree plugin/mech_driver (compatible with Havana, Icehouse and Juno releases), but I couldn't get it listed or mentioned in the openstack plugin/driver documentation.
Now the only way to get my (out-of-tree) driver documented is to make it a 'partially-in-tree' driver by implementing this new proposal. Though this is far better than the earlier requirements/approval/maintenance overhead, I have to implement this proposal just for documentation purposes. Thanks, Vad -- On Tue, Dec 9, 2014 at 2:21 PM, Armando M. wrote: > > > On 9 December 2014 at 13:59, Ivar Lazzaro wrote: > >> I agree with Salvatore that the split is not an easy thing to achieve for >> vendors, and I would like to bring up my case to see if there are ways to >> make this at least a bit simpler. >> >> At some point I had the need to backport vendor code from Juno to >> Icehouse (see first attempt here [0]). That in [0] was some weird approach >> that put unnecessary burden on infra, neutron cores and even packagers, so >> I decided to move to a more decoupled approach that was basically >> completely splitting my code from Neutron. You can find the result here [1]. >> The focal points of this approach are: >> >> * **all** the vendor code is removed; >> * Neutron is used as a dependency, pulled directly from github for UTs >> (see test-requirements [2]) and explicitly required when installing the >> plugin; >> * The database and schema are the same as Neutron's; >> * A migration script exists for this driver, which uses a different (and >> unique) version_table (see env.py [3]); >> * Entry points are properly used in setup.cfg [4] in order to provide >> migration scripts and Driver/Plugin shortcuts for Neutron; >> * UTs are run by including Neutron in the venv [2]. >> * The boilerplate is taken care of by cookiecutter [5]. >> > >> The advantage of the above approach is that it's very simple to pull off >> (the only thing you need is cookiecutter, a repo, and then you can just >> replicate the same tree structure that existed in Neutron for your own >> vendor code). Also it has the advantage of removing all the vendor code from >> Neutron (did I say that already?).
As far as the CI is concerned, it just >> needs to "learn" how to install the new plugin, which will require Neutron >> to be pre-existent. >> >> The typical installation workflow would be: >> - Install Neutron normally; >> - pull from pypi the vendor driver; >> - run the vendor db migration script; >> - Do everything else (configuration and execution) just like it was done >> before. >> >> Note that this same satellite approach is used by GBP (I know this is a >> bad word that once brought hundreds of ML replies, but that's just an >> example :) ) for the Juno timeframe [6]. This shows that the very same >> thing can be easily done for services. >> > > Okay, so if I understand you correctly, you're saying that was easier for > you to go entirely out of tree, and that you have done so already. Okay, > good for you, problem solved! > > One point that should be clear here is that, if someone is completely > comfortable with being entirely out of tree, and he/she has done so already > (I know of a few other examples besides the apic driver), then this > proposal does not apply to them. > > They are way ahead of us, and kudos to them! > > As far you're concerned, Ivar, if you want to promote this model for new > plugins/drivers contributions, by all means, I encourage you to document > this in a blog or the wiki and disseminate your findings so that people can > adopt your model if they wanted to. > > >> >> As far as ML2 is concerned, I think we should split it as well in order >> to treat all the plugins equally, but with the following caveats: >> >> * ML2 will be in a openstack repo under the networking program (kind of >> obvious); >> * The drivers can decide wether to stay in tree with ML2 or not (for a >> better community effort, but they will definitively evolve slower); >> * Don't care about the governance, Neutron will be in charge of this repo >> and will have the ability to promote whoever they want when needed. 
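[Editor's note: the satellite packaging Ivar describes — Neutron as a dependency, with entry points advertising the driver and its migration tooling — can be sketched in a setup.cfg along these lines. All package and class names here are hypothetical; only the neutron.ml2.mechanism_drivers namespace is the real stevedore group ML2 uses to discover mechanism drivers:]

```
# setup.cfg of a hypothetical out-of-tree ML2 mechanism driver
[metadata]
name = networking-acme

[entry_points]
# Real stevedore namespace ML2 scans for mechanism drivers
neutron.ml2.mechanism_drivers =
    acme = networking_acme.ml2.mech_driver:AcmeMechanismDriver
# Hypothetical console script wrapping the driver's own alembic migrations,
# run after Neutron's, with its own version_table as in Ivar's env.py [3]
console_scripts =
    acme-db-manage = networking_acme.db.migration.cli:main
```

[With this wiring, "pull from pypi the vendor driver; run the vendor db migration script" in the installation workflow above becomes pip install networking-acme followed by acme-db-manage upgrade, and ml2_conf.ini simply lists acme among mechanism_drivers.]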
>> > > We discussed this point before, and the decision is that we don't want to > be prescriptive about this. If people interested in ML2 drivers want to > stay close to each other or not, we should not mandate one or the other. > > >> As far as cogating is concerned, I think that using the above approach >> the breaking will exist just as long as the infra job understands how to >> install the ML2 driver from it's own repo. I don't see it as a big issue, >> But maybe it's just me, and my fabulous world where stuff works for no good >> reason. We could at least ask the infra team to understand it it's feasible. >> > > Well, you live in a nice wonderland, I envy you! This co-gating decision > is the result of a discussion with infra folks. > > >> Moreover, this is a work that we may need to do anyway! So it's better to >> just start it now thus creating an example for all the vendors that have to >> go through the split (back on Gary's point). >> > > Plenty of examples to go by, as mentioned time and time again. > > >> >> Appreciate your feedback, >> Ivar. >> >> [0] https://review.openstack.org/#/c/123596/ >> [1] https://github.com/noironetworks/apic-ml2-driver/tree/icehouse >> [2] >> https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/test-requirements.txt >> [3] >> https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/apic_ml2/neutron/db/migration/alembic_migrations/env.py >> [4] >> https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/setup.cfg >> [5] https://github.com/openstack-dev/cookiecutter >> [6] https://github.com/stackforge/group-based-policy >> >> On Tue, Dec 9, 2014 at 9:53 AM, Salvatore Orlando >> wrote: >> >>> >>> >>> On 9 December 2014 at 17:32, Armando M. wrote: >>> >>>> >>>> >>>> On 9 December 2014 at 09:41, Salvatore Orlando >>>> wrote: >>>> >>>>> I would like to chime into this discussion wearing my plugin developer >>>>> hat. 
>>>>> >>>>> We (the VMware team) have looked very carefully at the current >>>>> proposal for splitting off drivers and plugins from the main source code >>>>> tree. Therefore the concerns you've heard from Gary are not just ramblings >>>>> but are the results of careful examination of this proposal. >>>>> >>>>> While we agree with the final goal, the feeling is that for many >>>>> plugin maintainers this process change might be too much for what can be >>>>> accomplished in a single release cycle. >>>>> >>>> We actually gave a lot more than a cycle: >>>> >>>> >>>> https://review.openstack.org/#/c/134680/16/specs/kilo/core-vendor-decomposition.rst >>>> LINE 416 >>>> >>>> And in all honesty, I can only say that getting this done by such an >>>> experienced team as the Neutron team @VMware shouldn't take that long. >>>> >>> >>> We are probably not experienced enough. We always love to learn new >>> things. >>> >>> >>>> >>>> By the way, if Kyle can do it in the teeny tiny time that he has left >>>> after his PTL duties, then anyone can do it! :) >>>> >>>> https://review.openstack.org/#/c/140191/ >>>> >>> >>> I think I should be able to use mv & git push as well - I think however >>> there's a bit more to it than that. >>> >>> >>>> >>>> As a member of the drivers team, I am still very supportive of the >>>>> split, I just want to make sure that it's made in a sustainable way; I also >>>>> understand that 'sustainability' has been one of the requirements of the >>>>> current proposal, and therefore we should all be on the same page on this >>>>> aspect. >>>>> >>>>> However, we did a simple exercise trying to assess the amount of work >>>>> needed to achieve something which might be acceptable to satisfy the >>>>> process. Without going into too many details, this requires efforts for: >>>>> >>>>> - refactoring the code to achieve a plugin module simple and thin enough >>>>> to satisfy the requirements.
Unfortunately a radical approach like the one >>>>> in [1] with a reference to an external library is not pursuable for us >>>>> >>>>> - maintaining code repositories outside of the neutron scope and the >>>>> necessary infrastructure >>>>> >>>>> - reinforcing our CI infrastructure, and improve our error detection >>>>> and log analysis capabilities to improve reaction times upon failures >>>>> triggered by upstream changes. As you know, even if the plugin interface is >>>>> solid-ish, the dependency on the db base class increases the chances of >>>>> upstream changes breaking 3rd party plugins. >>>>> >>>> >>>> No-one is advocating for approach laid out in [1], but a lot of code >>>> can be moved elsewhere (like the nsxlib) without too much effort. Don't >>>> forget that not so long ago I was the maintainer of this plugin and the one >>>> who built the VMware NSX CI; I know very well what it takes to scope this >>>> effort, and I can support you in the process. >>>> >>> >>> Thanks for this clarification. I was sure that you guys were not >>> advocating for a ninja-split thing, but I wanted just to be sure of that. >>> I'm also pretty sure our engineering team values your support. >>> >>>> The feedback from our engineering team is that satisfying the >>>>> requirements of this new process might not be feasible in the Kilo >>>>> timeframe, both for existing plugins and for new plugins and drivers that >>>>> should be upstreamed (there are a few proposed on neutron-specs at the >>>>> moment, which are all in -2 status considering the impending approval of >>>>> the split out). >>>>> >>>> No new plugins can and will be accepted if they do not adopt the >>>> proposed model, let's be very clear about this. >>>> >>> >>> This is also what I gathered from the proposal. It seems that you're >>> however stating that there might be some flexibility in defining how much a >>> plugin complies with the new model. 
I will need to go back to the drawing >>> board with the rest of my team and see in which way this can work for us. >>> >>> >>>> The questions I would like to bring to the wider community are >>>>> therefore the following: >>>>> >>>>> 1 - Is there a possibility of making a further concession on the >>>>> current proposal, where maintainers are encouraged to experiment with the >>>>> plugin split in Kilo, but will actually required to do it in the next >>>>> release? >>>>> >>>> This is exactly what the spec is proposing: get started now, and it >>>> does not matter if you don't finish in time. >>>> >>> >>> I think the deprecation note at line 416 still scares people off a bit. >>> To me your word is enough, no change is needed. >>> >>>> 2 - What could be considered as a acceptable as a new plugin? I >>>>> understand that they would be accepted only as ?thin integration modules?, >>>>> which ideally should just be a pointer to code living somewhere else. I?m >>>>> not questioning the validity of this approach, but it has been brought to >>>>> my attention that this will actually be troubling for teams which have made >>>>> an investment in the previous release cycles to upstream plugins following >>>>> the ?old? process >>>>> >>>> You are not alone. Other efforts went through the same process [1, 2, >>>> 3]. Adjusting is a way of life. No-one is advocating for throwing away >>>> existing investment. This proposal actually promotes new and pre-existing >>>> investment. >>>> >>>> [1] https://review.openstack.org/#/c/104452/ >>>> [2] https://review.openstack.org/#/c/103728/ >>>> [3] https://review.openstack.org/#/c/136091/ >>>> >>> 3 - Regarding the above discussion on "ML2 or not ML2". The point on >>>>> co-gating is well taken. Eventually we'd like to remove this binding - >>>>> because I believe the ML2 subteam would also like to have more freedom on >>>>> their plugin. 
Do we already have an idea about how doing that without >>>>> completely moving away from the db_base class approach? >>>>> >>>> Sure, if you like to participate in the process, we can only welcome >>>> you! >>>> >>> >>> I actually asked you if you already had an idea... should I take that as >>> a no? >>> >>>> Thanks for your attention and for reading through this >>>>> >>>>> Salvatore >>>>> >>>>> [1] >>>>> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/vmware/plugin.py#n22 >>>>> >>>>> On 8 December 2014 at 21:51, Maru Newby wrote: >>>>> >>>>>> >>>>>> On Dec 7, 2014, at 10:51 AM, Gary Kotton wrote: >>>>>> >>>>>> > Hi Kyle, >>>>>> > I am not missing the point. I understand the proposal. I just think >>>>>> that it has some shortcomings (unless I misunderstand, which will certainly >>>>>> not be the first time and most definitely not the last). The thinning out >>>>>> is to have a shim in place. I understand this and this will be the entry >>>>>> point for the plugin. I do not have a concern for this. My concern is that >>>>>> we are not doing this with the ML2 off the bat. That should lead by example >>>>>> as it is our reference architecture. Lets not kid anyone, but we are going >>>>>> to hit some problems with the decomposition. I would prefer that it be done >>>>>> with the default implementation. Why? >>>>>> >>>>>> The proposal is to move vendor-specific logic out of the tree to >>>>>> increase vendor control over such code while decreasing load on reviewers. >>>>>> ML2 doesn?t contain vendor-specific logic - that?s the province of ML2 >>>>>> drivers - so it is not a good target for the proposed decomposition by >>>>>> itself. >>>>>> >>>>>> >>>>>> > ? Cause we will fix them quicker as it is something that >>>>>> prevent Neutron from moving forwards >>>>>> > ? We will just need to fix in one place first and not in N >>>>>> (where N is the vendor plugins) >>>>>> > ? This is a community effort ? 
so we will have a lot more >>>>>> eyes on it >>>>>> > ? It will provide a reference architecture for all new >>>>>> plugins that want to be added to the tree >>>>>> > ? It will provide a working example for plugin that are >>>>>> already in tree and are to be replaced by the shim >>>>>> > If we really want to do this, we can say freeze all development >>>>>> (which is just approvals for patches) for a few days so that we will can >>>>>> just focus on this. I stated what I think should be the process on the >>>>>> review. For those who do not feel like finding the link: >>>>>> > ? Create a stack forge project for ML2 >>>>>> > ? Create the shim in Neutron >>>>>> > ? Update devstack for the to use the two repos and the shim >>>>>> > When #3 is up and running we switch for that to be the gate. Then >>>>>> we start a stopwatch on all other plugins. >>>>>> >>>>>> As was pointed out on the spec (see Miguel?s comment on r15), the ML2 >>>>>> plugin and the OVS mechanism driver need to remain in the main Neutron repo >>>>>> for now. Neutron gates on ML2+OVS and landing a breaking change in the >>>>>> Neutron repo along with its corresponding fix to a separate ML2 repo would >>>>>> be all but impossible under the current integrated gating scheme. >>>>>> Plugins/drivers that do not gate Neutron have no such constraint. >>>>>> >>>>>> >>>>>> Maru >>>>>> >>>>>> >>>>>> > Sure, I?ll catch you on IRC tomorrow. I guess that you guys will >>>>>> bash out the details at the meetup. Sadly I will not be able to attend ? so >>>>>> you will have to delay on the tar and feathers. >>>>>> > Thanks >>>>>> > Gary >>>>>> > >>>>>> > >>>>>> > From: "mestery at mestery.com" >>>>>> > Reply-To: OpenStack List >>>>>> > Date: Sunday, December 7, 2014 at 7:19 PM >>>>>> > To: OpenStack List >>>>>> > Cc: "openstack at lists.openstack.org" >>>>>> > Subject: Re: [openstack-dev] [Neutron] Core/Vendor code >>>>>> decomposition >>>>>> > >>>>>> > Gary, you are still miss the point of this proposal. 
Please see my >>>>>> comments in review. We are not forcing things out of tree, we are thinning >>>>>> them. The text you quoted in the review makes that clear. We will look at >>>>>> further decomposing ML2 post Kilo, but we have to be realistic with what we >>>>>> can accomplish during Kilo. >>>>>> > >>>>>> > Find me on IRC Monday morning and we can discuss further if you >>>>>> still have questions and concerns. >>>>>> > >>>>>> > Thanks! >>>>>> > Kyle >>>>>> > >>>>>> > On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton >>>>>> wrote: >>>>>> >> Hi, >>>>>> >> I have raised my concerns on the proposal. I think that all >>>>>> plugins should be treated on an equal footing. My main concern is having >>>>>> the ML2 plugin in tree whilst the others will be moved out of tree will be >>>>>> problematic. I think that the model will be complete if the ML2 was also >>>>>> out of tree. This will help crystalize the idea and make sure that the >>>>>> model works correctly. >>>>>> >> Thanks >>>>>> >> Gary >>>>>> >> >>>>>> >> From: "Armando M." >>>>>> >> Reply-To: OpenStack List >>>>>> >> Date: Saturday, December 6, 2014 at 1:04 AM >>>>>> >> To: OpenStack List , " >>>>>> openstack at lists.openstack.org" >>>>>> >> Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition >>>>>> >> >>>>>> >> Hi folks, >>>>>> >> >>>>>> >> For a few weeks now the Neutron team has worked tirelessly on [1]. >>>>>> >> >>>>>> >> This initiative stems from the fact that as the project matures, >>>>>> evolution of processes and contribution guidelines need to evolve with it. >>>>>> This is to ensure that the project can keep on thriving in order to meet >>>>>> the needs of an ever growing community. >>>>>> >> >>>>>> >> The effort of documenting intentions, and fleshing out the various >>>>>> details of the proposal is about to reach an end, and we'll soon kick the >>>>>> tires to put the proposal into practice. Since the spec has grown pretty >>>>>> big, I'll try to capture the tl;dr below. 
>>>>>> >> >>>>>> >> If you have any comment please do not hesitate to raise them here >>>>>> and/or reach out to us. >>>>>> >> >>>>>> >> tl;dr >>> >>>>>> >> >>>>>> >> From the Kilo release, we'll initiate a set of steps to change the >>>>>> following areas: >>>>>> >> ? Code structure: every plugin or driver that exists or wants >>>>>> to exist as part of Neutron project is decomposed in an slim vendor >>>>>> integration (which lives in the Neutron repo), plus a bulkier vendor >>>>>> library (which lives in an independent publicly available repo); >>>>>> >> ? Contribution process: this extends to the following aspects: >>>>>> >> ? Design and Development: the process is largely >>>>>> unchanged for the part that pertains the vendor integration; the maintainer >>>>>> team is fully auto governed for the design and development of the vendor >>>>>> library; >>>>>> >> ? Testing and Continuous Integration: maintainers >>>>>> will be required to support their vendor integration with 3rd CI testing; >>>>>> the requirements for 3rd CI testing are largely unchanged; >>>>>> >> ? Defect management: the process is largely >>>>>> unchanged, issues affecting the vendor library can be tracked with >>>>>> whichever tool/process the maintainer see fit. In cases where vendor >>>>>> library fixes need to be reflected in the vendor integration, the usual >>>>>> OpenStack defect management apply. >>>>>> >> ? Documentation: there will be some changes to the >>>>>> way plugins and drivers are documented with the intention of promoting >>>>>> discoverability of the integrated solutions. >>>>>> >> ? Adoption and transition plan: we strongly advise >>>>>> maintainers to stay abreast of the developments of this effort, as their >>>>>> code, their CI, etc will be affected. The core team will provide guidelines >>>>>> and support throughout this cycle the ensure a smooth transition. >>>>>> >> To learn more, please refer to [1]. 
>>>>>> >> >>>>>> >> Many thanks, >>>>>> >> Armando >>>>>> >> >>>>>> >> [1] https://review.openstack.org/#/c/134680 >>>>>> >> >>>>>> >> _______________________________________________ >>>>>> >> OpenStack-dev mailing list >>>>>> >> OpenStack-dev at lists.openstack.org >>>>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >> >>>>>> > >>>>>> > _______________________________________________ >>>>>> > OpenStack-dev mailing list >>>>>> > OpenStack-dev at lists.openstack.org >>>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> OpenStack-dev mailing list >>>>>> OpenStack-dev at lists.openstack.org >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> OpenStack-dev mailing list >>>>> OpenStack-dev at lists.openstack.org >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kurt.r.taylor at gmail.com Wed Dec 10 00:39:44 2014 From: kurt.r.taylor at gmail.com (Kurt Taylor) Date: Tue, 9 Dec 2014 18:39:44 -0600 Subject: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time In-Reply-To: <5487296C.5010406@anteaya.info> References: <5487296C.5010406@anteaya.info> Message-ID: Anita, I really appreciate your willingness to host a meeting, but, the response from the group in the previous thread was in support of the alternating meeting approach. Daya, Erlon, Trinath, Steve, Misha and others all agreed. I am confused why you would not want to go with the consensus on this. Thanks again for everything that you do for us! Kurt Taylor (krtaylor) On Tue, Dec 9, 2014 at 10:55 AM, Anita Kuno wrote: > On 12/09/2014 08:32 AM, Kurt Taylor wrote: > > All of the feedback so far has supported moving the existing IRC > > Third-party CI meeting to better fit a worldwide audience. > > > > The consensus is that we will have only 1 meeting per week at alternating > > times. You can see examples of other teams with alternating meeting times > > at: https://wiki.openstack.org/wiki/Meetings > > > > This way, one week we are good for one part of the world, the next week > for > > the other. You will not need to attend both meetings, just the meeting > time > > every other week that fits your schedule. > > > > Proposed times in UTC are being voted on here: > > https://www.google.com/moderator/#16/e=21b93c > > > > Please vote on the time that is best for you. I would like to finalize > the > > new times this week. > > > > Thanks! > > Kurt Taylor (krtaylor) > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > Note that Kurt is welcome to do as he pleases with his own time. > > I will be having meetings in the irc channel for the times that I have > booked. 
> > Thanks, > Anita. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gus at inodes.org Wed Dec 10 00:43:04 2014 From: gus at inodes.org (Angus Lees) Date: Wed, 10 Dec 2014 00:43:04 +0000 Subject: [openstack-dev] [neutron] Linux capabilities vs sudo/rootwrap? Message-ID: [I tried to find any previous discussion of this and failed - I'd appreciate a pointer to any email threads / specs where this has already been discussed.] Currently neutron is given the ability to do just about anything to networking via rootwrap, sudo, and the IpFilter check (allow anything except 'netns exec'). This is completely in line with the role a typical neutron agent is expected to play on the local system. There are also recurring discussions/issues around the overhead of rootwrap, costs of sudo calls, etc - and projects such as rootwrap daemon underway to improve this. How crazy would it be to just give neutron CAP_NET_ADMIN (where required), and allow it to make network changes via ip (netlink) calls directly? We will still need rootwrap/sudo for other cases, but this should remove a lot of the separate process overhead for common operations, make us independent of iproute cli versions, and allow us to use a direct programmatic API (rtnetlink and other syscalls) rather than generating command lines and regex parsing output everywhere. For what it's worth, CAP_NET_ADMIN is not sufficient to allow 'netns exec' (requires CAP_SYS_ADMIN), so it preserves the IpFilter semantics. On the downside, many of the frequent rootwrap calls _do_ involve creating/modifying network namespaces so we wouldn't see advantages for these cases. 
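[Editor's illustration] To make the "direct programmatic API (rtnetlink)" point above concrete, here is a minimal sketch of building an rtnetlink request by hand in Python. This is not neutron code; the constant values are taken from <linux/netlink.h> and <linux/rtnetlink.h>, and the sketch only constructs the message rather than sending it.

```python
import struct

# Constant values from <linux/netlink.h> and <linux/rtnetlink.h>
NLM_F_REQUEST = 0x1
NLM_F_DUMP = 0x300        # NLM_F_ROOT | NLM_F_MATCH
RTM_GETLINK = 18

def rtm_getlink_dump(seq=1):
    """Build an rtnetlink 'dump all links' request: nlmsghdr + ifinfomsg."""
    # struct ifinfomsg: family, pad byte, type, index, flags, change
    ifinfomsg = struct.pack("BxHiII", 0, 0, 0, 0, 0)
    length = 16 + len(ifinfomsg)          # nlmsghdr itself is 16 bytes
    nlmsghdr = struct.pack("IHHII", length, RTM_GETLINK,
                           NLM_F_REQUEST | NLM_F_DUMP, seq, 0)
    return nlmsghdr + ifinfomsg

msg = rtm_getlink_dump()
# To send: socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, 0)
# where 0 == NETLINK_ROUTE.  Dumping links this way needs no
# privilege; RTM_NEWLINK/RTM_DELLINK (modifying links) is where
# CAP_NET_ADMIN comes in.
print(len(msg))  # 32
```

A netlink-based agent would build messages like this (or use a binding such as pyroute2) instead of shelling out to `ip link show` and regex-parsing its output.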
I need to experiment further before having a proposal for that part (just granting CAP_SYS_ADMIN too is too broad; user namespaces help a lot, but they're very new and scary so not available everywhere). Thoughts? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kurt.r.taylor at gmail.com Wed Dec 10 00:46:48 2014 From: kurt.r.taylor at gmail.com (Kurt Taylor) Date: Tue, 9 Dec 2014 18:46:48 -0600 Subject: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time In-Reply-To: References: Message-ID: So far it looks like we have centered around 2 options: Option "A" 1200 and 2200 UTC Option "D" 1500 and 0400 UTC There is still time to pick your best time. Please vote at https://www.google.com/moderator/#16/e=21b93c Special thanks to Steve, Daya, Markus, Mikhail, Emily, Nurit, Edwin and Ramy for taking the time to vote. Kurt Taylor (krtaylor) On Tue, Dec 9, 2014 at 9:32 AM, Kurt Taylor wrote: > All of the feedback so far has supported moving the existing IRC > Third-party CI meeting to better fit a worldwide audience. > > The consensus is that we will have only 1 meeting per week at alternating > times. You can see examples of other teams with alternating meeting times > at: https://wiki.openstack.org/wiki/Meetings > > This way, one week we are good for one part of the world, the next week > for the other. You will not need to attend both meetings, just the meeting > time every other week that fits your schedule. > > Proposed times in UTC are being voted on here: > https://www.google.com/moderator/#16/e=21b93c > > Please vote on the time that is best for you. I would like to finalize the > new times this week. > > Thanks! > Kurt Taylor (krtaylor) > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From george.shuklin at gmail.com Wed Dec 10 01:38:56 2014 From: george.shuklin at gmail.com (George Shuklin) Date: Wed, 10 Dec 2014 03:38:56 +0200 Subject: [openstack-dev] [neutron] Linux capabilities vs sudo/rootwrap? In-Reply-To: References: Message-ID: <5487A430.6090408@gmail.com> Is ovs-vsctl gonna be happy with CAP_NET_ADMIN? On 12/10/2014 02:43 AM, Angus Lees wrote: > [I tried to find any previous discussion of this and failed - I'd > appreciate a pointer to any email threads / specs where this has > already been discussed.] > > Currently neutron is given the ability to do just about anything to > networking via rootwrap, sudo, and the IpFilter check (allow anything > except 'netns exec'). This is completely in line with the role a > typical neutron agent is expected to play on the local system. > > There are also recurring discussions/issues around the overhead of > rootwrap, costs of sudo calls, etc - and projects such as rootwrap > daemon underway to improve this. > > How crazy would it be to just give neutron CAP_NET_ADMIN (where > required), and allow it to make network changes via ip (netlink) calls > directly? > We will still need rootwrap/sudo for other cases, but this should > remove a lot of the separate process overhead for common operations, > make us independent of iproute cli versions, and allow us to use a > direct programmatic API (rtnetlink and other syscalls) rather than > generating command lines and regex parsing output everywhere. > > For what it's worth, CAP_NET_ADMIN is not sufficient to allow 'netns > exec' (requires CAP_SYS_ADMIN), so it preserves the IpFilter > semantics. On the downside, many of the frequent rootwrap calls _do_ > involve creating/modifying network namespaces so we wouldn't see > advantages for these cases. 
I need to experiment further before > having a proposal for that part (just granting CAP_SYS_ADMIN too is > too broad; user namespaces help a lot, but they're very new and scary > so not available everywhere). > > Thoughts? > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivarlazzaro at gmail.com Wed Dec 10 02:18:51 2014 From: ivarlazzaro at gmail.com (Ivar Lazzaro) Date: Tue, 9 Dec 2014 18:18:51 -0800 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: Hi Armando, Thanks for your feedback! > Okay, so if I understand you correctly, you're saying that was easier for > you to go entirely out of tree, and that you have done so already. Okay, > good for you, problem solved! > > One point that should be clear here is that, if someone is completely > comfortable with being entirely out of tree, and he/she has done so already > (I know of a few other examples besides the apic driver), then this > proposal does not apply to them. > > They are way ahead of us, and kudos to them! > Actually, the apic driver lives out of tree only for the icehouse release; that was just a practical example of how the "satellite" approach would work. Honestly, I think we can declare victory and say that the "problem is solved" only when we finally define a standard way for this split to happen! And that's the whole point of the current proposal [0], and also the point of this discussion. If that weren't the case, we could have just said "by M all the vendor code will be removed, it's your problem now!" :) My concern about the current proposal is that it still keeps a lot of vendor code in tree. I am thus wondering why this is necessary, and am looking for feedback on the approach hinted at in my previous email. 
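[Editor's illustration] For readers wondering what the "entry points are properly used in setup.cfg" part of the out-of-tree approach looks like in practice, a hypothetical fragment is sketched below. All package, module, and script names are invented for illustration (they are not taken from the apic-ml2 repo); `neutron.ml2.mechanism_drivers` is the stevedore namespace ML2 uses to discover mechanism drivers by alias.

```ini
# Hypothetical setup.cfg fragment for an out-of-tree ML2 mechanism driver.
[entry_points]
neutron.ml2.mechanism_drivers =
    example_vendor = example_ml2.mech_driver:ExampleMechanismDriver
console_scripts =
    example-ml2-db-manage = example_ml2.db.migration.cli:main
```

With this installed, the operator lists `example_vendor` in ml2_conf.ini's mechanism_drivers option, and runs `example-ml2-db-manage` for the driver's own alembic migrations, kept in a separate version_table as described above.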
Well, you live in a nice wonderland, I envy you! This co-gating decision is > the result of a discussion with infra folks. Then the white rabbit in my head lied to me :) !! Thanks for the clarification. The ML2 part of my previous email was just an almost orthogonal discussion though, it doesn't change the fact that IMHO we could handle the standard split differently. Ivar. [0] https://review.openstack.org/#/c/134680 On Tue, Dec 9, 2014 at 2:21 PM, Armando M. wrote: > > > On 9 December 2014 at 13:59, Ivar Lazzaro wrote: > >> I agree with Salvatore that the split is not an easy thing to achieve for >> vendors, and I would like to bring up my case to see if there are ways to >> make this at least a bit simpler. >> >> At some point I had the need to backport vendor code from Juno to >> Icehouse (see first attempt here [0]). That in [0] was some weird approach >> that put unnecessary burden on infra, neutron cores and even packagers, so >> I decided to move to a more decoupled approach that was basically >> completely splitting my code from Neutron. You can find the result here [1]. >> The focal points of this approach are: >> >> * **all** the vendor code is removed; >> * Neutron is used as a dependency, pulled directly from github for UTs >> (see test-requirements [2]) and explicitly required when installing the >> plugin; >> * The Database and Schema is the same as Neutron's; >> * A migration script exists for this driver, which uses a different (and >> unique) version_table (see env.py [3]); >> * Entry points are properly used in setup.cfg [4] in order to provide >> migration scripts and Driver/Plugin shortcuts for Neutron; >> * UTs are run by including Neutron in the venv [2]. >> * The boilerplate is taken care by cookiecutter [5]. 
>> > >> The advantage of the above approach, is that it's very simple to pull off >> (only thing you need is cookiecutter, a repo, and then you can just >> replicate the same tree structure that existed in Neutron for your own >> vendor code). Also it has the advantage to remove all the vendor code from >> Neutron (did I say that already?). As far as the CI is concerned, it just >> needs to "learn" how to install the new plugin, which will require Neutron >> to be pre-existent. >> >> The typical installation workflow would be: >> - Install Neutron normally; >> - pull from pypi the vendor driver; >> - run the vendor db migration script; >> - Do everything else (configuration and execution) just like it was done >> before. >> >> Note that this same satellite approach is used by GBP (I know this is a >> bad word that once brought hundreds of ML replies, but that's just an >> example :) ) for the Juno timeframe [6]. This shows that the very same >> thing can be easily done for services. >> > > Okay, so if I understand you correctly, you're saying that was easier for > you to go entirely out of tree, and that you have done so already. Okay, > good for you, problem solved! > > One point that should be clear here is that, if someone is completely > comfortable with being entirely out of tree, and he/she has done so already > (I know of a few other examples besides the apic driver), then this > proposal does not apply to them. > > They are way ahead of us, and kudos to them! > > As far you're concerned, Ivar, if you want to promote this model for new > plugins/drivers contributions, by all means, I encourage you to document > this in a blog or the wiki and disseminate your findings so that people can > adopt your model if they wanted to. 
> > >> >> As far as ML2 is concerned, I think we should split it as well in order >> to treat all the plugins equally, but with the following caveats: >> >> * ML2 will be in a openstack repo under the networking program (kind of >> obvious); >> * The drivers can decide wether to stay in tree with ML2 or not (for a >> better community effort, but they will definitively evolve slower); >> * Don't care about the governance, Neutron will be in charge of this repo >> and will have the ability to promote whoever they want when needed. >> > > We discussed this point before, and the decision is that we don't want to > be prescriptive about this. If people interested in ML2 drivers want to > stay close to each other or not, we should not mandate one or the other. > > >> As far as cogating is concerned, I think that using the above approach >> the breaking will exist just as long as the infra job understands how to >> install the ML2 driver from it's own repo. I don't see it as a big issue, >> But maybe it's just me, and my fabulous world where stuff works for no good >> reason. We could at least ask the infra team to understand it it's feasible. >> > > Well, you live in a nice wonderland, I envy you! This co-gating decision > is the result of a discussion with infra folks. > > >> Moreover, this is a work that we may need to do anyway! So it's better to >> just start it now thus creating an example for all the vendors that have to >> go through the split (back on Gary's point). >> > > Plenty of examples to go by, as mentioned time and time again. > > >> >> Appreciate your feedback, >> Ivar. 
>> >> [0] https://review.openstack.org/#/c/123596/ >> [1] https://github.com/noironetworks/apic-ml2-driver/tree/icehouse >> [2] >> https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/test-requirements.txt >> [3] >> https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/apic_ml2/neutron/db/migration/alembic_migrations/env.py >> [4] >> https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/setup.cfg >> [5] https://github.com/openstack-dev/cookiecutter >> [6] https://github.com/stackforge/group-based-policy >> >> On Tue, Dec 9, 2014 at 9:53 AM, Salvatore Orlando >> wrote: >> >>> >>> >>> On 9 December 2014 at 17:32, Armando M. wrote: >>> >>>> >>>> >>>> On 9 December 2014 at 09:41, Salvatore Orlando >>>> wrote: >>>> >>>>> I would like to chime into this discussion wearing my plugin developer >>>>> hat. >>>>> >>>>> We (the VMware team) have looked very carefully at the current >>>>> proposal for splitting off drivers and plugins from the main source code >>>>> tree. Therefore the concerns you've heard from Gary are not just ramblings >>>>> but are the results of careful examination of this proposal. >>>>> >>>>> While we agree with the final goal, the feeling is that for many >>>>> plugin maintainers this process change might be too much for what can be >>>>> accomplished in a single release cycle. >>>>> >>>> We actually gave a lot more than a cycle: >>>> >>>> >>>> https://review.openstack.org/#/c/134680/16/specs/kilo/core-vendor-decomposition.rst >>>> LINE 416 >>>> >>>> And in all honestly, I can only tell that getting this done by such an >>>> experienced team like the Neutron team @VMware shouldn't take that long. >>>> >>> >>> We are probably not experienced enough. We always love to learn new >>> things. >>> >>> >>>> >>>> By the way, if Kyle can do it in his teeny tiny time that he has left >>>> after his PTL duties, then anyone can do it! 
:) >>>> >>>> https://review.openstack.org/#/c/140191/ >>>> >>> >>> I think I should be able to use mv & git push as well - I think however >>> there's a bit more than that to it. >>> >>> >>>> >>>> As a member of the drivers team, I am still very supportive of the >>>>> split, I just want to make sure that it?s made in a sustainable way; I also >>>>> understand that ?sustainability? has been one of the requirements of the >>>>> current proposal, and therefore we should all be on the same page on this >>>>> aspect. >>>>> >>>>> However, we did a simple exercise trying to assess the amount of work >>>>> needed to achieve something which might be acceptable to satisfy the >>>>> process. Without going into too many details, this requires efforts for: >>>>> >>>>> - refactor the code to achieve a plugin module simple and thin enough >>>>> to satisfy the requirements. Unfortunately a radical approach like the one >>>>> in [1] with a reference to an external library is not pursuable for us >>>>> >>>>> - maintaining code repositories outside of the neutron scope and the >>>>> necessary infrastructure >>>>> >>>>> - reinforcing our CI infrastructure, and improve our error detection >>>>> and log analysis capabilities to improve reaction times upon failures >>>>> triggered by upstream changes. As you know, even if the plugin interface is >>>>> solid-ish, the dependency on the db base class increases the chances of >>>>> upstream changes breaking 3rd party plugins. >>>>> >>>> >>>> No-one is advocating for approach laid out in [1], but a lot of code >>>> can be moved elsewhere (like the nsxlib) without too much effort. Don't >>>> forget that not so long ago I was the maintainer of this plugin and the one >>>> who built the VMware NSX CI; I know very well what it takes to scope this >>>> effort, and I can support you in the process. >>>> >>> >>> Thanks for this clarification. 
I was sure that you guys were not >>> advocating for a ninja-split thing, but I wanted just to be sure of that. >>> I'm also pretty sure our engineering team values your support. >>> >>>> The feedback from our engineering team is that satisfying the >>>>> requirements of this new process might not be feasible in the Kilo >>>>> timeframe, both for existing plugins and for new plugins and drivers that >>>>> should be upstreamed (there are a few proposed on neutron-specs at the >>>>> moment, which are all in -2 status considering the impending approval of >>>>> the split out). >>>>> >>>> No new plugins can and will be accepted if they do not adopt the >>>> proposed model, let's be very clear about this. >>>> >>> >>> This is also what I gathered from the proposal. It seems that you're >>> however stating that there might be some flexibility in defining how much a >>> plugin complies with the new model. I will need to go back to the drawing >>> board with the rest of my team and see in which way this can work for us. >>> >>> >>>> The questions I would like to bring to the wider community are >>>>> therefore the following: >>>>> >>>>> 1 - Is there a possibility of making a further concession on the >>>>> current proposal, where maintainers are encouraged to experiment with the >>>>> plugin split in Kilo, but will actually required to do it in the next >>>>> release? >>>>> >>>> This is exactly what the spec is proposing: get started now, and it >>>> does not matter if you don't finish in time. >>>> >>> >>> I think the deprecation note at line 416 still scares people off a bit. >>> To me your word is enough, no change is needed. >>> >>>> 2 - What could be considered as a acceptable as a new plugin? I >>>>> understand that they would be accepted only as ?thin integration modules?, >>>>> which ideally should just be a pointer to code living somewhere else. 
I?m >>>>> not questioning the validity of this approach, but it has been brought to >>>>> my attention that this will actually be troubling for teams which have made >>>>> an investment in the previous release cycles to upstream plugins following >>>>> the ?old? process >>>>> >>>> You are not alone. Other efforts went through the same process [1, 2, >>>> 3]. Adjusting is a way of life. No-one is advocating for throwing away >>>> existing investment. This proposal actually promotes new and pre-existing >>>> investment. >>>> >>>> [1] https://review.openstack.org/#/c/104452/ >>>> [2] https://review.openstack.org/#/c/103728/ >>>> [3] https://review.openstack.org/#/c/136091/ >>>> >>> 3 - Regarding the above discussion on "ML2 or not ML2". The point on >>>>> co-gating is well taken. Eventually we'd like to remove this binding - >>>>> because I believe the ML2 subteam would also like to have more freedom on >>>>> their plugin. Do we already have an idea about how doing that without >>>>> completely moving away from the db_base class approach? >>>>> >>>> Sure, if you like to participate in the process, we can only welcome >>>> you! >>>> >>> >>> I actually asked you if you already had an idea... should I take that as >>> a no? >>> >>>> Thanks for your attention and for reading through this >>>>> >>>>> Salvatore >>>>> >>>>> [1] >>>>> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/vmware/plugin.py#n22 >>>>> >>>>> On 8 December 2014 at 21:51, Maru Newby wrote: >>>>> >>>>>> >>>>>> On Dec 7, 2014, at 10:51 AM, Gary Kotton wrote: >>>>>> >>>>>> > Hi Kyle, >>>>>> > I am not missing the point. I understand the proposal. I just think >>>>>> that it has some shortcomings (unless I misunderstand, which will certainly >>>>>> not be the first time and most definitely not the last). The thinning out >>>>>> is to have a shim in place. I understand this and this will be the entry >>>>>> point for the plugin. I do not have a concern for this. 
My concern is that >>>>>> we are not doing this with the ML2 off the bat. That should lead by example >>>>>> as it is our reference architecture. Lets not kid anyone, but we are going >>>>>> to hit some problems with the decomposition. I would prefer that it be done >>>>>> with the default implementation. Why? >>>>>> >>>>>> The proposal is to move vendor-specific logic out of the tree to >>>>>> increase vendor control over such code while decreasing load on reviewers. >>>>>> ML2 doesn?t contain vendor-specific logic - that?s the province of ML2 >>>>>> drivers - so it is not a good target for the proposed decomposition by >>>>>> itself. >>>>>> >>>>>> >>>>>> > ? Cause we will fix them quicker as it is something that >>>>>> prevent Neutron from moving forwards >>>>>> > ? We will just need to fix in one place first and not in N >>>>>> (where N is the vendor plugins) >>>>>> > ? This is a community effort ? so we will have a lot more >>>>>> eyes on it >>>>>> > ? It will provide a reference architecture for all new >>>>>> plugins that want to be added to the tree >>>>>> > ? It will provide a working example for plugin that are >>>>>> already in tree and are to be replaced by the shim >>>>>> > If we really want to do this, we can say freeze all development >>>>>> (which is just approvals for patches) for a few days so that we will can >>>>>> just focus on this. I stated what I think should be the process on the >>>>>> review. For those who do not feel like finding the link: >>>>>> > ? Create a stack forge project for ML2 >>>>>> > ? Create the shim in Neutron >>>>>> > ? Update devstack for the to use the two repos and the shim >>>>>> > When #3 is up and running we switch for that to be the gate. Then >>>>>> we start a stopwatch on all other plugins. >>>>>> >>>>>> As was pointed out on the spec (see Miguel?s comment on r15), the ML2 >>>>>> plugin and the OVS mechanism driver need to remain in the main Neutron repo >>>>>> for now. 
Neutron gates on ML2+OVS and landing a breaking change in the >>>>>> Neutron repo along with its corresponding fix to a separate ML2 repo would >>>>>> be all but impossible under the current integrated gating scheme. >>>>>> Plugins/drivers that do not gate Neutron have no such constraint. >>>>>> >>>>>> >>>>>> Maru >>>>>> >>>>>> >>>>>> > Sure, I?ll catch you on IRC tomorrow. I guess that you guys will >>>>>> bash out the details at the meetup. Sadly I will not be able to attend ? so >>>>>> you will have to delay on the tar and feathers. >>>>>> > Thanks >>>>>> > Gary >>>>>> > >>>>>> > >>>>>> > From: "mestery at mestery.com" >>>>>> > Reply-To: OpenStack List >>>>>> > Date: Sunday, December 7, 2014 at 7:19 PM >>>>>> > To: OpenStack List >>>>>> > Cc: "openstack at lists.openstack.org" >>>>>> > Subject: Re: [openstack-dev] [Neutron] Core/Vendor code >>>>>> decomposition >>>>>> > >>>>>> > Gary, you are still miss the point of this proposal. Please see my >>>>>> comments in review. We are not forcing things out of tree, we are thinning >>>>>> them. The text you quoted in the review makes that clear. We will look at >>>>>> further decomposing ML2 post Kilo, but we have to be realistic with what we >>>>>> can accomplish during Kilo. >>>>>> > >>>>>> > Find me on IRC Monday morning and we can discuss further if you >>>>>> still have questions and concerns. >>>>>> > >>>>>> > Thanks! >>>>>> > Kyle >>>>>> > >>>>>> > On Sun, Dec 7, 2014 at 2:08 AM, Gary Kotton >>>>>> wrote: >>>>>> >> Hi, >>>>>> >> I have raised my concerns on the proposal. I think that all >>>>>> plugins should be treated on an equal footing. My main concern is having >>>>>> the ML2 plugin in tree whilst the others will be moved out of tree will be >>>>>> problematic. I think that the model will be complete if the ML2 was also >>>>>> out of tree. This will help crystalize the idea and make sure that the >>>>>> model works correctly. >>>>>> >> Thanks >>>>>> >> Gary >>>>>> >> >>>>>> >> From: "Armando M." 
>>>>>> >> Reply-To: OpenStack List >>>>>> >> Date: Saturday, December 6, 2014 at 1:04 AM >>>>>> >> To: OpenStack List , " >>>>>> openstack at lists.openstack.org" >>>>>> >> Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition >>>>>> >> >>>>>> >> Hi folks, >>>>>> >> >>>>>> >> For a few weeks now the Neutron team has worked tirelessly on [1]. >>>>>> >> >>>>>> >> This initiative stems from the fact that as the project matures, >>>>>> evolution of processes and contribution guidelines need to evolve with it. >>>>>> This is to ensure that the project can keep on thriving in order to meet >>>>>> the needs of an ever growing community. >>>>>> >> >>>>>> >> The effort of documenting intentions, and fleshing out the various >>>>>> details of the proposal is about to reach an end, and we'll soon kick the >>>>>> tires to put the proposal into practice. Since the spec has grown pretty >>>>>> big, I'll try to capture the tl;dr below. >>>>>> >> >>>>>> >> If you have any comment please do not hesitate to raise them here >>>>>> and/or reach out to us. >>>>>> >> >>>>>> >> tl;dr >>> >>>>>> >> >>>>>> >> From the Kilo release, we'll initiate a set of steps to change the >>>>>> following areas: >>>>>> >> ? Code structure: every plugin or driver that exists or wants >>>>>> to exist as part of Neutron project is decomposed in an slim vendor >>>>>> integration (which lives in the Neutron repo), plus a bulkier vendor >>>>>> library (which lives in an independent publicly available repo); >>>>>> >> ? Contribution process: this extends to the following aspects: >>>>>> >> ? Design and Development: the process is largely >>>>>> unchanged for the part that pertains the vendor integration; the maintainer >>>>>> team is fully auto governed for the design and development of the vendor >>>>>> library; >>>>>> >> ? 
Testing and Continuous Integration: maintainers >>>>>> will be required to support their vendor integration with 3rd CI testing; >>>>>> the requirements for 3rd CI testing are largely unchanged; >>>>>> >> ? Defect management: the process is largely >>>>>> unchanged, issues affecting the vendor library can be tracked with >>>>>> whichever tool/process the maintainer see fit. In cases where vendor >>>>>> library fixes need to be reflected in the vendor integration, the usual >>>>>> OpenStack defect management apply. >>>>>> >> ? Documentation: there will be some changes to the >>>>>> way plugins and drivers are documented with the intention of promoting >>>>>> discoverability of the integrated solutions. >>>>>> >> ? Adoption and transition plan: we strongly advise >>>>>> maintainers to stay abreast of the developments of this effort, as their >>>>>> code, their CI, etc will be affected. The core team will provide guidelines >>>>>> and support throughout this cycle the ensure a smooth transition. >>>>>> >> To learn more, please refer to [1]. 
>>>>>> >> >>>>>> >> Many thanks, >>>>>> >> Armando >>>>>> >> >>>>>> >> [1] https://review.openstack.org/#/c/134680 > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From keshava.a at hp.com Wed Dec 10 04:18:34 2014 From: keshava.a at hp.com (A, Keshava) Date: Wed, 10 Dec 2014 04:18:34 +0000 Subject: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Message-ID: <891761EAFA335D44AD1FFDB9B4A8C063D7EE0A@G4W3216.americas.hpqcorp.net> I have some basic questions w.r.t. Service-VMs running NFV functions. (These Service-VMs can be vNAT, vFW, vDPI, vRouting, vCPE, etc.) 1. When these Service-VMs run over the cloud (on an OpenStack compute node), will each be treated as a routable entity in the network? i.e., will each Service-VM run its own routing protocol so that it is a reachable entity in the network? 2. Will there be any basic framework / basic elements that need to run in these VMs? a. Should each Service-VM also run its own routing instance + L3 forwarding? If so, are these optional or mandatory? 3. When these Service-VMs run (possibly as part of a vCPE), will each service packet be carried all the way to the Service-VM, or will it be handled in the OVS of the compute node itself? And how will this be handled for routed packets? 4. If there are multiple features running within a Service-VM (for example NAT, FW, IPSEC), a. then depending on the prefix (tenant/user traffic) they may need to be chained differently. i. Example: for tenant-1 packet prefix P1, service execution may be NAT --> FW --> IPSEC. ii. For tenant-2 prefix P2, it may be NAT -> IPSEC -> AAA. 5. How is service chain execution, which may run across multiple Service-VMs, controlled? a. Is it controlled by configuring the OVS (Open vSwitch) running in the compute node? b. When the Service-VMs are running across different compute nodes they need to be chained across OVS instances; does this need to be controlled by the NFV service layer + the OpenStack controller? In my opinion there should be some basic discussion on the framework for these Service-VMs, how they need to be chained, and which entities are mandatory for such a service to run over the cloud. 
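[Editor's illustration] The per-prefix chaining in question 4 boils down to a lookup from tenant/prefix to an ordered service list. A purely illustrative sketch of that mapping follows; tenant and service names are invented, and nothing here is an OpenStack API.

```python
# Purely illustrative: per-tenant/prefix service-chain ordering as a
# lookup table.  Tenant and service names are hypothetical.
CHAINS = {
    "tenant-1": ["NAT", "FW", "IPSEC"],   # prefix P1
    "tenant-2": ["NAT", "IPSEC", "AAA"],  # prefix P2
}

def apply_chain(tenant, packet):
    """Pass the packet through each service in the tenant's order."""
    for service in CHAINS.get(tenant, []):
        packet = f"{service}({packet})"   # stand-in for steering to that VM
    return packet

print(apply_chain("tenant-1", "pkt"))  # IPSEC(FW(NAT(pkt)))
```

In a real deployment the "steering" step would be OVS flow programming rather than string wrapping, which is exactly the control question raised in point 5.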
Please let me know if such a discussion has already happened. I would also like to hear others' opinions on this. Thanks & regards, keshava -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: oledata.mso Type: application/octet-stream Size: 13825 bytes Desc: oledata.mso URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 18034 bytes Desc: image003.png URL: From gus at inodes.org Wed Dec 10 04:59:50 2014 From: gus at inodes.org (Angus Lees) Date: Wed, 10 Dec 2014 04:59:50 +0000 Subject: [openstack-dev] [neutron] Linux capabilities vs sudo/rootwrap? References: <5487A430.6090408@gmail.com> Message-ID: Afaik ovs-vsctl just talks to ovsdb-server via some regular IPC mechanism - usually a unix socket, see --db. So yes, provided the file permissions on the unix socket work out, then no root powers (or any kernel capability) is necessary. Having said that, there will be plenty of examples where sudo/rootwrap is still required. I'm not suggesting we can (or should!) get rid of it entirely - just for basic network configuration. On Wed Dec 10 2014 at 12:42:46 PM George Shuklin wrote: > Is ovs-vsctl gonna be happy with CAP_NET_ADMIN? > > On 12/10/2014 02:43 AM, Angus Lees wrote: > > [I tried to find any previous discussion of this and failed - I'd > appreciate a pointer to any email threads / specs where this has already > been discussed.] > > Currently neutron is given the ability to do just about anything to > networking via rootwrap, sudo, and the IpFilter check (allow anything > except 'netns exec'). This is completely in line with the role a typical > neutron agent is expected to play on the local system. > > There are also recurring discussions/issues around the overhead of > rootwrap, costs of sudo calls, etc - and projects such as rootwrap daemon > underway to improve this. 
> > How crazy would it be to just give neutron CAP_NET_ADMIN (where > required), and allow it to make network changes via ip (netlink) calls > directly? > We will still need rootwrap/sudo for other cases, but this should remove a > lot of the separate process overhead for common operations, make us > independent of iproute cli versions, and allow us to use a direct > programmatic API (rtnetlink and other syscalls) rather than generating > command lines and regex parsing output everywhere. > > For what it's worth, CAP_NET_ADMIN is not sufficient to allow 'netns > exec' (requires CAP_SYS_ADMIN), so it preserves the IpFilter semantics. On > the downside, many of the frequent rootwrap calls _do_ involve > creating/modifying network namespaces so we wouldn't see advantages for > these cases. I need to experiment further before having a proposal for > that part (just granting CAP_SYS_ADMIN too is too broad; user namespaces > help a lot, but they're very new and scary so not available everywhere). > > Thoughts? > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brandon.logan at RACKSPACE.COM Wed Dec 10 05:48:52 2014 From: brandon.logan at RACKSPACE.COM (Brandon Logan) Date: Wed, 10 Dec 2014 05:48:52 +0000 Subject: [openstack-dev] [neutron][lbaas] Kilo Midcycle Meetup In-Reply-To: <1417541257.4057.1.camel@localhost> References: <1416867738.3960.19.camel@localhost> <1417541257.4057.1.camel@localhost> Message-ID: <1418190532.10125.1.camel@localhost> It's set. We'll be having the meetup on Feb 2-6 in San Antonio at RAX HQ.
I'll add a list of hotels and the address on the etherpad. https://etherpad.openstack.org/p/lbaas-kilo-meetup Thanks, Brandon On Tue, 2014-12-02 at 17:27 +0000, Brandon Logan wrote: > Per the meeting, put together an etherpad here: > > https://etherpad.openstack.org/p/lbaas-kilo-meetup > > I would like to get the location and dates finalized ASAP (preferably > the next couple of days). > > We'll also try to do the same as the neutron and Octavia meetups for > remote attendees. > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From swaminathan.vasudevan at hp.com Wed Dec 10 06:14:05 2014 From: swaminathan.vasudevan at hp.com (Vasudevan, Swaminathan (PNB Roseville)) Date: Wed, 10 Dec 2014 06:14:05 +0000 Subject: [openstack-dev] DVR meeting tomorrow - will be cancelled. Message-ID: <4094DC7712AF5D488899847517A3C5B064E6391D@G9W0338.americas.hpqcorp.net> Hi Folks, There will be no DVR meeting tomorrow. See you all next week. Thanks Swaminathan Vasudevan Systems Software Engineer (TC) HP Networking Hewlett-Packard 8000 Foothills Blvd M/S 5541 Roseville, CA - 95747 tel: 916.785.0937 fax: 916.785.1815 email: swaminathan.vasudevan at hp.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgordon at redhat.com Wed Dec 10 06:17:52 2014 From: sgordon at redhat.com (Steve Gordon) Date: Wed, 10 Dec 2014 01:17:52 -0500 (EST) Subject: [openstack-dev] [Telco][NFV] Meeting Reminder - Wednesday Dec 9 @ 2200 UTC in #openstack-meeting In-Reply-To: <116250306.33028596.1418192185798.JavaMail.zimbra@redhat.com> Message-ID: <1603362032.33028824.1418192272555.JavaMail.zimbra@redhat.com> Hi all, Just a reminder that the Telco Working Group will be meeting @ 2200 UTC in #openstack-meeting on Wednesday December 9th.
Draft agenda is available here: https://etherpad.openstack.org/p/nfv-meeting-agenda Please feel free to add anything you think I missed. Thanks, Steve From sgordon at redhat.com Wed Dec 10 06:18:31 2014 From: sgordon at redhat.com (Steve Gordon) Date: Wed, 10 Dec 2014 01:18:31 -0500 (EST) Subject: [openstack-dev] [Telco][NFV] Meeting Reminder - Wednesday Dec 10 @ 2200 UTC in #openstack-meeting In-Reply-To: <1603362032.33028824.1418192272555.JavaMail.zimbra@redhat.com> References: <1603362032.33028824.1418192272555.JavaMail.zimbra@redhat.com> Message-ID: <1945979081.33029055.1418192311313.JavaMail.zimbra@redhat.com> ...and of course I meant December *10th* - sorry! -Steve ----- Original Message ----- > From: "Steve Gordon" > > Hi all, > > Just a reminder that the Telco Working Group will be meeting @ 2200 UTC in > #openstack-meeting on Wednesday December 9th. Draft agenda is available > here: > > https://etherpad.openstack.org/p/nfv-meeting-agenda > > Please feel free to add anything you think I missed. > > Thanks, > > Steve -- Steve Gordon, RHCE Sr. Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform From rakhmerov at mirantis.com Wed Dec 10 06:37:09 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Wed, 10 Dec 2014 12:37:09 +0600 Subject: [openstack-dev] [Mistral] Query on creating multiple resources In-Reply-To: <5487597F.9030500@redhat.com> References: <1418048204426.95510@persistent.com> <1418048476946.19042@persistent.com> <1418050120681.46696@persistent.com> <5485C560.9060605@redhat.com> <8CC8FE04-056A-4574-926B-BAF5B7749C84@mirantis.com> <5487597F.9030500@redhat.com> Message-ID: <20B7BF60-330F-4025-829C-061513097548@mirantis.com> Hundred percent agree with all said by Zane. Renat Akhmerov @ Mirantis Inc. > On 10 Dec 2014, at 02:20, Zane Bitter wrote: > > On 09/12/14 03:48, Renat Akhmerov wrote: >> Hey, >> >> I think it's a question of what the final goal is.
For just creating security groups as a resource I think Georgy and Zane are right, just use Heat. If the goal is to try Mistral, or to have this simple workflow as part of a more complex one, then it's totally fine to use Mistral. Sorry, I'm probably biased because Mistral is our baby :). Anyway, Nikolay has already answered the question technically, this 'for-each' feature will be available officially in about 2 weeks. > > :) > > They're not mutually exclusive, of course, and to clarify I wasn't suggesting replacing Mistral with Heat, I was suggesting replacing a bunch of 'create security group' steps in a larger workflow with a single 'create stack' step. > > In general, though: > - When you are just trying to get to a particular end state and it doesn't matter how you get there, Heat is a good solution. > - When you need to carry out a particular series of steps, and it is the steps that are well-defined, not the end state, then Mistral is a good solution. > - When you have a well-defined end state but some steps need to be done in a particular way that isn't supported by Heat, then Mistral can be a solution (it's not a _good_ solution, but that isn't a criticism because it isn't Mistral's job to make up for deficiencies in Heat). > - Both services are _highly_ complementary. For example, let's say you have a batch job to run regularly: you want to provision a server, do some work on it, and then remove the server when the work is complete. (An example that a lot of people will be doing pretty regularly might be building a custom VM image and uploading it to Glance.) This is a classic example of a workflow, and you should use Mistral to implement it. Now let's say that rather than just a single server you have a complex group of resources that need to be set up prior to running the job. You could encode all of the steps required to correctly set up and tear down all of those resources in the Mistral workflow, but that would be a mistake.
While the overall process is still a workflow, the desired state after creating all of the resources but before running the job is known, and it doesn't matter how you get there. Therefore it's better to define the resources in a Heat template: unless you are doing something really weird it will Just Work(TM) for creating them all in the right order with optimal parallelisation, it knows how to delete them afterwards too without having to write it again backwards, and you can easily test it in isolation from the rest of the workflow. So you would replace the steps in the workflow that create and delete the server with steps that create and delete a stack. > >>> Create VM workflow was a demo example. Mistral potentially can be used by Heat or other orchestration tools to do actual interaction with API, but for user it might be easier to use Heat functionality. >> >> I kind of disagree with that statement. Mistral can be used by whoever finds it useful for their needs. The standard 'create_instance' workflow (which is in 'resources/workflows/create_instance.yaml') is not just a demo example either. It does a lot of good stuff you may really need in your case (e.g. retry policies). Even though it's true that it has some limitations we're aware of. For example, when it comes to configuring a network for a newly created instance, it's now missing network-related parameters to be able to alter behavior. > > I agree that it's unlikely that Heat should replace Mistral in many of the Mistral demo scenarios. I do think you could make a strong argument that Heat should replace *Nova* in many of those scenarios though. > > cheers, > Zane.
> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From raghavendra.lad at accenture.com Wed Dec 10 06:46:19 2014 From: raghavendra.lad at accenture.com (raghavendra.lad at accenture.com) Date: Wed, 10 Dec 2014 06:46:19 +0000 Subject: [openstack-dev] [Murano] Cannot find murano.conf Message-ID: <4728f25da73e44ee880f567206f381c5@BY2PR42MB101.048d.mgd.msft.net> Hi Team, I am installing Murano on an Ubuntu 14.04 Juno setup, and when I run the murano-api install step below I encounter the following error. Please assist. When I try to install with $ tox -e venv -- murano-api --config-file ./etc/murano/murano.conf pip can't proceed with requirement 'pycrypto>=2.6 (from -r /home/ubuntu/murano/murano/requirements.txt (line 18))' due to a pre-existing build directory. location: /home/ubuntu/murano/murano/.tox/venv/build/pycrypto This is likely due to a previous installation that failed. pip is being responsible and not assuming it can delete this. Please delete it and try again. Storing debug log for failure in /home/ubuntu/.pip/pip.log ERROR: could not install deps [-r/home/ubuntu/murano/murano/requirements.txt, -r/home/ubuntu/murano/murano/test-requirements.txt] Warm Regards, Raghavendra Lad ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy.
______________________________________________________________________________________ www.accenture.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougw at a10networks.com Wed Dec 10 06:48:53 2014 From: dougw at a10networks.com (Doug Wiegley) Date: Wed, 10 Dec 2014 06:48:53 +0000 Subject: [openstack-dev] [neutron][third-party] failing CIs Message-ID: Hi all, Most of the Neutron third-party CIs are failing at the moment, and it's because of the ongoing services split. The patch to get devstack working again will likely merge tomorrow. We will send another ML message when the split is "finished". Thanks, Doug From raghavendra.lad at accenture.com Wed Dec 10 06:55:04 2014 From: raghavendra.lad at accenture.com (raghavendra.lad at accenture.com) Date: Wed, 10 Dec 2014 06:55:04 +0000 Subject: [openstack-dev] [Murano] Oslo.messaging error Message-ID: <94a869fdf4a441edb487156cdc8bf7ec@BY2PR42MB101.048d.mgd.msft.net> Hi Team, I am installing Murano on an Ubuntu 14.04 Juno setup, and when I run the murano-api launch step below I encounter the following error. Please assist. I am using the Murano guide link provided below: https://murano.readthedocs.org/en/latest/install/manual.html I am trying to execute section 7: 1. Open a new console and launch Murano API. A separate terminal is required because the console will be locked by a running process. 2. $ cd ~/murano/murano 3. $ tox -e venv -- murano-api \ 4.
> --config-file ./etc/murano/murano.conf I am getting the error below: I have a Juno OpenStack ready and am trying to integrate Murano. 2014-12-10 12:10:30.396 7721 DEBUG murano.openstack.common.service [-] neutron.endpoint_type = publicURL log_opt_values /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] neutron.insecure = False log_opt_values /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] ******************************************************************************** log_opt_values /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2050 2014-12-10 12:10:30.400 7721 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on controller:5672 2014-12-10 12:10:30.408 7721 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on controller:5672 2014-12-10 12:10:30.416 7721 INFO eventlet.wsgi [-] (7721) wsgi starting up on http://0.0.0.0:8082/ 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Updating statistic information.
update_stats /home/ubuntu/murano/murano/murano/common/statservice.py:57 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats object: update_stats /home/ubuntu/murano/murano/murano/common/statservice.py:58 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats: Requests:0 Errors: 0 Ave.Res.Time 0.0000 Per tenant: {} update_stats /home/ubuntu/murano/murano/murano/common/statservice.py:64 2014-12-10 12:10:30.433 7721 DEBUG oslo.db.sqlalchemy.session [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py:509 2014-12-10 12:10:33.464 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server controller:5672 closed the connection. Check login credentials: Socket closed 2014-12-10 12:10:33.465 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server controller:5672 closed the connection. Check login credentials: Socket closed 2014-12-10 12:10:37.483 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server controller:5672 closed the connection. Check login credentials: Socket closed 2014-12-10 12:10:37.484 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server controller:5672 closed the connection. Check login credentials: Socket closed Warm Regards, Raghavendra Lad -------------- next part -------------- An HTML attachment was scrubbed... URL: From m4d.coder at gmail.com Wed Dec 10 07:12:04 2014 From: m4d.coder at gmail.com (W Chan) Date: Tue, 9 Dec 2014 23:12:04 -0800 Subject: [openstack-dev] [Mistral] Global Context and Execution Environment Message-ID: Nikolay, Regarding whether the execution environment BP is the same as this global context BP, I think the difference is in the scope of the variables. The global context that I'm proposing is provided to the workflow at execution time and is only relevant to that execution. For example, some contextual information about this specific workflow execution (i.e. a reference to a related record in an external system, such as a service ticket ID or CMDB record ID). The values do not necessarily carry across multiple executions. But as I understand it, the execution environment configuration is a set of reusable configuration that can be shared across multiple workflow executions. Having to specify action parameters explicitly over and over again is a common problem in the DSL. Winson -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp at jamezpolley.com Wed Dec 10 07:29:38 2014 From: jp at jamezpolley.com (James Polley) Date: Wed, 10 Dec 2014 08:29:38 +0100 Subject: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?
In-Reply-To: <54539C86.3040307@nemebean.com> References: <79E00D9220302D448C1D59B5224ED12D8EC3D065@EchoDB02.spil.local> <54539C86.3040307@nemebean.com> Message-ID: On Fri, Oct 31, 2014 at 3:28 PM, Ben Nemec wrote: > On 10/29/2014 10:17 AM, Kyle Mestery wrote: > > On Wed, Oct 29, 2014 at 7:25 AM, Hly wrote: > >> > >> > >> Sent from my iPad > >> > >> On 2014-10-29, at 8:01 PM, Robert van Leeuwen < Robert.vanLeeuwen at spilgames.com> wrote: > >> > >>>>> I find our current design removes all flows and then adds flows entry by > entry; this > >>>>> will cause every network node to break off all tunnels between > other > >>>>> network nodes and all compute nodes. > >>>> Perhaps a way around this would be to add a flag on agent startup > >>>> which would have it skip reprogramming flows. This could be used for > >>>> the upgrade case. > >>> > >>> I hit the same issue last week and filed a bug here: > >>> https://bugs.launchpad.net/neutron/+bug/1383674 > >>> > >>> From an operator's perspective this is VERY annoying since you also > cannot push any config changes that require/trigger a restart of the agent. > >>> e.g. something simple like changing a log setting becomes a hassle. > >>> I would prefer the default behaviour to be to not clear the flows, or > at the least a config option to disable it. > >>> > >> > >> +1, we also suffered from this even when a very little patch is done > >> > > I'd really like to get some input from the tripleo folks, because they > > were the ones who filed the original bug here and were hit by the > > agent NOT reprogramming flows on agent restart. It does seem fairly > > obvious that adding an option around this would be a good way forward, > > however. > > Since nobody else has commented, I'll put in my two cents (though I > might be overcharging you ;-). I've also added the TripleO tag to the > subject, although with Summit coming up I don't know if that will help.
> Summit did lead to some delays - I started this response and then got distracted, and only just found the draft again > > Anyway, if the bug you're referring to is the one I think, then our > issue was just with the flows not existing. I don't think we care > whether they get reprogrammed on agent restart or not as long as they > somehow come into existence at some point. > Is https://bugs.launchpad.net/bugs/1290486 the bug you're thinking of? That seems to have been solved with https://review.openstack.org/#/c/96919/ My memory of that problem is that prior to 96919, when the daemon was restarted, existing flows were thrown away. We'd end up with just a NORMAL flow, which didn't route the traffic where we needed it. The fix implemented there seems to have been to implement a canary rule to detect when this happens - i.e., detect that all the existing flows had been thrown away. Once we know they've been thrown away, we know we need to recreate the flows that were thrown away when the daemon restarted. If my memory is correct (and it may not be, I'm not 100% sure I fully understood the problem at the time),
Assuming he's right about the race condition, it sounds as though fixing that might be preferable. Later discussion on this thread has centered around a full flow-synchornization approach: it sounds to me as though handling the db being unavailable will need to be part of that approach (we don't want to synchronize towards "no rules" just because we can't get a canonical list of rules from the DB) > -Ben > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From loywolfe at gmail.com Wed Dec 10 07:32:50 2014 From: loywolfe at gmail.com (loy wolfe) Date: Wed, 10 Dec 2014 15:32:50 +0800 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: Remove everything out of tree, and leave only Neutron API framework as integration platform, would lower the attractions of the whole Openstack Project. Without a default "good enough" reference backend from community, customers have to depends on packagers to fully test all backends for them. Can we image nova without kvm, glance without swift? Cinder is weak because of default lvm backend, if in the future Ceph became the default it would be much better. If the goal of this decomposition is eventually moving default reference driver out, and the in-tree OVS backend is an eyesore, then it's better to split the Neutron core with base repo and vendor repo. They only share common base API/DB model, each vendor can extend their API, DB model freely, using a shim proxy to delegate all the service logic to their backend controller. They can choose to keep out of tree, or in tree (vendor repo) with the previous policy that contribute code reviewing for their code being reviewed by other vendors. 
From irenab at mellanox.com Wed Dec 10 07:41:55 2014 From: irenab at mellanox.com (Irena Berezovsky) Date: Wed, 10 Dec 2014 07:41:55 +0000 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: <20141209140411.GI29167@redhat.com> References: <20141209140411.GI29167@redhat.com> Message-ID: Hi Daniel, Please see inline -----Original Message----- From: Daniel P. Berrange [mailto:berrange at redhat.com] Sent: Tuesday, December 09, 2014 4:04 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver On Tue, Dec 09, 2014 at 10:53:19AM +0100, Maxime Leroy wrote: > I have also proposed a blueprint to have a new plugin mechanism in > nova to load external vif drivers. (nova-specs: > https://review.openstack.org/#/c/136827/ and nova (rfc patch): > https://review.openstack.org/#/c/136857/) > > From my point of view as a developer, having a plugin framework for > internal/external vif drivers seems to be a good thing. > It makes the code more modular and introduces a clear API for vif driver classes. > > So far, it raises legitimate questions concerning API stability and > the public API that require a wider discussion on the ML (as asked by > John Garbutt). > > I think having a plugin mechanism and a clear API for vif drivers is > not going against this policy: > http://docs.openstack.org/developer/nova/devref/policies.html#out-of-tree-support. > > There is no need to have a stable API. It is up to the owner of the > external VIF driver to ensure that its driver is supported by the > latest API, and not up to the nova community to manage a stable API for this > external VIF driver. Does it make sense? Experience has shown that even if it is documented as unsupported, once the extension point exists, vendors & users will ignore the small print about support status.
There will be complaints raised every time it gets broken until we end up being forced to maintain it as a stable API whether we want to or not. That's not a route we want to go down. [IB] It should be up to the vendor to maintain it and make sure it's not broken. Having proper 3rd party CI in place should catch API contract changes. > Considering the network V2 API, the L2/ML2 mechanism driver and VIF driver > need to exchange information such as binding:vif_type and > binding:vif_details. > > From my understanding, 'binding:vif_type' and 'binding:vif_details' are > fields that are part of the public network API. There are no validation > constraints for these fields (see > http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html), which means that any value is accepted by the API. So, the > values set in 'binding:vif_type' and 'binding:vif_details' are not > part of the public API. Is my understanding correct? The VIF parameters are mapped into the nova.network.model.VIF class, which is doing some crude validation. I would anticipate that this validation will increase over time, because it is functional data flowing over the API and so needs to be carefully managed for upgrade reasons. Even if the Neutron impl is out of tree, I would still expect both Nova and Neutron core to sign off on any new VIF type name and its associated details (if any). [IB] This may be a reasonable integration point. But it requires nova team review and approval. From my experience the nova team is extremely overloaded, therefore getting this code reviewed becomes a very difficult mission. > What other reasons am I missing to not have VIF driver classes as a > public extension point? Having to find & install VIF driver classes from countless different vendors, each hiding their code away on their own obscure website, will lead to an awful end user experience when deploying Nova.
Users are better served by having it all provided when they deploy Nova, IMHO. If every vendor goes off & works in their own isolated world we also lose the scope to align the implementations, so that common concepts work the same way in all cases and allow us to minimize the number of new VIF types required. The proposed vhostuser VIF type is a good example of this - it allows a single Nova VIF driver to be capable of potentially supporting multiple different impls on the Neutron side. If every vendor worked in their own world, we would have ended up with multiple VIF drivers doing the same thing in Nova, each with their own set of bugs & quirks. [IB] I think that most of the vendors that maintain a vif_driver out of nova do not do it on purpose and would prefer to see it upstream. Sometimes host-side binding is not fully integrated with libvirt and requires some temporary additional code, till libvirt provides complete support. Sometimes, it is just lack of nova team attention to the proposed spec/code to be reviewed and accepted on time, which ends up with a fully supported neutron part and a missing small but critical vif_driver piece. I expect the quality of the code the operator receives will be lower if it is never reviewed by anyone except the vendor who writes it in the first place. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From henry4hly at gmail.com Wed Dec 10 07:48:30 2014 From: henry4hly at gmail.com (henry hly) Date: Wed, 10 Dec 2014 15:48:30 +0800 Subject: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?
In-Reply-To: References: <87ppc090ne.fsf@metaswitch.com> <87d27z8kxj.fsf@metaswitch.com> Message-ID: Hi Kevin, Does it make sense to introduce a "GeneralvSwitch MD", working with VIF_TYPE_TAP? It would just do very simple port binding, just like OVS and the bridge driver. Then anyone could implement their backend and agent without patching neutron drivers. Best Regards Henry On Fri, Dec 5, 2014 at 4:23 PM, Kevin Benton wrote: > I see the difference now. > The main concern I see with the NOOP type is that creating the virtual > interface could require different logic for certain hypervisors. In that > case Neutron would now have to know things about nova, and to me it seems > like that's slightly too far in the other direction. > > On Thu, Dec 4, 2014 at 8:00 AM, Neil Jerram > wrote: >> >> Kevin Benton writes: >> >> > What you are proposing sounds very reasonable. If I understand >> > correctly, the idea is to make Nova just create the TAP device and get >> > it attached to the VM and leave it 'unplugged'. This would work well >> > and might eliminate the need for some drivers. I see no reason to >> > block adding a VIF type that does this. >> >> I was actually floating a slightly more radical option than that: the >> idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does >> absolutely _nothing_, not even create the TAP device. >> >> (My pending Nova spec at https://review.openstack.org/#/c/130732/ >> proposes VIF_TYPE_TAP, for which Nova _does_ create the TAP device, but >> then does nothing else - i.e. exactly what you've described just above. >> But in this email thread I was musing about going even further, towards >> providing a platform for future networking experimentation where Nova >> isn't involved at all in the networking setup logic.) >> >> > However, there is a good reason that the VIF type for some OVS-based >> > deployments requires this type of setup.
The vSwitches are connected to >> > a central controller using openflow (or ovsdb) which configures >> > forwarding rules/etc. Therefore they don't have any agents running on >> > the compute nodes from the Neutron side to perform the step of getting >> > the interface plugged into the vSwitch in the first place. For this >> > reason, we will still need both types of VIFs. >> >> Thanks. I'm not advocating that existing VIF types should be removed, >> though - rather wondering if similar function could in principle be >> implemented without Nova VIF plugging - or what that would take. >> >> For example, suppose someone came along and wanted to implement a new >> OVS-like networking infrastructure? In principle could they do that >> without having to enhance the Nova VIF driver code? I think at the >> moment they couldn't, but that they would be able to if VIF_TYPE_NOOP >> (or possibly VIF_TYPE_TAP) was already in place. In principle I think >> it would then be possible for the new implementation to specify >> VIF_TYPE_NOOP to Nova, and to provide a Neutron agent that does the kind >> of configuration and vSwitch plugging that you've described above. >> >> Does that sound correct, or am I missing something else? >> >> >> 1. When the port is created in the Neutron DB, and handled (bound >> > etc.) >> > by the plugin and/or mechanism driver, the TAP device name is already >> > present at that time. >> > >> > This is backwards. The tap device name is derived from the port ID, so >> > the port has already been created in Neutron at that point. It is just >> > unbound. The steps are roughly as follows: Nova calls neutron for a >> > port, Nova creates/plugs VIF based on port, Nova updates port on >> > Neutron, Neutron binds the port and notifies agent/plugin/whatever to >> > finish the plumbing, Neutron notifies Nova that port is active, Nova >> > unfreezes the VM. >> > >> > None of that should be affected by what you are proposing.
The only
>> > difference is that your Neutron agent would also perform the
>> > 'plugging' operation.
>>
>> Agreed - but thanks for clarifying the exact sequence of events.
>>
>> I wonder if what I'm describing (either VIF_TYPE_NOOP or VIF_TYPE_TAP)
>> might fit as part of the "Nova-network/Neutron Migration" priority
>> that's just been announced for Kilo. I'm aware that a part of that
>> priority is concerned with live migration, but perhaps it could also
>> include the goal of future networking work not having to touch Nova
>> code?
>>
>> Regards,
>> Neil
>
> --
> Kevin Benton
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From rakhmerov at mirantis.com Wed Dec 10 08:57:45 2014
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Wed, 10 Dec 2014 14:57:45 +0600
Subject: [openstack-dev] [Mistral]
In-Reply-To: References: <2E47CCEA-109A-4FD3-9984-9457C565089F@stackstorm.com>
Message-ID: <1B5A8AB7-9F8B-4CF6-A35E-849790056C7D@mirantis.com>

Agree with Nikolay.

Winson, did you see https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment ? The concept described there is pretty simple, and syntactically we can have a predefined key in the workflow context, for example accessible as $.__env (similar to $.__execution), which contains environment variables. They are the same for the whole workflow, including subworkflows.

One additional BP that we filed after the Paris Summit is https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-constants which is a little bit related to it. According to what is described in this BP, we can just define workflow-scoped constants for convenience which are accessible as regular workflow input variables. Btw, as an idea: they can be initialized by variables from the execution environment.
And there's even one more BP, https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values that suggests having default workflow input values, which is related to those two. Btw, it can be extended to having default values for action input values as well.

So, I would suggest you take a look at all these BPs and continue to discuss this topic. I feel it's really important since all these things are intended to improve usability.

Thanks

Renat Akhmerov
@ Mirantis Inc.

> On 10 Dec 2014, at 00:17, Nikolay Makhotkin wrote:
>
> Guys,
>
> Maybe I misunderstood something here, but what is the difference between this one and https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment ?
>
> On Tue, Dec 9, 2014 at 5:35 PM, Dmitri Zimine wrote:
> Winson,
>
> thanks for filing the blueprint:
> https://blueprints.launchpad.net/mistral/+spec/mistral-global-context ,
>
> some clarification questions:
> 1) how exactly would the user describe these global variables syntactically? In the DSL? What can we use as syntax? In the initial workflow input?
> 2) what is the visibility scope: this and child workflows, or "truly global"?
> 3) What is a good default behavior?
>
> Let's detail it a bit more.
>
> DZ>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Best Regards,
> Nikolay
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
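Renat's predefined `$.__env` key can be pictured as one reserved mapping shared down the whole workflow tree (an illustrative model only, not Mistral's actual context implementation; all names here are hypothetical):

```python
# An illustrative model of the predefined $.__env key described above:
# environment variables live under one reserved key in the workflow
# context and are shared by the workflow and all its subworkflows,
# while regular inputs stay per-workflow. A sketch only, not Mistral code.

env = {"region": "eu-1", "retries": "3"}      # execution environment
workflow_context = {
    "param1": "x",                            # regular workflow input
    "__env": env,                             # accessible in the DSL as $.__env
}

def start_subworkflow(parent_context, inputs):
    # Subworkflows get their own declared inputs, but the same environment.
    return dict(inputs, __env=parent_context["__env"])

sub_context = start_subworkflow(workflow_context, {"param2": "y"})
print(sub_context["__env"]["region"])   # eu-1 -- same env for the whole tree
print("param1" in sub_context)          # False -- inputs do not leak down
```

This captures why the environment does not need the input-validation machinery: it is not an input parameter, just a shared mapping.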
URL:

From rakhmerov at mirantis.com Wed Dec 10 09:16:31 2014
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Wed, 10 Dec 2014 15:16:31 +0600
Subject: [openstack-dev] [Mistral] Global Context and Execution Environment
In-Reply-To: References: Message-ID: <968DEFB3-E976-4D3F-9FAB-37077229A61E@mirantis.com>

Winson, ok, I got the idea.

Just a crazy idea that came to my mind. What if we just mark some of the input parameters as "global"? For example,

    wf:
      type: direct
      input:
        - param1
        - param2: global

One way or another we're talking about different scopes. I see the following possible scopes:

* local - default scope, only current workflow tasks can see it
* global - all entities can see it: this workflow itself (its tasks), its nested workflows and actions
* workflow - only this workflow and actions called from this workflow can see it

However, if we follow that path we would need to change how Mistral validates workflow input parameters. Currently, if we pass something into a workflow it must be declared as an input parameter. In the case of "global" scope and nested workflows this mechanism is too primitive, because a nested workflow may get something that it doesn't expect. So it may not be that straightforward. Thoughts?

Just in case, I'll repeat the related BPs from another thread:

* https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment
* https://blueprints.launchpad.net/mistral/+spec/mistral-global-context
* https://blueprints.launchpad.net/mistral/+spec/mistral-workflow-constants
* https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values

Renat Akhmerov
@ Mirantis Inc.

> On 10 Dec 2014, at 13:12, W Chan wrote:
>
> Nikolay,
>
> Regarding whether the execution environment BP is the same as this global context BP, I think the difference is in the scope of the variables. The global context that I'm proposing is provided to the workflow at execution and is only relevant to this execution.
> For example, some contextual information about this specific workflow execution (e.g. a reference to a record in a related external system, such as a service ticket ID or CMDB record ID). The values do not necessarily carry across multiple executions. But as I understand it, the execution environment configuration is a set of reusable configuration that can be shared across multiple workflow executions. The fact that action parameters are specified explicitly over and over again is a common problem in the DSL.
>
> Winson
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From berrange at redhat.com Wed Dec 10 09:31:01 2014
From: berrange at redhat.com (Daniel P. Berrange)
Date: Wed, 10 Dec 2014 09:31:01 +0000
Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver
In-Reply-To: References: <20141209140411.GI29167@redhat.com>
Message-ID: <20141210093101.GC6450@redhat.com>

On Wed, Dec 10, 2014 at 07:41:55AM +0000, Irena Berezovsky wrote:
> Hi Daniel,
> Please see inline
>
> -----Original Message-----
> From: Daniel P. Berrange [mailto:berrange at redhat.com]
> Sent: Tuesday, December 09, 2014 4:04 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver
>
> The VIF parameters are mapped into the nova.network.model.VIF class, which does some crude validation. I would anticipate that this validation will increase over time, because this is functional data flowing over the API and so needs to be carefully managed for upgrade reasons.
>
> Even if the Neutron impl is out of tree, I would still expect both Nova and Neutron core to sign off on any new VIF type name and its associated details (if any).
>
> [IB] This may be a reasonable integration point. But it requires nova team review and approval. From my experience the nova team is extremely overloaded, therefore getting this code reviewed becomes a very difficult mission.
>
> > What other reasons am I missing to not have VIF driver classes as a
> > public extension point ?
>
> Having to find & install VIF driver classes from countless different vendors, each hiding their code away on their own obscure website, will lead to an awful end user experience when deploying Nova. Users are better served by having it all provided when they deploy Nova IMHO
>
> If every vendor goes off & works in their own isolated world we also lose the scope to align the implementations, so that common concepts work the same way in all cases and allow us to minimize the number of new VIF types required. The proposed vhostuser VIF type is a good example of this - it allows a single Nova VIF driver to be capable of potentially supporting multiple different impls on the Neutron side.
>
The VIF drivers are really small pieces of code that should be straightforward to review & get merged in any release cycle in which they are proposed. I think we need to make sure that we focus our energy on doing this and not ignoring the problem by breaking stuff off out of tree. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From aaronorosen at gmail.com Wed Dec 10 09:33:49 2014 From: aaronorosen at gmail.com (Aaron Rosen) Date: Wed, 10 Dec 2014 01:33:49 -0800 Subject: [openstack-dev] python-congressclient 1.0.1 released Message-ID: The congress team is pleased to announce the release of the python-congressclient 1.0.1. This release includes several bug fixes as well as many other changes - a few highlights: - python34 compatibility - New CLI command to simulate results of rule - openstack congress policy simulate Show the result of simulation. - Add new CLI command to check the status of a datasource - openstack congress datasource status list - Add new CLI for viewing schemas - openstack congress datasource table schema show Show schema for datasource table. - openstack congress datasource schema show Show schema for datasource. 
- Add missing CLI command - openstack congress policy rule show $ git log --abbrev-commit --pretty=oneline --no-merges 1.0.0..1.0.1 1e31e9d Fix version issue 53dccd7 Workflow documentation is now in infra-manual bab2c9e Updated from global requirements 9941dd6 Updated from global requirements 7be067e Used schema to compute columns for datasource rows 7a81c74 Added datasource schema bcb1b90 Added datasource status command f5fe21a Add news file about what was added in each release 3c4867d Make client work with python34 aa9eb14 Use a more simple policy rule for test f5e0a70 Adding missing CLI command congress policy rule show d8d9adc Updated from global requirements d46cee8 Added command for policy engine's simulation functionality 255d834 Work toward Python 3.4 support and testing Please report issues through launchpad: https://launchpad.net/python-congressclient Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Wed Dec 10 09:56:06 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Wed, 10 Dec 2014 09:56:06 +0000 Subject: [openstack-dev] [horizon] REST and Django References: <547DD08A.7000402@redhat.com> Message-ID: Sorry I didn't respond to this earlier today, I had intended to. What you're describing isn't REST, and the principles of REST are what have been guiding the design of the new API so far. I see a lot of value in using REST approaches, mostly around clarity of the interface. While the idea of a very thin proxy seemed like a great idea at one point, my conversations at the summit convinced me that there was value in both using the client interfaces present in the openstack_dashboard/api code base (since they abstract away many issues in the apis including across versions) and also value in us being able to clean up (for example, using "project_id" rather than "project" in the user API we've already implemented) and extend those interfaces (to allow batched operations). 
We want to be careful about what we expose in Horizon to the JS clients through this API. That necessitates some amount of code in Horizon. About half of the current API for keystone represents that control (the other half is docstrings :)

Richard

On Tue Dec 09 2014 at 9:37:47 PM Tihomir Trifonov wrote:

> Sorry for the late reply, just a few thoughts on the matter.
>
> IMO the REST middleware should be as thin as possible. And I mean thin in
> terms of processing - it should not do pre/post processing of the requests,
> but just unpack/pack. So here is an example:
>
> instead of making AJAX calls that contain instructions:
>
>     POST --json --data {"action": "delete",
>                         "data": [{"name": "item1"}, {"name": "item2"}, {"name": "item3"}]}
>
> I think a better approach is just to pack/unpack batch commands, and leave
> execution to the frontend/backend and not the middleware:
>
>     POST --json --data {"batch": [{"action": "delete", "payload": {"name": "item1"}},
>                                   {"action": "delete", "payload": {"name": "item2"}},
>                                   {"action": "delete", "payload": {"name": "item3"}}]}
>
> The idea is that the middleware should not know the actual data. It should
> ideally just unpack the data:
>
>     responses = []
>     for cmd in request.POST['batch']:
>         responses.append(getattr(controller, cmd['action'])(**cmd['payload']))
>     return responses
>
> and the frontend (JS) will just send batches of simple commands, and will
> receive a list of responses for each command in the batch. The error
> handling will be done in the frontend (JS) as well.
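Tihomir's unpack/dispatch loop can be made runnable with a toy controller (the `Controller` class and payload shape here are hypothetical; per-command error capture is one possible policy, with the actual handling still left to the frontend, as he suggests):

```python
# A runnable sketch of the thin unpack/dispatch middleware described
# above. `Controller` and the payload shape are illustrative stand-ins;
# a real Horizon view would dispatch to the openstack_dashboard.api
# wrappers instead.

class Controller:
    def __init__(self):
        self.items = {"item1": 1, "item2": 2, "item3": 3}

    def delete(self, name):
        del self.items[name]
        return {"deleted": name}

def handle_batch(controller, body):
    responses = []
    for cmd in body["batch"]:
        try:
            result = getattr(controller, cmd["action"])(**cmd["payload"])
            responses.append({"ok": True, "result": result})
        except Exception as exc:
            # the middleware only reports the failure; deciding what to
            # do about it is left to the frontend
            responses.append({"ok": False, "error": str(exc)})
    return responses

ctl = Controller()
body = {"batch": [{"action": "delete", "payload": {"name": n}}
                  for n in ("item1", "item2", "missing")]}
print(handle_batch(ctl, body))
```

Note the middleware never inspects the payloads themselves - it is a pure proxy/un-packer, which is exactly the "thin" property being argued for.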
> For the more complex example of 'put()' where we have dependent objects:
>
>     project = api.keystone.tenant_get(request, id)
>     kwargs = self._tenant_kwargs_from_DATA(request.DATA, enabled=None)
>     api.keystone.tenant_update(request, project, **kwargs)
>
> In practice the project data should already be present in the frontend
> (assuming that we already loaded it to render the project form/view), so
>
>     POST --json --data {"batch": [{"action": "tenant_update",
>                                    "payload": {"project": js_project_object.id,
>                                                "name": "some name",
>                                                "prop1": "some prop",
>                                                "prop2": "other prop, etc."}}]}
>
> So in general we don't need to recreate the full state on each REST call, if we make the frontend a full-featured application. This way the frontend will construct the object, will hold the cached value, and will just send the needed requests as single ones or in batches, will receive the response from the API backend, and will render the results. The whole processing logic will be held in the frontend (JS), while the middleware will just act as a proxy (un/packer). This way we will maintain just the logic in the frontend, and will not need to duplicate some logic in the middleware.

On Tue, Dec 2, 2014 at 4:45 PM, Adam Young wrote:

>> On 12/02/2014 12:39 AM, Richard Jones wrote:
>> On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran wrote:
>>> I agree that keeping the API layer thin would be ideal. I should add
>>> that having discrete API calls would allow dynamic population of tables.
>>> However, I will make a case where it *might* be necessary to add
>>> additional APIs. Consider that you want to delete 3 items in a given table.
>>> If you do this on the client side, you would need to perform: n * (1 API request + 1 AJAX request)
>>> If you have some logic on the server side that batches delete actions: n * (1 API request) + 1 AJAX request
>>>
>>> Consider the following:
>>> n = 1, client = 2 trips, server = 2 trips
>>> n = 3, client = 6 trips, server = 4 trips
>>> n = 10, client = 20 trips, server = 11 trips
>>> n = 100, client = 200 trips, server = 101 trips
>>>
>>> As you can see, this does not scale very well... something to consider...
>>
>> This is not something Horizon can fix. Horizon can make matters worse, but cannot make things better.
>>
>> If you want to delete 3 users, Horizon still needs to make 3 distinct calls to Keystone.
>>
>> To fix this, we need either batch calls or a standard way to do multiples of the same operation.
>>
>> The unified API effort is the right place to drive this.
>>
>> Yep, though in the above cases the client is still going to be hanging, waiting for those server-backend calls, with no feedback until it's all done. I would hope that the client-server call overhead is minimal, but I guess that's probably wishful thinking when in the land of random Internet users hitting some provider's Horizon :)
>>
>> So yeah, having mulled it over myself I agree that it's useful to have batch operations implemented in the POST handler, the most common operation being DELETE.
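Thai's trip counts above follow from two one-line formulas:

```python
# Reproducing Thai's round-trip arithmetic from the thread above:
# deleting n items one-by-one from the client costs
# n * (1 API request + 1 AJAX request), while a server-side batch
# costs n * (1 API request) + 1 AJAX request.

def client_side_trips(n):
    return n * (1 + 1)

def server_side_trips(n):
    return n * 1 + 1

for n in (1, 3, 10, 100):
    print(n, client_side_trips(n), server_side_trips(n))
# matches the figures quoted: 2/2, 6/4, 20/11, 200/101
```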
>> >> >> Richard >> >>> Richard Jones ---11/27/2014 05:38:53 PM---On Fri Nov 28 2014 at >>> 5:58:00 AM Tripp, Travis S wrote: >>> >>> From: Richard Jones >>> To: "Tripp, Travis S" , OpenStack List < >>> openstack-dev at lists.openstack.org> >>> Date: 11/27/2014 05:38 PM >>> Subject: Re: [openstack-dev] [horizon] REST and Django >>> >>> ------------------------------ >>> >>> >>> >>> >>> On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S <*travis.tripp at hp.com* >>> > wrote: >>> >>> Hi Richard, >>> >>> You are right, we should put this out on the main ML, so copying >>> thread out to there. ML: FYI that this started after some impromptu IRC >>> discussions about a specific patch led into an impromptu google hangout >>> discussion with all the people on the thread below. >>> >>> >>> Thanks Travis! >>> >>> >>> >>> As I mentioned in the review[1], Thai and I were mainly discussing >>> the possible performance implications of network hops from client to >>> horizon server and whether or not any aggregation should occur server side. >>> In other words, some views require several APIs to be queried before any >>> data can displayed and it would eliminate some extra network requests from >>> client to server if some of the data was first collected on the server side >>> across service APIs. For example, the launch instance wizard will need to >>> collect data from quite a few APIs before even the first step is displayed >>> (I?ve listed those out in the blueprint [2]). >>> >>> The flip side to that (as you also pointed out) is that if we keep >>> the API?s fine grained then the wizard will be able to optimize in one >>> place the calls for data as it is needed. For example, the first step may >>> only need half of the API calls. It also could lead to perceived >>> performance increases just due to the wizard making a call for different >>> data independently and displaying it as soon as it can. 
>>> >>> >>> Indeed, looking at the current launch wizard code it seems like you >>> wouldn't need to load all that data for the wizard to be displayed, since >>> only some subset of it would be necessary to display any given panel of the >>> wizard. >>> >>> >>> >>> I tend to lean towards your POV and starting with discrete API calls >>> and letting the client optimize calls. If there are performance problems >>> or other reasons then doing data aggregation on the server side could be >>> considered at that point. >>> >>> >>> I'm glad to hear it. I'm a fan of optimising when necessary, and not >>> beforehand :) >>> >>> >>> >>> Of course if anybody is able to do some performance testing between >>> the two approaches then that could affect the direction taken. >>> >>> >>> I would certainly like to see us take some measurements when performance >>> issues pop up. Optimising without solid metrics is bad idea :) >>> >>> >>> Richard >>> >>> >>> >>> [1] >>> *https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py* >>> >>> [2] >>> *https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign* >>> >>> >>> -Travis >>> >>> *From: *Richard Jones <*r1chardj0n3s at gmail.com* >>> > >>> * Date: *Wednesday, November 26, 2014 at 11:55 PM >>> * To: *Travis Tripp <*travis.tripp at hp.com* >, Thai >>> Q Tran/Silicon Valley/IBM <*tqtran at us.ibm.com* >, >>> David Lyle <*dklyle0 at gmail.com* >, Maxime Vidori < >>> *maxime.vidori at enovance.com* >, >>> "Wroblewski, Szymon" <*szymon.wroblewski at intel.com* >>> >, "Wood, Matthew David (HP Cloud - >>> Horizon)" <*matt.wood at hp.com* >, "Chen, Shaoquan" < >>> *sean.chen2 at hp.com* >, "Farina, Matt (HP Cloud)" < >>> *matthew.farina at hp.com* >, Cindy Lu/Silicon >>> Valley/IBM <*clu at us.ibm.com* >, Justin >>> Pomeroy/Rochester/IBM <*jpomero at us.ibm.com* >, >>> Neill Cox <*neill.cox at ingenious.com.au* > >>> * Subject: *Re: REST and Django >>> >>> I'm not sure whether this is the appropriate place 
to discuss this,
>>> or whether I should be posting to the list under [Horizon], but I think we
>>> need to have a clear idea of what goes in the REST API and what goes in the
>>> client (angular) code.
>>>
>>> In my mind, the thinner the REST API the better. Indeed if we can
>>> get away with proxying requests through without touching any client code,
>>> that would be great.
>>>
>>> Coding additional logic into the REST API means that a developer
>>> would need to look in two places, instead of one, to determine what was
>>> happening for a particular call. If we keep it thin then the API presented
>>> to the client developer is very, very similar to the API presented by the
>>> services. Minimum surprise.
>>>
>>> Your thoughts?
>>>
>>> Richard
>>>
>>> On Wed Nov 26 2014 at 2:40:52 PM Richard Jones <r1chardj0n3s at gmail.com> wrote:
>>>
>>> Thanks for the great summary, Travis.
>>>
>>> I've completed the work I pledged this morning, so now the REST
>>> API change set has:
>>>
>>> - no rest framework dependency
>>> - AJAX scaffolding in openstack_dashboard.api.rest.utils
>>> - code in openstack_dashboard/api/rest/
>>> - renamed the API from "identity" to "keystone" to be consistent
>>> - added a sample of testing, mostly for my own sanity to check
>>> things were working
>>>
>>> https://review.openstack.org/#/c/136676
>>>
>>> Richard
>>>
>>> On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S <travis.tripp at hp.com> wrote:
>>>
>>> Hello all,
>>>
>>> Great discussion on the REST urls today! I think that we are
>>> on track to come to a common REST API usage pattern. To provide a quick
>>> summary:
>>>
>>> We all agreed that going to a straight REST pattern rather
>>> than through tables was a good idea.
We discussed using direct get / post
>>> in Django views like what Max originally used[1][2] and Thai also
>>> started[3] with the identity table rework, or going with djangorestframework
>>> [5] like what Richard was prototyping with[4].
>>>
>>> The main things we would use from Django Rest Framework were
>>> built-in JSON serialization (avoid boilerplate), better exception handling,
>>> and some request wrapping. However, we all weren't sure about the need for
>>> a full new framework just for that. At the end of the conversation, we
>>> decided that it was a cleaner approach, but Richard would see if he could
>>> provide some utility code to do that much for us without requiring the full
>>> framework. David voiced that he doesn't want us building out a whole
>>> framework on our own either.
>>>
>>> So, Richard will do some investigation during his day today
>>> and get back to us. Whatever the case, we'll get a patch in horizon for
>>> the base dependency (framework or Richard's utilities) that both Thai's
>>> work and the launch instance work are dependent upon. We'll build REST
>>> style APIs using the same pattern. We will likely put the REST APIs in
>>> horizon/openstack_dashboard/api/rest/.
>>> >>> [1] >>> *https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/keypair.py* >>> >>> [2] >>> *https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/launch.py* >>> >>> [3] >>> *https://review.openstack.org/#/c/133767/8/openstack_dashboard/dashboards/identity/users/views.py* >>> >>> [4] >>> *https://review.openstack.org/#/c/136676/4/openstack_dashboard/rest_api/identity.py* >>> >>> [5] *http://www.django-rest-framework.org/* >>> >>> >>> Thanks, >>> >>> >>> Travis_______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> _______________________________________________ >> OpenStack-dev mailing listOpenStack-dev at lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Regards, > Tihomir Trifonov > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: From matthew.gilliard at gmail.com Wed Dec 10 10:30:21 2014 From: matthew.gilliard at gmail.com (Matthew Gilliard) Date: Wed, 10 Dec 2014 10:30:21 +0000 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: <1418169303-sup-199@fewbar.com> References: <20141209125828.GA10149@redhat.redhat.com> <20141209140407.GO2497@yuggoth.org> <54874973.5060903@openstack.org> <1418169303-sup-199@fewbar.com> Message-ID: So, are we agreed that http://www.openstack.org/community/members/ is the authoritative place for IRC lookups? In which case, I'll take the old content out of https://wiki.openstack.org/wiki/People and leave a message directing people where to look. I don't have the imagination to use anything other than my real name on IRC but for people who do, should we try to encourage putting the IRC nick in the gerrit name? On Tue, Dec 9, 2014 at 11:56 PM, Clint Byrum wrote: > Excerpts from Angus Salkeld's message of 2014-12-09 15:25:59 -0800: >> On Wed, Dec 10, 2014 at 5:11 AM, Stefano Maffulli >> wrote: >> >> > On 12/09/2014 06:04 AM, Jeremy Stanley wrote: >> > > We already have a solution for tracking the contributor->IRC >> > > mapping--add it to your Foundation Member Profile. For example, mine >> > > is in there already: >> > > >> > > http://www.openstack.org/community/members/profile/5479 >> > >> > I recommend updating the openstack.org member profile and add IRC >> > nickname there (and while you're there, update your affiliation history). >> > >> > There is also a search engine on: >> > >> > http://www.openstack.org/community/members/ >> > >> > >> Except that info doesn't appear nicely in review. Some people put their >> nick in their "Full Name" in >> gerrit. 
Hopefully Clint doesn't mind: >> >> https://review.openstack.org/#/q/owner:%22Clint+%27SpamapS%27+Byrum%22+status:open,n,z >> > > Indeed, I really didn't like that I'd be reviewing somebody's change, > and talking to them on IRC, and not know if they knew who I was. > > It also has the odd side effect that gerritbot triggers my IRC filters > when I 'git review'. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From slukjanov at mirantis.com Wed Dec 10 10:42:12 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Wed, 10 Dec 2014 14:42:12 +0400 Subject: [openstack-dev] [sahara] HDP2 testing in Sahara CI Message-ID: Hi folks, we have some issues with testing HDP2 in Sahara CI starting from the weekend, so, please, consider the HDP2 job unstable, but do not approve changes that could directly affect HDP2 plugin with failed job. We're now trying to make it working. Thanks. -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbalukoff at bluebox.net Wed Dec 10 10:50:24 2014 From: sbalukoff at bluebox.net (Stephen Balukoff) Date: Wed, 10 Dec 2014 02:50:24 -0800 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration In-Reply-To: <891761EAFA335D44AD1FFDB9B4A8C063D7DF66@G4W3216.americas.hpqcorp.net> References: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> <9EDBC83C95615E4A97C964A30A3E18AC2F695F28@ORD1EXD01.RACKSPACE.CORP> <891761EAFA335D44AD1FFDB9B4A8C063D7DF66@G4W3216.americas.hpqcorp.net> Message-ID: Hi Keshava, For the purposes of Octavia, it's going to be service VMs (or containers or what have you). 
However, service VM or tenant VM, the concept is roughly similar: We need some kind of layer-3 routing capability which works something like Neutron floating IPs (though not just a static NAT in this case) but which can distribute traffic to a set of back-end VMs running on a Neutron network according to some predictable algorithm (probably a distributed hash). The idea behind ACTIVE-ACTIVE is that you have many service VMs (we call them amphorae) which service the same "public" IP in some way -- this allows for horizontal scaling of services which need it (i.e. anything which does TLS termination with a significant amount of load).

Does this make sense to you?

Thanks,
Stephen

On Mon, Dec 8, 2014 at 9:56 PM, A, Keshava wrote:

> Stephen,
>
> Interesting to know what an "ACTIVE-ACTIVE topology of load balancing VMs" is.
> What is the scenario - is it a Service-VM (of NFV) or a Tenant VM?
> Curious to know the background of these thoughts.
>
> keshava
>
> From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> Sent: Tuesday, December 09, 2014 7:18 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration
>
> For what it's worth, I know that the Octavia project will need something
> which can do more advanced layer-3 networking in order to deliver an
> ACTIVE-ACTIVE topology of load balancing VMs / containers / machines.
> That's still a "down the road" feature for us, but it would be great to be
> able to do more advanced layer-3 networking in earlier releases of Octavia
> as well. (Without this, we might have to go through back doors to get
> Neutron to do what we need it to, and I'd rather avoid that.)
I would also > like to see whether it's possible to do the sort of advanced layer-3 > networking you've described without using OVS. (We have found that OVS > tends to be not quite mature / stable enough for our needs and have moved > most of our clouds to use ML2 / standard linux bridging.) > > > > Carl: I'll also take a look at the two gerrit reviews you've linked. Is > this week's L3 meeting not happening then? (And man-- I wish it were an > hour or two later in the day. Coming at y'all from PST timezone here.) > > > > Stephen > > > > On Mon, Dec 8, 2014 at 11:57 AM, Carl Baldwin wrote: > > Ryan, > > I'll be traveling around the time of the L3 meeting this week. My > flight leaves 40 minutes after the meeting and I might have trouble > attending. It might be best to put it off a week or to plan another > time -- maybe Friday -- when we could discuss it in IRC or in a > Hangout. > > Carl > > > On Mon, Dec 8, 2014 at 8:43 AM, Ryan Clevenger > wrote: > > Thanks for getting back Carl. I think we may be able to make this week's > > meeting. Jason Kölker is the engineer doing all of the lifting on this > side. > > Let me get with him to review what you all have so far and check our > > availability. > > > > ________________________________________ > > > > Ryan Clevenger > > Manager, Cloud Engineering - US > > m: 678.548.7261 > > e: ryan.clevenger at rackspace.com > > > > ________________________________ > > From: Carl Baldwin [carl at ecbaldwin.net] > > Sent: Sunday, December 07, 2014 4:04 PM > > To: OpenStack Development Mailing List > > Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea > solicitation > > and collaboration > > > > Ryan, > > > > I have been working with the L3 sub team in this direction. Progress has > > been slow because of other priorities but we have made some. I have > written > > a blueprint detailing some changes needed to the code to enable the > > flexibility to one day run floating IPs on an l3 routed network [1].
> Jaime > > has been working on one that integrates ryu (or other speakers) with > neutron > > [2]. Dvr was also a step in this direction. > > > > I'd like to invite you to the l3 weekly meeting [3] to discuss further. > I'm > > very happy to see interest in this area and have someone new to > collaborate. > > > > Carl > > > > [1] https://review.openstack.org/#/c/88619/ > > [2] https://review.openstack.org/#/c/125401/ > > [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam > > > > On Dec 3, 2014 4:04 PM, "Ryan Clevenger" > > wrote: > >> > >> Hi, > >> > >> At Rackspace, we have a need to create a higher level networking service > >> primarily for the purpose of creating a Floating IP solution in our > >> environment. The current solutions for Floating IPs, being tied to > plugin > >> implementations, does not meet our needs at scale for the following > reasons: > >> > >> 1. Limited endpoint H/A mainly targeting failover only and not > >> multi-active endpoints, > >> 2. Lack of noisy neighbor and DDOS mitigation, > >> 3. IP fragmentation (with cells, public connectivity is terminated > inside > >> each cell leading to fragmentation and IP stranding when cell > CPU/Memory use > >> doesn't line up with allocated IP blocks. Abstracting public > connectivity > >> away from nova installations allows for much more efficient use of those > >> precious IPv4 blocks). > >> 4. Diversity in transit (multiple encapsulation and transit types on a > per > >> floating ip basis). > >> > >> We realize that network infrastructures are often unique and such a > >> solution would likely diverge from provider to provider. However, we > would > >> love to collaborate with the community to see if such a project could be > >> built that would meet the needs of providers at scale. 
We believe that, > at > >> its core, this solution would boil down to terminating north<->south > traffic > >> temporarily at a massively horizontally scalable centralized core and > then > >> encapsulating traffic east<->west to a specific host based on the > >> association setup via the current L3 router's extension's 'floatingips' > >> resource. > >> > >> Our current idea, involves using Open vSwitch for header rewriting and > >> tunnel encapsulation combined with a set of Ryu applications for > management: > >> > >> https://i.imgur.com/bivSdcC.png > >> > >> The Ryu application uses Ryu's BGP support to announce up to the Public > >> Routing layer individual floating ips (/32's or /128's) which are then > >> summarized and announced to the rest of the datacenter. If a particular > >> floating ip is experiencing unusually large traffic (DDOS, slashdot > effect, > >> etc.), the Ryu application could change the announcements up to the > Public > >> layer to shift that traffic to dedicated hosts setup for that purpose. > It > >> also announces a single /32 "Tunnel Endpoint" ip downstream to the > TunnelNet > >> Routing system which provides transit to and from the cells and their > >> hypervisors. Since traffic from either direction can then end up on any > of > >> the FLIP hosts, a simple flow table to modify the MAC and IP in either > the > >> SRC or DST fields (depending on traffic direction) allows the system to > be > >> completely stateless. We have proven this out (with static routing and > >> flows) to work reliably in a small lab setup. > >> > >> On the hypervisor side, we currently plumb networks into separate OVS > >> bridges. Another Ryu application would control the bridge that handles > >> overlay networking to selectively divert traffic destined for the > default > >> gateway up to the FLIP NAT systems, taking into account any configured > >> logical routing and local L2 traffic to pass out into the existing > overlay > >> fabric undisturbed. 
> >> > >> Adding in support for L2VPN EVPN > >> (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN > >> Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) > to the > >> Ryu BGP speaker will allow the hypervisor side Ryu application to > advertise > >> up to the FLIP system reachability information to take into account VM > >> failover, live-migrate, and supported encapsulation types. We believe > that > >> decoupling the tunnel endpoint discovery from the control plane > >> (Nova/Neutron) will provide for a more robust solution as well as allow > for > >> use outside of openstack if desired. > >> > >> ________________________________________ > >> > >> Ryan Clevenger > >> Manager, Cloud Engineering - US > >> m: 678.548.7261 > >> e: ryan.clevenger at rackspace.com > >> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > > Stephen Balukoff > Blue Box Group, LLC > (800)613-4305 x807 > -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.trifonov at gmail.com Wed Dec 10 11:02:27 2014 From: t.trifonov at gmail.com (Tihomir Trifonov) Date: Wed, 10 Dec 2014 13:02:27 +0200 Subject: [openstack-dev] [horizon] REST and Django In-Reply-To: References: <547DD08A.7000402@redhat.com> Message-ID: Richard, thanks for the reply, I agree that the given example is not a real REST. 
But we already have a REST API - that's the Keystone, Nova, Cinder, Glance, Neutron, etc. APIs. So what do we plan to do here? Add a new REST layer to communicate with another REST API? Do we really need a Frontend-REST-REST architecture? My opinion is that we don't need another REST layer, as we are currently trying to move away from the Django layer, which is the same thing - another processing layer. Although we call it a REST proxy or whatever - it doesn't need to be real REST, just an aggregation proxy that combines and forwards some requests while adding minimal processing overhead. What makes sense to me is to keep the authentication in this layer as it is now - push a cookie to the frontend, while the REST layer extracts the auth tokens from the session storage and prepares the auth context for the REST API requests to the OpenStack services. This way we will not expose the tokens to the JS frontend, and will keep strict control over the authentication. The frontend will just send data requests; they will be wrapped with the auth context and forwarded. Regarding the existing issues with versions in the API - for me the existing approach is wrong. All these fixes were made as workarounds. What should have been done is to create abstractions for each version and to use a separate class for each version. This was partially done for the keystoneclient in api/keystone.py, but not for the forms/views, where we still have if-else for versions. What I suggest here is to have different (concrete) views/forms for each version, and to use them according to the context. If the Keystone backend is v2.0 - then the Frontend uses a keystone2() object, otherwise a keystone3() object. This of course needs some more coding, but is much cleaner in terms of customization and testing. For me the current hacks with 'if keystone.version == 3.0' are wrong at many levels. And this can be solved now.
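Tihomir's per-version suggestion could be sketched roughly as follows — the names here (`KeystoneV2`, `KeystoneV3`, `keystone_backend`) are illustrative stand-ins, not Horizon's actual API:

```python
# Hypothetical sketch of per-version Keystone wrappers: one concrete class
# per API version, chosen once at startup, instead of scattering
# "if keystone.version == 3.0" conditionals through views and forms.

class KeystoneV2(object):
    version = "2.0"

    def user_update(self, user_id, **kwargs):
        # a v2.0-specific call would go here; we just echo for illustration
        return (self.version, user_id, kwargs)


class KeystoneV3(object):
    version = "3"

    def user_update(self, user_id, **kwargs):
        # a v3-specific call (e.g. accepting v3-only attributes) would go here
        return (self.version, user_id, kwargs)


def keystone_backend(version):
    """Pick the concrete wrapper once, based on the deployed backend."""
    backends = {"2.0": KeystoneV2, "3": KeystoneV3}
    return backends[version]()
```

Views and forms are then written against exactly one concrete class, and deprecating a version becomes deleting a class rather than hunting down conditionals.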
*The problem till now was that we had one frontend that had to be backed by different versions of backend components*. *Now we can have different frontends that map to specific backend*. That's how I understand the power of Angular with it's views and directives. That's where I see the real benefit of using full-featured frontend. Also imagine how easy will be then to deprecate a component version, compared to what we need to do now for the same. Otherwise we just rewrite the current Django middleware with another DjangoRest middleware and don't change anything, we don't fix the problems. We just move them to another place. I still think that in Paris we talked about a new generation of the Dashboard, a different approach on building the frontend for OpenStack. What I've heard there from users/operators of Horizon was that it was extremely hard to add customizations and new features to the Dashboard, as all these needed to go through upstream changes and to wait until next release cycle to get them. Do we still want to address these concerns and how? Please, correct me if I got things wrong. On Wed, Dec 10, 2014 at 11:56 AM, Richard Jones wrote: > Sorry I didn't respond to this earlier today, I had intended to. > > What you're describing isn't REST, and the principles of REST are what > have been guiding the design of the new API so far. I see a lot of value in > using REST approaches, mostly around clarity of the interface. > > While the idea of a very thin proxy seemed like a great idea at one point, > my conversations at the summit convinced me that there was value in both > using the client interfaces present in the openstack_dashboard/api code > base (since they abstract away many issues in the apis including across > versions) and also value in us being able to clean up (for example, using > "project_id" rather than "project" in the user API we've already > implemented) and extend those interfaces (to allow batched operations). 
> > We want to be careful about what we expose in Horizon to the JS clients > through this API. That necessitates some amount of code in Horizon. About > half of the current API for keystone represents that control (the other half > is docstrings :) > > > Richard > > > On Tue Dec 09 2014 at 9:37:47 PM Tihomir Trifonov > wrote: > >> Sorry for the late reply, just a few thoughts on the matter. >> >> IMO the REST middleware should be as thin as possible. And I mean thin in >> terms of processing - it should not do pre/post processing of the requests, >> but just unpack/pack. So here is an example: >> >> instead of making AJAX calls that contain instructions: >>
>>> POST --json --data {"action": "delete", "data": [{"name": "item1"},
>>>     {"name": "item2"}, {"name": "item3"}]}
>>
>> I think a better approach is just to pack/unpack batch commands, and >> leave execution to the frontend/backend and not the middleware: >>
>>> POST --json --data {"batch": [
>>>     {"action": "delete", "payload": {"name": "item1"}},
>>>     {"action": "delete", "payload": {"name": "item2"}},
>>>     {"action": "delete", "payload": {"name": "item3"}}]}
>>
>> The idea is that the middleware should not know the actual data. It >> should ideally just unpack the data: >>
>>> responses = []
>>> for cmd in request.POST['batch']:
>>>     responses.append(getattr(controller, cmd['action'])(**cmd['payload']))
>>> return responses
>>
>> and the frontend(JS) will just send batches of simple commands, and will >> receive a list of responses for each command in the batch. The error >> handling will be done in the frontend (JS) as well.
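A slightly fuller sketch of the pack/unpack dispatch Tihomir describes, with per-command error capture so that error handling can stay in the frontend; `FakeController` and the `ok`/`error` response envelope are invented for illustration:

```python
# Sketch of a thin batch dispatcher: unpack each command, call the named
# action on the controller, and report per-command success or failure.

def dispatch_batch(controller, batch):
    responses = []
    for cmd in batch:
        try:
            handler = getattr(controller, cmd["action"])
            responses.append({"ok": True, "result": handler(**cmd["payload"])})
        except Exception as exc:
            # one failing command must not abort the rest of the batch
            responses.append({"ok": False, "error": str(exc)})
    return responses


class FakeController(object):
    def delete(self, name):
        if name == "item2":
            raise LookupError("no such item: " + name)
        return name


batch = [{"action": "delete", "payload": {"name": n}}
         for n in ("item1", "item2", "item3")]
results = dispatch_batch(FakeController(), batch)
```

The middleware still knows nothing about the payloads; the frontend inspects the per-command `ok` flags and decides what to retry or report.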
>> >> For the more complex example of 'put()' where we have dependent objects: >>
>>> project = api.keystone.tenant_get(request, id)
>>> kwargs = self._tenant_kwargs_from_DATA(request.DATA, enabled=None)
>>> api.keystone.tenant_update(request, project, **kwargs)
>>
>> In practice the project data should be already present in the >> frontend (assuming that we already loaded it to render the project >> form/view), so
>>
>> POST --json --data {"batch": [
>>     {"action": "tenant_update", "payload": {"project": js_project_object.id,
>>      "name": "some name", "prop1": "some prop", "prop2": "other prop, etc."}}]}
>>
>> So in general we don't need to recreate the full state on each REST call, >> if we make the Frontend a full-featured application. This way - the frontend >> will construct the object, will hold the cached value, and will just send >> the needed requests as single ones or in batches, will receive the response >> from the API backend, and will render the results. The whole processing >> logic will be held in the Frontend (JS), while the middleware will just act >> as a proxy (un/packer). This way we will maintain just the logic in the >> frontend, and will not need to duplicate some logic in the middleware. >> >> >> >> >> On Tue, Dec 2, 2014 at 4:45 PM, Adam Young wrote: >> >>> On 12/02/2014 12:39 AM, Richard Jones wrote: >>> >>> On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran wrote: >>> >>>> I agree that keeping the API layer thin would be ideal. I should add >>>> that having discrete API calls would allow dynamic population of the table. >>>> However, I will make a case where it *might* be necessary to add >>>> additional APIs. Consider that you want to delete 3 items in a given table.
>>>> >>>> If you do this on the client side, you would need to perform: n * (1 >>>> API request + 1 AJAX request) >>>> If you have some logic on the server side that batch delete actions: n >>>> * (1 API request) + 1 AJAX request >>>> >>>> Consider the following: >>>> n = 1, client = 2 trips, server = 2 trips >>>> n = 3, client = 6 trips, server = 4 trips >>>> n = 10, client = 20 trips, server = 11 trips >>>> n = 100, client = 200 trips, server 101 trips >>>> >>>> As you can see, this does not scale very well.... something to >>>> consider... >>>> >>> This is not something Horizon can fix. Horizon can make matters >>> worse, but cannot make things better. >>> >>> If you want to delete 3 users, Horizon still needs to make 3 distinct >>> calls to Keystone. >>> >>> To fix this, we need either batch calls or a standard way to do >>> multiples of the same operation. >>> >>> The unified API effort it the right place to drive this. >>> >>> >>> >>> >>> >>> >>> >>> Yep, though in the above cases the client is still going to be >>> hanging, waiting for those server-backend calls, with no feedback until >>> it's all done. I would hope that the client-server call overhead is >>> minimal, but I guess that's probably wishful thinking when in the land of >>> random Internet users hitting some provider's Horizon :) >>> >>> So yeah, having mulled it over myself I agree that it's useful to have >>> batch operations implemented in the POST handler, the most common operation >>> being DELETE. >>> >>> Maybe one day we could transition to a batch call with user feedback >>> using a websocket connection. 
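Thai's counts follow from client-side = 2n round trips (one AJAX request to Horizon plus one service-API request per item) versus server-side = n + 1 (a single AJAX request carrying the batch, then n service-API requests); a quick check of the table:

```python
# Round-trip arithmetic behind Thai's trip counts for deleting n items.

def client_trips(n):
    # per item: one AJAX request to Horizon plus one service-API request
    return n * 2


def server_trips(n):
    # one AJAX request carrying the whole batch, then n service-API requests
    return n + 1


for n in (1, 3, 10, 100):
    print(n, client_trips(n), server_trips(n))
```

As Adam notes, the n service-API calls remain either way; batching only removes the per-item client-to-Horizon hop.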
>>> >>> >>> Richard >>> >>>> Richard Jones ---11/27/2014 05:38:53 PM---On Fri Nov 28 2014 at >>>> 5:58:00 AM Tripp, Travis S wrote: >>>> >>>> From: Richard Jones >>>> To: "Tripp, Travis S" , OpenStack List < >>>> openstack-dev at lists.openstack.org> >>>> Date: 11/27/2014 05:38 PM >>>> Subject: Re: [openstack-dev] [horizon] REST and Django >>>> >>>> ------------------------------ >>>> >>>> >>>> >>>> >>>> On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S <*travis.tripp at hp.com* >>>> > wrote: >>>> >>>> Hi Richard, >>>> >>>> You are right, we should put this out on the main ML, so copying >>>> thread out to there. ML: FYI that this started after some impromptu IRC >>>> discussions about a specific patch led into an impromptu google hangout >>>> discussion with all the people on the thread below. >>>> >>>> >>>> Thanks Travis! >>>> >>>> >>>> >>>> As I mentioned in the review[1], Thai and I were mainly discussing >>>> the possible performance implications of network hops from client to >>>> horizon server and whether or not any aggregation should occur server side. >>>> In other words, some views require several APIs to be queried before any >>>> data can displayed and it would eliminate some extra network requests from >>>> client to server if some of the data was first collected on the server side >>>> across service APIs. For example, the launch instance wizard will need to >>>> collect data from quite a few APIs before even the first step is displayed >>>> (I?ve listed those out in the blueprint [2]). >>>> >>>> The flip side to that (as you also pointed out) is that if we keep >>>> the API?s fine grained then the wizard will be able to optimize in one >>>> place the calls for data as it is needed. For example, the first step may >>>> only need half of the API calls. It also could lead to perceived >>>> performance increases just due to the wizard making a call for different >>>> data independently and displaying it as soon as it can. 
>>>> >>>> >>>> Indeed, looking at the current launch wizard code it seems like you >>>> wouldn't need to load all that data for the wizard to be displayed, since >>>> only some subset of it would be necessary to display any given panel of the >>>> wizard. >>>> >>>> >>>> >>>> I tend to lean towards your POV and starting with discrete API >>>> calls and letting the client optimize calls. If there are performance >>>> problems or other reasons then doing data aggregation on the server side >>>> could be considered at that point. >>>> >>>> >>>> I'm glad to hear it. I'm a fan of optimising when necessary, and not >>>> beforehand :) >>>> >>>> >>>> >>>> Of course if anybody is able to do some performance testing between >>>> the two approaches then that could affect the direction taken. >>>> >>>> >>>> I would certainly like to see us take some measurements when >>>> performance issues pop up. Optimising without solid metrics is bad idea :) >>>> >>>> >>>> Richard >>>> >>>> >>>> >>>> [1] >>>> *https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py* >>>> >>>> [2] >>>> *https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign* >>>> >>>> >>>> -Travis >>>> >>>> *From: *Richard Jones <*r1chardj0n3s at gmail.com* >>>> > >>>> * Date: *Wednesday, November 26, 2014 at 11:55 PM >>>> * To: *Travis Tripp <*travis.tripp at hp.com* >, >>>> Thai Q Tran/Silicon Valley/IBM <*tqtran at us.ibm.com* >>>> >, David Lyle <*dklyle0 at gmail.com* >>>> >, Maxime Vidori <*maxime.vidori at enovance.com* >>>> >, "Wroblewski, Szymon" < >>>> *szymon.wroblewski at intel.com* >, >>>> "Wood, Matthew David (HP Cloud - Horizon)" <*matt.wood at hp.com* >>>> >, "Chen, Shaoquan" <*sean.chen2 at hp.com* >>>> >, "Farina, Matt (HP Cloud)" < >>>> *matthew.farina at hp.com* >, Cindy Lu/Silicon >>>> Valley/IBM <*clu at us.ibm.com* >, Justin >>>> Pomeroy/Rochester/IBM <*jpomero at us.ibm.com* >, >>>> Neill Cox <*neill.cox at ingenious.com.au* >>>> > >>>> * Subject: *Re: REST and 
Django >>>> >>>> I'm not sure whether this is the appropriate place to discuss this, >>>> or whether I should be posting to the list under [Horizon] but I think we >>>> need to have a clear idea of what goes in the REST API and what goes in the >>>> client (angular) code. >>>> >>>> In my mind, the thinner the REST API the better. Indeed if we can >>>> get away with proxying requests through without touching any *client code, >>>> that would be great. >>>> >>>> Coding additional logic into the REST API means that a developer >>>> would need to look in two places, instead of one, to determine what was >>>> happening for a particular call. If we keep it thin then the API presented >>>> to the client developer is very, very similar to the API presented by the >>>> services. Minimum surprise. >>>> >>>> Your thoughts? >>>> >>>> >>>> Richard >>>> >>>> >>>> On Wed Nov 26 2014 at 2:40:52 PM Richard Jones < >>>> *r1chardj0n3s at gmail.com* > wrote: >>>> >>>> >>>> Thanks for the great summary, Travis. >>>> >>>> I've completed the work I pledged this morning, so now the REST >>>> API change set has: >>>> >>>> - no rest framework dependency >>>> - AJAX scaffolding in openstack_dashboard.api.rest.utils >>>> - code in openstack_dashboard/api/rest/ >>>> - renamed the API from "identity" to "keystone" to be consistent >>>> - added a sample of testing, mostly for my own sanity to check >>>> things were working >>>> >>>> *https://review.openstack.org/#/c/136676* >>>> >>>> >>>> >>>> Richard >>>> >>>> On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S < >>>> *travis.tripp at hp.com* > wrote: >>>> >>>> >>>> Hello all, >>>> >>>> Great discussion on the REST urls today! I think that we are >>>> on track to come to a common REST API usage pattern. To provide quick >>>> summary: >>>> >>>> We all agreed that going to a straight REST pattern rather >>>> than through tables was a good idea. 
We discussed using direct get / post >>>> in Django views like what Max originally used[1][2] and Thai also >>>> started[3] with the identity table rework, or to go with djangorestframework >>>> [5] like what Richard was prototyping with[4]. >>>> >>>> The main things we would use from Django Rest Framework were >>>> built-in JSON serialization (avoid boilerplate), better exception handling, >>>> and some request wrapping. However, we all weren't sure about the need for >>>> a full new framework just for that. At the end of the conversation, we >>>> decided that it was a cleaner approach, but Richard would see if he could >>>> provide some utility code to do that much for us without requiring the full >>>> framework. David voiced that he doesn't want us building out a whole >>>> framework on our own either. >>>> >>>> So, Richard will do some investigation during his day today >>>> and get back to us. Whatever the case, we'll get a patch in horizon for >>>> the base dependency (framework or Richard's utilities) that both Thai's >>>> work and the launch instance work are dependent upon. We'll build REST >>>> style APIs using the same pattern. We will likely put the REST APIs in >>>> horizon/openstack_dashboard/api/rest/.
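The "utility code" idea — JSON serialization plus uniform exception handling without pulling in django-rest-framework — can be sketched as a small decorator; this is a hypothetical stand-in, not the actual `openstack_dashboard.api.rest.utils` code:

```python
# Sketch of an AJAX helper: serialize a view's return value to JSON and
# map exceptions to (status, body) error payloads instead of tracebacks.
import functools
import json


def ajax(view):
    @functools.wraps(view)
    def wrapper(*args, **kwargs):
        try:
            result = view(*args, **kwargs)
        except KeyError as exc:
            return 404, json.dumps({"error": "not found: %s" % exc})
        except Exception as exc:
            return 500, json.dumps({"error": str(exc)})
        return 200, json.dumps(result)
    return wrapper


@ajax
def get_user(users, user_id):
    # stand-in for a view method that would call the keystone API wrapper
    return {"id": user_id, "name": users[user_id]}
```

Each view then returns plain Python data, and the boilerplate (serialization, status codes, error envelopes) lives in one place.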
>>>> >>>> [1] >>>> *https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/keypair.py* >>>> >>>> [2] >>>> *https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/launch.py* >>>> >>>> [3] >>>> *https://review.openstack.org/#/c/133767/8/openstack_dashboard/dashboards/identity/users/views.py* >>>> >>>> [4] >>>> *https://review.openstack.org/#/c/136676/4/openstack_dashboard/rest_api/identity.py* >>>> >>>> [5] *http://www.django-rest-framework.org/* >>>> >>>> >>>> Thanks, >>>> >>>> >>>> Travis_______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing listOpenStack-dev at lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> -- >> Regards, >> Tihomir Trifonov >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Regards, Tihomir Trifonov -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: From sombrafam at gmail.com Wed Dec 10 11:12:20 2014 From: sombrafam at gmail.com (Erlon Cruz) Date: Wed, 10 Dec 2014 09:12:20 -0200 Subject: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time In-Reply-To: References: Message-ID: Both are fine, but A is better. On Tue, Dec 9, 2014 at 10:46 PM, Kurt Taylor wrote: > So far it looks like we have centered around 2 options: > Option "A" 1200 and 2200 UTC > Option "D" 1500 and 0400 UTC > > There is still time to pick your best time. Please vote at > https://www.google.com/moderator/#16/e=21b93c > > Special thanks to Steve, Daya, Markus, Mikhail, Emily, Nurit, Edwin and > Ramy for taking the time to vote. > > Kurt Taylor (krtaylor) > > > On Tue, Dec 9, 2014 at 9:32 AM, Kurt Taylor > wrote: > >> All of the feedback so far has supported moving the existing IRC >> Third-party CI meeting to better fit a worldwide audience. >> >> The consensus is that we will have only 1 meeting per week at >> alternating times. You can see examples of other teams with alternating >> meeting times at: https://wiki.openstack.org/wiki/Meetings >> >> This way, one week we are good for one part of the world, the next week >> for the other. You will not need to attend both meetings, just the meeting >> time every other week that fits your schedule. >> >> Proposed times in UTC are being voted on here: >> https://www.google.com/moderator/#16/e=21b93c >> >> Please vote on the time that is best for you. I would like to finalize >> the new times this week. >> >> Thanks! >> Kurt Taylor (krtaylor) >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean at dague.net Wed Dec 10 11:37:02 2014 From: sean at dague.net (Sean Dague) Date: Wed, 10 Dec 2014 06:37:02 -0500 Subject: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches In-Reply-To: References: <5480D2CF.8050301@linux.vnet.ibm.com> <5480E275.8020005@linux.vnet.ibm.com> <5480E534.2050601@dague.net> <54820ad4d5c3d_2da2df3e8cb9@ovo.mains.priv.notmuch> Message-ID: <5488305E.8080300@dague.net> On 12/09/2014 06:18 PM, Eric Windisch wrote: > > While gating on nova-docker will prevent patches that cause > nova-docker to break 100% to land, it won't do a lot to prevent > transient failures. To fix those we need people dedicated to making > sure nova-docker is working. > > > > What would be helpful for me is a way to know that our tests are > breaking without manually checking Kibana, such as an email. I know that periodic jobs can do this kind of notification, if you ask about it in #openstack-infra there might be a solution there. However, having a job in infra on Nova is a thing that comes with an expectation that someone is staying engaged on the infra and Nova sides to ensure that it's running correctly, and debug it when it's wrong. It's not a set it and forget it. It's already past the 2 weeks politeness boundary before it's considered fair game to just delete it. Creating the job is < 10% of the work. Long term maintenance is important. I'm still not getting the feeling that there is really a long term owner on this job. I'd love that not to be the case, but simple things like the fact that the directory structure was all out of whack make it clear no one was regularly looking at it. 
-Sean -- Sean Dague http://dague.net From tnurlygayanov at mirantis.com Wed Dec 10 11:43:21 2014 From: tnurlygayanov at mirantis.com (Timur Nurlygayanov) Date: Wed, 10 Dec 2014 15:43:21 +0400 Subject: [openstack-dev] [Murano] Cannot find murano.conf In-Reply-To: <4728f25da73e44ee880f567206f381c5@BY2PR42MB101.048d.mgd.msft.net> References: <4728f25da73e44ee880f567206f381c5@BY2PR42MB101.048d.mgd.msft.net> Message-ID: Hi, it looks like these are issues with the installation of the Murano requirements with pip. To fix them I suggest the following:

*pip install pip -U*
*pip install setuptools -U*

and after this try to install the Murano requirements again. On Wed, Dec 10, 2014 at 10:46 AM, wrote: > HI Team, > > > > I am installing Murano on the Ubuntu 14.04 Juno setup and when I try the > below install murano-api I encounter the below error. Please assist. > > > > When I try to install > > > > $ *tox -e venv -- murano-api --config-file ./etc/murano/murano.conf* > > > > pip can't proceed with requirement 'pycrypto>=2.6 (from -r > /home/ubuntu/murano/murano/requirements.txt
Where allowed > by local law, electronic communications with Accenture and its affiliates, > including e-mail and instant messaging (including content), may be scanned > by our systems for the purposes of information security and assessment of > internal compliance with Accenture policy. > > ______________________________________________________________________________________ > > www.accenture.com > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Timur, Senior QA Engineer OpenStack Projects Mirantis Inc My OpenStack summit schedule: http://kilodesignsummit.sched.org/timur.nurlygayanov#.VFSrD8mhhOI -------------- next part -------------- An HTML attachment was scrubbed... URL: From visnusaran.murugan at hp.com Wed Dec 10 11:42:25 2014 From: visnusaran.murugan at hp.com (Murugan, Visnusaran) Date: Wed, 10 Dec 2014 11:42:25 +0000 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <54862401.3020508@redhat.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> Message-ID: <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> -----Original Message----- From: Zane Bitter [mailto:zbitter at redhat.com] Sent: Tuesday, December 9, 2014 3:50 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown On 08/12/14 07:00, Murugan, Visnusaran wrote: > > Hi Zane & Michael, > > Please have a look @ > https://etherpad.openstack.org/p/execution-stream-and-aggregator-based > -convergence > > Updated with a combined approach which does not require persisting graph and backup stack removal. 
Well, we still have to persist the dependencies of each version of a resource _somehow_, because otherwise we can't know how to clean them up in the correct order. But what I think you meant to say is that this approach doesn't require it to be persisted in a separate table where the rows are marked as traversed as we work through the graph. [Murugan, Visnusaran] In case of rollback where we have to cleanup earlier version of resources, we could get the order from old template. We'd prefer not to have a graph table. > This approach reduces DB queries by waiting for completion notification on a topic. The drawback I see is that delete stack stream will be huge as it will have the entire graph. We can always dump such data in ResourceLock.data Json and pass a simple flag "load_stream_from_db" to converge RPC call as a workaround for delete operation. This seems to be essentially equivalent to my 'SyncPoint' proposal[1], with the key difference that the data is stored in-memory in a Heat engine rather than the database. I suspect it's probably a mistake to move it in-memory for similar reasons to the argument Clint made against synchronising the marking off of dependencies in-memory. The database can handle that and the problem of making the DB robust against failures of a single machine has already been solved by someone else. If we do it in-memory we are just creating a single point of failure for not much gain. (I guess you could argue it doesn't matter, since if any Heat engine dies during the traversal then we'll have to kick off another one anyway, but it does limit our options if that changes in the future.) [Murugan, Visnusaran] Resource completes, removes itself from resource_lock and notifies engine. Engine will acquire parent lock and initiate parent only if all its children are satisfied (no child entry in resource_lock). This will come in place of Aggregator. 
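A database-backed sync point of the kind argued for above can be sketched in a few lines of Python. This is only an illustration of the atomic-row-update pattern under discussion; the table and column names are hypothetical, not Heat's actual schema or the prototype's code:

```python
import sqlite3

# In-memory stand-in for the Heat database. The real schema differs;
# this only illustrates race-free completion counting via atomic UPDATEs.
def make_db():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sync_point ("
               "resource TEXT, traversal_id TEXT, unsatisfied INTEGER, "
               "PRIMARY KEY (resource, traversal_id))")
    return db

def create_sync_point(db, resource, traversal_id, num_children):
    # `resource` may proceed once `num_children` dependencies complete.
    db.execute("INSERT INTO sync_point VALUES (?, ?, ?)",
               (resource, traversal_id, num_children))
    db.commit()

def child_done(db, resource, traversal_id):
    """Mark one dependency of `resource` satisfied.

    Returns True for exactly one caller: the one that satisfies the last
    outstanding dependency and should therefore trigger the parent. Each
    UPDATE is atomic on its own, so no in-memory state is needed and the
    traversal state survives the death of any single engine.
    """
    db.execute("UPDATE sync_point SET unsatisfied = unsatisfied - 1 "
               "WHERE resource = ? AND traversal_id = ?",
               (resource, traversal_id))
    # Claim the right to trigger the parent; the -1 sentinel means
    # "already triggered", so at most one caller ever wins this race.
    cur = db.execute("UPDATE sync_point SET unsatisfied = -1 "
                     "WHERE resource = ? AND traversal_id = ? "
                     "AND unsatisfied = 0",
                     (resource, traversal_id))
    db.commit()
    return cur.rowcount == 1

db = make_db()
create_sync_point(db, "A", "t1", 2)   # A waits on two children
print(child_done(db, "A", "t1"))      # False: one child still outstanding
print(child_done(db, "A", "t1"))      # True: last child done, trigger A
```

Because both UPDATEs are atomic in the database, two engines completing different children concurrently can never both claim the trigger, and the counter survives an engine crash; that robustness is what the extra DB round trips pay for.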
It's not clear to me how the 'streams' differ in practical terms from just passing a serialisation of the Dependencies object, other than being incomprehensible to me ;). The current Dependencies implementation (1) is a very generic implementation of a DAG, (2) works and has plenty of unit tests, (3) has, with I think one exception, a pretty straightforward API, (4) has a very simple serialisation, returned by the edges() method, which can be passed back into the constructor to recreate it, and (5) has an API that is to some extent relied upon by resources, and so won't likely be removed outright in any event. Whatever code we need to handle dependencies ought to just build on this existing implementation. [Murugan, Visnusaran] Our thought was to reduce payload size (template/graph). Just planning for worst case scenario (million resource stack) We could always dump them in ResourceLock.data to be loaded by Worker. I think the difference may be that the streams only include the *shortest* paths (there will often be more than one) to each resource. i.e. A <------- B <------- C ^ | | | +---------------------+ can just be written as: A <------- B <------- C because there's only one order in which that can execute anyway. (If we're going to do this though, we should just add a method to the dependencies.Graph class to delete redundant edges, not create a whole new data structure.) There is a big potential advantage here in that it reduces the theoretical maximum number of edges in the graph from O(n^2) to O(n). (Although in practice real templates are typically not likely to have such dense graphs.) There's a downside to this too though: say that A in the above diagram is replaced during an update. In that case not only B but also C will need to figure out what the latest version of A is. One option here is to pass that data along via B, but that will become very messy to implement in a non-trivial example. 
The other would be for C to go search in the database for resources with the same name as A and the current traversal_id marked as the latest. But that not only creates a concurrency problem we didn't have before (A could have been updated with a new traversal_id at some point after C had established that the current traversal was still valid but before it went looking for A), it also eliminates all of the performance gains from removing that edge in the first place. [1] https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/sync_point.py > To Stop current stack operation, we will use your traversal_id based approach. +1 :) [Murugan, Visnusaran] We had this idea already :) > If in case you feel Aggregator model creates more queues, then we > might have to poll DB to get resource status. (Which will impact > performance adversely :) ) For the reasons given above I would vote for doing this in the DB. I agree there will be a performance penalty for doing so, because we'll be paying for robustness. [Murugan, Visnusaran] +1 > Lock table: name(Unique - Resource_id), stack_id, engine_id, data > (Json to store stream dict) Based on our call on Thursday, I think you're taking the idea of the Lock table too literally. The point of referring to locks is that we can use the same concepts as the Lock table relies on to do atomic updates on a particular row of the database, and we can use those atomic updates to prevent race conditions when implementing SyncPoints/Aggregators/whatever you want to call them. It's not that we'd actually use the Lock table itself, which implements a mutex and therefore offers only a much slower and more stateful way of doing what we want (lock mutex, change data, unlock mutex). [Murugan, Visnusaran] Are you suggesting something like a select-for-update in resource table itself without having a lock table? cheers, Zane. > Your thoughts. 
> Vishnu (irc: ckmvishnu) > Unmesh (irc: unmeshg) _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tnurlygayanov at mirantis.com Wed Dec 10 11:45:46 2014 From: tnurlygayanov at mirantis.com (Timur Nurlygayanov) Date: Wed, 10 Dec 2014 15:45:46 +0400 Subject: [openstack-dev] [Murano] Oslo.messaging error In-Reply-To: <94a869fdf4a441edb487156cdc8bf7ec@BY2PR42MB101.048d.mgd.msft.net> References: <94a869fdf4a441edb487156cdc8bf7ec@BY2PR42MB101.048d.mgd.msft.net> Message-ID: Hi Raghavendra Lad, it looks like the Murano services can't connect to the RabbitMQ server. Could you please share the configuration parameters for RabbitMQ from ./etc/murano/murano.conf? On Wed, Dec 10, 2014 at 10:55 AM, wrote: > > > > > HI Team, > > > > I am installing Murano on the Ubuntu 14.04 Juno setup and when I try the > below install murano-api I encounter the below error. Please assist. > > > > When I try to install > > > > I am using the Murano guide link provided below: > > https://murano.readthedocs.org/en/latest/install/manual.html > > > > > > I am trying to execute the section 7 > > > > 1. Open a new console and launch Murano API. A separate terminal is > required because the console will be locked by a running process. > > 2. $ cd ~/murano/murano > > 3. $ tox -e venv -- murano-api \ > > 4.
> --config-file ./etc/murano/murano.conf > > > > I am getting the below error : I have a Juno Openstack ready and trying to > integrate Murano > > > > > > 2014-12-10 12:10:30.396 7721 DEBUG murano.openstack.common.service [-] > neutron.endpoint_type = publicURL log_opt_values > /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048 > > 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] > neutron.insecure = False log_opt_values > /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048 > > 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] > ******************************************************************************** log_opt_values > /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2050 > > 2014-12-10 12:10:30.400 7721 INFO oslo.messaging._drivers.impl_rabbit [-] > Connecting to AMQP server on controller:5672 > > 2014-12-10 12:10:30.408 7721 INFO oslo.messaging._drivers.impl_rabbit [-] > Connecting to AMQP server on controller:5672 > > 2014-12-10 12:10:30.416 7721 INFO eventlet.wsgi [-] (7721) wsgi starting > up on http://0.0.0.0:8082/ > > 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Updating > statistic information.
update_stats > /home/ubuntu/murano/murano/murano/common/statservice.py:57 > > 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats > object: > ction object at 0x7fada950a510> update_stats > /home/ubuntu/murano/murano/murano/common/statservice.py:58 > > 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats: > Requests:0 Errors: 0 Ave.Res.Time 0.0000 > > Per tenant: {} update_stats > /home/ubuntu/murano/murano/murano/common/statservice.py:64 > > 2014-12-10 12:10:30.433 7721 DEBUG oslo.db.sqlalchemy.session [-] MySQL > server mode set to > STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION > _check_effective_sql_mode > /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py:509 > > 2014-12-10 12:10:33.464 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] > AMQP server controller:5672 closed the connection. Check login credentials: Socket closed > > 2014-12-10 12:10:33.465 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] > AMQP server controller:5672 closed the connection. Check login credentials: Socket closed > > 2014-12-10 12:10:37.483 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] > AMQP server controller:5672 closed the connection. Check login credentials: Socket closed > > 2014-12-10 12:10:37.484 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] > AMQP server controller:5672 closed the connection. Check login credentials: Socket closed > > > > > > Warm Regards, > > *Raghavendra Lad* > > > > ------------------------------ > > This message is for the designated recipient only and may contain > privileged, proprietary, or otherwise confidential information. If you have > received it in error, please notify the sender immediately and delete the > original. Any other use of the e-mail by you is prohibited.
Where allowed > by local law, electronic communications with Accenture and its affiliates, > including e-mail and instant messaging (including content), may be scanned > by our systems for the purposes of information security and assessment of > internal compliance with Accenture policy. > > ______________________________________________________________________________________ > > www.accenture.com > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Timur, Senior QA Engineer OpenStack Projects Mirantis Inc My OpenStack summit schedule: http://kilodesignsummit.sched.org/timur.nurlygayanov#.VFSrD8mhhOI -------------- next part -------------- An HTML attachment was scrubbed... URL: From ipekelny at mirantis.com Wed Dec 10 12:18:44 2014 From: ipekelny at mirantis.com (Ilya Pekelny) Date: Wed, 10 Dec 2014 14:18:44 +0200 Subject: [openstack-dev] [Murano] Oslo.messaging error In-Reply-To: References: <94a869fdf4a441edb487156cdc8bf7ec@BY2PR42MB101.048d.mgd.msft.net> Message-ID: Please provide the RabbitMQ logs from the controller and your oslo.messaging version. Do you use the upstream oslo.messaging version? It looks like the well-known heartbeat bug. On Wed, Dec 10, 2014 at 1:45 PM, Timur Nurlygayanov < tnurlygayanov at mirantis.com> wrote: > Hi Raghavendra Lad, > > it looks like the Murano services can't connect to the RabbitMQ server. > Could you please share the configuration parameters for RabbitMQ from > ./etc/murano/murano.conf? > > > On Wed, Dec 10, 2014 at 10:55 AM, wrote: >> >> >> >> >> HI Team, >> >> >> >> I am installing Murano on the Ubuntu 14.04 Juno setup and when I try the >> below install murano-api I encounter the below error. Please assist.
>> >> >> >> When I try to install >> >> >> >> I am using the Murano guide link provided below: >> >> https://murano.readthedocs.org/en/latest/install/manual.html >> >> >> >> >> >> I am trying to execute the section 7 >> >> >> >> 1. Open a new console and launch Murano API. A separate terminal is >> required because the console will be locked by a running process. >> >> 2. $ cd ~/murano/murano >> >> 3. $ tox -e venv -- murano-api \ >> >> 4. > --config-file ./etc/murano/murano.conf >> >> >> >> >> >> I am getting the below error : I have a Juno Openstack ready and trying >> to integrate Murano >> >> >> >> >> >> 2014-12-10 12:10:30.396 7721 DEBUG murano.openstack.common.service [-] >> neutron.endpoint_type = publicURL log_opt_values >> /home/ >> ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048 >> >> 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] >> neutron.insecure = False log_opt_values >> /home/ubun >> tu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048 >> >> 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] >> **************************************************************** >> **************** log_opt_values >> /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2050 >> >> 2014-12-10 12:10:30.400 7721 INFO oslo.messaging._drivers.impl_rabbit [-] >> Connecting to AMQP server on controller:5672 >> >> 2014-12-10 12:10:30.408 7721 INFO oslo.messaging._drivers.impl_rabbit [-] >> Connecting to AMQP server on controller:5672 >> >> 2014-12-10 12:10:30.416 7721 INFO eventlet.wsgi [-] (7721) wsgi starting >> up on http://0.0.0.0:8082/ >> >> 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Updating >> statistic information. 
update_stats >> /home/ubuntu/murano/muran >> o/murano/common/statservice.py:57 >> >> 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats >> object: >> > ction object at 0x7fada950a510> update_stats >> /home/ubuntu/murano/murano/murano/common/statservice.py:58 >> >> 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats: >> Requests:0 Errors: 0 Ave.Res.Time 0.0000 >> >> Per tenant: {} update_stats >> /home/ubuntu/murano/murano/murano/common/statservice.py:64 >> >> 2014-12-10 12:10:30.433 7721 DEBUG oslo.db.sqlalchemy.session [-] MySQL >> server mode set to >> STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZER >> O_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION >> _check_effective_sql_mode /hom >> e/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py:509 >> >> 2014-12-10 12:10:33.464 7721 ERROR oslo.messaging._drivers.impl_rabbit >> [-] AMQP server controller:5672 closed the connection. Check >> log in credentials: Socket closed >> >> 2014-12-10 12:10:33.465 7721 ERROR oslo.messaging._drivers.impl_rabbit >> [-] AMQP server controller:5672 closed the connection. Check >> log in credentials: Socket closed >> >> 2014-12-10 12:10:37.483 7721 ERROR oslo.messaging._drivers.impl_rabbit >> [-] AMQP server controller:5672 closed the connection. Check >> log in credentials: Socket closed >> >> 2014-12-10 12:10:37.484 7721 ERROR oslo.messaging._drivers.impl_rabbit >> [-] AMQP server controller:5672 closed the connection. Check >> log in credentials: Socket closed >> >> >> >> >> >> Warm Regards, >> >> *Raghavendra Lad* >> >> >> >> ------------------------------ >> >> This message is for the designated recipient only and may contain >> privileged, proprietary, or otherwise confidential information. If you have >> received it in error, please notify the sender immediately and delete the >> original. Any other use of the e-mail by you is prohibited. 
Where allowed >> by local law, electronic communications with Accenture and its affiliates, >> including e-mail and instant messaging (including content), may be scanned >> by our systems for the purposes of information security and assessment of >> internal compliance with Accenture policy. >> >> ______________________________________________________________________________________ >> >> www.accenture.com >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > > Timur, > Senior QA Engineer > OpenStack Projects > Mirantis Inc > > My OpenStack summit schedule: > http://kilodesignsummit.sched.org/timur.nurlygayanov#.VFSrD8mhhOI > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Dec 10 13:54:03 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Dec 2014 13:54:03 +0000 Subject: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches In-Reply-To: <5488305E.8080300@dague.net> References: <5480D2CF.8050301@linux.vnet.ibm.com> <5480E275.8020005@linux.vnet.ibm.com> <5480E534.2050601@dague.net> <54820ad4d5c3d_2da2df3e8cb9@ovo.mains.priv.notmuch> <5488305E.8080300@dague.net> Message-ID: <20141210135403.GU2497@yuggoth.org> On 2014-12-10 06:37:02 -0500 (-0500), Sean Dague wrote: > I know that periodic jobs can do this kind of notification, if you > ask about it in #openstack-infra there might be a solution there. [...] E-mail reporting in Zuul is currently implemented pipeline-specific, so the nova-docker tests would need to be in their own job in a dedicated pipeline with reporting set to the relevant contact address. 
This may be an excessive level of overhead, so we should have a separate infra discussion on whether that's a realistic solution, or whether it's worth looking at new Zuul functionality to tack E-mail reporting addresses onto specific jobs in arbitrary pipelines. -- Jeremy Stanley From eli at mirantis.com Wed Dec 10 13:57:00 2014 From: eli at mirantis.com (Evgeniy L) Date: Wed, 10 Dec 2014 17:57:00 +0400 Subject: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format In-Reply-To: References: Message-ID: Hi, First let me describe our plans for the nearest release. We want to deliver roles as simple plugins; this means that a plugin developer can define his own role with YAML, and it should also work fine with our current approach, where the user can define several fields on the settings tab. I would also like to mention another thing, which we should probably discuss in a separate thread: how plugins should be implemented. We have two types of plugins, simple and complicated. The definition of simple is "I can do everything I need with YAML"; the definition of complicated is "I probably have to write some Python code". It doesn't mean that this Python code should be able to do absolutely everything it wants, but it means we should implement a stable, documented interface where the plugin is connected to the core. Now let's talk about the UI flow. Our current problem is how to get the information on whether a plugin is used in the environment or not; this information is required by the backend, which generates the appropriate tasks for the task executor, and it can also be used in the future if we decide to implement a plugin deletion mechanism. I didn't come up with any new solution; as before, we have two options to solve the problem: # 1 Use the conditional language which is currently used in the UI; it will look like Vitaly described in the example [1]. The plugin developer should: 1. describe at least one element for the UI, which he will be able to use in a task 2.
add a condition which is written in our own programming language. Example of the condition for the LBaaS plugin: condition: settings:lbaas.metadata.enabled == true 3. add to metadata.yaml a condition which defines whether the plugin is enabled: is_enabled: settings:lbaas.metadata.enabled == true This approach has good flexibility, but it also has problems: a. It's complicated and not intuitive for the plugin developer. b. It doesn't cover the case when the user installs a 3rd-party plugin which doesn't have any conditions (because of # a) and the user doesn't have a way to disable it for the environment if it breaks his configuration. # 2 As we discussed from the very beginning, after the user selects a release he can choose a set of plugins which he wants to be enabled for the environment. After that we can say that the plugin is enabled for the environment and we send the tasks related to this plugin to the task executor. >> My approach also allows to eliminate "enableness" of plugins which will cause UX issues and issues like you described above. vCenter and Ceph also don't have "enabled" state. vCenter has hypervisor and storage, Ceph provides backends for Cinder and Glance which can be used simultaneously or only one of them can be used. Both of the described plugins do have an enabled/disabled state: vCenter is enabled when vCenter is selected as the hypervisor, and Ceph is enabled when it's selected as a backend for Cinder or Glance. If you don't like the idea of having Ceph/vCenter checkboxes on the first page, I can suggest as an idea (research is required) defining groups like Storage Backend and Network Manager, and we would allow the plugin developer to embed his option in a radio-button field on the wizard pages. But the plugin developer should not describe conditions; he should just state that his plugin is a Storage Backend, a Hypervisor or a new Network Manager. And the plugins which don't belong to any of these groups, e.g. Zabbix and Nagios, should be shown as checkboxes on the first page of the wizard.
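For illustration, resolving a "settings:..." reference of the kind used in the conditions above could look like the following sketch. The evaluator is hypothetical and deliberately minimal; the real Fuel expression DSL also supports comparisons and boolean operators:

```python
# Hypothetical, minimal resolver for "scope:dotted.path" references such as
# the ones in conditions like "settings:lbaas.metadata.enabled == true".
# It only looks up a dotted path in a nested cluster-attributes dict.
def resolve(attributes, ref):
    scope, _, dotted = ref.partition(":")  # e.g. "settings", "lbaas.metadata.enabled"
    node = attributes[scope]
    for key in dotted.split("."):
        node = node[key]
    return node

cluster_attributes = {
    "settings": {
        "lbaas": {"metadata": {"enabled": True}},
        "storage": {"volumes_ceph": {"value": False}},
    }
}

print(resolve(cluster_attributes, "settings:lbaas.metadata.enabled"))      # True
print(resolve(cluster_attributes, "settings:storage.volumes_ceph.value"))  # False
```

A full implementation would parse and evaluate the whole expression, but even this much shows that such references need a stable attributes schema to resolve against, which is part of the compatibility concern raised in this thread.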
> [1] https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf On Sun, Nov 30, 2014 at 3:12 PM, Vitaly Kramskikh wrote: > > > 2014-11-28 23:20 GMT+04:00 Dmitriy Shulyak : > >> >>> - environment_config.yaml should contain exact config which will be >>> mixed into cluster_attributes. No need to implicitly generate any controls >>> like it is done now. >>> >>> Initially i had the same thoughts and wanted to use it the way it is, >> but now i completely agree with Evgeniy that additional DSL will cause a lot >> of problems with compatibility between versions and developer experience. >> > As far as I understand, you want to introduce another approach to describe > UI part or plugins? > >> We need to search for alternatives.. >> 1. for UI i would prefer separate tab for plugins, where user will be >> able to enable/disable plugin explicitly. >> > Of course, we need a separate page for plugin management. > >> Currently settings tab is overloaded. >> 2. on backend we need to validate plugins against certain env before >> enabling it, >> and for simple case we may expose some basic entities like >> network_mode. >> For case where you need complex logic - python code is far more flexible >> that new DSL. >> >>> >>> - metadata.yaml should also contain "is_removable" field. This field >>> is needed to determine whether it is possible to remove installed plugin. >>> It is impossible to remove plugins in the current implementation. >>> This field should contain an expression written in our DSL which we already >>> use in a few places. The LBaaS plugin also uses it to hide the checkbox if >>> Neutron is not used, so even simple plugins like this need to utilize it. >>> This field can also be autogenerated, for more complex plugins plugin >>> writer needs to fix it manually. For example, for Ceph it could look like >>> "settings:storage.volumes_ceph.value == false and >>> settings:storage.images_ceph.value == false". 
>>> >>> How checkbox will help? There are several cases of plugin removal.. > It is not a checkbox, this is a condition that determines whether the plugin > is removable. It allows the plugin developer to specify when the plugin can be safely > removed from Fuel if there are some environments which were created after > the plugin had been installed. > >> 1. Plugin is installed, but not enabled for any env - just remove the >> plugin >> 2. Plugin is installed, enabled and cluster deployed - forget about it >> for now.. >> 3. Plugin is installed and only enabled - we need to maintain state of db >> consistent after plugin is removed, it is problematic, but possible >> > My approach also allows to eliminate "enableness" of plugins which will > cause UX issues and issues like you described above. vCenter and Ceph also > don't have "enabled" state. vCenter has hypervisor and storage, Ceph > provides backends for Cinder and Glance which can be used simultaneously or > only one of them can be used. > >> My main point is that a plugin is enabled/disabled explicitly by the user; after >> that we can decide ourselves whether it can be removed or not. >> >>> >>> - For every task in tasks.yaml there should be added new "condition" >>> field with an expression which determines whether the task should be run. >>> In the current implementation tasks are always run for specified roles. For >>> example, vCenter plugin can have a few tasks with conditions like >>> "settings:common.libvirt_type.value == 'vcenter'" or >>> "settings:storage.volumes_vmdk.value == true". Also, AFAIU, similar >>> approach will be used in implementation of Granular Deployment feature. >>> >>> I had some thoughts about using DSL, it seemed to me especially helpful >> when you need to disable part of embedded into core functionality, >> like deploying with another hypervisor, or network driver (contrail for >> example).
And DSL wont cover all cases here, this quite similar to >> metadata.yaml, simple cases can be covered by some variables in tasks (like >> group, unique, etc), but complex is easier to test and describe in python. >> > Could you please provide example of such conditions? vCenter and Ceph can > be turned into plugins using this approach. > > Also, I'm not against python version of plugins. It could look like a > python class with exactly the same fields form YAML files, but conditions > will be written in python. > >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Vitaly Kramskikh, > Software Engineer, > Mirantis, Inc. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From thierry at openstack.org Wed Dec 10 14:35:49 2014 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 10 Dec 2014 15:35:49 +0100 Subject: [openstack-dev] [neutron] Linux capabilities vs sudo/rootwrap? In-Reply-To: References: Message-ID: <54885A45.2000507@openstack.org> Angus Lees wrote: > How crazy would it be to just give neutron CAP_NET_ADMIN (where > required), and allow it to make network changes via ip (netlink) calls > directly? I don't think that's completely crazy. Given what neutron is expected to do, and what it is already empowered to do (through lazy and less lazy rootwrap filters), relying on CAP_NET_ADMIN instead should have limited security impact. It would be worth precisely analyzing the delta (what will a capability-enhanced neutron be able to do to the system that the rootwrap-powered neutron can't already do), and try to get performance numbers... That would help making the right choice, although I expect the best gains here are in avoiding the whole external executable call and result parsing. You could even maintain parallel code paths (use capability if present). Cheers, -- Thierry Carrez (ttx) From davanum at gmail.com Wed Dec 10 14:54:27 2014 From: davanum at gmail.com (Davanum Srinivas) Date: Wed, 10 Dec 2014 09:54:27 -0500 Subject: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches In-Reply-To: <5488305E.8080300@dague.net> References: <5480D2CF.8050301@linux.vnet.ibm.com> <5480E275.8020005@linux.vnet.ibm.com> <5480E534.2050601@dague.net> <54820ad4d5c3d_2da2df3e8cb9@ovo.mains.priv.notmuch> <5488305E.8080300@dague.net> Message-ID: Sean, fyi, got it stable now for the moment. 
http://logstash.openstack.org/#eyJzZWFyY2giOiIgYnVpbGRfbmFtZTpcImNoZWNrLXRlbXBlc3QtZHN2bS1kb2NrZXJcIiBBTkQgbWVzc2FnZTpcIkZpbmlzaGVkOlwiIEFORCBidWlsZF9zdGF0dXM6XCJGQUlMVVJFXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIiLCJzdGFtcCI6MTQxODIyMzEwMjcyOX0= with https://review.openstack.org/#/c/138714/ thanks, dims On Wed, Dec 10, 2014 at 6:37 AM, Sean Dague wrote: > On 12/09/2014 06:18 PM, Eric Windisch wrote: >> >> While gating on nova-docker will prevent patches that cause >> nova-docker to break 100% to land, it won't do a lot to prevent >> transient failures. To fix those we need people dedicated to making >> sure nova-docker is working. >> >> >> >> What would be helpful for me is a way to know that our tests are >> breaking without manually checking Kibana, such as an email. > > I know that periodic jobs can do this kind of notification, if you ask > about it in #openstack-infra there might be a solution there. > > However, having a job in infra on Nova is a thing that comes with an > expectation that someone is staying engaged on the infra and Nova sides > to ensure that it's running correctly, and debug it when it's wrong. > It's not a set it and forget it. > > It's already past the 2 weeks politeness boundary before it's considered > fair game to just delete it. > > Creating the job is < 10% of the work. Long term maintenance is > important. I'm still not getting the feeling that there is really a long > term owner on this job. I'd love that not to be the case, but simple > things like the fact that the directory structure was all out of whack > make it clear no one was regularly looking at it. 
> > -Sean > > -- > Sean Dague > http://dague.net > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From mbirru at gmail.com Wed Dec 10 15:01:38 2014 From: mbirru at gmail.com (Murali B) Date: Wed, 10 Dec 2014 20:31:38 +0530 Subject: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Message-ID: Hi keshava, We would like to contribute towards service chaining and NFV. Could you please share the document if you have any related to service VMs. The service chain can be achieved if we are able to redirect the traffic to the service VM using OVS flows; in this case we don't need to have routing enabled on the service VM (traffic is redirected at L2). All the tenant VMs in the cloud could use this service VM's services by adding the appropriate flow rules in OVS. Thanks -Murali -------------- next part -------------- An HTML attachment was scrubbed... URL: From slukjanov at mirantis.com Wed Dec 10 15:21:02 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Wed, 10 Dec 2014 19:21:02 +0400 Subject: [openstack-dev] [sahara] team meeting Dec 11 1800 UTC Message-ID: Hi folks, We'll be having the Sahara team meeting as usual in #openstack-meeting-alt channel. Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20141211T18 -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed...
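The OVS-based L2 redirection Murali describes above could be expressed as flow rules of roughly this shape. This is a hypothetical helper, not code from the thread: the bridge name and port numbers are assumptions, and the resulting match/action string would be handed to `ovs-ofctl add-flow`.

```python
def redirect_flow(in_port, service_vm_port, priority=100):
    # Steer all traffic entering in_port to the service VM's port at L2,
    # so no routing needs to be enabled on the service VM itself.
    return "priority=%d,in_port=%d,actions=output:%d" % (
        priority, in_port, service_vm_port)

# e.g.: subprocess.call(["ovs-ofctl", "add-flow", "br-int", redirect_flow(1, 5)])
```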
URL: From pasquale.porreca at dektech.com.au Wed Dec 10 15:42:45 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Wed, 10 Dec 2014 16:42:45 +0100 Subject: [openstack-dev] [NFV][Telco] pxe-boot In-Reply-To: References: <20141125223355.Horde.8UWcgskXSBVvU-RJrCgbtw7@mail.dektech.com.au> <693765041.19659732.1417044729676.JavaMail.zimbra@redhat.com> <547ED500.1070008@dektech.com.au> Message-ID: <548869F5.1040801@dektech.com.au> Well, one of the main reasons to choose an open source product is to avoid vendor lock-in. I think it is not advisable to embed in the software running in an instance a call to OpenStack-specific services. On 12/10/14 00:20, Joe Gordon wrote: > > On Wed, Dec 3, 2014 at 1:16 AM, Pasquale Porreca > > wrote: > > The use case we were thinking about is a Network Function (e.g. > IMS Nodes) implementation in which the high availability is based > on OpenSAF. In this scenario there is an Active/Standby cluster of > 2 System Controllers (SC) plus several Payloads (PL) that boot > from network, controlled by the SC. The logic of which service to > deploy on each payload is inside the SC. > > In OpenStack both SCs and PLs will be instances running in the > cloud, anyway the PLs should still boot from network under the > control of the SC. In fact to use Glance to store the image for > the PLs and keep the control of the PLs in the SC, the SC should > trigger the boot of the PLs with requests to Nova/Glance, but an > application running inside an instance should not directly > interact with a cloud infrastructure service like Glance or Nova. > > > Why not? This is a fairly common practice. -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr -------------- next part -------------- An HTML attachment was scrubbed...
URL: From vkramskikh at mirantis.com Wed Dec 10 15:50:29 2014 From: vkramskikh at mirantis.com (Vitaly Kramskikh) Date: Wed, 10 Dec 2014 19:50:29 +0400 Subject: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format In-Reply-To: References: Message-ID: 2014-12-10 16:57 GMT+03:00 Evgeniy L : > Hi, > > First let me describe what our plans for the nearest release. We want to > deliver > role as a simple plugin, it means that plugin developer can define his own > role > with yaml and also it should work fine with our current approach when user > can > define several fields on the settings tab. > > Also I would like to mention another thing which we should probably discuss > in separate thread, how plugins should be implemented. We have two types > of plugins, simple and complicated, the definition of simple - I can do > everything > I need with yaml, the definition of complicated - probably I have to write > some > python code. It doesn't mean that this python code should do absolutely > everything it wants, but it means we should implement stable, documented > interface where plugin is connected to the core. > > Now lets talk about UI flow, our current problem is how to get the > information > if plugins is used in the environment or not, this information is required > for > backend which generates appropriate tasks for task executor, also this > information can be used in the future if we decide to implement plugins > deletion > mechanism. > > I didn't come up with a some new solution, as before we have two options to > solve the problem: > > # 1 > > Use conditional language which is currently used on UI, it will look like > Vitaly described in the example [1]. > Plugin developer should: > > 1. describe at least one element for UI, which he will be able to use in > task > > 2. 
add condition which is written in our own programming language > > Example of the condition for LBaaS plugin: > > condition: settings:lbaas.metadata.enabled == true > > 3. add condition to metadata.yaml a condition which defines if plugin is > enabled > > is_enabled: settings:lbaas.metadata.enabled == true > > This approach has good flexibility, but also it has problems: > > a. It's complicated and not intuitive for plugin developer. > It is less complicated than python code > b. It doesn't cover case when the user installs 3rd party plugin > which doesn't have any conditions (because of # a) and > user doesn't have a way to disable it for environment if it > breaks his configuration. > If plugin doesn't have conditions for tasks, then it has invalid metadata. > > # 2 > > As we discussed from the very beginning after user selects a release he can > choose a set of plugins which he wants to be enabled for environment. > After that we can say that plugin is enabled for the environment and we > send > tasks related to this plugin to task executor. > > >> My approach also allows to eliminate "enableness" of plugins which > will cause UX issues and issues like you described above. vCenter and Ceph > also don't have "enabled" state. vCenter has hypervisor and storage, Ceph > provides backends for Cinder and Glance which can be used simultaneously or > only one of them can be used. > > Both of described plugins have enabled/disabled state, vCenter is enabled > when vCenter is selected as hypervisor. Ceph is enabled when it's selected > as a backend for Cinder or Glance. > Nope, Ceph for Volumes can be used without Ceph for Images. Both of these plugins can also have some granular tasks which are enabled by various checkboxes (like VMware vCenter for volumes). How would you determine whether tasks which installs VMware vCenter for volumes should run? 
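To make the debated condition format concrete: an expression like `settings:lbaas.metadata.enabled == true` reduces to a dotted-path lookup against the cluster settings. A minimal illustrative evaluator (not the actual Fuel DSL implementation; function names are invented) might look like:

```python
def lookup(settings, dotted_path):
    # Resolve e.g. "lbaas.metadata.enabled" against a nested settings dict.
    node = settings
    for key in dotted_path.split("."):
        node = node[key]
    return node

def condition_holds(settings, dotted_path, expected):
    # Rough equivalent of "settings:<path> == <value>" in the UI DSL.
    return lookup(settings, dotted_path) == expected
```

A task's condition field would then be checked against the environment's settings before the task is sent to the executor.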
> > If you don't like the idea of having Ceph/vCenter checkboxes on the first > page, > I can suggest as an idea (research is required) to define groups like > Storage Backend, > Network Manager and we will allow plugin developer to embed his option in > radiobutton > field on wizard pages. But plugin developer should not describe > conditions, he should > just write that his plugin is a Storage Backend, Hypervisor or new Network > Manager. > And the plugins e.g. Zabbix, Nagios, which don't belong to any of this > groups > should be shown as checkboxes on the first page of the wizard. > Why don't you just ditch "enableness" of plugins and get rid of this complex stuff? Can you explain why do you need to know if plugin is "enabled"? Let me summarize my opinion on this: - You don't need to know whether plugin is enabled or not. You need to know what tasks should be run and whether plugin is removable (anything else?). These conditions can be described by the DSL. - Explicitly asking the user to enable plugin for new environment should be considered as a last resort solution because it significantly impair our UX for inexperienced user. Just imagine: a new user which barely knows about OpenStack chooses a name for the environment, OS release and then he needs to choose plugins. Really? My proposal for "complex" plugin interface: there should be python classes with exactly the same fields from yaml files: plugin name, version, etc. But condition for cluster deletion and for tasks which are written in DSL in case of "simple" yaml config should become methods which plugin writer can make as complex as he wants. > > [1] > https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf > > On Sun, Nov 30, 2014 at 3:12 PM, Vitaly Kramskikh > wrote: > >> >> >> 2014-11-28 23:20 GMT+04:00 Dmitriy Shulyak : >> >>> >>>> - environment_config.yaml should contain exact config which will be >>>> mixed into cluster_attributes. 
No need to implicitly generate any controls >>>> like it is done now. >>>> >>>> Initially i had the same thoughts and wanted to use it the way it is, >>> but now i completely agree with Evgeniy that additional DSL will cause a lot >>> of problems with compatibility between versions and developer experience. >>> >> As far as I understand, you want to introduce another approach to >> describe UI part or plugins? >> >>> We need to search for alternatives.. >>> 1. for UI i would prefer separate tab for plugins, where user will be >>> able to enable/disable plugin explicitly. >>> >> Of course, we need a separate page for plugin management. >> >>> Currently settings tab is overloaded. >>> 2. on backend we need to validate plugins against certain env before >>> enabling it, >>> and for simple case we may expose some basic entities like >>> network_mode. >>> For case where you need complex logic - python code is far more flexible >>> that new DSL. >>> >>>> >>>> - metadata.yaml should also contain "is_removable" field. This >>>> field is needed to determine whether it is possible to remove installed >>>> plugin. It is impossible to remove plugins in the current implementation. >>>> This field should contain an expression written in our DSL which we already >>>> use in a few places. The LBaaS plugin also uses it to hide the checkbox if >>>> Neutron is not used, so even simple plugins like this need to utilize it. >>>> This field can also be autogenerated, for more complex plugins plugin >>>> writer needs to fix it manually. For example, for Ceph it could look like >>>> "settings:storage.volumes_ceph.value == false and >>>> settings:storage.images_ceph.value == false". >>>> >>>> How checkbox will help? There is several cases of plugin removal.. >>> >> It is not a checkbox, this is condition that determines whether the >> plugin is removable. 
It allows plugin developer specify when plguin can be >> safely removed from Fuel if there are some environments which were created >> after the plugin had been installed. >> >>> 1. Plugin is installed, but not enabled for any env - just remove the >>> plugin >>> 2. Plugin is installed, enabled and cluster deployed - forget about it >>> for now.. >>> 3. Plugin is installed and only enabled - we need to maintain state of >>> db consistent after plugin is removed, it is problematic, but possible >>> >> My approach also allows to eliminate "enableness" of plugins which will >> cause UX issues and issues like you described above. vCenter and Ceph also >> don't have "enabled" state. vCenter has hypervisor and storage, Ceph >> provides backends for Cinder and Glance which can be used simultaneously or >> only one of them can be used. >> >>> My main point that plugin is enabled/disabled explicitly by user, after >>> that we can decide ourselves can it be removed or not. >>> >>>> >>>> - For every task in tasks.yaml there should be added new >>>> "condition" field with an expression which determines whether the task >>>> should be run. In the current implementation tasks are always run for >>>> specified roles. For example, vCenter plugin can have a few tasks with >>>> conditions like "settings:common.libvirt_type.value == 'vcenter'" or >>>> "settings:storage.volumes_vmdk.value == true". Also, AFAIU, similar >>>> approach will be used in implementation of Granular Deployment feature. >>>> >>>> I had some thoughts about using DSL, it seemed to me especially >>> helpfull when you need to disable part of embedded into core functionality, >>> like deploying with another hypervisor, or network dirver (contrail for >>> example). And DSL wont cover all cases here, this quite similar to >>> metadata.yaml, simple cases can be covered by some variables in tasks (like >>> group, unique, etc), but complex is easier to test and describe in python. 
>>> >> Could you please provide example of such conditions? vCenter and Ceph can >> be turned into plugins using this approach. >> >> Also, I'm not against python version of plugins. It could look like a >> python class with exactly the same fields form YAML files, but conditions >> will be written in python. >> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> -- >> Vitaly Kramskikh, >> Software Engineer, >> Mirantis, Inc. >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From duncan.thomas at gmail.com Wed Dec 10 15:54:46 2014 From: duncan.thomas at gmail.com (Duncan Thomas) Date: Wed, 10 Dec 2014 17:54:46 +0200 Subject: [openstack-dev] [cinder] 3rd Party CI for drivers Message-ID: Hi All Hopefully this shouldn't come as a surprise to anybody, but the cinder team is requiring working third party CI for all drivers. For any driver that was merged before the start of Kilo, we expect working 3rd party CI posting on every commit before k-2, that is the 5th of Feb, or that driver is at risk of being removed from the tree. Please join #openstack-cinder to discuss CI requirements. -- Duncan Thomas -------------- next part -------------- An HTML attachment was scrubbed...
URL: From skywalker.nick at gmail.com Wed Dec 10 16:06:49 2014 From: skywalker.nick at gmail.com (Li Ma) Date: Thu, 11 Dec 2014 00:06:49 +0800 Subject: [openstack-dev] [oslo.messaging][devstack] ZeroMQ driver maintenance next steps In-Reply-To: <69DE76EB-8BCD-4D35-9099-13B71160AF80@doughellmann.com> References: <546A058C.7090800@ubuntu.com> <1C39F2D6-C600-4BA5-8F95-79E2E7DA7172@doughellmann.com> <6F5E7E0C-656F-4616-A816-BC2E340299AD@doughellmann.com> <37ecc12f660ddca2b5a76930c06c1ce6@sileht.net> <546B4DC7.9090309@ubuntu.com> <548679CD.4030205@gmail.com> <69DE76EB-8BCD-4D35-9099-13B71160AF80@doughellmann.com> Message-ID: <54886F99.5060205@gmail.com> On 2014/12/9 22:07, Doug Hellmann wrote: > On Dec 8, 2014, at 11:25 PM, Li Ma wrote: > >> Hi all, I tried to deploy zeromq by devstack and it definitely failed with lots of problems, like dependencies, topics, matchmaker setup, etc. I've already registered a blueprint for devstack-zeromq [1]. > I added the [devstack] tag to the subject of this message so that team will see the thread. Thanks for helping fix this critical bug, Doug. :-) @devstack: Currently, I cannot find any devstack-specs related repo for proposing blueprint details. So, if any devstack folks are here, please help review this blueprint [1]; any comments are welcome. This is really important for us. Actually, I thought that I could provide some bug fix to make it work, but after evaluation, I would like to make it a blueprint, because there is a lot of work and a blueprint is a suitable way to track everything. [1] https://blueprints.launchpad.net/devstack/+spec/zeromq >> Besides, I suggest to build a wiki page in order to trace all the workitems related with ZeroMQ. The general sections may be [Why ZeroMQ], [Current Bugs & Reviews], [Future Plan & Blueprints], [Discussions], [Resources], etc. > Coordinating the work on this via a wiki page makes sense. Please post the link when you're ready. > > Doug OK. I'll get it done soon. >> Any comments?
>> >> [1] https://blueprints.launchpad.net/devstack/+spec/zeromq >> >> cheers, >> Li Ma >> >> On 2014/11/18 21:46, James Page wrote: >>> -----BEGIN PGP SIGNED MESSAGE----- >>> Hash: SHA256 >>> >>> On 18/11/14 00:55, Denis Makogon wrote: >>>> So if zmq driver support in devstack is fixed, we can easily add a >>>> new job to run them in the same way. >>>> >>>> >>>> Btw this is a good question. I will take look at current state of >>>> zmq in devstack. >>> I don't think its that far off and its broken rather than missing - >>> the rpc backend code needs updating to use oslo.messaging rather than >>> project specific copies of the rpc common codebase (pre oslo). >>> Devstack should be able to run with the local matchmaker in most >>> scenarios but it looks like there was support for the redis matchmaker >>> as well. >>> >>> If you could take some time to fixup that would be awesome! >>> >>> - -- James Page >>> Ubuntu and Debian Developer >>> james.page at ubuntu.com >>> jamespage at debian.org >>> -----BEGIN PGP SIGNATURE----- >>> Version: GnuPG v1 >>> >>> iQIbBAEBCAAGBQJUa03HAAoJEL/srsug59jDdZQP+IeEvXAcfxNs2Tgvt5trnjgg >>> cnTrJPLbr6i/uIXKjRvNDSkJEdv//EjL/IRVRIf0ld09FpRnyKzUDMPq1CzFJqdo >>> 45RqFWwJ46NVA4ApLZVugJvKc4tlouZQvizqCBzDKA6yUsUoGmRpYFAQ3rN6Gs9h >>> Q/8XSAmHQF1nyTarxvylZgnqhqWX0p8n1+fckQeq2y7s3D3WxfM71ftiLrmQCWir >>> aPkH7/0qvW+XiOtBXVTXDb/7pocNZg+jtBkUcokORXbJCmiCN36DBXv9LPIYgfhe >>> /cC/wQFH4RUSkoj7SYPAafX4J2lTMjAd+GwdV6ppKy4DbPZdNty8c9cbG29KUK40 >>> TSCz8U3tUcaFGDQdBB5Kg85c1aYri6dmLxJlk7d8pOXLTb0bfnzdl+b6UsLkhXqB >>> P4Uc+IaV9vxoqmYZAzuqyWm9QriYlcYeaIJ9Ma5fN+CqxnIaCS7UbSxHj0yzTaUb >>> 4XgmcQBwHe22ouwBmk2RGzLc1Rv8EzMLbbrGhtTu459WnAZCrXOTPOCn54PoIgZD >>> bK/Om+nmTxepWD1lExHIYk3BXyZObxPO00UJHdxvSAIh45ROlh8jW8hQA9lJ9QVu >>> Cz775xVlh4DRYgenN34c2afOrhhdq4V1OmjYUBf5M4gS6iKa20LsMjp7NqT0jzzB >>> tRDFb67u28jxnIXR16g= >>> =+k0M >>> -----END PGP SIGNATURE----- >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at 
lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From eli at mirantis.com Wed Dec 10 16:31:41 2014 From: eli at mirantis.com (Evgeniy L) Date: Wed, 10 Dec 2014 20:31:41 +0400 Subject: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format In-Reply-To: References: Message-ID: On Wed, Dec 10, 2014 at 6:50 PM, Vitaly Kramskikh wrote: > > > 2014-12-10 16:57 GMT+03:00 Evgeniy L : > >> Hi, >> >> First let me describe what our plans for the nearest release. We want to >> deliver >> role as a simple plugin, it means that plugin developer can define his >> own role >> with yaml and also it should work fine with our current approach when >> user can >> define several fields on the settings tab. >> >> Also I would like to mention another thing which we should probably >> discuss >> in separate thread, how plugins should be implemented. We have two types >> of plugins, simple and complicated, the definition of simple - I can do >> everything >> I need with yaml, the definition of complicated - probably I have to >> write some >> python code. It doesn't mean that this python code should do absolutely >> everything it wants, but it means we should implement stable, documented >> interface where plugin is connected to the core. 
>> >> Now lets talk about UI flow, our current problem is how to get the >> information >> if plugins is used in the environment or not, this information is >> required for >> backend which generates appropriate tasks for task executor, also this >> information can be used in the future if we decide to implement plugins >> deletion >> mechanism. >> >> I didn't come up with a some new solution, as before we have two options >> to >> solve the problem: >> >> # 1 >> >> Use conditional language which is currently used on UI, it will look like >> Vitaly described in the example [1]. >> Plugin developer should: >> >> 1. describe at least one element for UI, which he will be able to use in >> task >> >> 2. add condition which is written in our own programming language >> >> Example of the condition for LBaaS plugin: >> >> condition: settings:lbaas.metadata.enabled == true >> >> 3. add condition to metadata.yaml a condition which defines if plugin is >> enabled >> >> is_enabled: settings:lbaas.metadata.enabled == true >> >> This approach has good flexibility, but also it has problems: >> >> a. It's complicated and not intuitive for plugin developer. >> > It is less complicated than python code > I'm not sure why are you talking about python code here, my point is we should not force developer to use this conditions in any language. Anyway I don't agree with the statement there are more people who know python than "fuel ui conditional language". > b. It doesn't cover case when the user installs 3rd party plugin >> which doesn't have any conditions (because of # a) and >> user doesn't have a way to disable it for environment if it >> breaks his configuration. >> > If plugin doesn't have conditions for tasks, then it has invalid metadata. > Yep, and it's a problem of the platform, which provides a bad interface. > >> # 2 >> >> As we discussed from the very beginning after user selects a release he >> can >> choose a set of plugins which he wants to be enabled for environment. 
>> After that we can say that plugin is enabled for the environment and we >> send >> tasks related to this plugin to task executor. >> >> >> My approach also allows to eliminate "enableness" of plugins which >> will cause UX issues and issues like you described above. vCenter and Ceph >> also don't have "enabled" state. vCenter has hypervisor and storage, Ceph >> provides backends for Cinder and Glance which can be used simultaneously or >> only one of them can be used. >> >> Both of described plugins have enabled/disabled state, vCenter is enabled >> when vCenter is selected as hypervisor. Ceph is enabled when it's selected >> as a backend for Cinder or Glance. >> > Nope, Ceph for Volumes can be used without Ceph for Images. Both of these > plugins can also have some granular tasks which are enabled by various > checkboxes (like VMware vCenter for volumes). How would you determine > whether tasks which installs VMware vCenter for volumes should run? > Why "nope"? I have "Cinder OR Glance". It can be easily handled in deployment script. >> If you don't like the idea of having Ceph/vCenter checkboxes on the first >> page, >> I can suggest as an idea (research is required) to define groups like >> Storage Backend, >> Network Manager and we will allow plugin developer to embed his option in >> radiobutton >> field on wizard pages. But plugin developer should not describe >> conditions, he should >> just write that his plugin is a Storage Backend, Hypervisor or new >> Network Manager. >> And the plugins e.g. Zabbix, Nagios, which don't belong to any of this >> groups >> should be shown as checkboxes on the first page of the wizard. >> > Why don't you just ditch "enableness" of plugins and get rid of this > complex stuff? Can you explain why do you need to know if plugin is > "enabled"? Let me summarize my opinion on this: > I described why we need it many times. 
Also it looks like you skipped another option, and I would like to see some more information on why you don't like it and why it's bad from a UX standpoint. > > - You don't need to know whether plugin is enabled or not. You need to > know what tasks should be run and whether plugin is removable (anything > else?). These conditions can be described by the DSL. > > I do need to know if plugin is enabled to figure out if it's removable; in fact those are the same things. > > - > - Explicitly asking the user to enable plugin for new environment > should be considered as a last resort solution because it significantly > impair our UX for inexperienced user. Just imagine: a new user which barely > knows about OpenStack chooses a name for the environment, OS release and > then he needs to choose plugins. Really? > > I really think that it's absolutely OK to show a checkbox with LBaaS for the user who found the plugin, downloaded it on the master and installed it with the CLI. And right now this user has to go to this settings tab and try to find this checkbox; also he may not find it, for example because of an incompatible release version, and it's clearly a bad UX. > My proposal for "complex" plugin interface: there should be python classes > with exactly the same fields from yaml files: plugin name, version, etc. > But condition for cluster deletion and for tasks which are written in DSL > in case of "simple" yaml config should become methods which plugin writer > can make as complex as he wants. > Why do you want to use python to define plugin name, version, etc.? It's static data which is used for installation; I don't think that in fuel client (or some other installation tool) we want to unpack the plugin and import this module to get the information which is required for installation.
> >> [1] >> https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf >> >> On Sun, Nov 30, 2014 at 3:12 PM, Vitaly Kramskikh < >> vkramskikh at mirantis.com> wrote: >> >>> >>> >>> 2014-11-28 23:20 GMT+04:00 Dmitriy Shulyak : >>> >>>> >>>>> - environment_config.yaml should contain exact config which will >>>>> be mixed into cluster_attributes. No need to implicitly generate any >>>>> controls like it is done now. >>>>> >>>>> Initially I had the same thoughts and wanted to use it the way it is, >>>> but now I completely agree with Evgeniy that additional DSL will cause a lot >>>> of problems with compatibility between versions and developer >>>> experience. >>>> >>> As far as I understand, you want to introduce another approach to >>> describe the UI part of plugins? >>> >>>> We need to search for alternatives.. >>>> 1. for the UI I would prefer a separate tab for plugins, where the user will be >>>> able to enable/disable plugins explicitly. >>>> >>> Of course, we need a separate page for plugin management. >>> >>>> Currently the settings tab is overloaded. >>>> 2. on the backend we need to validate plugins against a certain env before >>>> enabling it, >>>> and for the simple case we may expose some basic entities like >>>> network_mode. >>>> For cases where you need complex logic - python code is far more >>>> flexible than a new DSL. >>>> >>>>> >>>>> - metadata.yaml should also contain an "is_removable" field. This >>>>> field is needed to determine whether it is possible to remove an installed >>>>> plugin. It is impossible to remove plugins in the current implementation. >>>>> This field should contain an expression written in our DSL which we already >>>>> use in a few places. The LBaaS plugin also uses it to hide the checkbox if >>>>> Neutron is not used, so even simple plugins like this need to utilize it. >>>>> This field can also be autogenerated; for more complex plugins the plugin >>>>> writer needs to fix it manually.
For example, for Ceph it could look like >>>>> "settings:storage.volumes_ceph.value == false and >>>>> settings:storage.images_ceph.value == false". >>>>> >>>>> How will a checkbox help? There are several cases of plugin removal.. >>>> >>> It is not a checkbox, this is a condition that determines whether the >>> plugin is removable. It allows the plugin developer to specify when the plugin can be >>> safely removed from Fuel if there are some environments which were created >>> after the plugin had been installed. >>> >>>> 1. Plugin is installed, but not enabled for any env - just remove the >>>> plugin >>>> 2. Plugin is installed, enabled and cluster deployed - forget about it >>>> for now.. >>>> 3. Plugin is installed and only enabled - we need to keep the state of the >>>> db consistent after the plugin is removed, it is problematic, but possible >>>> >>> My approach also allows to eliminate "enableness" of plugins which will >>> cause UX issues and issues like you described above. vCenter and Ceph also >>> don't have "enabled" state. vCenter has hypervisor and storage, Ceph >>> provides backends for Cinder and Glance which can be used simultaneously or >>> only one of them can be used. >>> >>>> My main point is that the plugin is enabled/disabled explicitly by the user; after >>>> that we can decide ourselves whether it can be removed or not. >>>> >>>>> >>>>> - For every task in tasks.yaml a new >>>>> "condition" field should be added with an expression which determines whether the task >>>>> should be run. In the current implementation tasks are always run for >>>>> specified roles. For example, the vCenter plugin can have a few tasks with >>>>> conditions like "settings:common.libvirt_type.value == 'vcenter'" or >>>>> "settings:storage.volumes_vmdk.value == true". Also, AFAIU, a similar >>>>> approach will be used in the implementation of the Granular Deployment feature.
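To make the "condition" idea concrete, here is a toy illustration of how an expression like the ones quoted above might be evaluated against cluster attributes. This is not the actual Fuel DSL parser — it is an assumed mini-evaluator that only handles a single `==`/`!=` comparison, not `and`-combined expressions:

```python
# Toy evaluator for condition strings such as
#   "settings:common.libvirt_type.value == 'vcenter'"
# Illustrative only; the real DSL supports more than one comparison.
import ast


def _resolve(data, ref):
    # "settings:storage.volumes_ceph.value" -> walk the nested dict.
    scope, _, path = ref.partition(':')
    node = data[scope]
    for key in path.split('.'):
        node = node[key]
    return node


def evaluate(condition, data):
    lhs, op, rhs = condition.split(None, 2)
    value = _resolve(data, lhs)
    # Map YAML-style booleans to Python literals before parsing.
    # (A naive replace like this would break string literals that
    # contain 'true'/'false' -- acceptable for a toy example.)
    expected = ast.literal_eval(
        rhs.replace('false', 'False').replace('true', 'True'))
    return value == expected if op == '==' else value != expected
```

With something like this, a task runner could skip any task whose condition evaluates to False for the current cluster settings.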
>>>>> >>>>> I had some thoughts about using DSL, it seemed to me especially >>>> helpful when you need to disable part of embedded into core functionality, >>>> like deploying with another hypervisor, or network driver (contrail for >>>> example). And DSL won't cover all cases here, this is quite similar to >>>> metadata.yaml, simple cases can be covered by some variables in tasks (like >>>> group, unique, etc), but complex is easier to test and describe in python. >>>> >>> Could you please provide example of such conditions? vCenter and Ceph >>> can be turned into plugins using this approach. >>> >>> Also, I'm not against python version of plugins. It could look like a >>> python class with exactly the same fields from YAML files, but conditions >>> will be written in python. >>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> >>> -- >>> Vitaly Kramskikh, >>> Software Engineer, >>> Mirantis, Inc. >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Vitaly Kramskikh, > Software Engineer, > Mirantis, Inc. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen.kf.wong at gmail.com Wed Dec 10 16:33:20 2014 From: stephen.kf.wong at gmail.com (Stephen Wong) Date: Wed, 10 Dec 2014 08:33:20 -0800 Subject: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework In-Reply-To: References: Message-ID: Hi Murali, There is already a ServiceVM project (Tacker), currently under development on stackforge: https://wiki.openstack.org/wiki/ServiceVM If you are interested in this topic, please take a look at the wiki page above and see if the project's goals align with yours. If so, you are certainly welcome to join the IRC meeting and start to contribute to the project's direction and design.
Thanks, - Stephen On Wed, Dec 10, 2014 at 7:01 AM, Murali B wrote: > Hi keshava, > > We would like to contribute towards service chain and NFV > > Could you please share any document you have related to service VMs > > The service chain can be achieved if we are able to redirect the traffic to > the service VM using ovs-flows > > in this case we don't need to have routing enabled on the service VM (traffic > is redirected at L2). > > All the tenant VMs in the cloud could use this service VM's services by adding > the ovs-rules in OVS > > > Thanks > -Murali > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From blak111 at gmail.com Wed Dec 10 16:36:35 2014 From: blak111 at gmail.com (Kevin Benton) Date: Wed, 10 Dec 2014 09:36:35 -0700 Subject: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup? In-Reply-To: References: <87ppc090ne.fsf@metaswitch.com> <87d27z8kxj.fsf@metaswitch.com> Message-ID: What would the port binding operation do in this case? Just mark the port as bound and nothing else? On Wed, Dec 10, 2014 at 12:48 AM, henry hly wrote: > Hi Kevin, > > Does it make sense to introduce a "GeneralvSwitch MD", working with > VIF_TYPE_TAP? It would just do a very simple port bind, just like OVS and > bridge. Then anyone can implement their backend and agent without > patching neutron drivers. > > Best Regards > Henry > > On Fri, Dec 5, 2014 at 4:23 PM, Kevin Benton wrote: > > I see the difference now. > > The main concern I see with the NOOP type is that creating the virtual > > interface could require different logic for certain hypervisors. In that > > case Neutron would now have to know things about nova and to me it seems > > like that's slightly too far the other direction.
> > > > On Thu, Dec 4, 2014 at 8:00 AM, Neil Jerram > > wrote: > >> > >> Kevin Benton writes: > >> > >> > What you are proposing sounds very reasonable. If I understand > >> > correctly, the idea is to make Nova just create the TAP device and get > >> > it attached to the VM and leave it 'unplugged'. This would work well > >> > and might eliminate the need for some drivers. I see no reason to > >> > block adding a VIF type that does this. > >> > >> I was actually floating a slightly more radical option than that: the > >> idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does > >> absolutely _nothing_, not even create the TAP device. > >> > >> (My pending Nova spec at https://review.openstack.org/#/c/130732/ > >> proposes VIF_TYPE_TAP, for which Nova _does_ creates the TAP device, but > >> then does nothing else - i.e. exactly what you've described just above. > >> But in this email thread I was musing about going even further, towards > >> providing a platform for future networking experimentation where Nova > >> isn't involved at all in the networking setup logic.) > >> > >> > However, there is a good reason that the VIF type for some OVS-based > >> > deployments require this type of setup. The vSwitches are connected to > >> > a central controller using openflow (or ovsdb) which configures > >> > forwarding rules/etc. Therefore they don't have any agents running on > >> > the compute nodes from the Neutron side to perform the step of getting > >> > the interface plugged into the vSwitch in the first place. For this > >> > reason, we will still need both types of VIFs. > >> > >> Thanks. I'm not advocating that existing VIF types should be removed, > >> though - rather wondering if similar function could in principle be > >> implemented without Nova VIF plugging - or what that would take. > >> > >> For example, suppose someone came along and wanted to implement a new > >> OVS-like networking infrastructure? 
In principle could they do that > >> without having to enhance the Nova VIF driver code? I think at the > >> moment they couldn't, but that they would be able to if VIF_TYPE_NOOP > >> (or possibly VIF_TYPE_TAP) was already in place. In principle I think > >> it would then be possible for the new implementation to specify > >> VIF_TYPE_NOOP to Nova, and to provide a Neutron agent that does the kind > >> of configuration and vSwitch plugging that you've described above. > >> > >> Does that sound correct, or am I missing something else? > >> > >> >> 1. When the port is created in the Neutron DB, and handled (bound > >> > etc.) > >> > by the plugin and/or mechanism driver, the TAP device name is already > >> > present at that time. > >> > > >> > This is backwards. The tap device name is derived from the port ID, so > >> > the port has already been created in Neutron at that point. It is just > >> > unbound. The steps are roughly as follows: Nova calls neutron for a > >> > port, Nova creates/plugs VIF based on port, Nova updates port on > >> > Neutron, Neutron binds the port and notifies agent/plugin/whatever to > >> > finish the plumbing, Neutron notifies Nova that port is active, Nova > >> > unfreezes the VM. > >> > > >> > None of that should be affected by what you are proposing. The only > >> > difference is that your Neutron agent would also perform the > >> > 'plugging' operation. > >> > >> Agreed - but thanks for clarifying the exact sequence of events. > >> > >> I wonder if what I'm describing (either VIF_TYPE_NOOP or VIF_TYPE_TAP) > >> might fit as part of the "Nova-network/Neutron Migration" priority > >> that's just been announced for Kilo. I'm aware that a part of that > >> priority is concerned with live migration, but perhaps it could also > >> include the goal of future networking work not having to touch Nova > >> code?
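The boot-time workflow Kevin lays out above can be condensed into a short trace. The helper names below are illustrative; only the tap-name rule reflects what the thread states (the device name is derived from the port ID, "tap" plus a truncated UUID, since Linux interface names are length-limited):

```python
# Sketch of the port workflow described above, reduced to a trace of
# which component acts at each step. Helper names are illustrative.

def tap_device_name(port_id):
    # The tap device name is derived from the port ID; it is truncated
    # because Linux interface names are length-limited.
    return ('tap' + port_id)[:14]

def boot_sequence(port_id):
    return [
        'nova: request port %s from neutron' % port_id,
        'nova: create/plug VIF %s' % tap_device_name(port_id),
        'nova: update port on neutron (host binding details)',
        'neutron: bind port, notify agent to finish the plumbing',
        'neutron: notify nova that the port is ACTIVE',
        'nova: unfreeze the VM',
    ]
```

Under VIF_TYPE_TAP only the second step would remain in Nova; under the more radical VIF_TYPE_NOOP even that would move to a Neutron agent.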
> >> > >> Regards, > >> Neil > > > > > > > > > > -- > > Kevin Benton > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Kevin Benton -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtreinish at kortar.org Wed Dec 10 16:39:11 2014 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 10 Dec 2014 11:39:11 -0500 Subject: [openstack-dev] [QA] Meeting Thursday December 11th at 22:00 UTC Message-ID: <20141210163911.GA788@Sazabi.treinish> Hi everyone, Just a quick reminder that the weekly OpenStack QA team IRC meeting will be tomorrow Thursday, December 11th at 22:00 UTC in the #openstack-meeting channel. The agenda for tomorrow's meeting can be found here: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting Anyone is welcome to add an item to the agenda. It's also worth noting that a few weeks ago we started having a regular dedicated Devstack topic during the meetings. So if anyone is interested in Devstack development please join the meetings to be a part of the discussion. To help people figure out what time 22:00 UTC is in other timezones tomorrow's meeting will be at: 17:00 EST 07:00 JST 08:30 ACDT 23:00 CET 16:00 CST 14:00 PST -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From jaypipes at gmail.com Wed Dec 10 16:39:34 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 10 Dec 2014 11:39:34 -0500 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: <20141210093101.GC6450@redhat.com> References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> Message-ID: <54887746.1010508@gmail.com> On 12/10/2014 04:31 AM, Daniel P. Berrange wrote: > So the problem of Nova review bandwidth is a constant problem across all > areas of the code. We need to solve this problem for the team as a whole > in a much broader fashion than just for people writing VIF drivers. The > VIF drivers are really small pieces of code that should be straightforward > to review & get merged in any release cycle in which they are proposed. > I think we need to make sure that we focus our energy on doing this and > not ignoring the problem by breaking stuff off out of tree. +1 well said. -jay From blak111 at gmail.com Wed Dec 10 16:40:00 2014 From: blak111 at gmail.com (Kevin Benton) Date: Wed, 10 Dec 2014 09:40:00 -0700 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: >Removing everything out of tree, and leaving only the Neutron API framework as an integration platform, would lower the attractiveness of the whole Openstack Project. Without a default "good enough" reference backend from the community, customers have to depend on packagers to fully test all backends for them. That's not what's being proposed. Please read the spec. There will still be a tested reference implementation from the community that gates all changes. Where the code lives has no impact on customers.
On Wed, Dec 10, 2014 at 12:32 AM, loy wolfe wrote: > Removing everything out of tree, and leaving only the Neutron API framework as > an integration platform, would lower the attractiveness of the whole > Openstack Project. Without a default "good enough" reference backend > from the community, customers have to depend on packagers to fully test > all backends for them. Can we imagine nova without kvm, glance without > swift? Cinder is weak because of the default lvm backend; if in the future > Ceph became the default it would be much better. > > If the goal of this decomposition is eventually moving the default > reference driver out, and the in-tree OVS backend is an eyesore, then > it's better to split the Neutron core into a base repo and a vendor repo. > They would only share a common base API/DB model; each vendor can extend their > API and DB model freely, using a shim proxy to delegate all the service > logic to their backend controller. They can choose to stay out of > tree, or in tree (vendor repo) with the previous policy of > contributing code review in exchange for their code being reviewed by other > vendors. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Kevin Benton -------------- next part -------------- An HTML attachment was scrubbed... URL: From jason at koelker.net Wed Dec 10 16:52:53 2014 From: jason at koelker.net (Jason Kölker) Date: Wed, 10 Dec 2014 16:52:53 +0000 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration In-Reply-To: References: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> <9EDBC83C95615E4A97C964A30A3E18AC2F695F28@ORD1EXD01.RACKSPACE.CORP> Message-ID: On Mon, Dec 8, 2014 at 7:57 PM, Carl Baldwin wrote: > I'll be traveling around the time of the L3 meeting this week. My > flight leaves 40 minutes after the meeting and I might have trouble > attending.
It might be best to put it off a week or to plan another > time -- maybe Friday -- when we could discuss it in IRC or in a > Hangout. Carl, Very glad to see the work the L3 team has been doing towards this. I'm still digesting the specs/blueprints, but as you stated they are very much in the direction we'd like to head as well. I'll start lurking in the L3 meetings to get more familiar with the current state of things as I've been disconnected from upstream for a while. I'm `jkoelker` on freenode or `jkoelker at gmail.com` for hangouts if you wanna chat. Happy Hacking! 7-11 From vkozhukalov at mirantis.com Wed Dec 10 16:53:07 2014 From: vkozhukalov at mirantis.com (Vladimir Kozhukalov) Date: Wed, 10 Dec 2014 20:53:07 +0400 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: References: <6B37BF6C-3ABF-490F-B35E-E4F1FC6AEF8D@jimrollenhagen.com> <2FD74259-FECF-4B5E-99FE-6A3EB6976582@mirantis.com> <20141209150048.GB5494@jimrollenhagen.com> <1A3C52DFCD06494D8528644858247BF0178162E7@EX10MBOX03.pnnl.gov> Message-ID: Devananda, Thank you for such a constructive letter. First of all, just to make sure we are on the same page, we are totally +1 for using any tool which meets our requirements and we are totally +1 for working together on the same problems. As you remember, we suggested adding advanced partitioning capabilities (md and lvm) into IPA. I see that it is a layer violation for Ironic, I see it is not in cloud scope, but we need these features, because our users want them and because our use case is deployment. For me it seems OK when a tool has a feature which is not mandatory to use. And we didn't start Fuel Agent until these features were rejected from being merged into IPA. If we had a chance to implement them in terms of IPA, that would be the preferred way for us. Some details: * Power management For power management Cobbler uses so-called 'fence agents'. It is just an OS package which provides a bunch of scripts using ILO, IPMI, DRAC clients. Currently we extended this set of agents with a so-called 'ssh' agent.
Currently we extended this set of agents with so called 'ssh' agent. This ssh agent is able to run 'reboot' command inside OS via ssh. We use this agent by default because many of our users do their experiments on BMC-free hardware. That is why this spec https://review.openstack.org/#/c/138115/ refers to SSH power driver. I know Ironic already has SSH power driver which runs 'virsh' (a little bit confusing) command via ssh and it is supposed to use it for experimental envs. The suggestion to implement another SSH power driver can confuse people. My suggestion is to extend Ironic SSH power driver so as to make it able to run any command (virsh, vbox or even reboot) from a set. And maybe renaming this driver into something like 'experimental' or 'development' is not a very bad idea. I am aware that Ironic wants to remove this driver at all as it is used for tests only. But there are lots of different power cases (including w/o BMC) hardware and we definitely need to have a place where to put this non standard power related stuff. I believe many people are interested in having such a workaround. And we certainly need other Ironic power management capabilities like ILO, DRAC, IPMI. We are also potentially very interested in developing other hardware management capabilities like configuring hardware RAIDs, BIOS/UEFI, etc. * DHCP, TFTP, DNS management We are aware of the way how Ironic manages DHCP (not directly). As you mentioned, currently Ironic has a pluggable framework for DHCP and the only in-tree driver is neutron. And we are aware that implementing kind of dnsmasq wrapper immediately breaks Ironic scaling scheme (many conductors). When I wrote 'it is planned to implement dnsmasq plugin' in this spec https://review.openstack.org/#/c/138301 I didn't mean Ironic is planning to do this. I meant Fuel team is planning to implement this dnsmasq plugin out of Ironic tree (will point it out explicitly) just to be able to fit Fuel release cycle (iterative development). 
Maybe in the future we will consider switching to Neutron for managing networks (out of scope of this discussion). This Ironic Fuel Agent driver is supposed to use Ironic abstractions to configure DHCP, i.e. call plugin methods update_port_dhcp_opts, update_port_address, update_dhcp_opts, get_ip_addresses, NOT changing Ironic core (again, we will point it out explicitly). * IPA vs. Fuel Agent My suggestion here is to stop thinking of Fuel Agent as Fuel-only stuff. I hope it is clear by now that Fuel Agent is just a generic tool which is about the 'operator == user within traditional IT shop' use case. And this use case requires all that stuff like LVM and enormous flexibility which does not even have a chance to be considered as a part of IPA in the next few months. A good decision here might be implementing a Fuel Agent driver and then working on distinguishing common IPA and Fuel Agent parts and putting them into one tree (long-term perspective). If it is a big deal we can even rename Fuel Agent to something which sounds more neutral (not related to Fuel) and put it into a separate git. If this is what FuelAgent is about, why is there so much resistance to > contributing that functionality to the component which is already > integrated with Ironic? Why complicate matters for both users and > developers by adding *another* deploy agent that does (or will soon do) the > same things? Briefly, we are glad to contribute to IPA but let's do things iteratively. I somehow need to deliver power and dhcp management + image based provisioning by March 2015. According to my previous experience of contributing to IPA it is almost impossible to merge everything I need by that time. It is possible to implement a Fuel Agent driver by that time. It is also possible to implement something on my own, not integrating Ironic into Fuel at all. As a long-term perspective, if it's OK to land MD and LVM into IPA we definitely can do that.
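For illustration, an out-of-tree DHCP provider exposing the plugin methods Vladimir names above might be shaped like the sketch below. The method names come from the thread; the signatures and the in-memory bookkeeping are assumptions for the example, not Ironic's actual base class or a real dnsmasq integration:

```python
# Hedged sketch of a dnsmasq-style DHCP provider implementing the
# interface methods mentioned in the thread. Signatures and storage
# are illustrative assumptions only.

class DnsmasqDHCPProvider:
    def __init__(self):
        self._leases = {}   # port_id -> (mac, ip)
        self._opts = {}     # port_id -> list of DHCP options

    def update_port_dhcp_opts(self, port_id, dhcp_options):
        # e.g. [{'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'}]
        self._opts[port_id] = list(dhcp_options)

    def update_port_address(self, port_id, mac, ip=None):
        self._leases[port_id] = (mac, ip)

    def update_dhcp_opts(self, port_ids, options):
        # Apply the same options to every port involved in a deploy.
        for port_id in port_ids:
            self.update_port_dhcp_opts(port_id, options)

    def get_ip_addresses(self, port_ids):
        return [self._leases[p][1] for p in port_ids if p in self._leases]
```

A real implementation would rewrite dnsmasq host/option files and signal the daemon; as the thread notes, running one dnsmasq per conductor is exactly where the scaling concern comes in.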
In summary, if I understand correctly, it seems as though you're trying to > fit Ironic into Cobbler's way of doing things, rather than recognize that > Ironic approaches provisioning in a fundamentally different way. You are only partly correct here. We are trying to fit Ironic into part of Cobbler's way of doing things. We are planning to get rid of native OS installers (anaconda) and therefore get rid of Cobbler and switch to an image-based model. We have some parts in Fuel which were designed to fit Cobbler's way of doing things and we cannot throw them away at once. Our delivery cycle forces us to do things iteratively. That is the reason for having the dnsmasq wrapper, for example, as a temporary scheme. Fuel is continuously evolving to address modern trends. Your use case: > * is not cloud-like > Exactly. * does not include Nova or Neutron, but will duplicate functionality of > both (you need a scheduler and all the logic within nova.virt.ironic, and > something to manage DHCP and DNS assignment) > Currently the Fuel user plays the role of a very smart Nova scheduler (the user looks at node parameters and chooses which nodes are suitable) and yes, Fuel partly implements some logic similar to that in nova.virt.ironic. > * would use Ironic to manage diverse hardware, which naturally requires > some operator-driven customization, but still exposes the messy > configuration bits^D^Dchoices to users at deploy time > Exactly. * duplicates some of the functionality already available in other drivers Maybe, if you mean the IPA driver. There are certain aspects of the proposal which I like, though: > * using SSH rather than HTTP for remote access to the deploy agent > yes. > * support for putting the root partition on a software RAID > not only MD, root fs over LVM is also in our scope. > * integration with another provisioning system, without any API changes yes. Besides, we install grub and a linux kernel on the hard drive (local boot). Why does Astute need to pass this to Ironic?
It seems like Astute could > simply SSH to the node running FuelAgent at this point, without Ironic > being involved That is exactly what it does right now. But since we need power management (IPMI, ILO, etc.) and Ironic implements that, and since Ironic is going to implement other hardware management capabilities, it sounds rational to use it for our use case. Why are we interested in using OpenStack services (definitely Ironic, maybe Neutron, maybe Glance, already Keystone)? The answer is simple: because some of those services address our needs. No conspiracy theory. Totally pragmatic. Vladimir Kozhukalov On Wed, Dec 10, 2014 at 1:06 AM, Devananda van der Veen < devananda.vdv at gmail.com> wrote: > On Tue Dec 09 2014 at 9:45:51 AM Fox, Kevin M wrote: >> We've been interested in Ironic as a replacement for Cobbler for some of >> our systems and have been kicking the tires a bit recently. >> >> While initially I thought this thread was probably another "Fuel not >> playing well with the community" kind of thing, I'm not thinking that any >> more. It's deeper than that. >> > There are aspects to both conversations here, and you raise many valid > points. > Cloud provisioning is great. I really REALLY like it. But one of the >> things that makes it great is the nice, pretty, cute, uniform, standard >> "hardware" the vm gives the user. Ideally, the physical hardware would >> behave the same. But, >> "No Battle Plan Survives Contact With the Enemy". The sad reality is, >> most hardware is different from each other. Different drivers, different >> firmware, different different different. >> > Indeed, hardware is different. And no matter how homogeneous you *think* > it is, at some point, some hardware is going to fail^D^D^Dbehave > differently than some other piece of hardware. > > One of the primary goals of Ironic is to provide a common *abstraction* to > all the vendor differences, driver differences, and hardware differences.
> There's no magic in that -- underneath the covers, each driver is going to > have to deal with the unpleasant realities of actual hardware that is > actually different. > > >> One way the cloud enables this isolation is by forcing the cloud admins >> to install things and deal with the grungy hardware to make the interface >> nice and clean for the user. For example, if you want greater mean time >> between failures of nova compute nodes, you probably use a RAID 1. Sure, >> it's kind of a pet thing to do, but it's up to the cloud admin to >> decide what's "better", buying more hardware, or paying for more admin/user >> time. Extra hard drives are dirt cheap... >> >> So, in reality Ironic is playing in a space somewhere between "I want to >> use cloud tools to deploy hardware, yay!" and "ewww.., physical hardware's >> nasty. You have to know all these extra things and do all these extra >> things that you don't have to do with a vm"... I believe Ironic's going to >> need to be able to deal with this messiness in as clean a way as possible. > > > If by "clean" you mean, expose a common abstraction on top of all those > messy differences -- then we're on the same page. I would welcome any > feedback as to where that abstraction leaks today, and on both spec and > code reviews that would degrade or violate that abstraction layer. I think > it is one of, if not *the*, defining characteristics of the project. > > >> But that's my opinion. If the team feels it's not a valid use case, then >> we'll just have to use something else for our needs. I really really want >> to be able to use Heat to deploy whole physical distributed systems though. >> >> Today, we're using software RAID over two disks to deploy our nova >> compute. Why? We have some very old disks we recovered for one of our >> clouds and they fail often. nova-compute is pet enough to benefit somewhat >> from being able to swap out a disk without much effort.
If we were to use >> Ironic to provision the compute nodes, we need to support a way to do the >> same. >> > > I have made the (apparently incorrect) assumption that anyone running > anything sensitive to disk failures in production would naturally have a > hardware RAID, and that, therefore, Ironic should be capable of setting up > that RAID in accordance with a description in the Nova flavor metadata -- > but did not need to be concerned with software RAIDs. > > Clearly, there are several folks who have the same use-case in mind, but > do not have hardware RAID cards in their servers, so my initial assumption > was incorrect :) > > I'm fairly sure that the IPA team would welcome contributions to this > effect. > > We're looking into ways of building an image that has a software RAID >> pre-set-up, and expands it on boot. > > > Awesome! I hope that work will make its way into diskimage-builder ;) > > (As an aside, I suggested this to the Fuel team back in Atlanta...) > > >> This requires each image to be customized for this case though. I can see >> Fuel not wanting to provide two different sets of images, "hardware raid" >> and "software raid", that have the same contents in them, with just >> different partitioning layouts... If we want users to not have to care >> about partition layout, this is also not ideal... >> > > End-users are probably not generating their own images for bare metal > (unless user == operator, in which case, it should be fine). > > >> Assuming Ironic can be convinced that these features really would be >> needed, perhaps the solution is a middle ground between the pxe driver and >> the agent? >> > > I've been rallying for a convergence between the feature sets of these > drivers -- specifically, that the agent should support partition-based > images, and also support copy-over-iscsi as a deployment model.
In > parallel, Lucas had started working on splitting the deploy interface into > both boot and deploy, at which point we may be able to deprecate the current > family of pxe_* drivers. But I'm birdwalking... > > >> Associate partition information at the flavor level. The admin can decide >> the best partitioning layout for a given piece of hardware... The user doesn't have >> to care any more. Two flavors for the same hardware could be "4 9's" or "5 >> 9's" or something that way. >> > > Bingo. This is the approach we've been discussing over the past two years > - nova flavors could include metadata which gets passed down to Ironic and > applied at deploy-time - but it hasn't been as high a priority as other > things. Though not specifically covering partitions, there are specs up for > Nova [0] and Ironic [1] for this workflow. > > >> Modify the agent to support a pxe-style image in addition to full layout, >> and have the agent partition/set up RAID and lay down the image into it. >> Modify the agent to support running grub2 at the end of deployment. >> > Or at least make the agent pluggable to support adding these options. >> >> This does seem a bit backwards from the way the agent has been going. The >> pxe driver was kind of Linux-specific. The agent is not... So maybe that >> does imply a 3rd driver may be beneficial... But it would be nice to have >> one driver, the agent, in the end that supports everything. >> > > We'll always need different drivers to handle different kinds of hardware. > And we have two modes of deployment today (copy-image-over-iscsi, > agent-downloads-locally) and could have more in the future (bittorrent, > multicast, ...?). That said, I don't know why a single agent couldn't > support multiple modes of deployment.
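The flavor-metadata workflow discussed above (nova flavor metadata passed down to Ironic and applied at deploy time) boils down to matching flavor extra specs against node capabilities. A toy, stdlib-only illustration of that matching follows; the key names are made up for illustration and are not the real Nova/Ironic interface defined by the specs in [0] and [1]:

```python
# Toy sketch of flavor-metadata scheduling: select nodes whose advertised
# capabilities satisfy a flavor's "capabilities:*" extra specs.
def matching_nodes(flavor_extra_specs, nodes):
    """Return names of nodes satisfying every capability the flavor asks for."""
    wanted = {k.split(':', 1)[1]: v
              for k, v in flavor_extra_specs.items()
              if k.startswith('capabilities:')}
    return [n['name'] for n in nodes
            if all(n['capabilities'].get(k) == v for k, v in wanted.items())]

# A hypothetical "software RAID 1" flavor and two bare-metal nodes.
flavor = {'capabilities:raid_level': '1'}
nodes = [
    {'name': 'node-1', 'capabilities': {'raid_level': '1'}},
    {'name': 'node-2', 'capabilities': {'raid_level': '0'}},
]
print(matching_nodes(flavor, nodes))  # ['node-1']
```

Under this model, the "4 9's" and "5 9's" flavors Kevin proposes would just be two extra-spec dictionaries for the same hardware, differing in the partitioning/RAID values the operator chose.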
> > > -Devananda > > > [0] https://review.openstack.org/#/c/136104/ > > [1] > > http://specs.openstack.org/openstack/ironic-specs/specs/backlog/driver-capabilities.html > and > https://review.openstack.org/#/c/137363/ > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Wed Dec 10 16:16:39 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 10 Dec 2014 17:16:39 +0100 Subject: [openstack-dev] [oslo] interesting problem with config filter In-Reply-To: <4BCA7D02-38D6-4B5B-A496-8EB259C1A792@doughellmann.com> References: <4BCA7D02-38D6-4B5B-A496-8EB259C1A792@doughellmann.com> Message-ID: <548871E7.8060805@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 08/12/14 21:58, Doug Hellmann wrote: > As we've discussed a few times, we want to isolate applications > from the configuration options defined by libraries. One way we > have of doing that is the ConfigFilter class in oslo.config. When a > regular ConfigOpts instance is wrapped with a filter, a library can > register new options on the filter that are not visible to anything > that doesn't have the filter object. Unfortunately, the Neutron > team has identified an issue with this approach. We have a bug > report [1] from them about the way we're using config filters in > oslo.concurrency specifically, but the issue applies to their use > everywhere. > > The neutron tests set the default for oslo.concurrency's lock_path > variable to "$state_path/lock", and the state_path option is > defined in their application. With the filter in place, > interpolation of $state_path to generate the lock_path value fails > because state_path is not known to the ConfigFilter instance. It's not just unit tests.
It's also in generic /etc/neutron.conf file installed with the rest of neutron: https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L23 There is nothing wrong in the way neutron sets it up, so I expect the fix to go in either oslo.concurrency or oslo.config, whichever is achievable. > > The reverse would also happen (if the value of state_path was > somehow defined to depend on lock_path), and that's actually a > bigger concern to me. A deployer should be able to use > interpolation anywhere, and not worry about whether the options are > in parts of the code that can see each other. The values are all in > one file, as far as they know, and so interpolation should "just > work". +1. It's not deployer's job to read code and determine which options are substitution-aware and which are not. > > I see a few solutions: > > 1. Don't use the config filter at all. +1. And that's not just for oslo.concurrency case, but globally. > 2. Make the config filter able to add new options and still see > everything else that is already defined (only filter in one > direction). 3. Leave things as they are, and make the error message > better. > > Because of the deployment implications of using the filter, I'm > inclined to go with choice 1 or 2. However, choice 2 leaves open > the possibility of a deployer wanting to use the value of an option > defined by one filtered set of code when defining another. I don't > know how frequently that might come up, but it seems like the error > would be very confusing, especially if both options are set in the > same config file. > > I think that leaves option 1, which means our plans for hiding > options from applications need to be rethought. > > Does anyone else see another solution that I'm missing? I'm not an oslo guy, so I leave the resolution to you.
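The failure mode described above can be modeled with a few lines of stdlib Python. This is a toy stand-in for oslo.config's ConfigFilter, not its real API: options registered on the filter can only see each other, so `$state_path` never resolves, while option 2's "filter in one direction" fallback makes it work:

```python
from string import Template

class FilteredConf:
    """Toy model of a config filter: options registered on the filter
    are hidden from the application, but also cannot see its options."""
    def __init__(self, app_opts):
        self._app = app_opts   # application-level options (e.g. state_path)
        self._own = {}         # options registered on the filter itself
    def register(self, name, default):
        self._own[name] = default
    def get(self, name):
        # The bug: substitution only sees the filter's own options,
        # so $state_path is unknown and the lookup fails.
        return Template(self._own[name]).substitute(self._own)

app = {'state_path': '/var/lib/neutron'}
fconf = FilteredConf(app)
fconf.register('lock_path', '$state_path/lock')

try:
    fconf.get('lock_path')
except KeyError as exc:
    print('interpolation failed:', exc)   # state_path is not visible

# Option 2 ("only filter in one direction"): fall back to the wrapped
# options for substitution, keeping lock_path itself hidden from the app.
merged = dict(app, **fconf._own)
print(Template(fconf._own['lock_path']).substitute(merged))
```

The merged lookup yields `/var/lib/neutron/lock`, which is what a deployer writing `lock_path = $state_path/lock` in neutron.conf would expect.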
> > Doug > > [1] https://bugs.launchpad.net/oslo.config/+bug/1399897 > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUiHHnAAoJEC5aWaUY1u57DOsH/i+FY46YWH2lSguYPS5h+Ciu S/fwzamKrcF6Y2pipl+j55CiIyIejlnXwE+UV90k4gM9G6vl4T6u1w6N9dus67pu 6kWHty4eDGHGIuj0iGWIWUNPN6ChHNmhxoFadvZKCBWULeTvh3DL/Ply4MYx4bqF MbtpAE5Qh2OUUO977kSjcULZtgrIYeInKd5tdZkLmXf0PQnMKU9rEwa8DNZL24Ro sBZ6GKDXfa4vqk5alFiWoqxW/MUoi6Ngxm2T0OJZy20L6BL5n8sT96rinAbtGzo5 CELu91D6UeFR/rry2bI6DIS7rPN4BHCsSTZ1cXK/wxLHTqaSP50qj2phZ7zGbVA= =IuLJ -----END PGP SIGNATURE----- From clint at fewbar.com Wed Dec 10 17:15:41 2014 From: clint at fewbar.com (Clint Byrum) Date: Wed, 10 Dec 2014 09:15:41 -0800 Subject: [openstack-dev] [TripleO] mid-cycle details final draft In-Reply-To: <1417474177-sup-8420@fewbar.com> References: <1417474177-sup-8420@fewbar.com> Message-ID: <1418231671-sup-7546@fewbar.com> Just FYI, we ran into a last minute scheduling conflict with the venue and are sorting it out, so please _do not book travel yet_. Worst case it will move to Feb 16 - 18 instead of 18 - 20. Excerpts from Clint Byrum's message of 2014-12-01 14:58:58 -0800: > Hello! I've received confirmation that our venue, the HP offices in > downtown Seattle, will be available for the most-often-preferred > least-often-cannot week of Feb 16 - 20. > > Our venue has a maximum of 20 participants, but I only have 16 possible > attendees now. Please add yourself to that list _now_ if you will be > joining us. > > I've asked our office staff to confirm Feb 18 - 20 (Wed-Fri). When they > do, I will reply to this thread to let everyone know so you can all > start to book travel. See the etherpad for travel details. 
> > https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup From vkramskikh at mirantis.com Wed Dec 10 17:23:34 2014 From: vkramskikh at mirantis.com (Vitaly Kramskikh) Date: Wed, 10 Dec 2014 21:23:34 +0400 Subject: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format In-Reply-To: References: Message-ID: 2014-12-10 19:31 GMT+03:00 Evgeniy L : > > > On Wed, Dec 10, 2014 at 6:50 PM, Vitaly Kramskikh > wrote: > >> >> >> 2014-12-10 16:57 GMT+03:00 Evgeniy L : >> >>> Hi, >>> >>> First let me describe what our plans are for the nearest release. We want to >>> deliver >>> role as a simple plugin, meaning that a plugin developer can define his >>> own role >>> with yaml, and it should also work fine with our current approach where the >>> user can >>> define several fields on the settings tab. >>> >>> Also I would like to mention another thing which we should probably >>> discuss >>> in a separate thread: how plugins should be implemented. We have two types >>> of plugins, simple and complicated. The definition of simple - I can do >>> everything >>> I need with yaml; the definition of complicated - probably I have to >>> write some >>> python code. It doesn't mean that this python code should do absolutely >>> everything it wants, but it means we should implement a stable, documented >>> interface where the plugin is connected to the core. >>> >>> Now let's talk about UI flow. Our current problem is how to get the >>> information >>> whether a plugin is used in the environment or not. This information is >>> required for >>> the backend which generates appropriate tasks for the task executor, and this >>> information can also be used in the future if we decide to implement a plugin >>> deletion >>> mechanism. >>> >>> I didn't come up with some new solution; as before we have two options >>> to >>> solve the problem: >>> >>> # 1 >>> >>> Use the conditional language which is currently used on the UI; it will look like >>> Vitaly described in the example [1].
>>> Plugin developer should: >>> >>> 1. describe at least one element for UI, which he will be able to use in >>> task >>> >>> 2. add condition which is written in our own programming language >>> >>> Example of the condition for LBaaS plugin: >>> >>> condition: settings:lbaas.metadata.enabled == true >>> >>> 3. add to metadata.yaml a condition which defines if the plugin is >>> enabled >>> >>> is_enabled: settings:lbaas.metadata.enabled == true >>> >>> This approach has good flexibility, but it also has problems: >>> >>> a. It's complicated and not intuitive for plugin developer. >>> >> It is less complicated than python code >> > > I'm not sure why you are talking about python code here; my point > is we should not force the developer to use these conditions in any language. > > But that's how current plugin-like stuff works. There are various tasks which are run only if some checkboxes are set, so stuff like Ceph and vCenter will need conditions to describe tasks. > Anyway I don't agree with the statement that there are more people who know > python than "fuel ui conditional language". > > >> b. It doesn't cover the case when the user installs a 3rd party plugin >>> which doesn't have any conditions (because of # a) and >>> the user doesn't have a way to disable it for an environment if it >>> breaks his configuration. >>> >> If a plugin doesn't have conditions for tasks, then it has invalid metadata. >> > > Yep, and it's a problem of the platform, which provides a bad interface. > Why is it bad? If the plugin writer doesn't provide a plugin name or version, then the metadata is invalid too. It is the plugin writer's fault that he didn't write the metadata properly. > > >> >>> # 2 >>> >>> As we discussed from the very beginning, after the user selects a release he >>> can >>> choose a set of plugins which he wants to be enabled for the environment. >>> After that we can say that the plugin is enabled for the environment and we >>> send >>> tasks related to this plugin to the task executor.
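Pulling Evgeniy's fragments together, option #1's metadata for the LBaaS example might look roughly like this. Only the two quoted condition lines come from the thread; every other field name and the file layout are illustrative assumptions:

```yaml
# metadata.yaml (hypothetical layout)
name: lbaas
version: 1.0.0
# DSL condition deciding whether the plugin is enabled for an environment
is_enabled: "settings:lbaas.metadata.enabled == true"

# tasks.yaml (hypothetical layout)
- role: ['controller']
  stage: post_deployment
  # the task runs only when its condition evaluates to true
  condition: "settings:lbaas.metadata.enabled == true"
```

The backend would then evaluate `is_enabled` against the environment's settings to decide which tasks to hand to the task executor, which is exactly the information the UI-flow discussion above is trying to obtain.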
>>> >>> >> My approach also allows to eliminate "enableness" of plugins which >>> will cause UX issues and issues like you described above. vCenter and Ceph >>> also don't have "enabled" state. vCenter has hypervisor and storage, Ceph >>> provides backends for Cinder and Glance which can be used simultaneously or >>> only one of them can be used. >>> >>> Both of described plugins have enabled/disabled state, vCenter is enabled >>> when vCenter is selected as hypervisor. Ceph is enabled when it's >>> selected >>> as a backend for Cinder or Glance. >>> >> Nope, Ceph for Volumes can be used without Ceph for Images. Both of these >> plugins can also have some granular tasks which are enabled by various >> checkboxes (like VMware vCenter for volumes). How would you determine >> whether tasks which installs VMware vCenter for volumes should run? >> > > Why "nope"? I have "Cinder OR Glance". > Oh, I missed it. So there are 2 checkboxes, how would you determine "enableness"? > It can be easily handled in deployment script. > I don't know much about the status of granular deployment blueprint, but AFAIK that's what we are going to get rid of. > > >>> If you don't like the idea of having Ceph/vCenter checkboxes on the >>> first page, >>> I can suggest as an idea (research is required) to define groups like >>> Storage Backend, >>> Network Manager and we will allow plugin developer to embed his option >>> in radiobutton >>> field on wizard pages. But plugin developer should not describe >>> conditions, he should >>> just write that his plugin is a Storage Backend, Hypervisor or new >>> Network Manager. >>> And the plugins e.g. Zabbix, Nagios, which don't belong to any of this >>> groups >>> should be shown as checkboxes on the first page of the wizard. >>> >> Why don't you just ditch "enableness" of plugins and get rid of this >> complex stuff? Can you explain why do you need to know if plugin is >> "enabled"? 
Let me summarize my opinion on this: >> > I described why we need it many times. Also it looks like you skipped > another option, > and I would like to see some more information on why you don't like it and > why it's > bad from a UX standpoint. > Yes, I skipped it. You said "research is required", so please do it, write a proposal and then we will compare it with the condition approach. You still don't have your proposal, so there is nothing to compare and discuss. At first glance it seems complex and restrictive. > >> - You don't need to know whether plugin is enabled or not. You need >> to know what tasks should be run and whether plugin is removable (anything >> else?). These conditions can be described by the DSL. >> >> I do need to know if plugin is enabled to figure out if it's removable, > in fact those are the same things. > So there is nothing else you need "enableness" for, right? If you "described why we need it many times", I think you need to do it one more time (in the form of a list). If we need "enableness" just to determine whether the plugin is removable, then it is not a reason to ruin our UX. > >> - >> - Explicitly asking the user to enable plugin for new environment >> should be considered as a last resort solution because it significantly >> impairs our UX for inexperienced users. Just imagine: a new user who barely >> knows about OpenStack chooses a name for the environment, OS release and >> then he needs to choose plugins. Really? >> >> I really think that it's absolutely ok to show checkbox with LBaaS for > the user who found the > plugin, downloaded it on the master and installed it with CLI. > > And right now this user has to go to this settings tab and attempt to > find this checkbox, > and he also may not find it, for example because of an incompatible release > version, and it's clearly > a bad UX.
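For contrast with the DSL conditions being debated above, here is a hedged sketch of the "conditions in python" alternative Evgeniy's "complicated plugins" category implies: a class mirroring the static YAML fields, with the conditions as plain methods. All names are illustrative, not a real Fuel API:

```python
# Hypothetical "complex plugin" interface: static fields stay declarative
# (as in metadata.yaml), conditions become overridable Python methods.
class LBaaSPlugin:
    name = 'lbaas'            # same static fields as metadata.yaml
    version = '1.0.0'

    def is_enabled(self, settings):
        # Python equivalent of the DSL condition
        # "settings:lbaas.metadata.enabled == true"
        return settings['lbaas']['metadata']['enabled']

    def is_removable(self, settings):
        # safe to remove only while the plugin is not enabled
        return not self.is_enabled(settings)

plugin = LBaaSPlugin()
print(plugin.is_removable({'lbaas': {'metadata': {'enabled': False}}}))  # True
```

A YAML-only "simple" plugin could be compiled into such a class, so the rest of the platform would handle a single plugin interface either way.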
> I like how it is done in modern browsers - after upgrade of master node there is notification about incompatible plugins, and in list of plugins there is a message that plugin is incompatible. We need to design how we will handle it. Currently it is definitely a bad UX because nothing is done for this. > My proposal for "complex" plugin interface: there should be python classes >> with exactly the same fields from yaml files: plugin name, version, etc. >> But condition for cluster deletion and for tasks which are written in DSL >> in case of "simple" yaml config should become methods which plugin writer >> can make as complex as he wants. >> > Why do you want to use python to define plugin name, version etc? It's a > static data which are > used for installation, I don't think that in fuel client (or some other > installation tool) we want > to unpack the plugin and import this module to get the information which > is required for installation. > It is just a proposal in which I try to solve problems which you see in my approach. If you want a "complex" interface with arbitrary python code, that's how I see it. All fields are the same here, the approach is the same, just conditions are in python. And YAML config can be converted to this class, and all other code won't need to handle 2 different interfaces for plugins. > >>> [1] >>> https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf >>> >>> On Sun, Nov 30, 2014 at 3:12 PM, Vitaly Kramskikh < >>> vkramskikh at mirantis.com> wrote: >>> >>>> >>>> >>>> 2014-11-28 23:20 GMT+04:00 Dmitriy Shulyak : >>>> >>>>> >>>>>> - environment_config.yaml should contain exact config which will >>>>>> be mixed into cluster_attributes. No need to implicitly generate any >>>>>> controls like it is done now. 
>>>>>> >>>>>> Initially i had the same thoughts and wanted to use it the way it >>>>> is, but now i completely agree with Evgeniy that additional DSL will cause >>>>> a lot >>>>> of problems with compatibility between versions and developer >>>>> experience. >>>>> >>>> As far as I understand, you want to introduce another approach to >>>> describe UI part or plugins? >>>> >>>>> We need to search for alternatives.. >>>>> 1. for UI i would prefer separate tab for plugins, where user will be >>>>> able to enable/disable plugin explicitly. >>>>> >>>> Of course, we need a separate page for plugin management. >>>> >>>>> Currently settings tab is overloaded. >>>>> 2. on backend we need to validate plugins against certain env before >>>>> enabling it, >>>>> and for simple case we may expose some basic entities like >>>>> network_mode. >>>>> For case where you need complex logic - python code is far more >>>>> flexible that new DSL. >>>>> >>>>>> >>>>>> - metadata.yaml should also contain "is_removable" field. This >>>>>> field is needed to determine whether it is possible to remove installed >>>>>> plugin. It is impossible to remove plugins in the current implementation. >>>>>> This field should contain an expression written in our DSL which we already >>>>>> use in a few places. The LBaaS plugin also uses it to hide the checkbox if >>>>>> Neutron is not used, so even simple plugins like this need to utilize it. >>>>>> This field can also be autogenerated, for more complex plugins plugin >>>>>> writer needs to fix it manually. For example, for Ceph it could look like >>>>>> "settings:storage.volumes_ceph.value == false and >>>>>> settings:storage.images_ceph.value == false". >>>>>> >>>>>> How checkbox will help? There is several cases of plugin removal.. >>>>> >>>> It is not a checkbox, this is condition that determines whether the >>>> plugin is removable. 
It allows the plugin developer to specify when the plugin can be >>>> safely removed from Fuel if there are some environments which were created >>>> after the plugin had been installed. >>>> >>>>> 1. Plugin is installed, but not enabled for any env - just remove the >>>>> plugin >>>>> 2. Plugin is installed, enabled and cluster deployed - forget about it >>>>> for now.. >>>>> 3. Plugin is installed and only enabled - we need to maintain state of >>>>> db consistent after plugin is removed, it is problematic, but possible >>>>> >>>> My approach also allows to eliminate "enableness" of plugins which will >>>> cause UX issues and issues like you described above. vCenter and Ceph also >>>> don't have "enabled" state. vCenter has hypervisor and storage, Ceph >>>> provides backends for Cinder and Glance which can be used simultaneously or >>>> only one of them can be used. >>>> >>>>> My main point that plugin is enabled/disabled explicitly by user, >>>>> after that we can decide ourselves can it be removed or not. >>>>> >>>>>> >>>>>> - For every task in tasks.yaml there should be added new >>>>>> "condition" field with an expression which determines whether the task >>>>>> should be run. In the current implementation tasks are always run for >>>>>> specified roles. For example, vCenter plugin can have a few tasks with >>>>>> conditions like "settings:common.libvirt_type.value == 'vcenter'" or >>>>>> "settings:storage.volumes_vmdk.value == true". Also, AFAIU, similar >>>>>> approach will be used in implementation of Granular Deployment feature. >>>>>> >>>>>> I had some thoughts about using DSL, it seemed to me especially
And DSL wont cover all cases here, this quite similar to >>>>> metadata.yaml, simple cases can be covered by some variables in tasks (like >>>>> group, unique, etc), but complex is easier to test and describe in python. >>>>> >>>> Could you please provide example of such conditions? vCenter and Ceph >>>> can be turned into plugins using this approach. >>>> >>>> Also, I'm not against python version of plugins. It could look like a >>>> python class with exactly the same fields form YAML files, but conditions >>>> will be written in python. >>>> >>>>> >>>>> _______________________________________________ >>>>> OpenStack-dev mailing list >>>>> OpenStack-dev at lists.openstack.org >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> >>>> -- >>>> Vitaly Kramskikh, >>>> Software Engineer, >>>> Mirantis, Inc. >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> On Sun, Nov 30, 2014 at 3:12 PM, Vitaly Kramskikh < >>> vkramskikh at mirantis.com> wrote: >>> >>>> >>>> >>>> 2014-11-28 23:20 GMT+04:00 Dmitriy Shulyak : >>>> >>>>> >>>>>> - environment_config.yaml should contain exact config which will >>>>>> be mixed into cluster_attributes. No need to implicitly generate any >>>>>> controls like it is done now. >>>>>> >>>>>> Initially i had the same thoughts and wanted to use it the way it >>>>> is, but now i completely agree with Evgeniy that additional DSL will cause >>>>> a lot >>>>> of problems with compatibility between versions and developer >>>>> experience. >>>>> >>>> As far as I understand, you want to introduce another approach to >>>> describe UI part or plugins? >>>> >>>>> We need to search for alternatives.. >>>>> 1. for UI i would prefer separate tab for plugins, where user will be >>>>> able to enable/disable plugin explicitly. 
>>>>> >>>> Of course, we need a separate page for plugin management. >>>> >>>>> Currently settings tab is overloaded. >>>>> 2. on backend we need to validate plugins against certain env before >>>>> enabling it, >>>>> and for simple case we may expose some basic entities like >>>>> network_mode. >>>>> For case where you need complex logic - python code is far more >>>>> flexible that new DSL. >>>>> >>>>>> >>>>>> - metadata.yaml should also contain "is_removable" field. This >>>>>> field is needed to determine whether it is possible to remove installed >>>>>> plugin. It is impossible to remove plugins in the current implementation. >>>>>> This field should contain an expression written in our DSL which we already >>>>>> use in a few places. The LBaaS plugin also uses it to hide the checkbox if >>>>>> Neutron is not used, so even simple plugins like this need to utilize it. >>>>>> This field can also be autogenerated, for more complex plugins plugin >>>>>> writer needs to fix it manually. For example, for Ceph it could look like >>>>>> "settings:storage.volumes_ceph.value == false and >>>>>> settings:storage.images_ceph.value == false". >>>>>> >>>>>> How checkbox will help? There is several cases of plugin removal.. >>>>> >>>> It is not a checkbox, this is condition that determines whether the >>>> plugin is removable. It allows plugin developer specify when plguin can be >>>> safely removed from Fuel if there are some environments which were created >>>> after the plugin had been installed. >>>> >>>>> 1. Plugin is installed, but not enabled for any env - just remove the >>>>> plugin >>>>> 2. Plugin is installed, enabled and cluster deployed - forget about it >>>>> for now.. >>>>> 3. Plugin is installed and only enabled - we need to maintain state of >>>>> db consistent after plugin is removed, it is problematic, but possible >>>>> >>>> My approach also allows to eliminate "enableness" of plugins which will >>>> cause UX issues and issues like you described above. 
vCenter and Ceph also >>>> don't have "enabled" state. vCenter has hypervisor and storage, Ceph >>>> provides backends for Cinder and Glance which can be used simultaneously or >>>> only one of them can be used. >>>> >>>>> My main point that plugin is enabled/disabled explicitly by user, >>>>> after that we can decide ourselves can it be removed or not. >>>>> >>>>>> >>>>>> - For every task in tasks.yaml there should be added new >>>>>> "condition" field with an expression which determines whether the task >>>>>> should be run. In the current implementation tasks are always run for >>>>>> specified roles. For example, vCenter plugin can have a few tasks with >>>>>> conditions like "settings:common.libvirt_type.value == 'vcenter'" or >>>>>> "settings:storage.volumes_vmdk.value == true". Also, AFAIU, similar >>>>>> approach will be used in implementation of Granular Deployment feature. >>>>>> >>>>>> I had some thoughts about using DSL, it seemed to me especially >>>>> helpfull when you need to disable part of embedded into core functionality, >>>>> like deploying with another hypervisor, or network dirver (contrail >>>>> for example). And DSL wont cover all cases here, this quite similar to >>>>> metadata.yaml, simple cases can be covered by some variables in tasks (like >>>>> group, unique, etc), but complex is easier to test and describe in python. >>>>> >>>> Could you please provide example of such conditions? vCenter and Ceph >>>> can be turned into plugins using this approach. >>>> >>>> Also, I'm not against python version of plugins. It could look like a >>>> python class with exactly the same fields form YAML files, but conditions >>>> will be written in python. 
>>>> >>>>> >>>>> _______________________________________________ >>>>> OpenStack-dev mailing list >>>>> OpenStack-dev at lists.openstack.org >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> >>>> -- >>>> Vitaly Kramskikh, >>>> Software Engineer, >>>> Mirantis, Inc. >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> -- >> Vitaly Kramskikh, >> Software Engineer, >> Mirantis, Inc. >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at jvf.cc Wed Dec 10 17:55:37 2014 From: jay at jvf.cc (Jay Faulkner) Date: Wed, 10 Dec 2014 17:55:37 +0000 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: References: <20141209125828.GA10149@redhat.redhat.com> <20141209140407.GO2497@yuggoth.org> <54874973.5060903@openstack.org> <1418169303-sup-199@fewbar.com> Message-ID: <82FFB70D-4C31-47BA-ACF6-6A1A0DF48E4D@jvf.cc> Often times I find myself in need of going the other direction ? which IRC nick goes to which person. Does anyone know how to do that with the Foundation directory? 
Thanks, Jay > On Dec 10, 2014, at 2:30 AM, Matthew Gilliard wrote: > > So, are we agreed that http://www.openstack.org/community/members/ is > the authoritative place for IRC lookups? In which case, I'll take the > old content out of https://wiki.openstack.org/wiki/People and leave a > message directing people where to look. > > I don't have the imagination to use anything other than my real name > on IRC but for people who do, should we try to encourage putting the > IRC nick in the gerrit name? > > On Tue, Dec 9, 2014 at 11:56 PM, Clint Byrum wrote: >> Excerpts from Angus Salkeld's message of 2014-12-09 15:25:59 -0800: >>> On Wed, Dec 10, 2014 at 5:11 AM, Stefano Maffulli >>> wrote: >>> >>>> On 12/09/2014 06:04 AM, Jeremy Stanley wrote: >>>>> We already have a solution for tracking the contributor->IRC >>>>> mapping--add it to your Foundation Member Profile. For example, mine >>>>> is in there already: >>>>> >>>>> http://www.openstack.org/community/members/profile/5479 >>>> >>>> I recommend updating the openstack.org member profile and add IRC >>>> nickname there (and while you're there, update your affiliation history). >>>> >>>> There is also a search engine on: >>>> >>>> http://www.openstack.org/community/members/ >>>> >>>> >>> Except that info doesn't appear nicely in review. Some people put their >>> nick in their "Full Name" in >>> gerrit. Hopefully Clint doesn't mind: >>> >>> https://review.openstack.org/#/q/owner:%22Clint+%27SpamapS%27+Byrum%22+status:open,n,z >>> >> >> Indeed, I really didn't like that I'd be reviewing somebody's change, >> and talking to them on IRC, and not know if they knew who I was. >> >> It also has the odd side effect that gerritbot triggers my IRC filters >> when I 'git review'. 
>> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From keshava.a at hp.com Wed Dec 10 18:06:23 2014 From: keshava.a at hp.com (A, Keshava) Date: Wed, 10 Dec 2014 18:06:23 +0000 Subject: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework In-Reply-To: References: Message-ID: <891761EAFA335D44AD1FFDB9B4A8C063D8F9EE@G4W3216.americas.hpqcorp.net> Hi Murali, There are many unknowns w.r.t. "Service-VM" and what it should look like from an NFV perspective. In my opinion, it has not yet been decided what the Service-VM framework should be. Depending on this, we at OpenStack will also see an impact on "Service Chaining". Please find attached the mail w.r.t. that discussion with NFV on "Service-VM + OpenStack OVS". Regards, keshava From: Stephen Wong [mailto:stephen.kf.wong at gmail.com] Sent: Wednesday, December 10, 2014 10:03 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Hi Murali, There is already a ServiceVM project (Tacker), currently under development on stackforge: https://wiki.openstack.org/wiki/ServiceVM If you are interested in this topic, please take a look at the wiki page above and see if the project's goals align with yours. If so, you are certainly welcome to join the IRC meeting and start to contribute to the project's direction and design.
Thanks, - Stephen On Wed, Dec 10, 2014 at 7:01 AM, Murali B > wrote: Hi keshava, We would like to contribute towards service chaining and NFV. Could you please share any documents you have related to the service VM? Service chaining can be achieved if we are able to redirect the traffic to the service VM using OVS flows; in this case we do not need routing enabled on the service VM (the traffic is redirected at L2). All the tenant VMs in the cloud could use this service VM's services by adding the OVS rules. Thanks -Murali _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded message was scrubbed... From: Reinaldo Penno Subject: Re: [opnfv-tech-discuss] Service VM v/s its basic framework Date: Wed, 10 Dec 2014 17:49:43 +0000 Size: 115979 URL: From zbitter at redhat.com Wed Dec 10 17:47:13 2014 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 10 Dec 2014 12:47:13 -0500 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> Message-ID: <54888721.50404@redhat.com> You really need to get a real email client with quoting support ;) On 10/12/14 06:42, Murugan, Visnusaran wrote: > Well, we still have to persist the dependencies of each version of a resource _somehow_, because otherwise we can't know how to clean them up in the correct order.
But what I think you meant to say is that this approach doesn't require it to be persisted in a separate table where the rows are marked as traversed as we work through the graph. > > [Murugan, Visnusaran] > In case of rollback where we have to cleanup earlier version of resources, we could get the order from old template. We'd prefer not to have a graph table. In theory you could get it by keeping old templates around. But that means keeping a lot of templates, and it will be hard to keep track of when you want to delete them. It also means that when starting an update you'll need to load every existing previous version of the template in order to calculate the dependencies. It also leaves the dependencies in an ambiguous state when a resource fails, and although that can be worked around it will be a giant pain to implement. I agree that I'd prefer not to have a graph table. After trying a couple of different things I decided to store the dependencies in the Resource table, where we can read or write them virtually for free because it turns out that we are always reading or updating the Resource itself at exactly the same time anyway. >> This approach reduces DB queries by waiting for completion notification on a topic. The drawback I see is that delete stack stream will be huge as it will have the entire graph. We can always dump such data in ResourceLock.data Json and pass a simple flag "load_stream_from_db" to converge RPC call as a workaround for delete operation. > > This seems to be essentially equivalent to my 'SyncPoint' proposal[1], with the key difference that the data is stored in-memory in a Heat engine rather than the database. > > I suspect it's probably a mistake to move it in-memory for similar reasons to the argument Clint made against synchronising the marking off of dependencies in-memory. The database can handle that and the problem of making the DB robust against failures of a single machine has already been solved by someone else. 
If we do it in-memory we are just creating a single point of failure for not much gain. (I guess you could argue it doesn't matter, since if any Heat engine dies during the traversal then we'll have to kick off another one anyway, but it does limit our options if that changes in the future.) > [Murugan, Visnusaran] Resource completes, removes itself from resource_lock and notifies engine. Engine will acquire parent lock and initiate parent only if all its children are satisfied (no child entry in resource_lock). This will come in place of Aggregator. Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly what I did. The three differences I can see are: 1) I think you are proposing to create all of the sync points at the start of the traversal, rather than on an as-needed basis. This is probably a good idea. I didn't consider it because of the way my prototype evolved, but there's now no reason I can see not to do this. If we could move the data to the Resource table itself then we could even get it for free from an efficiency point of view. 2) You're using a single list from which items are removed, rather than two lists (one static, and one to which items are added) that get compared. Assuming (1) then this is probably a good idea too. 3) You're suggesting to notify the engine unconditionally and let the engine decide if the list is empty. That's probably not a good idea - not only does it require extra reads, it introduces a race condition that you then have to solve (it can be solved, it's just more work). Since the update to remove a child from the list is atomic, it's best to just trigger the engine only if the list is now empty. > It's not clear to me how the 'streams' differ in practical terms from just passing a serialisation of the Dependencies object, other than being incomprehensible to me ;). 
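[Editor's note: point (3) above, triggering the engine only when an atomic update empties the child list, can be sketched with an in-memory stand-in for the database row. This is a toy illustration; in Heat the removal would be a single atomic UPDATE, and none of these names come from the actual implementation:]

```python
import threading

class SyncPoint:
    # Toy stand-in for a DB row holding the set of not-yet-complete
    # children of a resource. A lock plays the role of the database's
    # atomic update here.
    def __init__(self, children):
        self._remaining = set(children)
        self._lock = threading.Lock()

    def child_done(self, child):
        # Atomically remove a finished child and report whether that
        # removal emptied the set (i.e. the parent is now ready).
        with self._lock:
            self._remaining.discard(child)
            return not self._remaining

def trigger_parent(name):
    print("triggering %s" % name)

sync = SyncPoint(["server", "volume"])
for child in ["server", "volume"]:
    # Only the removal that empties the set triggers the parent, so the
    # engine is notified exactly once and needs no extra reads.
    if sync.child_done(child):
        trigger_parent("parent")
```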
The current Dependencies implementation > (1) is a very generic implementation of a DAG, (2) works and has plenty of unit tests, (3) has, with I think one exception, a pretty straightforward API, (4) has a very simple serialisation, returned by the edges() method, which can be passed back into the constructor to recreate it, and (5) has an API that is to some extent relied upon by resources, and so won't likely be removed outright in any event. > Whatever code we need to handle dependencies ought to just build on this existing implementation. > [Murugan, Visnusaran] Our thought was to reduce payload size (template/graph). Just planning for worst case scenario (million resource stack) We could always dump them in ResourceLock.data to be loaded by Worker. If there's a smaller representation of a graph than a list of edges then I don't know what it is. The proposed stream structure certainly isn't it, unless you mean as an alternative to storing the entire graph once for each resource. A better alternative is to store it once centrally - in my current implementation it is passed down through the trigger messages, but since only one traversal can be in progress at a time it could just as easily be stored in the Stack table of the database at the slight cost of an extra write. I'm not opposed to doing that, BTW. In fact, I'm really interested in your input on how that might help make recovery from failure more robust. I know Anant mentioned that not storing enough data to recover when a node dies was his big concern with my current approach. I can see that by both creating all the sync points at the start of the traversal and storing the dependency graph in the database instead of letting it flow through the RPC messages, we would be able to resume a traversal where it left off, though I'm not sure what that buys us. 
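[Editor's note: the edges() round-trip described above, serialising a DAG as a list of edges and rebuilding it from that list, can be sketched as follows. This is a minimal stand-in, not Heat's actual Dependencies class:]

```python
class Dependencies:
    # Minimal sketch of a DAG that serialises to a list of
    # (requirer, required) edges and can be rebuilt from that list.
    def __init__(self, edges=()):
        self._edges = set(edges)

    def edges(self):
        # The entire serialisation: just the edge list.
        return sorted(self._edges)

    def required_by(self, node):
        # Which resources directly depend on `node`.
        return sorted(r for r, req in self._edges if req == node)

graph = Dependencies([("server", "network"), ("volume", "server")])
rebuilt = Dependencies(graph.edges())   # round-trip via the edge list
print(rebuilt.edges() == graph.edges())        # True
print(graph.required_by("server"))             # ['volume']
```

[If there is a smaller central representation than this edge list, storing it once in the Stack row, as suggested, costs only one extra write.]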
And I guess what you're suggesting is that by having an explicit lock with the engine ID specified, we can detect when a resource is stuck in IN_PROGRESS due to an engine going down? That's actually pretty interesting. > Based on our call on Thursday, I think you're taking the idea of the Lock table too literally. The point of referring to locks is that we can use the same concepts as the Lock table relies on to do atomic updates on a particular row of the database, and we can use those atomic updates to prevent race conditions when implementing SyncPoints/Aggregators/whatever you want to call them. It's not that we'd actually use the Lock table itself, which implements a mutex and therefore offers only a much slower and more stateful way of doing what we want (lock mutex, change data, unlock mutex). > [Murugan, Visnusaran] Are you suggesting something like a select-for-update in resource table itself without having a lock table? Yes, that's exactly what I was suggesting. cheers, Zane. From ollivier.cedric at gmail.com Wed Dec 10 18:18:50 2014 From: ollivier.cedric at gmail.com (Cedric OLLIVIER) Date: Wed, 10 Dec 2014 19:18:50 +0100 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: 2014-12-09 18:32 GMT+01:00 Armando M. : > > By the way, if Kyle can do it in his teeny tiny time that he has left > after his PTL duties, then anyone can do it! :) > > https://review.openstack.org/#/c/140191/ > > Fully cloning Dave Tucker's repository [1] and the outdated fork of the ODL ML2 MechanismDriver included raises some questions (e.g. [2]). I wish the next patch set removes some files. At least it should take the mainstream work into account (e.g. [3]) . [1] https://github.com/dave-tucker/odl-neutron-drivers [2] https://review.openstack.org/#/c/113330/ [3] https://review.openstack.org/#/c/96459/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Wed Dec 10 18:19:51 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Dec 2014 18:19:51 +0000 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: <82FFB70D-4C31-47BA-ACF6-6A1A0DF48E4D@jvf.cc> References: <20141209125828.GA10149@redhat.redhat.com> <20141209140407.GO2497@yuggoth.org> <54874973.5060903@openstack.org> <1418169303-sup-199@fewbar.com> <82FFB70D-4C31-47BA-ACF6-6A1A0DF48E4D@jvf.cc> Message-ID: <20141210181951.GV2497@yuggoth.org> On 2014-12-10 17:55:37 +0000 (+0000), Jay Faulkner wrote: > Often times I find myself in need of going the other direction ? > which IRC nick goes to which person. Does anyone know how to do > that with the Foundation directory? I don't think there's a lookup for that (might be worth logging a feature request) but generally I rely on using the /whois command to ask the IRC network for details on a particular nick and look at the realname returned with it. I would encourage people to make sure their IRC clients are configured to set that metadata to something useful unless they really prefer to interact entirely pseudonymously there (that's of course a legitimate preference too). -- Jeremy Stanley From tqtran at us.ibm.com Wed Dec 10 18:39:07 2014 From: tqtran at us.ibm.com (Thai Q Tran) Date: Wed, 10 Dec 2014 11:39:07 -0700 Subject: [openstack-dev] [horizon] REST and Django In-Reply-To: References: , <547DD08A.7000402@redhat.com> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Image.994faa67a8e28335_0.0.1.1.gif Type: image/gif Size: 105 bytes Desc: not available URL: From stefano at openstack.org Wed Dec 10 18:39:36 2014 From: stefano at openstack.org (Stefano Maffulli) Date: Wed, 10 Dec 2014 10:39:36 -0800 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: References: <20141209125828.GA10149@redhat.redhat.com> <20141209140407.GO2497@yuggoth.org> <54874973.5060903@openstack.org> <1418169303-sup-199@fewbar.com> Message-ID: <54889368.8020606@openstack.org> On 12/10/2014 02:30 AM, Matthew Gilliard wrote: > So, are we agreed that http://www.openstack.org/community/members/ is > the authoritative place for IRC lookups? In which case, I'll take the > old content out of https://wiki.openstack.org/wiki/People and leave a > message directing people where to look. Yes, please, let me know if you need help. > I don't have the imagination to use anything other than my real name > on IRC but for people who do, should we try to encourage putting the > IRC nick in the gerrit name? That's hard to enforce. A better way to solve this would be to link directly gerrit IDs to openstack.org profile URL but I have no idea how that would work. Gerrit seems only to show full name and email address as a fly-over, when you hover on the reviewer/owner name in the UI. From sleipnir012 at gmail.com Wed Dec 10 18:56:58 2014 From: sleipnir012 at gmail.com (Susanne Balle) Date: Wed, 10 Dec 2014 13:56:58 -0500 Subject: [openstack-dev] [neutron][lbaas] Kilo Midcycle Meetup In-Reply-To: <1418190532.10125.1.camel@localhost> References: <1416867738.3960.19.camel@localhost> <1417541257.4057.1.camel@localhost> <1418190532.10125.1.camel@localhost> Message-ID: Cool! Thx Susanne On Wed, Dec 10, 2014 at 12:48 AM, Brandon Logan wrote: > It's set. We'll be having the meetup on Feb 2-6 in San Antonio at RAX > HQ. I'll add a list of hotels and the address on the etherpad. 
> > https://etherpad.openstack.org/p/lbaas-kilo-meetup > > Thanks, > Brandon > > On Tue, 2014-12-02 at 17:27 +0000, Brandon Logan wrote: > > Per the meeting, put together an etherpad here: > > > > https://etherpad.openstack.org/p/lbaas-kilo-meetup > > > > I would like to get the location and dates finalized ASAP (preferably > > the next couple of days). > > > > We'll also try to do the same as the Neutron and Octavia meetups for > > remote attendees. > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From devananda.vdv at gmail.com Wed Dec 10 19:10:53 2014 From: devananda.vdv at gmail.com (Devananda van der Veen) Date: Wed, 10 Dec 2014 19:10:53 +0000 Subject: [openstack-dev] [Ironic] 0.3.2 client release Message-ID: Hi folks, Just a quick announcement that I've tagged an incremental release of our client library to catch up with the changes so far in Kilo in preparation for the k-1 milestone next week. Here are the release notes: - Add keystone v3 CLI support - Add tty password entry to CLI - Add node-set-maintenance command to CLI - Include maintenance_reason in CLI output of node-show - Add option to specify node uuid in node-create subcommand - Add GET support for vendor_passthru to the library It should be winding its way through the build pipeline right now, and available on pypi later today. Regards, Devananda -------------- next part -------------- An HTML attachment was scrubbed...
URL: From harlowja at outlook.com Wed Dec 10 19:12:29 2014 From: harlowja at outlook.com (Joshua Harlow) Date: Wed, 10 Dec 2014 11:12:29 -0800 Subject: [openstack-dev] [oslo] [taskflow] sprint review day Message-ID: Hi everyone, The OpenStack oslo team will be hosting a virtual sprint in the Freenode IRC channel #openstack-oslo for the taskflow subproject on Wednesday 12-17-2014 starting at 16:00 UTC and going for ~8 hours. The goal of this sprint is to work on any open reviews, documentation or any other integration questions, development and so on, so that we can help progress the taskflow subproject forward at a good rate. Live version of the current documentation is available here: http://docs.openstack.org/developer/taskflow/ The code itself lives in the openstack/taskflow repository. http://git.openstack.org/cgit/openstack/taskflow/tree Please feel free to join if interested, curious, or able. Much appreciated, Joshua Harlow From yzveryanskyy at mirantis.com Wed Dec 10 19:33:44 2014 From: yzveryanskyy at mirantis.com (Yuriy Zveryanskyy) Date: Wed, 10 Dec 2014 21:33:44 +0200 Subject: [openstack-dev] [Ironic] Fuel agent proposal In-Reply-To: References: Message-ID: <5488A018.6020409@mirantis.com> New version of the spec: https://review.openstack.org/#/c/138115/ Problem description updated. Power interface part removed (not in scope of deploy driver). On 12/09/2014 12:23 AM, Devananda van der Veen wrote: > > I'd like to raise this topic for a wider discussion outside of the > hallway track and code reviews, where it has thus far mostly remained. > > > In previous discussions, my understanding has been that the Fuel team > sought to use Ironic to manage "pets" rather than "cattle" - and doing > so required extending the API and the project's functionality in ways > that no one else on the core team agreed with. Perhaps that > understanding was wrong (or perhaps not), but in any case, there is > now a proposal to add a FuelAgent driver to Ironic.
The proposal > claims this would meet that team's needs without requiring changes to > the core of Ironic. > > > https://review.openstack.org/#/c/138115/ > > > The Problem Description section calls out four things, which have all > been discussed previously (some are here [0]). I would like to address > each one, invite discussion on whether or not these are, in fact, > problems facing Ironic (not whether they are problems for someone, > somewhere), and then ask why these necessitate a new driver be added > to the project. > > > They are, for reference: > > > 1. limited partition support > > 2. no software RAID support > > 3. no LVM support > > 4. no support for hardware that lacks a BMC > > > #1. > > When deploying a partition image (e.g., QCOW format), Ironic's PXE > deploy driver performs only the minimal partitioning necessary to > fulfill its mission as an OpenStack service: respect the user's > request for root, swap, and ephemeral partition sizes. When deploying > a whole-disk image, Ironic does not perform any partitioning -- such > is left up to the operator who created the disk image. > > > Support for arbitrarily complex partition layouts is not required by, > nor does it facilitate, the goal of provisioning physical servers via > a common cloud API. Additionally, as with #3 below, nothing prevents a > user from creating more partitions in unallocated disk space once they > have access to their instance. Therefore, I don't see how Ironic's > minimal support for partitioning is a problem for the project. > > > #2. > > There is no support for defining a RAID in Ironic today, at all, > whether software or hardware. Several proposals were floated last > cycle; one is under review right now for DRAC support [1], and there > are multiple call outs for RAID building in the state machine > mega-spec [2].
Any such support for hardware RAID will necessarily be > abstract enough to support multiple hardware vendor's driver > implementations and both in-band creation (via IPA) and out-of-band > creation (via vendor tools). > > > Given the above, it may become possible to add software RAID support > to IPA in the future, under the same abstraction. This would closely > tie the deploy agent to the images it deploys (the latter image's > kernel would be dependent upon a software RAID built by the former), > but this would necessarily be true for the proposed FuelAgent as well. > > > I don't see this as a compelling reason to add a new driver to the > project. Instead, we should (plan to) add support for software RAID to > the deploy agent which is already part of the project. > > > #3. > > LVM volumes can easily be added by a user (after provisioning) within > unallocated disk space for non-root partitions. I have not yet seen a > compelling argument for doing this within the provisioning phase. > > > #4. > > There are already in-tree drivers [3] [4] [5] which do not require a > BMC. One of these uses SSH to connect and run pre-determined commands. > Like the spec proposal, which states at line 122, "Control via SSH > access feature intended only for experiments in non-production > environment," the current SSHPowerDriver is only meant for testing > environments. We could probably extend this driver to do what the > FuelAgent spec proposes, as far as remote power control for cheap > always-on hardware in testing environments with a pre-shared key. > > > (And if anyone wonders about a use case for Ironic without external > power control ... I can only think of one situation where I would > rationally ever want to have a control-plane agent running inside a > user-instance: I am both the operator and the only user of the cloud.) 
> > > ---------------- > > > In summary, as far as I can tell, all of the problem statements upon > which the FuelAgent proposal are based are solvable through > incremental changes in existing drivers, or out of scope for the > project entirely. As another software-based deploy agent, FuelAgent > would duplicate the majority of the functionality which > ironic-python-agent has today. > > > Ironic's driver ecosystem benefits from a diversity of > hardware-enablement drivers. Today, we have two divergent software > deployment drivers which approach image deployment differently: > "agent" drivers use a local agent to prepare a system and download the > image; "pxe" drivers use a remote agent and copy the image over iSCSI. > I don't understand how a second driver which duplicates the > functionality we already have, and shares the same goals as the > drivers we already have, is beneficial to the project. > > > Doing the same thing twice just increases the burden on the team; > we're all working on the same problems, so let's do it together. > > > -Devananda > > > > [0] > https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition > > > [1] https://review.openstack.org/#/c/107981/ > > > [2] > https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst > > > [3] > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py > > [4] > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py > > [5] > http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py > > > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From george.shuklin at gmail.com Wed Dec 10 19:43:17 2014 From: george.shuklin at gmail.com (George Shuklin) Date: Wed, 10 Dec 2014 21:43:17 +0200 Subject: [openstack-dev] Lack of quota - security bug or not? Message-ID: <5488A255.9030302@gmail.com> I have a small discussion going on in Launchpad: is the lack of a quota for unprivileged users counted as a security bug (or at least as a bug)? If a user can create 100500 objects in the database via the normal API, and ops have no way to restrict this, is that OK for OpenStack or not? From ijw.ubuntu at cack.org.uk Wed Dec 10 19:48:17 2014 From: ijw.ubuntu at cack.org.uk (Ian Wells) Date: Wed, 10 Dec 2014 11:48:17 -0800 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: <20141210093101.GC6450@redhat.com> References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> Message-ID: On 10 December 2014 at 01:31, Daniel P. Berrange wrote: > > So the problem of Nova review bandwidth is a constant problem across all > areas of the code. We need to solve this problem for the team as a whole > in a much broader fashion than just for people writing VIF drivers. The > VIF drivers are really small pieces of code that should be straightforward > to review & get merged in any release cycle in which they are proposed. > I think we need to make sure that we focus our energy on doing this and > not ignoring the problem by breaking stuff off out of tree. > The problem is that we effectively prevent running an out-of-tree Neutron driver (which *is* perfectly legitimate) if it uses a VIF plugging mechanism that isn't in Nova, as we can't use out-of-tree code and we won't accept in-tree code for out-of-tree drivers. This will get more confusing as *all* of the Neutron drivers and plugins move out of the tree, as that constraint becomes essentially arbitrary. Your issue is one of testing.
Is there any way we could set up a better testing framework for VIF drivers where Nova interacts with something to test that the plugging mechanism actually passes traffic? I don't believe there's any specific limitation on it being *Neutron* that uses the plugging interaction. -- Ian. -------------- next part -------------- An HTML attachment was scrubbed... URL: From harlowja at outlook.com Wed Dec 10 20:26:47 2014 From: harlowja at outlook.com (Joshua Harlow) Date: Wed, 10 Dec 2014 12:26:47 -0800 Subject: [openstack-dev] [oslo] deprecation 'pattern' library?? Message-ID: Hi oslo folks (and others), I've recently put up a review for some common deprecation patterns: https://review.openstack.org/#/c/140119/ In summary, this is a common set of patterns that can be used by oslo libraries, other libraries... This is different from the versionutils one (which is more of a developer<->operator deprecation interaction) and is more focused on the developer <-> developer deprecation interaction (developers, say, using oslo libraries). Doug had the question about why not just put this out there on pypi with a useful name not so strongly connected to oslo; since that review is more of a common set of patterns that can be used by libraries outside openstack/oslo as well. There weren't many (if any) similar libraries that I found (zope.deprecation is probably the closest), and Twisted has something similar built in. So in order to avoid creating our own version of zope.deprecation in that review we might as well create a neat name that can be useful for oslo/openstack/elsewhere... Some ideas that were thrown around on IRC (check 'https://pypi.python.org/pypi/%s' % name for 404 to see if likely not registered): * debtcollector * bagman * deprecate * deprecation * baggage Any other neat names people can think about? Or in general any other comments/ideas about providing such a deprecation pattern library?
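[Editor's note: a minimal sketch of the kind of developer-facing deprecation pattern being discussed, in the spirit of zope.deprecation. This is illustrative only, not the API proposed in the review above:]

```python
import functools
import warnings

def deprecated(replacement=None):
    # Decorator that emits a DeprecationWarning when the wrapped
    # function is called, optionally pointing at a replacement.
    def decorator(func):
        msg = "%s() is deprecated" % func.__name__
        if replacement:
            msg += "; use %s() instead" % replacement
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated(replacement="new_api")
def old_api(x):
    return x * 2

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api(21)

print(result)                         # 42
print(caught[0].category.__name__)    # DeprecationWarning
```

[A real library would likely also cover deprecated classes, properties, and moved modules, which is where a shared set of patterns pays off.]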
-Josh From doug at doughellmann.com Wed Dec 10 20:32:24 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 10 Dec 2014 15:32:24 -0500 Subject: [openstack-dev] [oslo] [taskflow] sprint review day In-Reply-To: References: Message-ID: <17283B0A-7AE9-4BC2-853C-7E3EBA4973D5@doughellmann.com> On Dec 10, 2014, at 2:12 PM, Joshua Harlow wrote: > Hi everyone, > > The OpenStack oslo team will be hosting a virtual sprint in the > Freenode IRC channel #openstack-oslo for the taskflow subproject on > Wednesday 12-17-2014 starting at 16:00 UTC and going for ~8 hours. > > The goal of this sprint is to work on any open reviews, documentation > or any other integration questions, development and so on, so that we > can help progress the taskflow subproject forward at a good rate. > > Live version of the current documentation is available here: > > http://docs.openstack.org/developer/taskflow/ > > The code itself lives in the openstack/taskflow repository. > > http://git.openstack.org/cgit/openstack/taskflow/tree > > Please feel free to join if interested, curious, or able. > > Much appreciated, > > Joshua Harlow > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Thanks for setting this up, Josh! This day works for me. We need to make sure a couple of other Oslo cores can make it that day for the sprint to really be useful, so everyone please let us know if you can make it. Doug From jaypipes at gmail.com Wed Dec 10 20:34:57 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 10 Dec 2014 15:34:57 -0500 Subject: [openstack-dev] Lack of quota - security bug or not?
In-Reply-To: <5488A255.9030302@gmail.com> References: <5488A255.9030302@gmail.com> Message-ID: <5488AE71.1080304@gmail.com> On 12/10/2014 02:43 PM, George Shuklin wrote: > I have some small discussion in launchpad: is lack of a quota for > unprivileged user counted as security bug (or at least as a bug)? > > If user can create 100500 objects in database via normal API and ops > have no way to restrict this, is it OK for Openstack or not? That would be a major security bug. Please do file one and we'll get on it immediately. Thanks, -jay From jaypipes at gmail.com Wed Dec 10 20:37:24 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 10 Dec 2014 15:37:24 -0500 Subject: [openstack-dev] [oslo] deprecation 'pattern' library?? In-Reply-To: References: Message-ID: <5488AF04.2010906@gmail.com> On 12/10/2014 03:26 PM, Joshua Harlow wrote: > Hi oslo folks (and others), > > I've recently put up a review for some common deprecation patterns: > > https://review.openstack.org/#/c/140119/ > > In summary, this is a common set of patterns that can be used by oslo > libraries, other libraries... This is different from the versionutils > one (which is more of a developer<->operator deprecation interaction) > and is more focused on the developer <-> developer deprecation > interaction (developers say using oslo libraries). > > Doug had the question about why not just put this out there on pypi with > a useful name not so strongly connected to oslo; since that review is > more of a common set of patterns that can be used by libraries outside > openstack/oslo as well. There wasn't many/any similar libraries that I > found (zope.deprecation is probably the closest) and twisted has > something in-built to it that is something similar. So in order to avoid > creating our own version of zope.deprecation in that review we might as > well create a neat name that can be useful for oslo/openstack/elsewhere... 
> > Some ideas that were thrown around on IRC (check > 'https://pypi.python.org/pypi/%s' % name for 404 to see if likely not > registered): > > * debtcollector This would be my choice :) Best, -jay > * bagman > * deprecate > * deprecation > * baggage > > Any other neat names people can think about? > > Or in general any other comments/ideas about providing such a > deprecation pattern library? > > -Josh > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Wed Dec 10 21:00:54 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 10 Dec 2014 16:00:54 -0500 Subject: [openstack-dev] [oslo] deprecation 'pattern' library?? In-Reply-To: References: Message-ID: <41F34BA2-6A3D-4AF9-B414-57B5526812D6@doughellmann.com> On Dec 10, 2014, at 3:26 PM, Joshua Harlow wrote: > Hi oslo folks (and others), > > I've recently put up a review for some common deprecation patterns: > > https://review.openstack.org/#/c/140119/ > > In summary, this is a common set of patterns that can be used by oslo libraries, other libraries... This is different from the versionutils one (which is more of a developer<->operator deprecation interaction) and is more focused on the developer <-> developer deprecation interaction (developers say using oslo libraries). > > Doug had the question about why not just put this out there on pypi with a useful name not so strongly connected to oslo; since that review is more of a common set of patterns that can be used by libraries outside openstack/oslo as well. There wasn't many/any similar libraries that I found (zope.deprecation is probably the closest) and twisted has something in-built to it that is something similar. So in order to avoid creating our own version of zope.deprecation in that review we might as well create a neat name that can be useful for oslo/openstack/elsewhere... 
> > Some ideas that were thrown around on IRC (check 'https://pypi.python.org/pypi/%s' % name for 404 to see if likely not registered): > > * debtcollector +1 I suspect we'll want a minimal spec for the new lib, but let's wait and hear what some of the other cores think. Doug > * bagman > * deprecate > * deprecation > * baggage > > Any other neat names people can think about? > > Or in general any other comments/ideas about providing such a deprecation pattern library? > > -Josh > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From matthew.gilliard at gmail.com Wed Dec 10 21:01:41 2014 From: matthew.gilliard at gmail.com (Matthew Gilliard) Date: Wed, 10 Dec 2014 21:01:41 +0000 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: <54889368.8020606@openstack.org> References: <20141209125828.GA10149@redhat.redhat.com> <20141209140407.GO2497@yuggoth.org> <54874973.5060903@openstack.org> <1418169303-sup-199@fewbar.com> <54889368.8020606@openstack.org> Message-ID: >> I'll take the >> old content out of https://wiki.openstack.org/wiki/People and leave a >> message directing people where to look. > Yes, please, let me know if you need help. Done. > to link > directly gerrit IDs to openstack.org profile URL This may be possible with a little javascript hackery in gerrit - I'll see what I can do there. >> which IRC nick goes to which person. Does anyone know how to do >> that with the Foundation directory? > I don't think there's a lookup for that (might be worth logging a > feature request) Done: https://bugs.launchpad.net/openstack-org/+bug/1401264 Thanks for your time everyone.
Matthew From fungi at yuggoth.org Wed Dec 10 21:01:42 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Dec 2014 21:01:42 +0000 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: <54889368.8020606@openstack.org> References: <20141209125828.GA10149@redhat.redhat.com> <20141209140407.GO2497@yuggoth.org> <54874973.5060903@openstack.org> <1418169303-sup-199@fewbar.com> <54889368.8020606@openstack.org> Message-ID: <20141210210142.GY2497@yuggoth.org> On 2014-12-10 10:39:36 -0800 (-0800), Stefano Maffulli wrote: [...] > A better way to solve this would be to link directly gerrit IDs to > openstack.org profile URL but I have no idea how that would work. > Gerrit seems only to show full name and email address as a > fly-over, when you hover on the reviewer/owner name in the UI. I suppose a Javascript overlay to place REST query calls to the member system might be an option down the road, but there will be opposition to that as long as that system is not maintained by the Infra team. Also, keep in mind that it's possible to have a Gerrit account without having a Foundation account (though once Gerrit's authenticating via openstackid.org that should also be tractable). -- Jeremy Stanley From fungi at yuggoth.org Wed Dec 10 21:05:39 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Dec 2014 21:05:39 +0000 Subject: [openstack-dev] Lack of quota - security bug or not? In-Reply-To: <5488AE71.1080304@gmail.com> References: <5488A255.9030302@gmail.com> <5488AE71.1080304@gmail.com> Message-ID: <20141210210539.GZ2497@yuggoth.org> On 2014-12-10 15:34:57 -0500 (-0500), Jay Pipes wrote: > On 12/10/2014 02:43 PM, George Shuklin wrote: > > I have some small discussion in launchpad: is lack of a quota > > for unprivileged user counted as security bug (or at least as a > > bug)? > > > > If user can create 100500 objects in database via normal API and > > ops have no way to restrict this, is it OK for Openstack or not? 
> > That would be a major security bug. Please do file one and we'll > get on it immediately. I think the bigger question is whether the lack of a quota implementation for everything a tenant could ever possibly create is something we should have reported in secret, worked under embargo, backported to supported stable branches, and announced via high-profile security advisories once fixed. -- Jeremy Stanley From jaypipes at gmail.com Wed Dec 10 21:07:35 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 10 Dec 2014 16:07:35 -0500 Subject: [openstack-dev] Lack of quota - security bug or not? In-Reply-To: <20141210210539.GZ2497@yuggoth.org> References: <5488A255.9030302@gmail.com> <5488AE71.1080304@gmail.com> <20141210210539.GZ2497@yuggoth.org> Message-ID: <5488B617.3080604@gmail.com> On 12/10/2014 04:05 PM, Jeremy Stanley wrote: > On 2014-12-10 15:34:57 -0500 (-0500), Jay Pipes wrote: >> On 12/10/2014 02:43 PM, George Shuklin wrote: >>> I have some small discussion in launchpad: is lack of a quota >>> for unprivileged user counted as security bug (or at least as a >>> bug)? >>> >>> If user can create 100500 objects in database via normal API and >>> ops have no way to restrict this, is it OK for Openstack or not? >> >> That would be a major security bug. Please do file one and we'll >> get on it immediately. > > I think the bigger question is whether the lack of a quota > implementation for everything a tenant could ever possibly create is > something we should have reported in secret, worked under embargo, > backported to supported stable branches, and announced via > high-profile security advisories once fixed. Sure, fine. -jay From fungi at yuggoth.org Wed Dec 10 21:12:19 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Dec 2014 21:12:19 +0000 Subject: [openstack-dev] Lack of quota - security bug or not? 
In-Reply-To: <5488B617.3080604@gmail.com> References: <5488A255.9030302@gmail.com> <5488AE71.1080304@gmail.com> <20141210210539.GZ2497@yuggoth.org> <5488B617.3080604@gmail.com> Message-ID: <20141210211219.GA2497@yuggoth.org> On 2014-12-10 16:07:35 -0500 (-0500), Jay Pipes wrote: > On 12/10/2014 04:05 PM, Jeremy Stanley wrote: > > I think the bigger question is whether the lack of a quota > > implementation for everything a tenant could ever possibly > > create is something we should have reported in secret, worked > > under embargo, backported to supported stable branches, and > > announced via high-profile security advisories once fixed. > > Sure, fine. Any tips for how to implement new quota features in a way that the patches won't violate our stable backport policies? -- Jeremy Stanley From rbryant at redhat.com Wed Dec 10 21:21:47 2014 From: rbryant at redhat.com (Russell Bryant) Date: Wed, 10 Dec 2014 14:21:47 -0700 Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> Message-ID: <5488B96B.2070207@redhat.com> >> On Fri, Dec 5, 2014 at 8:23 AM, joehuang wrote: >>> Dear all & TC & PTL, >>> >>> In the 40 minutes cross-project summit session "Approaches for >>> scaling out"[1], almost 100 people attended the meeting, and the >>> conclusion is that cells can not cover the
use cases and >>> requirements which the OpenStack cascading solution[2] aims to >>> address, the background including use cases and requirements is >>> also described in the mail. I must admit that this was not the reaction I came away from the discussion with. There was a lot of confusion, and as we started looking closer, many (or perhaps most) people speaking up in the room did not agree that the requirements being stated are things we want to try to satisfy. On 12/05/2014 06:47 PM, joehuang wrote: > Hello, Davanum, > > Thanks for your reply. > > Cells can't meet the demand for the use cases and requirements described in the mail. You're right that cells doesn't solve all of the requirements you're discussing. Cells addresses scale in a region. My impression from the summit session and other discussions is that the scale issues addressed by cells are considered a priority, while the "global API" bits are not. >> 1. Use cases >> a). Vodafone use case[4](OpenStack summit speech video from 9'02" >> to 12'30" ), establishing globally addressable tenants which result >> in efficient services deployment. Keystone has been working on federated identity. That part makes sense, and is already well under way. >> b). Telefonica use case[5], create virtual DC( data center) cross >> multiple physical DCs with seamless experience. If we're talking about multiple DCs that are effectively local to each other with high bandwidth and low latency, that's one conversation. My impression is that you want to provide a single OpenStack API on top of globally distributed DCs. I honestly don't see that as a problem we should be trying to tackle. I'd rather continue to focus on making OpenStack work *really* well split into regions. I think some people are trying to use cells in a geographically distributed way, as well. I'm not sure that's a well understood or supported thing, though. Perhaps the folks working on the new version of cells can comment further. >> c).
ETSI NFV use cases[6], especially use case #1, #2, #3, #5, #6, #8. For NFV cloud, it's in nature the cloud will be distributed but >> inter-connected in many data centers. I'm afraid I don't understand this one. In many conversations about NFV, I haven't heard this before. >> >> 2.requirements >> a). The operator has multiple sites cloud; each site can use one or >> multiple vendor's OpenStack distributions. Is this a technical problem, or is a business problem of vendors not wanting to support a mixed environment that you're trying to work around with a technical solution? >> b). Each site with its own requirements and upgrade schedule while >> maintaining standard OpenStack API >> c). The multi-site cloud must provide unified resource management >> with global Open API exposed, for example create virtual DC cross >> multiple physical DCs with seamless experience. >> Although a proprietary orchestration layer could be developed for >> the multi-site cloud, but it's proprietary API in the north bound >> interface. The cloud operators want the ecosystem friendly global >> open API for the multi-site cloud for global access. I guess the question is, do we see a "global API" as something we want to accomplish. What you're talking about is huge, and I'm not even sure how you would expect it to work in some cases (like networking). In any case, to be as clear as possible, I'm not convinced this is something we should be working on. I'm going to need to see much more overwhelming support for the idea before helping to figure out any further steps. -- Russell Bryant From greg at greghaynes.net Wed Dec 10 21:36:19 2014 From: greg at greghaynes.net (Gregory Haynes) Date: Wed, 10 Dec 2014 13:36:19 -0800 Subject: [openstack-dev] [TripleO] Bug Squashing Day Message-ID: <1418247379.1887051.201395737.3992C87F@webmail.messagingengine.com> A couple weeks ago we discussed having a bug squash day. AFAICT we all forgot, and we still have a huge bug backlog.
I'd like to propose we make next Wed. (12/17, in whatever 24-hour window is Wed. in your time zone) a bug squashing day. Hopefully we can add this as an item to our weekly meeting on Tues. to help remind everyone the day before. Cheers, Greg -- Gregory Haynes greg at greghaynes.net From mikal at stillhq.com Wed Dec 10 21:41:49 2014 From: mikal at stillhq.com (Michael Still) Date: Thu, 11 Dec 2014 08:41:49 +1100 Subject: [openstack-dev] [nova] Kilo specs review day Message-ID: Hi, at the design summit we said that we would not approve specifications after the kilo-1 deadline, which is 18 December. Unfortunately, we've had a lot of specifications proposed this cycle (166 to my count), and haven't kept up with the review workload. Therefore, I propose that Friday this week be a specs review day. We need to burn down the queue of specs needing review, as well as abandoning those which aren't getting regular updates based on our review comments. I'd appreciate nova-specs-core doing reviews on Friday, but it's always super helpful when non-cores review as well. A +1 for a developer or operator gives nova-specs-core a good signal of what might be ready to approve, and that helps us optimize our review time. For reference, the specs to review may be found at: https://review.openstack.org/#/q/project:openstack/nova-specs+status:open,n,z Thanks heaps, Michael -- Rackspace Australia From pcm at cisco.com Wed Dec 10 21:59:41 2014 From: pcm at cisco.com (Paul Michali (pcm)) Date: Wed, 10 Dec 2014 21:59:41 +0000 Subject: [openstack-dev] [neutron] FYI: VPNaaS Sub-team meeting setup... Message-ID: <5D271EB0-1B97-4C57-9ED4-0D076E90F04C@cisco.com> I created a Wiki page entry and reserved the IRC openstack-meeting-3 channel for Tuesday's 1500 UTC. I'll flesh out the meeting page with info on Friday, when I return from the Neutron mid-cycle sprint. Let me know if you have any agenda topics (or edit the page directly). Regards, PCM (Paul Michali) MAIL ..... pcm at cisco.com IRC .....
pc_m (irc.freenode.com) TW ???... @pmichali GPG Key ? 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jp at jamezpolley.com Wed Dec 10 22:01:45 2014 From: jp at jamezpolley.com (James Polley) Date: Wed, 10 Dec 2014 23:01:45 +0100 Subject: [openstack-dev] [TripleO] Bug Squashing Day In-Reply-To: <1418247379.1887051.201395737.3992C87F@webmail.messagingengine.com> References: <1418247379.1887051.201395737.3992C87F@webmail.messagingengine.com> Message-ID: How do you find the Australian at the international online meeting? You don't, they'll find you and make loud pointed remarks about your lack of understanding of the ramifications of the world being round, the IDL, and so on. On Wed, Dec 10, 2014 at 10:36 PM, Gregory Haynes wrote: > A couple weeks ago we discussed having a bug squash day. AFAICT we all > forgot, and we still have a huge bug backlog. I'd like to propose we > make next Wed. (12/17, in whatever 24 window is Wed. in your time zone) > a bug squashing day. Hopefully we can add this as an item to our weekly > meeting on Tues. to help remind everyone the day before. > Luckily next week's meeting is the UTC1900 meeting - so for Europe that's Tuesday night, and for Christchurch and Sydney that's 9am and 6am respectively. The meeting we had earlier today was at 9am Wednesday CET (still time for a reminder) - 10pm/8pm Wednesday in Christchurch/Sydney. 
In any case, I've added a note to the agenda ( https://wiki.openstack.org/wiki/Meetings/TripleO#One-off_agenda_items) and linked it back to the original discussion ( http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-12-02-19.06.log.html#l-32 ) > > Cheers, > Greg > > -- > Gregory Haynes > greg at greghaynes.net > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe.gordon0 at gmail.com Wed Dec 10 22:13:00 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Wed, 10 Dec 2014 14:13:00 -0800 Subject: [openstack-dev] [nova] Kilo specs review day In-Reply-To: References: Message-ID: On Wed, Dec 10, 2014 at 1:41 PM, Michael Still wrote: > Hi, > > at the design summit we said that we would not approve specifications > after the kilo-1 deadline, which is 18 December. Unfortunately, we've > had a lot of specifications proposed this cycle (166 to my count), and > haven't kept up with the review workload. > > Therefore, I propose that Friday this week be a specs review day. We > need to burn down the queue of specs needing review, as well as > abandoning those which aren't getting regular updates based on our > review comments. > > I'd appreciate nova-specs-core doing reviews on Friday, but it's always > super helpful when non-cores review as well. A +1 for a developer or > operator gives nova-specs-core a good signal of what might be ready to > approve, and that helps us optimize our review time. > > For reference, the specs to review may be found at: > > > https://review.openstack.org/#/q/project:openstack/nova-specs+status:open,n,z ++, count me in!
> > Thanks heaps, > Michael > > -- > Rackspace Australia > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.carter at RACKSPACE.COM Wed Dec 10 22:16:54 2014 From: kevin.carter at RACKSPACE.COM (Kevin Carter) Date: Wed, 10 Dec 2014 22:16:54 +0000 Subject: [openstack-dev] Announcing the openstack ansible deployment repo Message-ID: <1FB68B52-F7C7-47E4-A44F-4846740B598A@rackspace.com> Hello all, The RCBOPS team at Rackspace has developed a repository of Ansible roles, playbooks, scripts, and libraries to deploy Openstack inside containers for production use. We've been running this deployment for a while now, and at the last OpenStack summit we discussed moving the repo into Stackforge as a community project. Today, I'm happy to announce that the "os-ansible-deployment" repo is online within Stackforge. This project is a work in progress and we welcome anyone who's interested in contributing. This project includes: * Ansible playbooks for deployment and orchestration of infrastructure resources. * Isolation of services using LXC containers. * Software deployed from source using python wheels. Where to find us: * IRC: #openstack-ansible * Launchpad: https://launchpad.net/openstack-ansible * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC. (The meeting schedule is not fully formalized and may be subject to change.) * Code: https://github.com/stackforge/os-ansible-deployment Thanks and we hope to see you in the channel. -- Kevin -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jp at jamezpolley.com Wed Dec 10 22:17:24 2014 From: jp at jamezpolley.com (James Polley) Date: Wed, 10 Dec 2014 23:17:24 +0100 Subject: [openstack-dev] [TripleO] Bug Squashing Day In-Reply-To: References: <1418247379.1887051.201395737.3992C87F@webmail.messagingengine.com> Message-ID: My previous email is a long-winded whinging-aussie way of saying that I think the bug-squashing day is a great idea, and I think Wednesday sounds like a great day for it. On Wed, Dec 10, 2014 at 11:01 PM, James Polley wrote: > How do you find the Australian at the international online meeting? > > You don't, they'll find you and make loud pointed remarks about your lack > of understanding of the ramifications of the world being round, the IDL, > and so on. > > On Wed, Dec 10, 2014 at 10:36 PM, Gregory Haynes > wrote: > >> A couple weeks ago we discussed having a bug squash day. AFAICT we all >> forgot, and we still have a huge bug backlog. I'd like to propose we >> make next Wed. (12/17, in whatever 24 window is Wed. in your time zone) >> a bug squashing day. Hopefully we can add this as an item to our weekly >> meeting on Tues. to help remind everyone the day before. >> > > Luckily next week's meeting is the UTC1900 meeting - so for Europe that's > Tuesday night, and for Christchurch and Sydney that's 9am and 6am > respectively. The meeting we had earlier today was at 9am Wednesday CET > (still time for a reminder) - 10pm/8pm Wednesday in Christchurch/Sydney. 
> > On any case, I've added a note to the agenda ( > https://wiki.openstack.org/wiki/Meetings/TripleO#One-off_agenda_items) > and linked it back to the original discussion ( > http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-12-02-19.06.log.html#l-32 > ) > >> >> Cheers, >> Greg >> >> -- >> Gregory Haynes >> greg at greghaynes.net >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean at dague.net Wed Dec 10 22:23:02 2014 From: sean at dague.net (Sean Dague) Date: Wed, 10 Dec 2014 17:23:02 -0500 Subject: [openstack-dev] [oslo] deprecation 'pattern' library?? In-Reply-To: <41F34BA2-6A3D-4AF9-B414-57B5526812D6@doughellmann.com> References: <41F34BA2-6A3D-4AF9-B414-57B5526812D6@doughellmann.com> Message-ID: <5488C7C6.2000508@dague.net> On 12/10/2014 04:00 PM, Doug Hellmann wrote: > > On Dec 10, 2014, at 3:26 PM, Joshua Harlow wrote: > >> Hi oslo folks (and others), >> >> I've recently put up a review for some common deprecation patterns: >> >> https://review.openstack.org/#/c/140119/ >> >> In summary, this is a common set of patterns that can be used by oslo libraries, other libraries... This is different from the versionutils one (which is more of a developer<->operator deprecation interaction) and is more focused on the developer <-> developer deprecation interaction (developers say using oslo libraries). >> >> Doug had the question about why not just put this out there on pypi with a useful name not so strongly connected to oslo; since that review is more of a common set of patterns that can be used by libraries outside openstack/oslo as well. There wasn't many/any similar libraries that I found (zope.deprecation is probably the closest) and twisted has something in-built to it that is something similar. 
So in order to avoid creating our own version of zope.deprecation in that review we might as well create a neat name that can be useful for oslo/openstack/elsewhere... >> >> Some ideas that were thrown around on IRC (check 'https://pypi.python.org/pypi/%s' % name for 404 to see if likely not registered): >> >> * debtcollector > > +1 > > I suspect we?ll want a minimal spec for the new lib, but let?s wait and hear what some of the other cores think. Not a core, but as someone that will be using it, that seems reasonable. The biggest issue with the deprecation patterns in projects is aggressive cleaning tended to clean out all the deprecations at the beginning of a cycle... and then all the deprecation assist code, as it was unused.... sad panda. Having it in a common lib as a bunch of decorators would be great. Especially if we can work out things like *not* spamming deprecation load warnings on every worker start. -Sean > > Doug > >> * bagman >> * deprecate >> * deprecation >> * baggage >> >> Any other neat names people can think about? >> >> Or in general any other comments/ideas about providing such a deprecation pattern library? 
>> >> -Josh >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Sean Dague http://dague.net From aleonhardt.py at gmail.com Wed Dec 10 22:28:08 2014 From: aleonhardt.py at gmail.com (Alex Leonhardt) Date: Wed, 10 Dec 2014 22:28:08 +0000 Subject: [openstack-dev] [Openstack-operators] Announcing the openstack ansible deployment repo References: <1FB68B52-F7C7-47E4-A44F-4846740B598A@rackspace.com> Message-ID: This is great, fwiw, I'd also suggest to look at saltstack also supporting and working on features for OpenStack. Cheers! Alex On Wed, 10 Dec 2014 22:18 Kevin Carter wrote: > Hello all, > > > The RCBOPS team at Rackspace has developed a repository of Ansible roles, > playbooks, scripts, and libraries to deploy Openstack inside containers for > production use. We?ve been running this deployment for a while now, > and at the last OpenStack summit we discussed moving the repo into > Stackforge as a community project. Today, I?m happy to announce that the > "os-ansible-deployment" repo is online within Stackforge. This project is a > work in progress and we welcome anyone who?s interested in contributing. > > This project includes: > * Ansible playbooks for deployment and orchestration of infrastructure > resources. > * Isolation of services using LXC containers. > * Software deployed from source using python wheels. > > Where to find us: > * IRC: #openstack-ansible > * Launchpad: https://launchpad.net/openstack-ansible > * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC. > (The meeting schedule is not fully formalized and may be subject to change.) 
> * Code: https://github.com/stackforge/os-ansible-deployment > > Thanks and we hope to see you in the channel. > > ? > > Kevin > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harlowja at outlook.com Wed Dec 10 22:49:10 2014 From: harlowja at outlook.com (Joshua Harlow) Date: Wed, 10 Dec 2014 14:49:10 -0800 Subject: [openstack-dev] [oslo] deprecation 'pattern' library?? In-Reply-To: <5488C7C6.2000508@dague.net> References: <41F34BA2-6A3D-4AF9-B414-57B5526812D6@doughellmann.com> <5488C7C6.2000508@dague.net> Message-ID: Sean Dague wrote: > On 12/10/2014 04:00 PM, Doug Hellmann wrote: >> On Dec 10, 2014, at 3:26 PM, Joshua Harlow wrote: >> >>> Hi oslo folks (and others), >>> >>> I've recently put up a review for some common deprecation patterns: >>> >>> https://review.openstack.org/#/c/140119/ >>> >>> In summary, this is a common set of patterns that can be used by oslo libraries, other libraries... This is different from the versionutils one (which is more of a developer<->operator deprecation interaction) and is more focused on the developer<-> developer deprecation interaction (developers say using oslo libraries). >>> >>> Doug had the question about why not just put this out there on pypi with a useful name not so strongly connected to oslo; since that review is more of a common set of patterns that can be used by libraries outside openstack/oslo as well. There wasn't many/any similar libraries that I found (zope.deprecation is probably the closest) and twisted has something in-built to it that is something similar. So in order to avoid creating our own version of zope.deprecation in that review we might as well create a neat name that can be useful for oslo/openstack/elsewhere... 
>>> Some ideas that were thrown around on IRC (check 'https://pypi.python.org/pypi/%s' % name for 404 to see if likely not registered): >>> >>> * debtcollector >> +1 >> >> I suspect we'll want a minimal spec for the new lib, but let's wait and hear what some of the other cores think. > > Not a core, but as someone that will be using it, that seems reasonable. > > The biggest issue with the deprecation patterns in projects is > aggressive cleaning tended to clean out all the deprecations at the > beginning of a cycle... and then all the deprecation assist code, as it > was unused.... sad panda. > > Having it in a common lib as a bunch of decorators would be great. > Especially if we can work out things like *not* spamming deprecation > load warnings on every worker start. We should be able to adjust the deprecation warnings here. Although I'd almost want these kinds of warnings to not occur/appear at worker start since at that point the operator can't do anything about them... An idea was to have the jenkins/gerrit/zuul logs have these deprecation warnings turned on (perhaps in a blinky red/green color) to have them appear at development time (since these would be targeted to features deprecated that are only really relevant to developers, not operators). Once released then they can just stay off (which I believe is the python default[1] to turn these off unless '-Wonce' or '-Wall' is passed to the worker/runtime on python startup)... https://docs.python.org/2/using/cmdline.html#cmdoption-W (DeprecationWarning and its descendants are ignored by default since python 2.7+) -Josh > > -Sean > >> Doug >> >>> * bagman >>> * deprecate >>> * deprecation >>> * baggage >>> >>> Any other neat names people can think about? >>> >>> Or in general any other comments/ideas about providing such a deprecation pattern library?
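Josh's point about warning filters can be demonstrated directly: as he notes, DeprecationWarning has been ignored by Python's default filters since 2.7, but a process (such as a CI harness) can opt back in programmatically with `warnings.simplefilter('always')`, the in-process equivalent of passing `-Wall` at interpreter startup. A rough sketch, with function names invented for the example:

```python
import warnings


def old_api():
    """A stand-in for some deprecated library entry point."""
    warnings.warn("old_api() is deprecated", DeprecationWarning, stacklevel=2)
    return "ok"


def collect_deprecations(func):
    """Run ``func`` with DeprecationWarning always enabled (like -Wall),
    returning the result plus any deprecation messages that fired."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always", DeprecationWarning)
        result = func()
    return result, [str(w.message) for w in caught
                    if issubclass(w.category, DeprecationWarning)]


# A gate job could run code under this harness and print the collected
# messages in its logs, while production workers keep the default filters.
result, deprecations = collect_deprecations(old_api)
```

This keeps the warnings visible where Josh wants them (development and gate logs) without spamming operators at worker start.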
>>> >>> -Josh >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > From mestery at mestery.com Wed Dec 10 23:10:33 2014 From: mestery at mestery.com (Kyle Mestery) Date: Wed, 10 Dec 2014 16:10:33 -0700 Subject: [openstack-dev] [neutron] Services are now split out and neutron is open for commits! Message-ID: Folks, just a heads up that we have completed splitting out the services (FWaaS, LBaaS, and VPNaaS) into separate repositories. [1] [2] [3]. This was all done in accordance with the spec approved here [4]. Thanks to all involved, but a special thanks to Doug and Anita, as well as infra. Without all of their work and help, this wouldn't have been possible! Neutron and the services repositories are now open for merges again. We're going to be landing some major L3 agent refactoring across the 4 repositories in the next four days, look for Carl to be leading that work with the L3 team. In the meantime, please report any issues you have in launchpad [5] as bugs, and find people in #openstack-neutron or send an email. We've verified things come up and all the tempest and API tests for basic neutron work fine. In the coming week, we'll be getting all the tests working for the services repositories. Medium term, we need to also move all the advanced services tempest tests out of tempest and into the respective repositories. We also need to beef these tests up considerably, so if you want to help out on a critical project for Neutron, please let me know. Thanks!
Kyle [1] http://git.openstack.org/cgit/openstack/neutron-fwaas [2] http://git.openstack.org/cgit/openstack/neutron-lbaas [3] http://git.openstack.org/cgit/openstack/neutron-vpnaas [4] http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst [5] https://bugs.launchpad.net/neutron -------------- next part -------------- An HTML attachment was scrubbed... URL: From edgar.magana at workday.com Wed Dec 10 23:14:26 2014 From: edgar.magana at workday.com (Edgar Magana) Date: Wed, 10 Dec 2014 23:14:26 +0000 Subject: [openstack-dev] [neutron] Services are now split out and neutron is open for commits! In-Reply-To: References: Message-ID: Great Work Team! Congratulations.. Edgar From: Kyle Mestery > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, December 10, 2014 at 3:10 PM To: "OpenStack Development Mailing List (not for usage questions)" > Subject: [openstack-dev] [neutron] Services are now split out and neutron is open for commits! Folks, just a heads up that we have completed splitting out the services (FWaaS, LBaaS, and VPNaaS) into separate repositories [1] [2] [3]. This was all done in accordance with the spec approved here [4]. Thanks to all involved, but a special thanks to Doug and Anita, as well as infra. Without all of their work and help, this wouldn't have been possible! Neutron and the services repositories are now open for merges again. We're going to be landing some major L3 agent refactoring across the 4 repositories in the next four days; look for Carl to be leading that work with the L3 team. In the meantime, please report any issues you have in launchpad [5] as bugs, and find people in #openstack-neutron or send an email. We've verified things come up and all the tempest and API tests for basic neutron work fine. In the coming week, we'll be getting all the tests working for the services repositories.
Medium term, we need to also move all the advanced services tempest tests out of tempest and into the respective repositories. We also need to beef these tests up considerably, so if you want to help out on a critical project for Neutron, please let me know. Thanks! Kyle [1] http://git.openstack.org/cgit/openstack/neutron-fwaas [2] http://git.openstack.org/cgit/openstack/neutron-lbaas [3] http://git.openstack.org/cgit/openstack/neutron-vpnaas [4] http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst [5] https://bugs.launchpad.net/neutron -------------- next part -------------- An HTML attachment was scrubbed... URL: From mestery at mestery.com Wed Dec 10 23:14:57 2014 From: mestery at mestery.com (Kyle Mestery) Date: Wed, 10 Dec 2014 16:14:57 -0700 Subject: [openstack-dev] [neutron] mid-cycle update Message-ID: The Neutron mid-cycle [1] is now complete, and I wanted to let everyone know how it went. Thanks to all who attended, we got a lot done. I admit to being skeptical of mid-cycles, especially given the cross project meeting a month back on the topic. But this particular one was very useful. We had defined tasks to complete, and we made a lot of progress! What we accomplished was: 1. We finished splitting out neutron advanced services and got things working again post-split. 2. We had a team refactoring the L3 agent who now have a batch of commits to merge post services-split. 3. We worked on refactoring the core API and WSGI layer, and produced multiple specs on this topic and some POC code. 4. We had someone working on IPV6 tempest tests for the gate who made good progress here. 5. We had multiple people working on plugin decomposition who are close to getting this working. Overall, it was a great sprint! Thanks to Adobe for hosting; Utah is a beautiful state. Looking forward to the rest of Kilo!
Kyle [1] https://wiki.openstack.org/wiki/Sprints/NeutronKiloSprint -------------- next part -------------- An HTML attachment was scrubbed... URL: From tqtran at us.ibm.com Wed Dec 10 23:37:12 2014 From: tqtran at us.ibm.com (Thai Q Tran) Date: Wed, 10 Dec 2014 16:37:12 -0700 Subject: [openstack-dev] Moving _conf and _scripts to dashboard In-Reply-To: References: , , <547DD08A.7000402@redhat.com> Message-ID: An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Wed Dec 10 23:44:50 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Wed, 10 Dec 2014 23:44:50 +0000 Subject: [openstack-dev] Moving _conf and _scripts to dashboard References: <547DD08A.7000402@redhat.com> Message-ID: +1 to moving application configuration to the application, out of the library. Richard On Thu Dec 11 2014 at 10:38:20 AM Thai Q Tran wrote: > The way we are structuring our javascripts today is complicated. All of > our static javascripts reside in /horizon/static and are imported through > _conf.html and _scripts.html. Notice that there are already some panel > specific javascripts like: horizon.images.js, horizon.instances.js, > horizon.users.js. They do not belong in horizon. They belong in > openstack_dashboard because they are specific to a panel. > > Why am I raising this issue now? In Angular, we need controllers written > in javascript for each panel. As we angularize more and more panels, we > need to store them in a way that make sense. To me, it make sense for us to > move _conf and _scripts to openstack_dashboard. Or if this is not possible, > then provide a mechanism to override them in openstack_dashboard. > > Thoughts? > Thai > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vishvananda at gmail.com Thu Dec 11 00:06:12 2014 From: vishvananda at gmail.com (Vishvananda Ishaya) Date: Wed, 10 Dec 2014 16:06:12 -0800 Subject: [openstack-dev] [Nova] Spring cleaning nova-core In-Reply-To: References: Message-ID: On Dec 4, 2014, at 4:05 PM, Michael Still wrote: > One of the things that happens over time is that some of our core > reviewers move on to other projects. This is a normal and healthy > thing, especially as nova continues to spin out projects into other > parts of OpenStack. > > However, it is important that our core reviewers be active, as it > keeps them up to date with the current ways we approach development in > Nova. I am therefore removing some no longer sufficiently active cores > from the nova-core group. > > I'd like to thank the following people for their contributions over the years: > > * cbehrens: Chris Behrens > * vishvananda: Vishvananda Ishaya Thank you Michael. I knew this would happen eventually. I am around and I still do reviews from time to time, so everyone feel free to ping me on irc if there are specific reviews that need my historical knowledge! Vish > * dan-prince: Dan Prince > * belliott: Brian Elliott > * p-draigbrady: Padraig Brady > > I'd love to see any of these cores return if they find their available > time for code reviews increases. > > Thanks, > Michael > > -- > Rackspace Australia > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From sgordon at redhat.com Thu Dec 11 00:13:37 2014 From: sgordon at redhat.com (Steve Gordon) Date: Wed, 10 Dec 2014 19:13:37 -0500 (EST) Subject: [openstack-dev] [Telco][NFV] Meeting Minutes and Logs from Dec.
10 In-Reply-To: <13832748.19628.1418256602658.JavaMail.sgordon@localhost.localdomain> Message-ID: <20948099.19636.1418256801529.JavaMail.sgordon@localhost.localdomain> Hi all, Minutes and logs from todays OpenStack Telco Working Group meeting are available at the locations below: * Meeting ended Wed Dec 10 22:59:12 2014 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) * Minutes: http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-10-22.00.html * Minutes (text): http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-10-22.00.txt * Log: http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-10-22.00.log.html Thanks, Steve From mikal at stillhq.com Thu Dec 11 00:41:45 2014 From: mikal at stillhq.com (Michael Still) Date: Thu, 11 Dec 2014 11:41:45 +1100 Subject: [openstack-dev] [neutron] mid-cycle update In-Reply-To: References: Message-ID: On Thu, Dec 11, 2014 at 10:14 AM, Kyle Mestery wrote: > The Neutron mid-cycle [1] is now complete, I wanted to let everyone know how > it went. Thanks to all who attended, we got a lot done. I admit to being > skeptical of mid-cycles, especially given the cross project meeting a month > back on the topic. But this particular one was very useful. We had defined > tasks to complete, and we made a lot of progress! What we accomplished was: > > 1. We finished splitting out neutron advanced services and get things > working again post-split. > 2. We had a team refactoring the L3 agent who now have a batch of commits to > merge post services-split. > 3. We worked on refactoring the core API and WSGI layer, and produced > multiple specs on this topic and some POC code. > 4. We had someone working on IPV6 tempest tests for the gate who made good > progress here. > 5. We had multiple people working on plugin decomposition who are close to > getting this working. This all sounds like good work. 
Did you manage to progress the nova-network to neutron migration tasks as well? > Overall, it was a great sprint! Thanks to Adobe for hosting, Utah is a > beautiful state. > > Looking forward to the rest of Kilo! > > Kyle > > [1] https://wiki.openstack.org/wiki/Sprints/NeutronKiloSprint Thanks, Michael -- Rackspace Australia From henry4hly at gmail.com Thu Dec 11 00:49:51 2014 From: henry4hly at gmail.com (henry hly) Date: Thu, 11 Dec 2014 08:49:51 +0800 Subject: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup? In-Reply-To: References: <87ppc090ne.fsf@metaswitch.com> <87d27z8kxj.fsf@metaswitch.com> Message-ID: On Thu, Dec 11, 2014 at 12:36 AM, Kevin Benton wrote: > What would the port binding operation do in this case? Just mark the port as > bound and nothing else? > Also to set the vif type to tap, but don't care what the real backend switch is. From john.griffith8 at gmail.com Thu Dec 11 01:01:33 2014 From: john.griffith8 at gmail.com (John Griffith) Date: Wed, 10 Dec 2014 18:01:33 -0700 Subject: [openstack-dev] Announcing the openstack ansible deployment repo In-Reply-To: <1FB68B52-F7C7-47E4-A44F-4846740B598A@rackspace.com> References: <1FB68B52-F7C7-47E4-A44F-4846740B598A@rackspace.com> Message-ID: On Wed, Dec 10, 2014 at 3:16 PM, Kevin Carter wrote: > Hello all, > > > The RCBOPS team at Rackspace has developed a repository of Ansible roles, playbooks, scripts, and libraries to deploy Openstack inside containers for production use. We've been running this deployment for a while now, > and at the last OpenStack summit we discussed moving the repo into Stackforge as a community project. Today, I'm happy to announce that the "os-ansible-deployment" repo is online within Stackforge. This project is a work in progress and we welcome anyone who's interested in contributing. > > This project includes: > * Ansible playbooks for deployment and orchestration of infrastructure resources.
> * Isolation of services using LXC containers. > * Software deployed from source using python wheels. > > Where to find us: > * IRC: #openstack-ansible > * Launchpad: https://launchpad.net/openstack-ansible > * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC. (The meeting schedule is not fully formalized and may be subject to change.) > * Code: https://github.com/stackforge/os-ansible-deployment > > Thanks and we hope to see you in the channel. > > ? > > Kevin > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Hey Kevin, Really cool! I have some questions though, I've been trying to do this exact sort of thing on my own with Cinder but can't get iscsi daemon running in a container. In fact I run into a few weird networking problems that I haven't sorted, but the storage piece seems to be a big stumbling point for me even when I cut some of the extra stuff I was trying to do with devstack out of it. Anyway, are you saying that this enables running the reference LVM impl c-vol service in a container as well? I'd love to hear/see more and play around with this. Thanks, John From stefano at openstack.org Thu Dec 11 01:08:12 2014 From: stefano at openstack.org (Stefano Maffulli) Date: Wed, 10 Dec 2014 17:08:12 -0800 Subject: [openstack-dev] [nova] Kilo specs review day In-Reply-To: References: Message-ID: <5488EE7C.5080902@openstack.org> On 12/10/2014 01:41 PM, Michael Still wrote: > at the design summit we said that we would not approve specifications > after the kilo-1 deadline, which is 18 December. Unfortunately, we've > had a lot of specifications proposed this cycle (166 to my count), and > haven't kept up with the review workload. Great idea, mikal, thanks for raising this topic. I have asked the Product and Win The Enterprise working groups to help out, too.
cheers, stef From oomichi at mxs.nes.nec.co.jp Thu Dec 11 00:42:37 2014 From: oomichi at mxs.nes.nec.co.jp (Kenichi Oomichi) Date: Thu, 11 Dec 2014 00:42:37 +0000 Subject: [openstack-dev] [nova] Kilo specs review day In-Reply-To: References: Message-ID: <5488E87D.8070502@mxs.nes.nec.co.jp> On 2014/12/11 6:41, Michael Still wrote: > Hi, > > at the design summit we said that we would not approve specifications > after the kilo-1 deadline, which is 18 December. Unfortunately, we've > had a lot of specifications proposed this cycle (166 to my count), and > haven't kept up with the review workload. > > Therefore, I propose that Friday this week be a specs review day. We > need to burn down the queue of specs needing review, as well as > abandoning those which aren't getting regular updates based on our > review comments. > > I'd appreciate nova-specs-core doing reviews on Friday, but it's always > super helpful when non-cores review as well. A +1 for a developer or > operator gives nova-specs-core a good signal of what might be ready to > approve, and that helps us optimize our review time. > > For reference, the specs to review may be found at: > > https://review.openstack.org/#/q/project:openstack/nova-specs+status:open,n,z +1 for the review day, and the list is very long. Thanks Ken'ichi Ohmichi From henry4hly at gmail.com Thu Dec 11 01:37:31 2014 From: henry4hly at gmail.com (henry hly) Date: Thu, 11 Dec 2014 09:37:31 +0800 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> Message-ID: On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells wrote: > On 10 December 2014 at 01:31, Daniel P. Berrange > wrote: >> >> >> So the problem of Nova review bandwidth is a constant problem across all >> areas of the code. We need to solve this problem for the team as a whole >> in a much broader fashion than just for people writing VIF drivers.
The >> VIF drivers are really small pieces of code that should be straightforward >> to review & get merged in any release cycle in which they are proposed. >> I think we need to make sure that we focus our energy on doing this and >> not ignoring the problem by breaking stuff off out of tree. > > > The problem is that we effectively prevent running an out of tree Neutron > driver (which *is* perfectly legitimate) if it uses a VIF plugging mechanism > that isn't in Nova, as we can't use out of tree code and we won't accept in > code ones for out of tree drivers. The question is, do we really need such flexibility for so many nova vif types? I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER are good examples: nova shouldn't know too many details about the switch backend; it should only care about the VIF itself, while how the VIF is plugged into the switch belongs to the Neutron half. However, I'm not saying we should move the existing vif drivers out; those open backends have been used widely. But from now on the tap and vhostuser modes should be encouraged: one common vif driver serving many long-tail backends. Best Regards, Henry > This will get more confusing as *all* of > the Neutron drivers and plugins move out of the tree, as that constraint > becomes essentially arbitrary. > > Your issue is one of testing. Is there any way we could set up a better > testing framework for VIF drivers where Nova interacts with something to > test the plugging mechanism actually passes traffic? I don't believe > there's any specific limitation on it being *Neutron* that uses the plugging > interaction. > -- > Ian.
> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From seanroberts66 at gmail.com Thu Dec 11 01:52:09 2014 From: seanroberts66 at gmail.com (Sean Roberts) Date: Wed, 10 Dec 2014 17:52:09 -0800 Subject: [openstack-dev] People of OpenStack (and their IRC nicks) In-Reply-To: References: <20141209125828.GA10149@redhat.redhat.com> <20141209140407.GO2497@yuggoth.org> <54874973.5060903@openstack.org> <1418169303-sup-199@fewbar.com> <54889368.8020606@openstack.org> Message-ID: I re-noticed that the free form projects involved field in doesn't show up on the personal wiki page. Some weird people like me do more "other" than normal stuff. It would be nice to add that free form field, so others know what us unusuals do too for elections and such. ~sean On Dec 10, 2014, at 1:01 PM, Matthew Gilliard wrote: >>> I'll take the >>> old content out of https://wiki.openstack.org/wiki/People and leave a >>> message directing people where to look. >> Yes, please, let me know if you need help. > > Done. > >> to link >> directly gerrit IDs to openstack.org profile URL > > This may be possible with a little javascript hackery in gerrit - I'll > see what I can do there. > >>> which IRC nick goes to which person. Does anyone know how to do >>> that with the Foundation directory? >> I don't think there's a lookup for that (might be worth logging a >> feature request) > > Done: https://bugs.launchpad.net/openstack-org/+bug/1401264 > > Thanks for your time everyone. 
> > > Matthew > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From xianchaobo at huawei.com Thu Dec 11 02:09:03 2014 From: xianchaobo at huawei.com (xianchaobo) Date: Thu, 11 Dec 2014 02:09:03 +0000 Subject: [openstack-dev] [Ironic] Some questions about Ironic service In-Reply-To: References: Message-ID: Hi,Fox Kevin M Thanks for your help. Also,I want to know whether these features will be implemented in Ironic? Do we have a plan to implement them? Thanks Xianchaobo -----????----- ???: openstack-dev-request at lists.openstack.org [mailto:openstack-dev-request at lists.openstack.org] ????: 2014?12?9? 18:36 ???: openstack-dev at lists.openstack.org ??: OpenStack-dev Digest, Vol 32, Issue 25 Send OpenStack-dev mailing list submissions to openstack-dev at lists.openstack.org To subscribe or unsubscribe via the World Wide Web, visit http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev or, via email, send a message with subject or body 'help' to openstack-dev-request at lists.openstack.org You can reach the person managing the list at openstack-dev-owner at lists.openstack.org When replying, please edit your Subject line so it is more specific than "Re: Contents of OpenStack-dev digest..." Today's Topics: 1. [Mistral] Query on creating multiple resources (Sushma Korati) 2. Re: [neutron] Changes to the core team (trinath.somanchi at freescale.com) 3. [Neutron][OVS] ovs-ofctl-to-python blueprint (YAMAMOTO Takashi) 4. Re: [api] Using query string or request body to pass parameter (Alex Xu) 5. [Ironic] Some questions about Ironic service (xianchaobo) 6. [Ironic] How to get past pxelinux.0 bootloader? (Peeyush Gupta) 7. Re: [neutron] Changes to the core team (Gariganti, Sudhakar Babu) 8. Re: [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this. (Samuel Bercovici) 9. 
[Mistral] Action context passed to all action executions by default (W Chan) 10. Cross-Project meeting, Tue December 9th, 21:00 UTC (Thierry Carrez) 11. Re: [Mistral] Query on creating multiple resources (Renat Akhmerov) 12. Re: [Mistral] Query on creating multiple resources (Renat Akhmerov) 13. Re: [Mistral] Event Subscription (Renat Akhmerov) 14. Re: [Mistral] Action context passed to all action executions by default (Renat Akhmerov) 15. Re: Cross-Project meeting, Tue December 9th, 21:00 UTC (joehuang) 16. Re: [Ironic] Some questions about Ironic service (Fox, Kevin M) 17. [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver (Maxime Leroy) 18. Re: [Ironic] How to get past pxelinux.0 bootloader? (Fox, Kevin M) 19. Re: [Ironic] Fuel agent proposal (Roman Prykhodchenko) 20. Re: [Ironic] How to get past pxelinux.0 bootloader? (Peeyush Gupta) 21. Re: Cross-Project meeting, Tue December 9th, 21:00 UTC (Thierry Carrez) 22. [neutron] mid-cycle "hot reviews" (Miguel Ángel Ajo) 23. Re: [horizon] REST and Django (Tihomir Trifonov) ---------------------------------------------------------------------- Message: 1 Date: Tue, 9 Dec 2014 05:57:35 +0000 From: Sushma Korati To: "gokrokvertskhov at mirantis.com" , "zbitter at redhat.com" Cc: "openstack-dev at lists.openstack.org" Subject: [openstack-dev] [Mistral] Query on creating multiple resources Message-ID: <1418105060569.62922 at persistent.com> Content-Type: text/plain; charset="iso-8859-1" Hi, Thank you guys. Yes I am able to do this with heat, but I faced issues while trying the same with mistral. As suggested will try with the latest mistral branch. Thank you once again.
Regards, Sushma ________________________________ From: Georgy Okrokvertskhov [mailto:gokrokvertskhov at mirantis.com] Sent: Tuesday, December 09, 2014 6:07 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Mistral] Query on creating multiple resources Hi Sushma, Did you explore Heat templates? As Zane mentioned you can do this via Heat template without writing any workflows. Do you have any specific use cases which you can't solve with Heat template? Create VM workflow was a demo example. Mistral potentially can be used by Heat or other orchestration tools to do actual interaction with API, but for user it might be easier to use Heat functionality. Thanks, Georgy On Mon, Dec 8, 2014 at 7:54 AM, Nikolay Makhotkin > wrote: Hi, Sushma! Can we create multiple resources using a single task, like multiple keypairs or security-groups or networks etc? Yes, we can. This feature is in the development now and it is considered as experimental - https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections Just clone the last master branch from mistral. You can specify "for-each" task property and provide the array of data to your workflow:
--------------------
version: '2.0'

name: secgroup_actions

workflows:
  create_security_group:
    type: direct
    input:
      - array_with_names_and_descriptions
    tasks:
      create_secgroups:
        for-each:
          data: $.array_with_names_and_descriptions
        action: nova.security_groups_create name={$.data.name} description={$.data.description}
--------------------
On Mon, Dec 8, 2014 at 6:36 PM, Zane Bitter > wrote: On 08/12/14 09:41, Sushma Korati wrote: Can we create multiple resources using a single task, like multiple keypairs or security-groups or networks etc? Define them in a Heat template and create the Heat stack as a single task.
- ZB _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best Regards, Nikolay _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 DISCLAIMER ========== This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails. -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Tue, 9 Dec 2014 05:57:44 +0000 From: "trinath.somanchi at freescale.com" To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [neutron] Changes to the core team Message-ID: Content-Type: text/plain; charset="utf-8" Congratulation Kevin and Henry ? -- Trinath Somanchi - B39208 trinath.somanchi at freescale.com | extn: 4048 From: Kyle Mestery [mailto:mestery at mestery.com] Sent: Monday, December 08, 2014 11:32 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] Changes to the core team On Tue, Dec 2, 2014 at 9:59 AM, Kyle Mestery > wrote: Now that we're in the thick of working hard on Kilo deliverables, I'd like to make some changes to the neutron core team. 
Reviews are the most important part of being a core reviewer, so we need to ensure cores are doing reviews. The stats for the 180 day period [1] indicate some changes are needed for cores who are no longer reviewing. First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from neutron-core. Bob and Nachi have been core members for a while now. They have contributed to Neutron over the years in reviews, code and leading sub-teams. I'd like to thank them for all that they have done over the years. I'd also like to propose that should they start reviewing more going forward the core team looks to fast track them back into neutron-core. But for now, their review stats place them below the rest of the team for 180 days. As part of the changes, I'd also like to propose two new members to neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have been very active in reviews, meetings, and code for a while now. Henry lead the DB team which fixed Neutron DB migrations during Juno. Kevin has been actively working across all of Neutron, he's done some great work on security fixes and stability fixes in particular. Their comments in reviews are insightful and they have helped to onboard new reviewers and taken the time to work with people on their patches. Existing neutron cores, please vote +1/-1 for the addition of Henry and Kevin to the core team. Enough time has passed now, and Kevin and Henry have received enough +1 votes. So I'd like to welcome them to the core team! Thanks, Kyle Thanks! Kyle [1] http://stackalytics.com/report/contribution/neutron-group/180 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: ------------------------------ Message: 3 Date: Tue, 9 Dec 2014 14:58:04 +0900 (JST) From: yamamoto at valinux.co.jp (YAMAMOTO Takashi) To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [Neutron][OVS] ovs-ofctl-to-python blueprint Message-ID: <20141209055804.536E57094A at kuma.localdomain> Content-Type: Text/Plain; charset=us-ascii hi, here's a blueprint to make OVS agent use Ryu to talk with OVS. https://blueprints.launchpad.net/neutron/+spec/ovs-ofctl-to-python https://review.openstack.org/#/c/138980/ (kilo spec) given that ML2/OVS is one of the most popular plugins and the proposal has a few possible controversial points, i want to ask wider opinions. - it introduces a new requirement for OVS agent. (Ryu) - it makes OVS agent require newer OVS version than it currently does. - what to do for xenapi support is still under investigation/research. - possible security impact. please comment on gerrit if you have any opinions. thank you. YAMAMOTO Takashi ------------------------------ Message: 4 Date: Tue, 9 Dec 2014 14:28:33 +0800 From: Alex Xu To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [api] Using query string or request body to pass parameter Message-ID: Content-Type: text/plain; charset="utf-8" Kevin, thanks for the info! I agree with you: the RFC is the authority, and using a payload with DELETE isn't a good approach. 2014-12-09 7:58 GMT+08:00 Kevin L. Mitchell : > On Tue, 2014-12-09 at 07:38 +0800, Alex Xu wrote: > > Not sure all, nova is limited > > at > https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L79 > > That under our control. > > It is, but the client frameworks aren't, and some of them prohibit > sending a body with a DELETE request. Further, RFC7231 has this to say
Further, RFC7231 has this to say > about DELETE request bodies: > > A payload within a DELETE request message has no defined semantics; > sending a payload body on a DELETE request might cause some > existing > implementations to reject the request. > > (?4.3.5) > > I think we have to conclude that, if we need a request body, we cannot > use the DELETE method. We can modify the operation, such as setting a > "force" flag, with a query parameter on the URI, but a request body > should be considered out of bounds with respect to DELETE. > > > Maybe not just ask question for delete, also for other method. > > > > 2014-12-09 1:11 GMT+08:00 Kevin L. Mitchell < > kevin.mitchell at rackspace.com>: > > On Mon, 2014-12-08 at 14:07 +0800, Eli Qiao wrote: > > > I wonder if we can use body in delete, currently , there isn't > any > > > case used in v2/v3 api. > > > > No, many frameworks raise an error if you try to include a body > with a > > DELETE request. > > -- > > Kevin L. Mitchell > > Rackspace > > -- > Kevin L. Mitchell > Rackspace > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 5 Date: Tue, 9 Dec 2014 06:29:50 +0000 From: xianchaobo To: "openstack-dev at lists.openstack.org" Cc: "Luohao \(brian\)" Subject: [openstack-dev] [Ironic] Some questions about Ironic service Message-ID: Content-Type: text/plain; charset="us-ascii" Hello, all I'm trying to install and configure Ironic service, something confused me. I create two neutron networks, public network and private network. Private network is used to deploy physical machines Public network is used to provide floating ip. (1) Private network type can be VLAN or VXLAN? 
(In the install guide, the network type is flat.) (2) Can the network of deployed physical machines be managed by neutron? (3) Can different tenants have their own networks to manage physical machines? (4) Does Ironic provide some mechanism for deployed physical machines to use storage such as shared storage or cinder volumes? Thanks, XianChaobo -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 6 Date: Tue, 09 Dec 2014 12:25:39 +0530 From: Peeyush Gupta To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader? Message-ID: <54869CEB.4040602 at linux.vnet.ibm.com> Content-Type: text/plain; charset=ISO-8859-1 Hi all, So, I have set up a devstack ironic setup for baremetal deployment. I have been able to deploy a baremetal node successfully using the pxe_ipmitool driver. Now, I am trying to boot a server where I already have a bootloader, i.e. I don't need pxelinux to go and fetch kernel and initrd images for me. I want to transfer them directly. I checked out the code and figured out that there are dhcp opts available that are modified using pxe_utils.py, but changing it didn't help. Then I moved to ironic.conf, but here also I only see an option to add pxe_bootfile_name, which is exactly what I want to avoid. Can anyone please help me with this situation? I don't want to go through the pxelinux.0 bootloader, I just want to transfer the kernel and initrd images directly. Thanks. -- Peeyush Gupta gpeeyush at linux.vnet.ibm.com ------------------------------ Message: 7 Date: Tue, 9 Dec 2014 07:07:20 +0000 From: "Gariganti, Sudhakar Babu" To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [neutron] Changes to the core team Message-ID: Content-Type: text/plain; charset="utf-8" Congrats Kevin and Henry.
From: Kyle Mestery [mailto:mestery at mestery.com] Sent: Monday, December 08, 2014 11:32 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] Changes to the core team On Tue, Dec 2, 2014 at 9:59 AM, Kyle Mestery > wrote: Now that we're in the thick of working hard on Kilo deliverables, I'd like to make some changes to the neutron core team. Reviews are the most important part of being a core reviewer, so we need to ensure cores are doing reviews. The stats for the 180 day period [1] indicate some changes are needed for cores who are no longer reviewing. First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from neutron-core. Bob and Nachi have been core members for a while now. They have contributed to Neutron over the years in reviews, code and leading sub-teams. I'd like to thank them for all that they have done over the years. I'd also like to propose that should they start reviewing more going forward the core team looks to fast track them back into neutron-core. But for now, their review stats place them below the rest of the team for 180 days. As part of the changes, I'd also like to propose two new members to neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have been very active in reviews, meetings, and code for a while now. Henry lead the DB team which fixed Neutron DB migrations during Juno. Kevin has been actively working across all of Neutron, he's done some great work on security fixes and stability fixes in particular. Their comments in reviews are insightful and they have helped to onboard new reviewers and taken the time to work with people on their patches. Existing neutron cores, please vote +1/-1 for the addition of Henry and Kevin to the core team. Enough time has passed now, and Kevin and Henry have received enough +1 votes. So I'd like to welcome them to the core team! Thanks, Kyle Thanks! 
Kyle

[1] http://stackalytics.com/report/contribution/neutron-group/180

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

------------------------------

Message: 8
Date: Tue, 9 Dec 2014 07:28:03 +0000
From: Samuel Bercovici
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hi,

I agree that the most important thing is to conclude how status properties are being managed and handled so it will remain consistent as we move along. I am fine with starting with a simple model and expanding as needed.

The L7 implementation is ready and waiting for the rest of the model to get in, so pool sharing under a listener is something that we should solve now. I think that pool sharing under listeners connected to the same LB is more common than what you describe.

-Sam.

From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
Sent: Tuesday, December 09, 2014 12:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.

So... I should probably note that I see the case where a user actually shares objects as being the exception. I expect that 90% of deployments will never need to share objects, except for a few cases; those cases (of 1:N relationships) are:

* Loadbalancers must be able to have many Listeners
* When L7 functionality is introduced, L7 policies must be able to refer to the same Pool under a single Listener. (That is to say, sharing Pools under the scope of a single Listener makes sense, but only after L7 policies are introduced.)
I specifically see the following kinds of sharing having near-zero demand:

* Listeners shared across multiple loadbalancers
* Pools shared across multiple listeners
* Members shared across multiple pools

So, despite the fact that sharing doesn't make status reporting any more or less complex, I'm still in favor of starting with 1:1 relationships between most kinds of objects and then changing those to 1:N or M:N as we get user demand for this. As I said in my first response, allowing too many many-to-many relationships feels like a solution to a problem that doesn't really exist, and introduces a lot of unnecessary complexity.

Stephen

On Sun, Dec 7, 2014 at 11:43 PM, Samuel Bercovici wrote:

+1

From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
Sent: Friday, December 05, 2014 7:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.

German: but the point is that sharing apparently has no effect on the number of permutations for status information. The only difference here is that without sharing it's more work for the user to maintain and modify trees of objects.

On Fri, Dec 5, 2014 at 9:36 AM, Eichberger, German wrote:

Hi Brandon + Stephen,

Having all those permutations (and potentially testing them) made us lean against the sharing case in the first place. It's just a lot of extra work for only a small number of our customers.

German

From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
Sent: Thursday, December 04, 2014 9:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.

Hi Brandon,

Yeah, in your example, member1 could potentially have 8 different statuses (and this is a small example!)... If that member starts flapping, it means that every time it flaps there are 8 notifications being passed upstream.
Note that this problem actually doesn't get any better if we're not sharing objects but are just duplicating them (i.e. not sharing objects, but the user makes references to the same back-end machine as 8 different members). To be honest, I don't see sharing entities at many levels like this being the rule for most of our installations; maybe a few percentage points of installations will do an excessive sharing of members, but I doubt it.

So really, even though reporting status like this is likely to generate a pretty big tree of data, I don't think this is actually a problem, eh. And I don't see sharing entities actually reducing the workload of what needs to happen behind the scenes. (It just allows us to conceal more of this work from the user.)

Stephen

On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan wrote:

Sorry it's taken me a while to respond to this. So I wasn't thinking about this correctly. I was afraid you would have to pass in a full tree of parent-child representations to /loadbalancers to update anything a load balancer is associated with (including down to members). However, after thinking about it, a user would just make an association call on each object. For example: associate member1 with pool1, associate pool1 with listener1, then associate loadbalancer1 with listener1. Updating is just as simple as updating each entity.

This does bring up another problem, though. If a listener can live on many load balancers, and a pool can live on many listeners, and a member can live on many pools, there are a lot of permutations to keep track of for status. You can't just link a member's status to a load balancer, because a member can exist on many pools under that load balancer, and each pool can exist under many listeners under that load balancer.
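[Editor's note: this bookkeeping can be made concrete with a short sketch. This is illustrative only, not LBaaS code; the lb1/lb2 topology names are the ones used in this discussion. It enumerates the distinct (loadbalancer, listener, pool) paths under which a shared member needs its own status entry.]

```python
# Illustrative sketch only -- not LBaaS code. With shared objects, a
# member needs one status entry per (loadbalancer, listener, pool) path
# it is reachable from; this enumerates those paths.
lbs = {"lb1": ["listener1", "listener2"], "lb2": ["listener1"]}
listeners = {"listener1": ["pool1", "pool2"], "listener2": ["pool1"]}
pools = {"pool1": ["member1", "member2"], "pool2": ["member1"]}

def status_paths(member):
    """Every distinct path that needs its own status for `member`."""
    return [(lb, lsn, pool)
            for lb in sorted(lbs)
            for lsn in lbs[lb]
            for pool in listeners[lsn]
            if member in pools[pool]]

for path in status_paths("member1"):
    print(" -> ".join(path))
```

Even in this tiny topology, member1 is tracked under five separate paths, so a single health flap fans out into that many status updates, and the status tree grows multiplicatively as more objects are shared.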
For example, say I have these:

lb1, lb2
listener1, listener2
pool1, pool2
member1, member2

lb1 -> [listener1, listener2]
lb2 -> [listener1]
listener1 -> [pool1, pool2]
listener2 -> [pool1]
pool1 -> [member1, member2]
pool2 -> [member1]

member1 can now have different statuses under pool1 and pool2. Since listener1 and listener2 both have pool1, this means member1 will now have a different status for the listener1 -> pool1 and listener2 -> pool1 combinations. And so forth for load balancers. Basically, there are a lot of permutations and combinations to keep track of with this model for statuses. Showing these in the body of load balancer details can get quite large.

I hope this makes sense because my brain is ready to explode.

Thanks,
Brandon

On Thu, 2014-11-27 at 08:52 +0000, Samuel Bercovici wrote:
> Brandon, can you please explain further (1) below?
>
> -----Original Message-----
> From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM]
> Sent: Tuesday, November 25, 2014 12:23 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.
>
> My impression is that the statuses of each entity will be shown on a detailed info request of a loadbalancer. The root-level objects would not have any statuses. For example, a user makes a GET request to /loadbalancers/{lb_id} and the status of every child of that load balancer is shown in a "status_tree" JSON object. For example:
>
> {"name": "loadbalancer1",
>  "status_tree":
>   {"listeners":
>    [{"name": "listener1", "operating_status": "ACTIVE",
>      "default_pool":
>       {"name": "pool1", "status": "ACTIVE",
>        "members":
>         [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}]}}
>
> Sam, correct me if I am wrong.
>
> I generally like this idea. I do have a few reservations with this:
>
> 1) Creating and updating a load balancer requires a full tree configuration with the current extension/plugin logic in neutron.
Since updates will require a full tree, it means the user would have to know the full tree configuration just to simply update a name. Solving this would require nested child resources in the URL, which the current neutron extension/plugin does not allow. Maybe the new one will. > > 2) The status_tree can get quite large depending on the number of listeners and pools being used. This is a minor issue really as it will make horizon's (or any other UI tool's) job easier to show statuses. > > Thanks, > Brandon > > On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote: > > Hi Samuel, > > > > > > We've actually been avoiding having a deeper discussion about status > > in Neutron LBaaS since this can get pretty hairy as the back-end > > implementations get more complicated. I suspect managing that is > > probably one of the bigger reasons we have disagreements around object > > sharing. Perhaps it's time we discussed representing state "correctly" > > (whatever that means), instead of a round-a-bout discussion about > > object sharing (which, I think, is really just avoiding this issue)? > > > > > > Do you have a proposal about how status should be represented > > (possibly including a description of the state machine) if we collapse > > everything down to be logical objects except the loadbalancer object? > > (From what you're proposing, I suspect it might be too general to, for > > example, represent the UP/DOWN status of members of a given pool.) > > > > > > Also, from an haproxy perspective, sharing pools within a single > > listener actually isn't a problem. That is to say, having the same > > L7Policy pointing at the same pool is OK, so I personally don't have a > > problem allowing sharing of objects within the scope of parent > > objects. What do the rest of y'all think? > > > > > > Stephen > > > > > > > > On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici > > > wrote: > > Hi Stephen, > > > > > > > > 1. 
> > The issue is that if we do 1:1 and allow status/state
> > to proliferate throughout all objects, we will then have an
> > issue to fix later; hence, even if we do not do sharing, I
> > would still like to have all objects besides the LB treated as
> > logical.
> >
> > 2. The 3rd use case below will not be reasonable without
> > pool sharing between different policies. Specifying different
> > pools which are the same for each policy makes it a non-starter
> > for me.
> >
> > -Sam.
> >
> > From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> > Sent: Friday, November 21, 2014 10:26 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.
> >
> > I think the idea was to implement 1:1 initially to reduce the
> > amount of code and operational complexity we'd have to deal
> > with in initial revisions of LBaaS v2. Many-to-many can be
> > simulated in this scenario, though it does shift the burden of
> > maintenance to the end user. It does greatly simplify the
> > initial code for v2, in any case, though.
> >
> > Did we ever agree to allow listeners to be shared among
> > load balancers? I think that still might be an N:1
> > relationship even in our latest models.
> >
> > There's also the difficulty introduced by supporting different
> > flavors: Since flavors are essentially an association between
> > a load balancer object and a driver (with parameters), once
> > flavors are introduced, any sub-objects of a given load
> > balancer must necessarily be purely logical until they
> > are associated with a load balancer.
I know there was talk of > > forcing these objects to be sub-objects of a load balancer > > which can't be accessed independently of the load balancer > > (which would have much the same effect as what you discuss: > > State / status only make sense once logical objects have an > > instantiation somewhere.) However, the currently proposed API > > treats most objects as root objects, which breaks this > > paradigm. > > > > > > > > > > > > How we handle status and updates once there's an instantiation > > of these logical objects is where we start getting into real > > complexity. > > > > > > > > > > > > It seems to me there's a lot of complexity introduced when we > > allow a lot of many to many relationships without a whole lot > > of benefit in real-world deployment scenarios. In most cases, > > objects are not going to be shared, and in those cases with > > sufficiently complicated deployments in which shared objects > > could be used, the user is likely to be sophisticated enough > > and skilled enough to manage updating what are essentially > > "copies" of objects, and would likely have an opinion about > > how individual failures should be handled which wouldn't > > necessarily coincide with what we developers of the system > > would assume. That is to say, allowing too many many to many > > relationships feels like a solution to a problem that doesn't > > really exist, and introduces a lot of unnecessary complexity. > > > > > > > > > > > > In any case, though, I feel like we should walk before we run: > > Implementing 1:1 initially is a good idea to get us rolling. > > Whether we then implement 1:N or M:N after that is another > > question entirely. But in any case, it seems like a bad idea > > to try to start with M:N. 
> > > > > > > > > > > > Stephen > > > > > > > > > > > > > > > > On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici > > > wrote: > > > > Hi, > > > > Per discussion I had at OpenStack Summit/Paris with Brandon > > and Doug, I would like to remind everyone why we choose to > > follow a model where pools and listeners are shared (many to > > many relationships). > > > > Use Cases: > > 1. The same application is being exposed via different LB > > objects. > > For example: users coming from the internal "private" > > organization network, have an LB1(private_VIP) --> > > Listener1(TLS) -->Pool1 and user coming from the "internet", > > have LB2(public_vip)-->Listener1(TLS)-->Pool1. > > This may also happen to support ipv4 and ipv6: LB_v4(ipv4_VIP) > > --> Listener1(TLS) -->Pool1 and LB_v6(ipv6_VIP) --> > > Listener1(TLS) -->Pool1 > > The operator would like to be able to manage the pool > > membership in cases of updates and error in a single place. > > > > 2. The same group of servers is being used via different > > listeners optionally also connected to different LB objects. > > For example: users coming from the internal "private" > > organization network, have an LB1(private_VIP) --> > > Listener1(HTTP) -->Pool1 and user coming from the "internet", > > have LB2(public_vip)-->Listener2(TLS)-->Pool1. > > The LBs may use different flavors as LB2 needs TLS termination > > and may prefer a different "stronger" flavor. > > The operator would like to be able to manage the pool > > membership in cases of updates and error in a single place. > > > > 3. The same group of servers is being used in several > > different L7_Policies connected to a listener. Such listener > > may be reused as in use case 1. 
> > For example:
> >
> >     LB1(VIP1)-->Listener_L7(TLS)
> >                  |
> >                  +-->L7_Policy1(rules..)-->Pool1
> >                  |
> >                  +-->L7_Policy2(rules..)-->Pool2
> >                  |
> >                  +-->L7_Policy3(rules..)-->Pool1
> >                  |
> >                  +-->L7_Policy4(rules..)-->Reject
> >
> > I think that the "key" issue is handling correctly the
> > "provisioning" state and the operation state in a many-to-many
> > model. This is an issue as we have attached status fields to
> > each and every object in the model. A side effect of the above
> > is that to understand the "provisioning/operation" status one
> > needs to check many different objects.
> >
> > To remedy this, I would like to turn all objects besides the
> > LB into logical objects. This means that the only place to
> > manage the status/state will be on the LB object. Such status
> > should be hierarchical, so that logical objects attached to an
> > LB would have their status consumed out of the LB object
> > itself (in case of an error). We also need to discuss how
> > modifications of a logical object will be "rendered" to the
> > concrete LB objects.
> >
> > You may want to revisit
> > https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r ("Logical Model + Provisioning Status + Operation Status + Statistics") for a somewhat more detailed explanation, albeit one that uses the LBaaS v1 model as a reference.
> >
> > Regards,
> > -Sam.
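[Editor's note: use case 3 above is essentially an aliasing question. A minimal sketch follows — invented names, not the proposed LBaaS API — of several L7 policies on one listener referencing the same pool object, so that membership is managed in exactly one place.]

```python
# Illustrative sketch only (invented names, not the LBaaS API): L7
# policies referencing a shared Pool object rather than copies of it.
class Pool:
    def __init__(self, name, members):
        self.name = name
        self.members = list(members)

pool1 = Pool("Pool1", ["10.0.0.1", "10.0.0.2"])
pool2 = Pool("Pool2", ["10.0.1.1"])

# Policies on one listener map to a shared pool or a reject action.
listener_l7 = {
    "L7_Policy1": pool1,
    "L7_Policy2": pool2,
    "L7_Policy3": pool1,   # the same Pool object as L7_Policy1, not a copy
    "L7_Policy4": "Reject",
}

# Updating membership once is visible through every policy sharing it.
pool1.members.append("10.0.0.3")
print(listener_l7["L7_Policy3"].members)
# -> ['10.0.0.1', '10.0.0.2', '10.0.0.3']
```

With unshared copies, the same membership change would have to be applied once per policy, which is exactly the duplicated maintenance Sam's use cases aim to avoid.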
> > > > > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > > > > -- > > > > Stephen Balukoff > > Blue Box Group, LLC > > (800)613-4305 x807 > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > -- > > Stephen Balukoff > > Blue Box Group, LLC > > (800)613-4305 x807 > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

------------------------------

Message: 9
Date: Mon, 8 Dec 2014 23:39:38 -0800
From: W Chan
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [Mistral] Action context passed to all action executions by default
Message-ID:
Content-Type: text/plain; charset="utf-8"

Renat,

Is there any reason why Mistral does not pass action context, such as the workflow ID, execution ID, task ID, etc., to all of the action executions? I think it makes a lot of sense for that information to be made available by default. The action can then decide what to do with the information. It doesn't require a special signature in the __init__ method of the Action classes. What do you think?

Thanks.
Winson

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

------------------------------

Message: 10
Date: Tue, 09 Dec 2014 09:38:50 +0100
From: Thierry Carrez
To: OpenStack Development Mailing List
Subject: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC
Message-ID: <5486B51A.6080001 at openstack.org>
Content-Type: text/plain; charset=utf-8

Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting Tuesday at 21:00 UTC, with the following agenda:

* Convergence on specs process (johnthetubaguy)
  * Approval process differences
  * Path structure differences
  * specs.o.o aspect differences (toc)
* osprofiler config options (kragniz)
  * Glance uses a different name from other projects
  * Consensus on what name to use
* Open discussion & announcements

See you there !
For more details, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

--
Thierry Carrez (ttx)

------------------------------

Message: 11
Date: Tue, 9 Dec 2014 14:48:10 +0600
From: Renat Akhmerov
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Mistral] Query on creating multiple resources
Message-ID: <8CC8FE04-056A-4574-926B-BAF5B7749C84 at mirantis.com>
Content-Type: text/plain; charset=utf-8

Hey,

I think it's a question of what the final goal is. For just creating security groups as a resource, I think Georgy and Zane are right: just use Heat. If the goal is to try Mistral, or to have this simple workflow as part of a more complex one, then it's totally fine to use Mistral. Sorry, I'm probably biased because Mistral is our baby :). Anyway, Nikolay has already answered the question technically; this "for-each" feature will be available officially in about 2 weeks.

> Create VM workflow was a demo example. Mistral potentially can be used by Heat or other orchestration tools to do actual interaction with API, but for user it might be easier to use Heat functionality.

I kind of disagree with that statement. Mistral can be used by whoever finds it useful for their needs. The standard "create_instance" workflow (which is in "resources/workflows/create_instance.yaml") is not just a demo example, either. It does a lot of good stuff you may really need in your case (e.g. retry policies), even though it's true that it has some limitations we're aware of. For example, when it comes to configuring a network for a newly created instance, it's currently missing network-related parameters that would allow altering its behavior.

One more thing: not only will Heat be able to call Mistral somewhere underneath the surface; Mistral already has integration with Heat to be able to call it if needed, and there's a plan to make it even more useful and usable.

Thanks

Renat Akhmerov
@ Mirantis Inc.
------------------------------ Message: 12 Date: Tue, 9 Dec 2014 14:49:32 +0600 From: Renat Akhmerov To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [Mistral] Query on creating multiple resources Message-ID: <33D7E423-9FDD-4D3D-9D0B-65ADA937852F at mirantis.com> Content-Type: text/plain; charset="iso-8859-1" No problem, let us know if you have any other questions. Renat Akhmerov @ Mirantis Inc. > On 09 Dec 2014, at 11:57, Sushma Korati wrote: > > > Hi, > > Thank you guys. > > Yes I am able to do this with heat, but I faced issues while trying the same with mistral. > As suggested will try with the latest mistral branch. Thank you once again. > > Regards, > Sushma > > > > > From: Georgy Okrokvertskhov [mailto:gokrokvertskhov at mirantis.com ] > Sent: Tuesday, December 09, 2014 6:07 AM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [Mistral] Query on creating multiple resources > > Hi Sushma, > > Did you explore Heat templates? As Zane mentioned you can do this via Heat template without writing any workflows. > Do you have any specific use cases which you can't solve with Heat template? > > Create VM workflow was a demo example. Mistral potentially can be used by Heat or other orchestration tools to do actual interaction with API, but for user it might be easier to use Heat functionality. > > Thanks, > Georgy > > On Mon, Dec 8, 2014 at 7:54 AM, Nikolay Makhotkin > wrote: > Hi, Sushma! > > Can we create multiple resources using a single task, like multiple keypairs or security-groups or networks etc? > > Yes, we can. This feature is in the development now and it is considered as experimental -https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections > > Just clone the last master branch from mistral. 
>
> You can specify the "for-each" task property and provide the array of data to your workflow:
>
> --------------------
> version: '2.0'
>
> name: secgroup_actions
>
> workflows:
>   create_security_group:
>     type: direct
>     input:
>       - array_with_names_and_descriptions
>
>     tasks:
>       create_secgroups:
>         for-each:
>           data: $.array_with_names_and_descriptions
>         action: nova.security_groups_create name={$.data.name} description={$.data.description}
> --------------------
>
> On Mon, Dec 8, 2014 at 6:36 PM, Zane Bitter wrote:
> On 08/12/14 09:41, Sushma Korati wrote:
> > Can we create multiple resources using a single task, like multiple
> > keypairs or security-groups or networks etc?
>
> Define them in a Heat template and create the Heat stack as a single task.
>
> - ZB
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Best Regards,
> Nikolay
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Georgy Okrokvertskhov
> Architect,
> OpenStack Platform Products,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284
>
> DISCLAIMER ========== This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

------------------------------

Message: 13
Date: Tue, 9 Dec 2014 14:52:38 +0600
From: Renat Akhmerov
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Mistral] Event Subscription
Message-ID: <17C46BD9-F6F6-43AE-883E-2512EE698CDD at mirantis.com>
Content-Type: text/plain; charset="utf-8"

Ok, got it. So my general suggestion here is: let's keep it as simple as possible for now, create something that works, and then let's see how to improve it. And yes, consumers may be, and mostly will be, 3rd parties.

Thanks

Renat Akhmerov
@ Mirantis Inc.

> On 09 Dec 2014, at 08:25, W Chan wrote:
>
> Renat,
>
> On sending events to an "exchange", I mean an exchange on the transport (i.e. a RabbitMQ exchange: https://www.rabbitmq.com/tutorials/amqp-concepts.html ). On implementation we can probably explore the notification feature in oslo.messaging. But on second thought, this would limit the consumers to trusted subsystems or services. If we want the event consumers to be any 3rd party, including untrusted, then maybe we should keep it as HTTP calls.
>
> Winson
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: ------------------------------ Message: 14 Date: Tue, 9 Dec 2014 15:22:28 +0600 From: Renat Akhmerov To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [Mistral] Action context passed to all action executions by default Message-ID: <74548DCF-F417-4EA8-B3AE-CDD3024A0753 at mirantis.com> Content-Type: text/plain; charset=us-ascii Hi Winson, I think it makes perfect sense. The reason I think is mostly historical and this can be reviewed now. Can you pls file a BP and describe your suggested design on that? I mean how we need to alter interface Action etc. Thanks Renat Akhmerov @ Mirantis Inc. > On 09 Dec 2014, at 13:39, W Chan wrote: > > Renat, > > Is there any reason why Mistral do not pass action context such as workflow ID, execution ID, task ID, and etc to all of the action executions? I think it makes a lot of sense for that information to be made available by default. The action can then decide what to do with the information. It doesn't require a special signature in the __init__ method of the Action classes. What do you think? > > Thanks. > Winson > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ------------------------------ Message: 15 Date: Tue, 9 Dec 2014 09:33:47 +0000 From: joehuang To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FB758 at szxema505-mbs.china.huawei.com> Content-Type: text/plain; charset="us-ascii" Hi, If time is available, how about adding one agenda to guide the direction for cascading to move forward? Thanks in advance. The topic is : " Need cross-program decision to run cascading as an incubated project mode or register BP separately in each involved project. 
CI for cascading is quite different from a traditional test environment; at least 3 OpenStack instances are required for cross-OpenStack networking test cases. "

--------------------------------------------------------------------------------------------------------------------------------

In the 40-minute cross-project summit session "Approaches for scaling out"[1], almost 100 people attended the meeting, and the conclusion was that cells cannot cover the use cases and requirements which the OpenStack cascading solution[2] aims to address; the background, including use cases and requirements, is also described in this mail. After the summit, we ported the PoC[3] source code from an IceHouse base to a Juno base. Now, let's move forward:

The major task is to introduce new drivers/agents to the existing core projects, for the core idea of cascading is to add Nova as the hypervisor backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, and Ceilometer as the store of Ceilometer.

a). We need a cross-program decision on whether to run cascading as an incubated project or to register blueprints separately in each involved project. CI for cascading is quite different from a traditional test environment; at least 3 OpenStack instances are required for cross-OpenStack networking test cases.
b). A volunteer as the cross-project coordinator.
c). Volunteers for implementation and CI. (Already 6 engineers are working on cascading in the StackForge/tricircle project.)

Background of OpenStack cascading vs cells:

1. Use cases
a). Vodafone use case[4] (OpenStack summit speech video from 9'02" to 12'30"), establishing globally addressable tenants which result in efficient service deployment.
b). Telefonica use case[5], creating a virtual DC (data center) across multiple physical DCs with a seamless experience.
c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, #8.
For an NFV cloud, it is in its nature that the cloud will be distributed across, yet inter-connected between, many data centers. 2. Requirements a). The operator has a multi-site cloud; each site can use one or multiple vendors' OpenStack distributions. b). Each site has its own requirements and upgrade schedule while maintaining a standard OpenStack API. c). The multi-site cloud must provide unified resource management with a global open API exposed, for example to create a virtual DC across multiple physical DCs with a seamless experience. Although a proprietary orchestration layer could be developed for the multi-site cloud, its north-bound interface would be a proprietary API. The cloud operators want an ecosystem-friendly global open API for the multi-site cloud for global access. 3. What problems does cascading solve that cells don't cover: The OpenStack cascading solution is "OpenStack orchestrates OpenStacks". The core architecture idea of OpenStack cascading is to add Nova as the hypervisor backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, and Ceilometer as the store of Ceilometer. Thus OpenStack is able to orchestrate OpenStacks (from different vendors' distributions, or different versions) which may be located in different sites (or data centers) through the OpenStack API, while the cloud still exposes the OpenStack API as the north-bound API at the cloud level. 4. Why cells can't do that: Cells provide scale-out capability to Nova, but from the point of view of OpenStack as a whole, it still works like one OpenStack instance. a). If cells are deployed with shared Cinder, Neutron, Glance, Ceilometer, this approach provides the multi-site cloud with one unified API endpoint and unified resource management, but consolidation of multi-vendor/multi-version OpenStack instances across one or more data centers cannot be fulfilled. b).
Each site installs one child cell accompanied by standalone Cinder, Neutron (or Nova-network), Glance, Ceilometer. This approach makes the co-existence of multi-vendor/multi-version OpenStack distributions across multiple sites seem feasible, but the requirement for a unified API endpoint and unified resource management cannot be fulfilled. Cross-Neutron networking automation is also missing, and would otherwise have to be done manually or via a proprietary orchestration layer. For more information about cascading and cells, please refer to the discussion thread before the Paris summit [7]. [1]Approaches for scaling out: https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack [2]OpenStack cascading solution: https://wiki.openstack.org/wiki/OpenStack_cascading_solution [3]Cascading PoC: https://github.com/stackforge/tricircle [4]Vodafone use case (9'02" to 12'30"): https://www.youtube.com/watch?v=-KOJYvhmxQI [5]Telefonica use case: http://www.telefonica.com/en/descargas/mwc/present_20140224.pdf [6]ETSI NFV use cases: http://www.etsi.org/deliver/etsi_gs/nfv/001_099/001/01.01.01_60/gs_nfv001v010101p.pdf [7]Cascading thread before design summit: http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-td54115.html Best Regards Chaoyi Huang ( Joe Huang ) -----Original Message----- From: Thierry Carrez [mailto:thierry at openstack.org] Sent: Tuesday, December 09, 2014 4:39 PM To: OpenStack Development Mailing List Subject: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC Dear PTLs, cross-project liaisons and anyone else interested, We'll have a cross-project meeting Tuesday at 21:00 UTC, with the following agenda: * Convergence on specs process (johnthetubaguy) * Approval process differences * Path structure differences * specs.o.o aspect differences (toc) * osprofiler config options (kragniz) * Glance uses a different name from other projects * Consensus on what name to use * Open discussion & announcements See you there !
For more details, please see: https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting -- Thierry Carrez (ttx) _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ------------------------------ Message: 16 Date: Tue, 9 Dec 2014 09:52:11 +0000 From: "Fox, Kevin M" To: "OpenStack Development Mailing List (not for usage questions)" Cc: "Luohao \(brian\)" Subject: Re: [openstack-dev] [Ironic] Some questions about Ironic service Message-ID: <1A3C52DFCD06494D8528644858247BF017815FE1 at EX10MBOX03.pnnl.gov> Content-Type: text/plain; charset="windows-1252" No to questions 1, 3, and 4. Yes to 2, but very minimally. ________________________________ From: xianchaobo Sent: Monday, December 08, 2014 10:29:50 PM To: openstack-dev at lists.openstack.org Cc: Luohao (brian) Subject: [openstack-dev] [Ironic] Some questions about Ironic service Hello, all I'm trying to install and configure the Ironic service, and some things confused me. I created two neutron networks, a public network and a private network. The private network is used to deploy physical machines; the public network is used to provide floating IPs. (1) Can the private network type be VLAN or VXLAN? (In the install guide, the network type is flat.) (2) Can the network of deployed physical machines be managed by neutron? (3) Can different tenants have their own networks to manage physical machines? (4) Does Ironic provide some mechanism for deployed physical machines to use storage such as shared storage or Cinder volumes? Thanks, XianChaobo
I would like some clarification regarding support of out-of-tree plugins in nova and in neutron. First, on the neutron side, there are mechanisms to support out-of-tree plugins for the L2 plugin (core_plugin) and ML2 mech drivers (stevedore/entrypoints). Most ML2/L2 plugins need a specific VIF driver. Since the vif_driver configuration option in nova was removed in Juno, it is no longer possible to have an external mech driver/L2 plugin. The nova community took the decision to stop supporting VIF driver classes as a public extension point. (ref http://lists.openstack.org/pipermail/openstack-dev/2014-August/043174.html) In contrast, the neutron community continues to support external L2/ML2 mech driver plugins. Moreover, the decision to move the monolithic plugins and ML2 mechanism drivers out of tree was taken at the Paris summit (ref https://review.openstack.org/#/c/134680/15/specs/kilo/core-vendor-decomposition.rst). I am a bit confused by these two opposite decisions of the two communities. What am I missing? I have also proposed a blueprint to add a new plugin mechanism in nova to load external VIF drivers. (nova-specs: https://review.openstack.org/#/c/136827/ and nova (rfc patch): https://review.openstack.org/#/c/136857/) From my point of view as a developer, having a plugin framework for internal/external VIF drivers seems to be a good thing. It makes the code more modular and introduces a clear API for VIF driver classes. So far, it raises legitimate questions concerning API stability and the public API that request a wider discussion on the ML (as asked by John Garbutt). I think having a plugin mechanism and a clear API for VIF drivers does not go against this policy: http://docs.openstack.org/developer/nova/devref/policies.html#out-of-tree-support. There is no need to have a stable API. It is up to the owner of the external VIF driver to ensure that its driver is supported by the latest API.
It is not up to the nova community to manage a stable API for this external VIF driver. Does it make sense? Considering the network V2 API, the L2/ML2 mechanism driver and the VIF driver need to exchange information such as binding:vif_type and binding:vif_details. From my understanding, 'binding:vif_type' and 'binding:vif_details' are fields that are part of the public network API. There are no validation constraints for these fields (see http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html), which means that any value is accepted by the API. So, the values set in 'binding:vif_type' and 'binding:vif_details' are not part of the public API. Is my understanding correct? What other reasons am I missing for not having VIF driver classes as a public extension point? Thanks in advance for your help. Maxime ------------------------------ Message: 18 Date: Tue, 9 Dec 2014 09:54:21 +0000 From: "Fox, Kevin M" To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader? Message-ID: <1A3C52DFCD06494D8528644858247BF017815FEF at EX10MBOX03.pnnl.gov> Content-Type: text/plain; charset="us-ascii" You probably want to use the agent driver, not the pxe one. It lets you use bootloaders from the image. ________________________________ From: Peeyush Gupta Sent: Monday, December 08, 2014 10:55:39 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader? Hi all, So, I have set up a devstack ironic setup for baremetal deployment. I have been able to deploy a baremetal node successfully using the pxe_ipmitool driver. Now, I am trying to boot a server where I already have a bootloader, i.e. I don't need pxelinux to go and fetch kernel and initrd images for me. I want to transfer them directly. I checked out the code and figured out that there are dhcp opts available that are modified using pxe_utils.py; changing them didn't help.
Then I moved to ironic.conf, but there too I only see an option to add pxe_bootfile_name, which is exactly what I want to avoid. Can anyone please help me with this situation? I don't want to go through the pxelinux.0 bootloader; I just want to transfer the kernel and initrd images directly. Thanks. -- Peeyush Gupta gpeeyush at linux.vnet.ibm.com _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 19 Date: Tue, 9 Dec 2014 11:09:02 +0100 From: Roman Prykhodchenko To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [Ironic] Fuel agent proposal Message-ID: <2FD74259-FECF-4B5E-99FE-6A3EB6976582 at mirantis.com> Content-Type: text/plain; charset="utf-8" It is true that IPA and FuelAgent share a lot of functionality. However, there is a major difference between them: they are intended to solve different problems. IPA is a solution for the provision-use-destroy-use_by_different_user use-case and is really great for providing BM nodes to other OS services or in services like Rackspace OnMetal. FuelAgent serves the provision-use-use-...-use use-case that Fuel or TripleO have. Those two use-cases require concentrating on different details in the first place. For instance, for IPA proper decommissioning is more important than advanced disk management, but for FuelAgent the priorities are the opposite, for obvious reasons. Putting all the functionality into a single driver and a single agent may cause conflicts in priorities and make a mess inside both the driver and the agent. In fact, changes to IPA were previously blocked precisely because of this conflict of priorities.
Therefore, replacing FuelAgent with IPA where FuelAgent is currently used does not seem like a good option, because some people (and I'm not talking about Mirantis) might lose required features due to the different priorities. Having two separate drivers along with two separate agents for those different use-cases will allow two independent teams to concentrate on what's really important for a specific use-case. I don't see any problem in overlapping functionality if it's used differently. P.S. I realise that people may also be confused by the fact that FuelAgent is actually called that and is used only in Fuel atm. Our point is to make it a simple, powerful and, what's more important, a generic tool for provisioning. It is not bound to Fuel or Mirantis, and if it causes confusion in the future we will even be happy to give it a different and less confusing name. P.P.S. Some of the points of this integration do not look generic enough or nice enough. We take a pragmatic view and are trying to implement what's possible to implement as a first step. For sure this is going to take a lot more steps to make it better and more generic. > On 09 Dec 2014, at 01:46, Jim Rollenhagen wrote: > > > > On December 8, 2014 2:23:58 PM PST, Devananda van der Veen > wrote: >> I'd like to raise this topic for a wider discussion outside of the >> hallway >> track and code reviews, where it has thus far mostly remained. >> >> In previous discussions, my understanding has been that the Fuel team >> sought to use Ironic to manage "pets" rather than "cattle" - and doing >> so >> required extending the API and the project's functionality in ways that >> no >> one else on the core team agreed with. Perhaps that understanding was >> wrong >> (or perhaps not), but in any case, there is now a proposal to add a >> FuelAgent driver to Ironic. The proposal claims this would meet that >> teams' >> needs without requiring changes to the core of Ironic.
>> >> https://review.openstack.org/#/c/138115/ > > I think it's clear from the review that I share the opinions expressed in this email. > > That said (and hopefully without derailing the thread too much), I'm curious how this driver could do software RAID or LVM without modifying Ironic's API or data model. How would the agent know how these should be built? How would an operator or user tell Ironic what the disk/partition/volume layout would look like? > > And before it's said - no, I don't think vendor passthru API calls are an appropriate answer here. > > // jim > >> >> The Problem Description section calls out four things, which have all >> been >> discussed previously (some are here [0]). I would like to address each >> one, >> invite discussion on whether or not these are, in fact, problems facing >> Ironic (not whether they are problems for someone, somewhere), and then >> ask >> why these necessitate a new driver be added to the project. >> >> >> They are, for reference: >> >> 1. limited partition support >> >> 2. no software RAID support >> >> 3. no LVM support >> >> 4. no support for hardware that lacks a BMC >> >> #1. >> >> When deploying a partition image (eg, QCOW format), Ironic's PXE deploy >> driver performs only the minimal partitioning necessary to fulfill its >> mission as an OpenStack service: respect the user's request for root, >> swap, >> and ephemeral partition sizes. When deploying a whole-disk image, >> Ironic >> does not perform any partitioning -- such is left up to the operator >> who >> created the disk image. >> >> Support for arbitrarily complex partition layouts is not required by, >> nor >> does it facilitate, the goal of provisioning physical servers via a >> common >> cloud API. Additionally, as with #3 below, nothing prevents a user from >> creating more partitions in unallocated disk space once they have >> access to >> their instance. 
Therefore, I don't see how Ironic's minimal support for >> partitioning is a problem for the project. >> >> #2. >> >> There is no support for defining a RAID in Ironic today, at all, >> whether >> software or hardware. Several proposals were floated last cycle; one is >> under review right now for DRAC support [1], and there are multiple >> call >> outs for RAID building in the state machine mega-spec [2]. Any such >> support >> for hardware RAID will necessarily be abstract enough to support >> multiple >> hardware vendor's driver implementations and both in-band creation (via >> IPA) and out-of-band creation (via vendor tools). >> >> Given the above, it may become possible to add software RAID support to >> IPA >> in the future, under the same abstraction. This would closely tie the >> deploy agent to the images it deploys (the latter image's kernel would >> be >> dependent upon a software RAID built by the former), but this would >> necessarily be true for the proposed FuelAgent as well. >> >> I don't see this as a compelling reason to add a new driver to the >> project. >> Instead, we should (plan to) add support for software RAID to the >> deploy >> agent which is already part of the project. >> >> #3. >> >> LVM volumes can easily be added by a user (after provisioning) within >> unallocated disk space for non-root partitions. I have not yet seen a >> compelling argument for doing this within the provisioning phase. >> >> #4. >> >> There are already in-tree drivers [3] [4] [5] which do not require a >> BMC. >> One of these uses SSH to connect and run pre-determined commands. Like >> the >> spec proposal, which states at line 122, "Control via SSH access >> feature >> intended only for experiments in non-production environment," the >> current >> SSHPowerDriver is only meant for testing environments.
We could >> probably >> extend this driver to do what the FuelAgent spec proposes, as far as >> remote >> power control for cheap always-on hardware in testing environments with >> a >> pre-shared key. >> >> (And if anyone wonders about a use case for Ironic without external >> power >> control ... I can only think of one situation where I would rationally >> ever >> want to have a control-plane agent running inside a user-instance: I am >> both the operator and the only user of the cloud.) >> >> >> ---------------- >> >> In summary, as far as I can tell, all of the problem statements upon >> which >> the FuelAgent proposal are based are solvable through incremental >> changes >> in existing drivers, or out of scope for the project entirely. As >> another >> software-based deploy agent, FuelAgent would duplicate the majority of >> the >> functionality which ironic-python-agent has today. >> >> Ironic's driver ecosystem benefits from a diversity of >> hardware-enablement >> drivers. Today, we have two divergent software deployment drivers which >> approach image deployment differently: "agent" drivers use a local >> agent to >> prepare a system and download the image; "pxe" drivers use a remote >> agent >> and copy the image over iSCSI. I don't understand how a second driver >> which >> duplicates the functionality we already have, and shares the same goals >> as >> the drivers we already have, is beneficial to the project. >> >> Doing the same thing twice just increases the burden on the team; we're >> all >> working on the same problems, so let's do it together. 
>> >> -Devananda >> >> >> [0] >> https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition >> >> [1] https://review.openstack.org/#/c/107981/ >> >> [2] >> https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst >> >> >> [3] >> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py >> >> [4] >> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py >> >> [5] >> http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py >> >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: ------------------------------ Message: 20 Date: Tue, 09 Dec 2014 15:51:39 +0530 From: Peeyush Gupta To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader? Message-ID: <5486CD33.4000602 at linux.vnet.ibm.com> Content-Type: text/plain; charset="iso-8859-1" So, basically if I am using pxe driver, I would "have to" provide pxelinux.0? On 12/09/2014 03:24 PM, Fox, Kevin M wrote: > You probably want to use the agent driver, not the pxe one. It lets you use bootloaders from the image. 
> > ________________________________ > From: Peeyush Gupta > Sent: Monday, December 08, 2014 10:55:39 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader? > > Hi all, > > So, I have set up a devstack ironic setup for baremetal deployment. I > have been able to deploy a baremetal node successfully using > pxe_ipmitool driver. Now, I am trying to boot a server where I already > have a bootloader i.e. I don't need pxelinux to go and fetch kernel and > initrd images for me. I want to transfer them directly. > > I checked out the code and figured out that there are dhcp opts > available, that are modified using pxe_utils.py, changing it didn't > help. Then I moved to ironic.conf, but here also I only see an option to > add pxe_bootfile_name, which is exactly what I want to avoid. Can anyone > please help me with this situation? I don't want to go through > pxelinux.0 bootloader, I just directly want to transfer kernel and > initrd images. > > Thanks. > > -- > Peeyush Gupta > gpeeyush at linux.vnet.ibm.com > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Peeyush Gupta gpeeyush at linux.vnet.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: ------------------------------ Message: 21 Date: Tue, 09 Dec 2014 11:32:26 +0100 From: Thierry Carrez To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC Message-ID: <5486CFBA.8030204 at openstack.org> Content-Type: text/plain; charset=windows-1252 joehuang wrote: > If time is available, how about adding one agenda to guide the direction for cascading to move forward? Thanks in advance. > > The topic is : " Need cross-program decision to run cascading as an incubated project mode or register BP separately in each involved project. CI for cascading is quite different from traditional test environment, at least 3 OpenStack instance required for cross OpenStack networking test cases. " Hi Joe, we close the agenda one day before the meeting to let people arrange their attendance based on the published agenda. I added your topic to the backlog for next week's agenda: https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting Regards, -- Thierry Carrez (ttx) ------------------------------ Message: 22 Date: Tue, 9 Dec 2014 11:33:01 +0100 From: Miguel Ángel Ajo To: OpenStack Development Mailing List Subject: [openstack-dev] [neutron] mid-cycle "hot reviews" Message-ID: <7A64F4A9F9054721A45DB25C9E5A181B at redhat.com> Content-Type: text/plain; charset="utf-8" Hi all! It would be great if you could use this thread to post hot reviews on stuff that's being worked on during the mid-cycle, where others from different timezones could participate. I know posting reviews to the list is not permitted, but I think an exception in this case would be beneficial. Best regards, Miguel Ángel Ajo -------------- next part -------------- An HTML attachment was scrubbed...
URL: ------------------------------ Message: 23 Date: Tue, 9 Dec 2014 12:36:08 +0200 From: Tihomir Trifonov To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [horizon] REST and Django Message-ID: Content-Type: text/plain; charset="utf-8" Sorry for the late reply, just a few thoughts on the matter. IMO the REST middleware should be as thin as possible. And I mean thin in terms of processing - it should not do pre/post processing of the requests, but just unpack/pack. So here is an example: instead of making AJAX calls that contain instructions:

> POST --json --data {"action": "delete", "data": [{"name": "item1"}, {"name": "item2"}, {"name": "item3"}]}

I think a better approach is just to pack/unpack batch commands, and leave execution to the frontend/backend and not the middleware:

> POST --json --data {"batch": [
>     {"action": "delete", "payload": {"name": "item1"}},
>     {"action": "delete", "payload": {"name": "item2"}},
>     {"action": "delete", "payload": {"name": "item3"}}
> ]}

The idea is that the middleware should not know the actual data. It should ideally just unpack the data:

> responses = []
> for cmd in request.POST['batch']:
>     responses.append(getattr(controller, cmd['action'])(**cmd['payload']))
> return responses

and the frontend (JS) will just send batches of simple commands, and will receive a list of responses for each command in the batch. The error handling will be done in the frontend (JS) as well.
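[Editor's illustration] The unpack/dispatch loop Tihomir sketches above can be made concrete. This is a minimal sketch, not Horizon code: `Controller` is a hypothetical object whose methods stand in for the service API wrappers, and the request body is simulated with a plain JSON string rather than a Django request object.

```python
import json


class Controller(object):
    """Hypothetical controller: each method wraps one backend API call."""

    def __init__(self):
        self.deleted = []

    def delete(self, name):
        # Pretend to delete the named item and report what was done.
        self.deleted.append(name)
        return {"deleted": name}


def dispatch_batch(controller, body):
    """Thin middleware: unpack the batch, dispatch each command, pack replies.

    The middleware never inspects the payloads themselves; it only reads
    'action' to pick the controller method, exactly as in the loop above.
    """
    responses = []
    for cmd in json.loads(body)["batch"]:
        handler = getattr(controller, cmd["action"])
        responses.append(handler(**cmd["payload"]))
    return responses


body = json.dumps({"batch": [
    {"action": "delete", "payload": {"name": "item1"}},
    {"action": "delete", "payload": {"name": "item2"}},
    {"action": "delete", "payload": {"name": "item3"}},
]})
print(dispatch_batch(Controller(), body))
```

One AJAX round trip carries all n commands, and the client matches the i-th response to the i-th command; per-command error handling stays in the frontend, as the message argues.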
For the more complex example of 'put()' where we have dependent objects:

> project = api.keystone.tenant_get(request, id)
> kwargs = self._tenant_kwargs_from_DATA(request.DATA, enabled=None)
> api.keystone.tenant_update(request, project, **kwargs)

In practice the project data should already be present in the frontend (assuming that we already loaded it to render the project form/view), so:

POST --json --data {"batch": [
    {"action": "tenant_update", "payload": {"project": js_project_object.id, "name": "some name", "prop1": "some prop", "prop2": "other prop, etc."}}
]}

So in general we don't need to recreate the full state on each REST call, if we make the frontend a full-featured application. This way the frontend will construct the object, will hold the cached value, and will just send the needed requests as single ones or in batches, will receive the responses from the API backend, and will render the results. The whole processing logic will be held in the frontend (JS), while the middleware will just act as a proxy (un/packer). This way we will maintain the logic only in the frontend, and will not need to duplicate it in the middleware. On Tue, Dec 2, 2014 at 4:45 PM, Adam Young wrote: > On 12/02/2014 12:39 AM, Richard Jones wrote: > > On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran wrote: > >> I agree that keeping the API layer thin would be ideal. I should add >> that having discrete API calls would allow dynamic population of table. >> However, I will make a case where it *might* be necessary to add >> additional APIs. Consider that you want to delete 3 items in a given table.
>> >> If you do this on the client side, you would need to perform: n * (1 API >> request + 1 AJAX request) >> If you have some logic on the server side that batch delete actions: n * >> (1 API request) + 1 AJAX request >> >> Consider the following: >> n = 1, client = 2 trips, server = 2 trips >> n = 3, client = 6 trips, server = 4 trips >> n = 10, client = 20 trips, server = 11 trips >> n = 100, client = 200 trips, server 101 trips >> >> As you can see, this does not scale very well.... something to consider... >> > This is not something Horizon can fix. Horizon can make matters worse, > but cannot make things better. > > If you want to delete 3 users, Horizon still needs to make 3 distinct > calls to Keystone. > > To fix this, we need either batch calls or a standard way to do multiples > of the same operation. > > The unified API effort it the right place to drive this. > > > > > > > > Yep, though in the above cases the client is still going to be hanging, > waiting for those server-backend calls, with no feedback until it's all > done. I would hope that the client-server call overhead is minimal, but I > guess that's probably wishful thinking when in the land of random Internet > users hitting some provider's Horizon :) > > So yeah, having mulled it over myself I agree that it's useful to have > batch operations implemented in the POST handler, the most common operation > being DELETE. > > Maybe one day we could transition to a batch call with user feedback > using a websocket connection. 
> > > Richard > >> Richard Jones wrote: >> >> From: Richard Jones >> To: "Tripp, Travis S" , OpenStack List < >> openstack-dev at lists.openstack.org> >> Date: 11/27/2014 05:38 PM >> Subject: Re: [openstack-dev] [horizon] REST and Django >> ------------------------------ >> >> >> >> >> On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S wrote: >> >> Hi Richard, >> >> You are right, we should put this out on the main ML, so copying >> thread out to there. ML: FYI that this started after some impromptu IRC >> discussions about a specific patch led into an impromptu google hangout >> discussion with all the people on the thread below. >> >> >> Thanks Travis! >> >> >> >> As I mentioned in the review[1], Thai and I were mainly discussing >> the possible performance implications of network hops from client to >> horizon server and whether or not any aggregation should occur server side. >> In other words, some views require several APIs to be queried before any >> data can be displayed and it would eliminate some extra network requests from >> client to server if some of the data was first collected on the server side >> across service APIs. For example, the launch instance wizard will need to >> collect data from quite a few APIs before even the first step is displayed >> (I've listed those out in the blueprint [2]). >> >> The flip side to that (as you also pointed out) is that if we keep >> the APIs fine grained then the wizard will be able to optimize in one >> place the calls for data as it is needed. For example, the first step may >> only need half of the API calls. It also could lead to perceived >> performance increases just due to the wizard making a call for different >> data independently and displaying it as soon as it can.
>> >> >> Indeed, looking at the current launch wizard code it seems like you >> wouldn't need to load all that data for the wizard to be displayed, since >> only some subset of it would be necessary to display any given panel of the >> wizard. >> >> >> >> I tend to lean towards your POV and starting with discrete API calls >> and letting the client optimize calls. If there are performance problems >> or other reasons then doing data aggregation on the server side could be >> considered at that point. >> >> >> I'm glad to hear it. I'm a fan of optimising when necessary, and not >> beforehand :) >> >> >> >> Of course if anybody is able to do some performance testing between >> the two approaches then that could affect the direction taken. >> >> >> I would certainly like to see us take some measurements when performance >> issues pop up. Optimising without solid metrics is bad idea :) >> >> >> Richard >> >> >> >> [1] >> *https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py* >> >> [2] >> *https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign* >> >> >> -Travis >> >> *From: *Richard Jones <*r1chardj0n3s at gmail.com* >> > >> * Date: *Wednesday, November 26, 2014 at 11:55 PM >> * To: *Travis Tripp <*travis.tripp at hp.com* >, Thai >> Q Tran/Silicon Valley/IBM <*tqtran at us.ibm.com* >, >> David Lyle <*dklyle0 at gmail.com* >, Maxime Vidori < >> *maxime.vidori at enovance.com* >, >> "Wroblewski, Szymon" <*szymon.wroblewski at intel.com* >> >, "Wood, Matthew David (HP Cloud - >> Horizon)" <*matt.wood at hp.com* >, "Chen, Shaoquan" < >> *sean.chen2 at hp.com* >, "Farina, Matt (HP Cloud)" < >> *matthew.farina at hp.com* >, Cindy Lu/Silicon >> Valley/IBM <*clu at us.ibm.com* >, Justin >> Pomeroy/Rochester/IBM <*jpomero at us.ibm.com* >, >> Neill Cox <*neill.cox at ingenious.com.au* > >> * Subject: *Re: REST and Django >> >> I'm not sure whether this is the appropriate place to discuss this, >> or whether I should be posting to the 
list under [Horizon] but I think we >> need to have a clear idea of what goes in the REST API and what goes in the >> client (angular) code. >> >> In my mind, the thinner the REST API the better. Indeed if we can get >> away with proxying requests through without touching any client code, that >> would be great. >> >> Coding additional logic into the REST API means that a developer >> would need to look in two places, instead of one, to determine what was >> happening for a particular call. If we keep it thin then the API presented >> to the client developer is very, very similar to the API presented by the >> services. Minimum surprise. >> >> Your thoughts? >> >> >> Richard >> >> >> On Wed Nov 26 2014 at 2:40:52 PM Richard Jones < >> *r1chardj0n3s at gmail.com* > wrote: >> >> >> Thanks for the great summary, Travis. >> >> I've completed the work I pledged this morning, so now the REST >> API change set has: >> >> - no rest framework dependency >> - AJAX scaffolding in openstack_dashboard.api.rest.utils >> - code in openstack_dashboard/api/rest/ >> - renamed the API from "identity" to "keystone" to be consistent >> - added a sample of testing, mostly for my own sanity to check >> things were working >> >> *https://review.openstack.org/#/c/136676* >> >> >> >> Richard >> >> On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S < >> *travis.tripp at hp.com* > wrote: >> >> >> Hello all, >> >> Great discussion on the REST urls today! I think that we are on >> track to come to a common REST API usage pattern. To provide a quick summary: >> >> We all agreed that going to a straight REST pattern rather than >> through tables was a good idea. We discussed using direct get / post in >> Django views like what Max originally used[1][2] and Thai also started[3] >> with the identity table rework or to go with djangorestframework [5] like >> what Richard was prototyping with[4].
>> >> The main things we would use from Django Rest Framework were >> built-in JSON serialization (avoiding boilerplate), better exception handling, >> and some request wrapping. However, we all weren't sure about the need for >> a full new framework just for that. At the end of the conversation, we >> decided that it was a cleaner approach, but Richard would see if he could >> provide some utility code to do that much for us without requiring the full >> framework. David voiced that he doesn't want us building out a whole >> framework on our own either. >> >> So, Richard will do some investigation during his day today and >> get back to us. Whatever the case, we'll get a patch in horizon for the >> base dependency (framework or Richard's utilities) that both Thai's work >> and the launch instance work are dependent upon. We'll build REST style >> APIs using the same pattern. We will likely put the REST APIs in >> horizon/openstack_dashboard/api/rest/. >> >> [1] >> *https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/keypair.py* >> >> [2] >> *https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/launch.py* >> >> [3] >> *https://review.openstack.org/#/c/133767/8/openstack_dashboard/dashboards/identity/users/views.py* >> >> [4] >> *https://review.openstack.org/#/c/136676/4/openstack_dashboard/rest_api/identity.py* >> >> [5] *http://www.django-rest-framework.org/* >> >> >> Thanks, >> >> >> Travis >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Regards, Tihomir Trifonov ------------------------------ _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev End of OpenStack-dev Digest, Vol 32, Issue 25 ********************************************* From tqtran at us.ibm.com Thu Dec 11 02:20:56 2014 From: tqtran at us.ibm.com (Thai Q Tran) Date: Wed, 10 Dec 2014 19:20:56 -0700 Subject: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard In-Reply-To: References: , , , <547DD08A.7000402@redhat.com> Message-ID: An HTML attachment was scrubbed... URL: From kevin.carter at RACKSPACE.COM Thu Dec 11 02:49:23 2014 From: kevin.carter at RACKSPACE.COM (Kevin Carter) Date: Thu, 11 Dec 2014 02:49:23 +0000 Subject: [openstack-dev] Announcing the openstack ansible deployment repo In-Reply-To: References: <1FB68B52-F7C7-47E4-A44F-4846740B598A@rackspace.com> Message-ID: <33EFB4F4-A528-4188-A710-515D05A724C9@rackspace.com> Hey John, We too ran into the same issue with iSCSI, and after a lot of digging and chasing red herrings we found that the cinder-volume service wasn't the cause of the issues; it was "iscsiadm login" that caused the problem, and it was happening from within the nova-compute container.
If we weren't running cinder there were no issues with nova-compute running VMs from within a container; however, once we attempted to attach a volume to a running VM, iscsiadm would simply refuse to initiate. We followed up on an existing upstream bug regarding the issues but it's gotten little traction at present: "https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855". In testing we've found that if we give the compute container the raw device instead of using a bridge on a veth type interface we didn't see the same issues; however, doing that was less than ideal, so we opted to simply leave compute nodes as physical hosts. From within the playbooks we can set any service to run on bare metal as the "container" type, so that's what we've done with nova-compute, but hopefully sometime soon-ish we'll be able to move nova-compute back into a container, assuming the upstream bugs are fixed. I'd love to chat some more on this or anything else, hit me up anytime; I'm @cloudnull in the channel. -- Kevin > On Dec 10, 2014, at 19:01, John Griffith wrote: > > On Wed, Dec 10, 2014 at 3:16 PM, Kevin Carter > wrote: >> Hello all, >> >> >> The RCBOPS team at Rackspace has developed a repository of Ansible roles, playbooks, scripts, and libraries to deploy OpenStack inside containers for production use. We've been running this deployment for a while now, >> and at the last OpenStack summit we discussed moving the repo into Stackforge as a community project. Today, I'm happy to announce that the "os-ansible-deployment" repo is online within Stackforge. This project is a work in progress and we welcome anyone who's interested in contributing. >> >> This project includes: >> * Ansible playbooks for deployment and orchestration of infrastructure resources. >> * Isolation of services using LXC containers. >> * Software deployed from source using python wheels.
>> >> Where to find us: >> * IRC: #openstack-ansible >> * Launchpad: https://launchpad.net/openstack-ansible >> * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC. (The meeting schedule is not fully formalized and may be subject to change.) >> * Code: https://github.com/stackforge/os-ansible-deployment >> >> Thanks and we hope to see you in the channel. >> >> -- >> >> Kevin >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > Hey Kevin, > > Really cool! I have some questions though, I've been trying to do > this exact sort of thing on my own with Cinder but can't get the iscsi > daemon running in a container. In fact I run into a few weird > networking problems that I haven't sorted, but the storage piece seems > to be a big stumbling point for me even when I cut some of the extra > stuff I was trying to do with devstack out of it. > > Anyway, are you saying that this enables running the reference LVM > impl c-vol service in a container as well? I'd love to hear/see more > and play around with this. > > Thanks, > John > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: Message signed with OpenPGP using GPGMail URL: From yamamoto at valinux.co.jp Thu Dec 11 03:17:23 2014 From: yamamoto at valinux.co.jp (YAMAMOTO Takashi) Date: Thu, 11 Dec 2014 12:17:23 +0900 (JST) Subject: [openstack-dev] [Neutron] XenAPI questions Message-ID: <20141211031723.AD32970A0A@kuma.localdomain> hi, i have questions for XenAPI folks: - what's the status of XenAPI support in neutron? - is there any CI covering it? i want to look at logs.
- is it possible to write a small program which runs with the xen rootwrap and proxies the OpenFlow channel between domains? (cf. https://review.openstack.org/#/c/138980/) thank you. YAMAMOTO Takashi From raghavendra.lad at accenture.com Thu Dec 11 04:25:38 2014 From: raghavendra.lad at accenture.com (raghavendra.lad at accenture.com) Date: Thu, 11 Dec 2014 04:25:38 +0000 Subject: [openstack-dev] [Murano] Oslo.messaging error Message-ID: <6b3581871a8e4823b89d2eba1fd4a38b@BY2PR42MB101.048d.mgd.msft.net> Hi Team, I am installing Murano on the Ubuntu 14.04 Juno setup, and when I try to launch murano-api as shown below I encounter the following error. Please assist. For the install I am using the Murano guide linked below: https://murano.readthedocs.org/en/latest/install/manual.html I am trying to execute section 7: 1. Open a new console and launch Murano API. A separate terminal is required because the console will be locked by a running process. 2. $ cd ~/murano/murano 3. $ tox -e venv -- murano-api \ 4.
> --config-file ./etc/murano/murano.conf I am getting the below error : I have a Juno Openstack ready and trying to integrate Murano 2014-12-10 12:10:30.396 7721 DEBUG murano.openstack.common.service [-] neutron.endpoint_type = publicURL log_opt_values /home/ ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] neutron.insecure = False log_opt_values /home/ubun tu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] **************************************************************** **************** log_opt_values /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2050 2014-12-10 12:10:30.400 7721 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on controller:5672 2014-12-10 12:10:30.408 7721 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on controller:5672 2014-12-10 12:10:30.416 7721 INFO eventlet.wsgi [-] (7721) wsgi starting up on http://0.0.0.0:8082/ 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Updating statistic information. 
update_stats /home/ubuntu/murano/muran o/murano/common/statservice.py:57 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats object: update_stats /home/ubuntu/murano/murano/murano/common/statservice.py:58 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats: Requests:0 Errors: 0 Ave.Res.Time 0.0000 Per tenant: {} update_stats /home/ubuntu/murano/murano/murano/common/statservice.py:64 2014-12-10 12:10:30.433 7721 DEBUG oslo.db.sqlalchemy.session [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZER O_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /hom e/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py:509 2014-12-10 12:10:33.464 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server controller:5672 closed the connection. Check log in credentials: Socket closed 2014-12-10 12:10:33.465 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server controller:5672 closed the connection. Check log in credentials: Socket closed 2014-12-10 12:10:37.483 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server controller:5672 closed the connection. Check log in credentials: Socket closed 2014-12-10 12:10:37.484 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server controller:5672 closed the connection. Check log in credentials: Socket closed Warm Regards, Raghavendra Lad ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. 
Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. ______________________________________________________________________________________ www.accenture.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougw at a10networks.com Thu Dec 11 05:29:05 2014 From: dougw at a10networks.com (Doug Wiegley) Date: Thu, 11 Dec 2014 05:29:05 +0000 Subject: [openstack-dev] [neutron] Services are now split out and neutron is open for commits! In-Reply-To: References: Message-ID: Hi all, I'd like to echo the thanks to all involved, and thanks for the patience during this period of transition. And a logistical note: if you have any outstanding reviews against the now missing files/directories (db/{loadbalancer,firewall,vpn}, services/, or tests/unit/services), you must re-submit your review against the new repos. Existing neutron reviews for service code will be summarily abandoned in the near future. LBaaS folks, hold off on re-submitting feature/lbaasv2 reviews. I'll have that branch merged in the morning, and ping in channel when it's ready for submissions. Finally, if any tempest lovers want to take a crack at splitting the tempest runs into four, perhaps using salv's reviews of splitting them in two as a guide, and then creating jenkins jobs, we need some help getting those going. Please ping me directly (IRC: dougwig). Thanks, doug From: Kyle Mestery > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, December 10, 2014 at 4:10 PM To: "OpenStack Development Mailing List (not for usage questions)" > Subject: [openstack-dev] [neutron] Services are now split out and neutron is open for commits!
Folks, just a heads up that we have completed splitting out the services (FWaaS, LBaaS, and VPNaaS) into separate repositories [1][2][3]. This was all done in accordance with the spec approved here [4]. Thanks to all involved, but a special thanks to Doug and Anita, as well as infra. Without all of their work and help, this wouldn't have been possible! Neutron and the services repositories are now open for merges again. We're going to be landing some major L3 agent refactoring across the 4 repositories in the next four days; look for Carl to be leading that work with the L3 team. In the meantime, please report any issues you have in launchpad [5] as bugs, and find people in #openstack-neutron or send an email. We've verified things come up and all the tempest and API tests for basic neutron work fine. In the coming week, we'll be getting all the tests working for the services repositories. Medium term, we need to also move all the advanced services tempest tests out of tempest and into the respective repositories. We also need to beef these tests up considerably, so if you want to help out on a critical project for Neutron, please let me know. Thanks! Kyle [1] http://git.openstack.org/cgit/openstack/neutron-fwaas [2] http://git.openstack.org/cgit/openstack/neutron-lbaas [3] http://git.openstack.org/cgit/openstack/neutron-vpnaas [4] http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst [5] https://bugs.launchpad.net/neutron -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gokrokvertskhov at mirantis.com Thu Dec 11 06:07:47 2014 From: gokrokvertskhov at mirantis.com (Georgy Okrokvertskhov) Date: Wed, 10 Dec 2014 22:07:47 -0800 Subject: [openstack-dev] [Murano] Oslo.messaging error In-Reply-To: <6b3581871a8e4823b89d2eba1fd4a38b@BY2PR42MB101.048d.mgd.msft.net> References: <6b3581871a8e4823b89d2eba1fd4a38b@BY2PR42MB101.048d.mgd.msft.net> Message-ID: Hi, Could you please check what is in your murano.conf file? There are two sections for RabbitMQ configuration. Both of them should have the proper IP address of the RabbitMQ service as well as the proper user/password and vhost. Also, you could check whether RabbitMQ is actually up and running and listening on this port/IP. The "netstat -ltpn" command output will help to check if there is a process listening on port 5672. Hope this helps, Gosha On Wed, Dec 10, 2014 at 8:25 PM, wrote: > > > > > > > HI Team, > > > > I am installing Murano on the Ubuntu 14.04 Juno setup and when I try the > below install murano-api I encounter the below error. Please assist. > > > > When I install > > > > I am using the Murano guide link provided below: > > https://murano.readthedocs.org/en/latest/install/manual.html > > > > > > I am trying to execute the section 7 > > > > 1. Open a new console and launch Murano API. A separate terminal is > required because the console will be locked by a running process. > > 2. $ cd ~/murano/murano > > 3. $ tox -e venv -- murano-api \ > > 4.
> --config-file ./etc/murano/murano.conf > > > > > > I am getting the below error : I have a Juno Openstack ready and trying to > integrate Murano > > > > > > 2014-12-10 12:10:30.396 7721 DEBUG murano.openstack.common.service [-] > neutron.endpoint_type = publicURL log_opt_values > /home/ > ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048 > > 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] > neutron.insecure = False log_opt_values > /home/ubun > tu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2048 > > 2014-12-10 12:10:30.397 7721 DEBUG murano.openstack.common.service [-] > **************************************************************** > **************** log_opt_values > /home/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/config/cfg.py:2050 > > 2014-12-10 12:10:30.400 7721 INFO oslo.messaging._drivers.impl_rabbit [-] > Connecting to AMQP server on controller:5672 > > 2014-12-10 12:10:30.408 7721 INFO oslo.messaging._drivers.impl_rabbit [-] > Connecting to AMQP server on controller:5672 > > 2014-12-10 12:10:30.416 7721 INFO eventlet.wsgi [-] (7721) wsgi starting > up on http://0.0.0.0:8082/ > > 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Updating > statistic information. 
update_stats > /home/ubuntu/murano/muran > o/murano/common/statservice.py:57 > > 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats > object: > ction object at 0x7fada950a510> update_stats > /home/ubuntu/murano/murano/murano/common/statservice.py:58 > > 2014-12-10 12:10:30.417 7721 DEBUG murano.common.statservice [-] Stats: > Requests:0 Errors: 0 Ave.Res.Time 0.0000 > > Per tenant: {} update_stats > /home/ubuntu/murano/murano/murano/common/statservice.py:64 > > 2014-12-10 12:10:30.433 7721 DEBUG oslo.db.sqlalchemy.session [-] MySQL > server mode set to > STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZER > O_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION > _check_effective_sql_mode /hom > e/ubuntu/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py:509 > > 2014-12-10 12:10:33.464 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] > AMQP server controller:5672 closed the connection. Check > log in credentials: Socket closed > > 2014-12-10 12:10:33.465 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] > AMQP server controller:5672 closed the connection. Check > log in credentials: Socket closed > > 2014-12-10 12:10:37.483 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] > AMQP server controller:5672 closed the connection. Check > log in credentials: Socket closed > > 2014-12-10 12:10:37.484 7721 ERROR oslo.messaging._drivers.impl_rabbit [-] > AMQP server controller:5672 closed the connection. Check > log in credentials: Socket closed > > > > > > Warm Regards, > > *Raghavendra Lad* > > > > ------------------------------ > > This message is for the designated recipient only and may contain > privileged, proprietary, or otherwise confidential information. If you have > received it in error, please notify the sender immediately and delete the > original. Any other use of the e-mail by you is prohibited. 
Where allowed > by local law, electronic communications with Accenture and its affiliates, > including e-mail and instant messaging (including content), may be scanned > by our systems for the purposes of information security and assessment of > internal compliance with Accenture policy. > > ______________________________________________________________________________________ > > www.accenture.com > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 -------------- next part -------------- An HTML attachment was scrubbed... URL: From anant.patil at hp.com Thu Dec 11 06:14:19 2014 From: anant.patil at hp.com (Anant Patil) Date: Thu, 11 Dec 2014 11:44:19 +0530 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <547FEEEB.3070507@redhat.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> Message-ID: <5489363B.2060008@hp.com> On 04-Dec-14 10:49, Zane Bitter wrote: > On 01/12/14 02:02, Anant Patil wrote: >> On GitHub:https://github.com/anantpatil/heat-convergence-poc > > I'm trying to review this code at the moment, and finding some stuff I > don't understand: > > https://github.com/anantpatil/heat-convergence-poc/blob/master/heat/engine/stack.py#L911-L916 > > This appears to loop through all of the resources *prior* to kicking off > any actual updates to check if the resource will change. This is > impossible to do in general, since a resource may obtain a property > value from an attribute of another resource and there is no way to know > whether an update to said other resource would cause a change in the > attribute value. 
> > In addition, no attempt to catch UpdateReplace is made. Although that > looks like a simple fix, I'm now worried about the level to which this > code has been tested. > We were working on a new branch and, as we discussed on Skype, we have handled all these cases. Please have a look at our current branch: https://github.com/anantpatil/heat-convergence-poc/tree/graph-version When a new resource is taken for convergence, its children are loaded and the resource definition is re-parsed. The frozen resource definition will have all the "get_attr" resolved. > > I'm also trying to wrap my head around how resources are cleaned up in > dependency order. If I understand correctly, you store in the > ResourceGraph table the dependencies between various resource names in > the current template (presumably there could also be some left around > from previous templates too?). For each resource name there may be a > number of rows in the Resource table, each with an incrementing version. > As far as I can tell though, there's nowhere that the dependency graph > for _previous_ templates is persisted? So if the dependency order > changes in the template we have no way of knowing the correct order to > clean up in any more? (There's not even a mechanism to associate a > resource version with a particular template, which might be one avenue > by which to recover the dependencies.) > > I think this is an important case we need to be able to handle, so I > added a scenario to my test framework to exercise it and discovered that > my implementation was also buggy. Here's the fix: > https://github.com/zaneb/heat-convergence-prototype/commit/786f367210ca0acf9eb22bea78fd9d51941b0e40 > Thanks for pointing this out, Zane. We too had a buggy implementation for handling inverted dependencies. I had a hard look at our algorithm, where we were continuously merging the edges from the new template into the edges from previous updates.
It was an optimized way of traversing the graph in both forward and reverse order without missing any resources. But when the dependencies are inverted, this wouldn't work. We have changed our algorithm. The changes in edges are noted down in the DB; only the delta of edges from the previous template is calculated and kept. At any given point of time, the graph table has all the edges from the current template and the delta from previous templates. Each edge has a template ID associated with it. For resource clean-up, we start from the first template (the template which was completed and updates were made on top of it, or an empty template otherwise), and move towards the current template in the order in which the updates were issued, and for each template the graph (edges, if found for the template) is traversed in reverse order and resources are cleaned up.
I think it is simpler to think about stack and concurrent updates when we associate resources and edges with template, and stack with current template and its predecessors (if any). I also think that we should decouple Resource from Stack. This is really a hindrance when workers work on individual resources. The resource should be abstracted enough from stack for the worker to work on the resource alone. The worker should load the required resource plug-in and start converging. The READEME.rst is really helpful for bringing up the minimal devstack and test the PoC. I also has some notes on design. > >> It was difficult, for me personally, to completely understand Zane's PoC >> and how it would lay the foundation for aforementioned design goals. It >> would be very helpful to have Zane's understanding here. I could >> understand that there are ideas like async message passing and notifying >> the parent which we also subscribe to. > > So I guess the thing to note is that there are essentially two parts to > my Poc: > 1) A simulation framework that takes what will be in the final > implementation multiple tasks running in parallel in separate processes > and talking to a database, and replaces it with an event loop that runs > the tasks sequentially in a single process with an in-memory data store. > I could have built a more realistic simulator using Celery or something, > but I preferred this way as it offers deterministic tests. > 2) A toy implementation of Heat on top of this framework. 
> > The files map roughly to Heat something like this: > > converge.engine -> heat.engine.service > converge.stack -> heat.engine.stack > converge.resource -> heat.engine.resource > converge.template -> heat.engine.template > converge.dependencies -> actually is heat.engine.dependencies > converge.sync_point -> no equivalent > converge.converger -> no equivalent (this is convergence "worker") > converge.reality -> represents the actual OpenStack services > > For convenience, I just use the @asynchronous decorator to turn an > ordinary method call into a simulated message. > > The concept is essentially as follows: > At the start of a stack update (creates and deletes are also just > updates) we create any new resources in the DB and calculate the dependency > graph for the update from the data in the DB and template. This graph is > the same one used by updates in Heat currently, so it contains both the > forward and reverse (cleanup) dependencies. The stack update then kicks > off checks of all the leaf nodes, passing the pre-calculated dependency > graph. > > Each resource check may result in a call to the create(), update() or > delete() methods of a Resource plugin. The resource also reads any > attributes that will be required from it. Once this is complete, it > triggers any dependent resources that are ready, or updates a SyncPoint > in the database if there are dependent resources that have multiple > requirements. The message triggering the next resource will contain the > dependency graph again, as well as the RefIds and required attributes of > any resources it depends on. > > The new dependencies thus created are added to the resource itself in > the database at the time it is checked, allowing it to record the > changes caused by a requirement being unexpectedly replaced without > needing a global lock on anything. > > When cleaning up resources, we also endeavour to remove any that are > successfully deleted from the dependencies graph.
> > Each traversal has a unique ID that is both stored in the stack and > passed down through the resource check triggers. (At present this is the > template ID, but it may make more sense to have a unique ID since old > template IDs can be resurrected in the case of a rollback.) As soon as > these fail to match the resource checks stop propagating, so only an > update of a single field is required (rather than locking an entire > table) before beginning a new stack update. > > Hopefully that helps a little. Please let me know if you have specific > questions. I'm *very* happy to incorporate other ideas into it, since > it's pretty quick to change, has tests to check for regressions, and is > intended to be thrown away anyhow (so I genuinely don't care if some > bits get thrown away earlier than others). > This is of tremendous help for us. > >> In retrospective, we had to struggle a lot to understand the existing >> Heat engine. We couldn't have done justice by just creating another >> project in GitHub and without any concrete understanding of existing >> state-of-affairs. > > I completely agree, and you guys did the right thing by starting out > looking at Heat. But remember, the valuable thing isn't the code, it's > what you learned. My concern is that now that you have Heat pretty well > figured out, you won't be able to continue to learn nearly as fast > trying to wrestle with the Heat codebase as you could with the > simulator. We don't want to fall into the trap of just shipping whatever > we have because it's too hard to explore the other options, we want to > identify a promising design and iterate it as quickly as possible. > I would have loved to, especially after the short tutorial given above :). The framework is great! I am in the middle of using DB transactions to replace stack lock for critical section. For that I need my devstack setup with the actual DB running. 
I liked the test cases (scenario tests) you have, and I am porting them so that we can run them against our PoC. > cheers, > Zane. > > Zane, I have a few questions: 1. Our current implementation is based on notifications from the workers, so that the engine can take up the next set of tasks. I don't see this in your case, but I think we should be doing it: it gels well with the observer notification mechanism. When an observation arrives, the observer would send a converge notification, so both the provisioning of the stack and the continuous observation happen via notifications (async message passing). I see that in your case the workers pick up the parent when/if it is done and either schedule it or update the sync point. 2. The dependency graph travels everywhere. IMHO, we can keep the graph in the DB, let the workers work on a resource, and let the engine decide which one to schedule next by looking at the graph. There wouldn't be a need for a lock in the engine; DB transactions should take care of concurrent DB updates. Our new PoC follows this model. 3. The request ID is passed down to check_*_complete. Would the check method be interrupted if a new request arrives? IMHO, the check method should not be interrupted; it should return only when the resource has reached a concrete state, either failed or complete. 4. Many of the synchronization issues we faced in our PoC cannot be reproduced with this framework. How do we evaluate what happens when such synchronization issues are encountered (e.g. the stack-lock issues that we are replacing with DB transactions)?
- Anant From harshada.kakad at izeltech.com Thu Dec 11 06:51:23 2014 From: harshada.kakad at izeltech.com (Harshada Kakad) Date: Thu, 11 Dec 2014 12:21:23 +0530 Subject: [openstack-dev] [diskimage-builder] ramdisk-image-create fails for creating Centos/rhel images. Message-ID: Hi All, I am trying to build Centos/rhel image for baremetal deployment using ramdisk-image-create. I am using my build host as CentOS release 6.5 (Final). It fails saying no busybox available. Here are the logs for more information, Can anyone please help me on this. Running ramdisk-image-create for centos7 using below command it fails to install busybox. Attached output for more details. sudo bin/ramdisk-image-create -a amd64 centos7 deploy-ironic -o /tmp/deploy-ramdisk-centos7 Total Running transaction check Running transaction test Transaction test succeeded Running transaction Updating : 12:dhcp-libs-4.2.5-27.el7.centos.2.x86_64 Updating : 12:dhcp-common-4.2.5-27.el7.centos.2.x86_64 Updating : 12:dhclient-4.2.5-27.el7.centos.2.x86_64 Cleanup : 12:dhclient-4.2.5-27.el7.centos.x86_64 Cleanup : 12:dhcp-common-4.2.5-27.el7.centos.x86_64 Cleanup : 12:dhcp-libs-4.2.5-27.el7.centos.x86_64 Verifying : 12:dhcp-libs-4.2.5-27.el7.centos.2.x86_64 Verifying : 12:dhclient-4.2.5-27.el7.centos.2.x86_64 Verifying : 12:dhcp-common-4.2.5-27.el7.centos.2.x86_64 Verifying : 12:dhclient-4.2.5-27.el7.centos.x86_64 Verifying : 12:dhcp-libs-4.2.5-27.el7.centos.x86_64 Verifying : 12:dhcp-common-4.2.5-27.el7.centos.x86_64 Updated: dhclient.x86_64 12:4.2.5-27.el7.centos.2 Dependency Updated: dhcp-common.x86_64 12:4.2.5-27.el7.centos.2 dhcp-libs.x86_64 12:4.2.5-27.e Complete! 
dib-run-parts Thu Oct 9 09:19:08 UTC 2014 20-install-dhcp-client completed dib-run-parts Thu Oct 9 09:19:08 UTC 2014 Running /tmp/in_target.d/install.d/50-store-build-settings dib-run-parts Thu Oct 9 09:19:08 UTC 2014 50-store-build-settings completed dib-run-parts Thu Oct 9 09:19:08 UTC 2014 Running /tmp/in_target.d/install.d/52-ramdisk-install-busybox Running install-packages install. Package list: busybox Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: dallas.tx.mirror.xygenhosting.com * epel: mirror.its.dal.ca * extras: mirror.cs.vt.edu * updates: mirrors.advancedhosters.com No package busybox available. Error: Nothing to do ------------------------------------------------------------------------------------------------------------- Running ramdisk-image-create for rhel and rhel7 using below commands. sudo bin/ramdisk-image-create -a amd64 rhel deploy-ironic -o /tmp/deploy-ramdisk-rhel sudo bin/ramdisk-image-create -a amd64 rhel7 deploy-ironic -o /tmp/deploy-ramdisk-rhel Here is the output. * subject: serialNumber=dmox-zPOCChZGgYyWu9xg8JTHSbjFg9P; C=US; ST=North Carolina; L=Raleigh; O=Red Hat Inc; OU=Web Operations; CN=*. redhat.com * start date: 2013-09-09 18:07:24 GMT * expire date: 2015-12-12 02:08:43 GMT * subjectAltName: rhn.redhat.com matched * issuer: C=US; O=GeoTrust, Inc.; CN=GeoTrust SSL CA * SSL certificate verify ok. 
> GET /rhel-guest-image-6.5-20140603.0.x86_64.qcow2 HTTP/1.0 > User-Agent: curl/7.35.0 > Host: rhn.redhat.com > Accept: */* > < HTTP/1.1 404 Not Found < Date: Thu, 09 Oct 2014 09:40:48 GMT * Server Apache is not blacklisted < Server: Apache < X-Frame-Options: SAMEORIGIN < Set-Cookie: pxt-session-cookie=4683981779x5ee55672220e170244faf07ecc0e558b; path=/; domain=rhn.redhat.com; expires=Fri, 10-Oct-2014 09:40:48 GMT; secure < Pragma: no-cache < Cache-control: no-cache < Content-Length: 50884 < X-Trace: 1B6697A2F0D89CF2871A25B9CC6CA3D7A60410E1666F0EAEAB8E81F1FD < Connection: close < Content-Type: text/html; charset=UTF-8 < Expires: Thu, 09 Oct 2014 09:40:48 GMT < { [data not shown] 100 50884 100 50884 0 0 48390 0 0:00:01 0:00:01 --:--:-- 48390 * Closing connection 1 * SSLv3, TLS alert, Client hello (1): } [data not shown] Server returned an unexpected response code. [404] -- *Regards,* *Harshada Kakad* *--------------------------------------------------------------------* *Sr. Software Engineer* *C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune ? 411013, India* *Mobile-9689187388* *Email-Id : harshada.kakad at izeltech.com * *website : www.izeltech.com * -- *****Disclaimer***** The information contained in this e-mail and any attachment(s) to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information of Izel Technologies Pvt. Ltd. If you are not the intended recipient, you are notified that any review, use, any form of reproduction, dissemination, copying, disclosure, modification, distribution and/or publication of this e-mail message, contents or its attachment(s) is strictly prohibited and you are requested to notify us the same immediately by e-mail and delete this mail immediately. Izel Technologies Pvt. Ltd accepts no liability for virus infected e-mail or errors or omissions or consequences which may arise as a result of this e-mail transmission. 
*****End of Disclaimer***** -------------- next part -------------- An HTML attachment was scrubbed... URL: From raghavendra.lad at accenture.com Thu Dec 11 07:12:02 2014 From: raghavendra.lad at accenture.com (raghavendra.lad at accenture.com) Date: Thu, 11 Dec 2014 07:12:02 +0000 Subject: [openstack-dev] [Murano] Oslo.messaging error Message-ID: <0c970abc20af4b198488fd85fc71e23e@BY2PR42MB101.048d.mgd.msft.net> HI Team, I am installing Murano on the Ubuntu 14.04 Juno setup and when I try the below install murano-api I encounter the below error. Please assist. When I install I am using the Murano guide link provided below: https://murano.readthedocs.org/en/latest/install/manual.html I am trying to execute the section 7 1. Open a new console and launch Murano API. A separate terminal is required because the console will be locked by a running process. 2. $ cd ~/murano/murano 3. $ tox -e venv -- murano-api \ 4. > --config-file ./etc/murano/murano.conf I am getting the below error : I have a Juno Openstack ready and trying to integrate Murano 2014-12-11 12:28:03.676 9524 INFO eventlet.wsgi [-] (9524) wsgi starting up on http://0.0.0.0:8082/ 2014-12-11 12:28:03.677 9524 DEBUG murano.common.statservice [-] Updating statistic information. 
update_stats /root/murano/murano/murano/common/statservice.py:57 2014-12-11 12:28:03.677 9524 DEBUG murano.common.statservice [-] Stats object: update_stats /root/murano/murano/murano/common/statservice.py:58 2014-12-11 12:28:03.677 9524 DEBUG murano.common.statservice [-] Stats: Requests:0 Errors: 0 Ave.Res.Time 0.0000 Per tenant: {} update_stats /root/murano/murano/murano/common/statservice.py:64 2014-12-11 12:28:03.692 9524 DEBUG oslo.db.sqlalchemy.session [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /root/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py:509 2014-12-11 12:28:06.721 9524 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server 192.168.x.x:5672 closed the connection. Check login credentials: Socket closed Warm Regards, Raghavendra Lad ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. ______________________________________________________________________________________ www.accenture.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From xiaohui.xin at intel.com Thu Dec 11 07:34:23 2014 From: xiaohui.xin at intel.com (Xin, Xiaohui) Date: Thu, 11 Dec 2014 07:34:23 +0000 Subject: [openstack-dev] [Horizon] Proposal to add IPMI meters from Ceilometer in Horizon Message-ID: Hi, In Juno Release, the IPMI meters in Ceilometer have been implemented. We know that most of the meters implemented in Ceilometer can be observed in Horizon side. User admin can use the "Admin" dashboard -> "System" Panel Group -> "Resource Usage" Panel to show the "Resources Usage Overview". There are a lot of Ceilometer Metrics there now, each metric can be metered. Since IPMI meters have already been there, we'd like to add such Metric items for it in Horizon to get metered information. Is there anyone who oppose this proposal? If not, we'd like to add a blueprint in Horizon for it soon. Thanks Xiaohui -------------- next part -------------- An HTML attachment was scrubbed... URL: From sahid.ferdjaoui at redhat.com Thu Dec 11 08:20:20 2014 From: sahid.ferdjaoui at redhat.com (Sahid Orentino Ferdjaoui) Date: Thu, 11 Dec 2014 09:20:20 +0100 Subject: [openstack-dev] [nova] Kilo specs review day In-Reply-To: References: Message-ID: <20141211082020.GA2517@redhat.redhat.com> On Thu, Dec 11, 2014 at 08:41:49AM +1100, Michael Still wrote: > Hi, > > at the design summit we said that we would not approve specifications > after the kilo-1 deadline, which is 18 December. Unfortunately, we?ve > had a lot of specifications proposed this cycle (166 to my count), and > haven?t kept up with the review workload. > > Therefore, I propose that Friday this week be a specs review day. We > need to burn down the queue of specs needing review, as well as > abandoning those which aren?t getting regular updates based on our > review comments. > > I?d appreciate nova-specs-core doing reviews on Friday, but its always > super helpful when non-cores review as well. Sure it could be *super* useful :) - I will try to help on this way. 
> A +1 for a developer or > operator gives nova-specs-core a good signal of what might be ready to > approve, and that helps us optimize our review time. > > For reference, the specs to review may be found at: > > https://review.openstack.org/#/q/project:openstack/nova-specs+status:open,n,z > > Thanks heaps, > Michael > > -- > Rackspace Australia > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From maxime.leroy at 6wind.com Thu Dec 11 08:57:40 2014 From: maxime.leroy at 6wind.com (Maxime Leroy) Date: Thu, 11 Dec 2014 09:57:40 +0100 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> Message-ID: On Thu, Dec 11, 2014 at 2:37 AM, henry hly wrote: > On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells wrote: [..] >> >> The problem is that we effectively prevent running an out of tree Neutron >> driver (which *is* perfectly legitimate) if it uses a VIF plugging mechanism >> that isn't in Nova, as we can't use out of tree code and we won't accept in >> code ones for out of tree drivers. > +1 well said ! > The question is, do we really need such flexibility for so many nova vif types? > Are we going to accept a new VIF_TYPE in nova if it's only used by an external ml2/l2 plugin ? > I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is good example, > nova shouldn't known too much details about switch backend, it should > only care about the VIF itself, how the VIF is plugged to switch > belongs to Neutron half. VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is nice if your out-of-tree l2/ml2 plugin needs a tap interface or a vhostuser socket. But if your external l2/ml2 plugin needs a specific type of nic (i.e. 
a new method get_config to provide specific parameters to libvirt for the nic) that not supported in the nova tree, you still need to have a plugin mechanism. [..] >> Your issue is one of testing. Is there any way we could set up a better >> testing framework for VIF drivers where Nova interacts with something to >> test the plugging mechanism actually passes traffic? I don't believe >> there's any specific limitation on it being *Neutron* that uses the plugging >> interaction. My spec proposes to use the same plugin mechanism for the vif drivers in the tree and for the external vif drivers. Please see my RFC patch: https://review.openstack.org/#/c/136857/ Maxime From joehuang at huawei.com Thu Dec 11 09:02:37 2014 From: joehuang at huawei.com (joehuang) Date: Thu, 11 Dec 2014 09:02:37 +0000 Subject: [openstack-dev] =?windows-1252?q?=5Ball=5D_=5Btc=5D_=5BPTL=5D_Cas?= =?windows-1252?q?cading_vs=2E_Cells_=96_summit_recap_and_move_forward?= In-Reply-To: <5488B96B.2070207@redhat.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> Hello, Russell, Many thanks for your reply. See inline comments. -----Original Message----- From: Russell Bryant [mailto:rbryant at redhat.com] Sent: Thursday, December 11, 2014 5:22 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells ? 
summit recap and move forward >> On Fri, Dec 5, 2014 at 8:23 AM, joehuang wrote: >>> Dear all & TC & PTL, >>> >>> In the 40-minute cross-project summit session "Approaches for >>> scaling out"[1], almost 100 people attended the meeting, and the >>> conclusion is that cells cannot cover the use cases and >>> requirements which the OpenStack cascading solution[2] aims to >>> address; the background, including use cases and requirements, is also >>> described in the mail. >I must admit that this was not the reaction I came away from the discussion with. >There was a lot of confusion, and as we started looking closer, many (or perhaps most) >people speaking up in the room did not agree that the requirements being stated are >things we want to try to satisfy. [joehuang] Could you please confirm your opinion: 1) cells cannot cover the use cases and requirements which the OpenStack cascading solution aims to address; 2) further discussion is needed on whether to satisfy those use cases and requirements. On 12/05/2014 06:47 PM, joehuang wrote: >>> Hello, Davanum, >>> >>> Thanks for your reply. >>> >>> Cells can't meet the demand for the use cases and requirements described in the mail. >You're right that cells doesn't solve all of the requirements you're discussing. >Cells addresses scale in a region. My impression from the summit session > and other discussions is that the scale issues addressed by cells are considered > a priority, while the "global API" bits are not. [joehuang] Agreed, cells is the first-class priority. >>> 1. Use cases >>> a). Vodafone use case[4] (OpenStack summit speech video from 9'02" >>> to 12'30"), establishing globally addressable tenants, which results >>> in efficient services deployment. > Keystone has been working on federated identity. >That part makes sense, and is already well under way. [joehuang] The major challenge for the VDF use case is cross-OpenStack networking for tenants.
The tenant's VMs/volumes may be allocated in geographically different data centers, but a virtual network (L2/L3/FW/VPN/LB) should be built for each tenant automatically and isolated between tenants. Keystone federation can help automate authorization, but the cross-OpenStack network automation challenge is still there. A proprietary orchestration layer could solve the automation issue, but VDF doesn't want a proprietary API at the north-bound interface, because no ecosystem is available for it. Other issues, for example how to distribute images, also cannot be solved by Keystone federation. >>> b). Telefonica use case[5], create virtual DCs (data centers) across >>> multiple physical DCs with a seamless experience. >If we're talking about multiple DCs that are effectively local to each other >with high bandwidth and low latency, that's one conversation. >My impression is that you want to provide a single OpenStack API on top of >globally distributed DCs. I honestly don't see that as a problem we should >be trying to tackle. I'd rather continue to focus on making OpenStack work >*really* well split into regions. > I think some people are trying to use cells in a geographically distributed way, > as well. I'm not sure that's a well understood or supported thing, though. > Perhaps the folks working on the new version of cells can comment further. [joehuang] 1) The split-region approach cannot provide cross-OpenStack networking automation for tenants. 2) Exactly; the motivation for cascading is a "single OpenStack API on top of globally distributed DCs". Of course, cascading can also be used for DCs close to each other with high bandwidth and low latency. 3) Comments from the cells folks are welcome.
[joehuang] This is the ETSI requirements and use cases specification for NFV. ETSI is the home of the Industry Specification Group for NFV. In Figure 14 (virtualization of EPC) of this document, you can see that the operator's cloud includes many data centers providing connection service to end users via inter-connected VNFs. The requirements listed in (https://wiki.openstack.org/wiki/TelcoWorkingGroup) are mainly about the requirements for specific VNFs (like IMS, SBC, MME, HSS, S/P-GW, etc.) to run over the cloud, e.g. migrating traditional telco applications from proprietary hardware to the cloud. Not all NFV requirements have been covered yet. Forgive me, there are so many telco terms here. >> >>> 2. Requirements >>> a). The operator has a multi-site cloud; each site can use one or >>> multiple vendors' OpenStack distributions. >Is this a technical problem, or is a business problem of vendors not >wanting to support a mixed environment that you're trying to work >around with a technical solution? [joehuang] Please refer to the VDF use case; the multi-vendor policy has been stated very clearly: 1) Local relationships: Operating Companies also have long-standing relationships with their own choice of vendors; 2) Multi-vendor: each site can use one or multiple vendors, which leads to better use of local resources and capabilities. A technical solution must be provided for multi-vendor integration and verification; in the past, for mobile networks, this was usually an ETSI standard. But how do we do that in a multi-vendor cloud infrastructure? Cascading provides a way to use the OpenStack API as the integration interface.
>> Although a proprietary orchestration layer could be developed for the >> multi-site cloud, it would expose a proprietary API at the north-bound >> interface. The cloud operators want an ecosystem-friendly global >> open API for the multi-site cloud for global access. >I guess the question is, do we see a "global API" as something we want >to accomplish. What you're talking about is huge, and I'm not even sure >how you would expect it to work in some cases (like networking). [joehuang] Yes, the most challenging part is networking. In the PoC, L2 networking across OpenStack instances leverages the L2 population mechanism. The L2proxy for DC1 in the cascading layer detects that the new VM1's port (in DC1) is up, and then ML2 L2 population is activated: VM1's tunneling endpoint (host IP or L2GW IP in DC1) is populated to the L2proxy for DC2, and the L2proxy for DC2 creates an external port in the DC2 Neutron with VM1's tunneling endpoint (host IP or L2GW IP in DC1). The external port is either attached to the L2GW, or only the external port is created; L2 population (if no L2GW is used) inside DC2 can then be activated to notify all VMs located in DC2 on the same L2 network. For L3 networking, what was finished in the PoC uses extra routes over GRE to serve local VLAN/VxLAN networks located in different DCs. Of course, other L3 networking methods could be developed, for example through a VPN service. There are 4 or 5 BPs talking about edge network gateways to connect OpenStack tenant networks to outside networks; all of these technologies can be leveraged for cross-OpenStack networking in different scenarios. To experience the cross-OpenStack networking, please try the PoC source code: https://github.com/stackforge/tricircle
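The cross-DC L2 flow described here (port up in DC1, tunnel endpoint populated to other DCs, mirrored as an external port) can be summarized in a purely illustrative sketch; all names are invented and this is not the tricircle code:

```python
# Purely illustrative sketch of the cascading L2 flow described above:
# when a VM port goes up in DC1, its tunnel endpoint is "populated" to the
# proxy of every other DC, which mirrors it as an external port. See the
# tricircle repo for the real PoC.
class L2Proxy(object):
    def __init__(self, dc_name):
        self.dc_name = dc_name
        self.external_ports = []   # endpoints mirrored from other DCs
        self.peers = []

    def port_up(self, vm_id, tunnel_endpoint):
        # ML2 L2 population: notify the proxies of all other DCs.
        for peer in self.peers:
            peer.receive_endpoint(vm_id, tunnel_endpoint)

    def receive_endpoint(self, vm_id, tunnel_endpoint):
        # Create an "external port" in the local Neutron pointing at the
        # remote VM's tunnel endpoint (host IP or L2GW IP).
        self.external_ports.append((vm_id, tunnel_endpoint))

dc1, dc2 = L2Proxy("DC1"), L2Proxy("DC2")
dc1.peers, dc2.peers = [dc2], [dc1]
```

The real mechanism carries much more state (segmentation IDs, L2GW selection), but the propagation pattern is the same.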
[joehuang] If you or any other have any doubts, please feel free to ignite a discussion thread. For time difference reason, we (working in China) are not able to join most of IRC meeting, so mail-list is a good way for discussion. Russell Bryant _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Best Regards Chaoyi Huang ( joehuang ) From xianchaobo at huawei.com Thu Dec 11 09:07:54 2014 From: xianchaobo at huawei.com (xianchaobo) Date: Thu, 11 Dec 2014 09:07:54 +0000 Subject: [openstack-dev] [Ironic] Some questions about Ironic service In-Reply-To: <98B730463BF8F84A885ABF3A8F6149516B8D8B97@SZXEMA501-MBS.china.huawei.com> References: <98B730463BF8F84A885ABF3A8F6149516B8D8B97@SZXEMA501-MBS.china.huawei.com> Message-ID: Hi,Fox Kevin M Thanks for your help. Also,I want to know whether these features will be implemented in Ironic? Do we have a plan to implement them? Thanks Xianchaobo From: Fox, Kevin M [mailto:Kevin.Fox at pnnl.gov] Sent: Tuesday, December 09, 2014 5:52 PM To: OpenStack Development Mailing List (not for usage questions) Cc: Luohao (brian) Subject: Re: [openstack-dev] [Ironic] Some questions about Ironic service No to questions 1, 3, and 4. Yes to 2, but very minimally. ________________________________ From: xianchaobo Sent: Monday, December 08, 2014 10:29:50 PM To: openstack-dev at lists.openstack.org Cc: Luohao (brian) Subject: [openstack-dev] [Ironic] Some questions about Ironic service Hello, all I'm trying to install and configure Ironic service, something confused me. I create two neutron networks, public network and private network. Private network is used to deploy physical machines Public network is used to provide floating ip. (1) Private network type can be VLAN or VXLAN? (In install guide, the network type is flat) (2) The network of deployed physical machines can be managed by neutron? 
(3) Different tenants can have its own network to manage physical machines? (4) Does the ironic provide some mechanism for deployed physical machines to use storage such as shared storage,cinder volume? Thanks, XianChaobo -------------- next part -------------- An HTML attachment was scrubbed... URL: From efedorova at mirantis.com Thu Dec 11 09:08:28 2014 From: efedorova at mirantis.com (Ekaterina Chernova) Date: Thu, 11 Dec 2014 13:08:28 +0400 Subject: [openstack-dev] [Murano] Oslo.messaging error In-Reply-To: <0c970abc20af4b198488fd85fc71e23e@BY2PR42MB101.048d.mgd.msft.net> References: <0c970abc20af4b198488fd85fc71e23e@BY2PR42MB101.048d.mgd.msft.net> Message-ID: Hi! I recommend you to create separate user and password in Rabbit MQ and do not use 'quest' user. Don't forget to edit config file. I recommend you to go to our IRC channel #murano Freenode. We will help you to set up your environment step by step! Regards, Kate. On Thu, Dec 11, 2014 at 10:12 AM, wrote: > > > > > > > HI Team, > > > > I am installing Murano on the Ubuntu 14.04 Juno setup and when I try the > below install murano-api I encounter the below error. Please assist. > > > > When I install > > > > I am using the Murano guide link provided below: > > https://murano.readthedocs.org/en/latest/install/manual.html > > > > > > I am trying to execute the section 7 > > > > 1. Open a new console and launch Murano API. A separate terminal is > required because the console will be locked by a running process. > > 2. $ cd ~/murano/murano > > 3. $ tox -e venv -- murano-api \ > > 4. > --config-file ./etc/murano/murano.conf > > > > > > I am getting the below error : I have a Juno Openstack ready and trying to > integrate Murano > > > > > > 2014-12-11 12:28:03.676 9524 INFO eventlet.wsgi [-] (9524) wsgi starting > up on http://0.0.0.0:8082/ > > 2014-12-11 12:28:03.677 9524 DEBUG murano.common.statservice [-] Updating > statistic information. 
update_stats > /root/murano/murano/murano/common/statservice.py:57 > > 2014-12-11 12:28:03.677 9524 DEBUG murano.common.statservice [-] Stats > object: object at 0x7ff72837d410> update_stats > /root/murano/murano/murano/common/statservice.py:58 > > 2014-12-11 12:28:03.677 9524 DEBUG murano.common.statservice [-] Stats: > Requests:0 Errors: 0 Ave.Res.Time 0.0000 > > Per tenant: {} update_stats > /root/murano/murano/murano/common/statservice.py:64 > > 2014-12-11 12:28:03.692 9524 DEBUG oslo.db.sqlalchemy.session [-] MySQL > server mode set to > STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION > _check_effective_sql_mode > /root/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py:509 > > 2014-12-11 12:28:06.721 9524 ERROR oslo.messaging._drivers.impl_rabbit [-] > AMQP server 192.168.x.x:5672 closed the connection. Check login > credentials: Socket closed > > > > Warm Regards, > > *Raghavendra Lad* > > > > ------------------------------ > > This message is for the designated recipient only and may contain > privileged, proprietary, or otherwise confidential information. If you have > received it in error, please notify the sender immediately and delete the > original. Any other use of the e-mail by you is prohibited. Where allowed > by local law, electronic communications with Accenture and its affiliates, > including e-mail and instant messaging (including content), may be scanned > by our systems for the purposes of information security and assessment of > internal compliance with Accenture policy. > > ______________________________________________________________________________________ > > www.accenture.com > -------------- next part -------------- An HTML attachment was scrubbed... 
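Concretely, the advice above amounts to creating a dedicated RabbitMQ account and pointing murano.conf at it. A hypothetical sketch follows; the user name, password, and exact option names are placeholders and should be checked against the installed Murano version:

```ini
# On the RabbitMQ host (run as root) -- user/password are placeholders:
#   rabbitmqctl add_user murano MURANO_RABBIT_PASSWORD
#   rabbitmqctl set_permissions murano ".*" ".*" ".*"

# murano.conf fragment (verify section/option names against your version):
[rabbitmq]
host = 192.168.x.x
port = 5672
login = murano
password = MURANO_RABBIT_PASSWORD
virtual_host = /
```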
URL: From julien at danjou.info Thu Dec 11 10:04:05 2014 From: julien at danjou.info (Julien Danjou) Date: Thu, 11 Dec 2014 11:04:05 +0100 Subject: [openstack-dev] [oslo] deprecation 'pattern' library?? In-Reply-To: (Joshua Harlow's message of "Wed, 10 Dec 2014 12:26:47 -0800") References: Message-ID: On Wed, Dec 10 2014, Joshua Harlow wrote: [?] > Or in general any other comments/ideas about providing such a deprecation > pattern library? +1 > * debtcollector made me think of "loanshark" :)" -- Julien Danjou -- Free Software hacker -- http://julien.danjou.info -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 826 bytes Desc: not available URL: From berrange at redhat.com Thu Dec 11 10:41:37 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Thu, 11 Dec 2014 10:41:37 +0000 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> Message-ID: <20141211104137.GD23831@redhat.com> On Thu, Dec 11, 2014 at 09:37:31AM +0800, henry hly wrote: > On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells wrote: > > On 10 December 2014 at 01:31, Daniel P. Berrange > > wrote: > >> > >> > >> So the problem of Nova review bandwidth is a constant problem across all > >> areas of the code. We need to solve this problem for the team as a whole > >> in a much broader fashion than just for people writing VIF drivers. The > >> VIF drivers are really small pieces of code that should be straightforward > >> to review & get merged in any release cycle in which they are proposed. > >> I think we need to make sure that we focus our energy on doing this and > >> not ignoring the problem by breaking stuff off out of tree. 
> > > > > > The problem is that we effectively prevent running an out-of-tree Neutron > > driver (which *is* perfectly legitimate) if it uses a VIF plugging mechanism > > that isn't in Nova, as we can't use out-of-tree code and we won't accept > > in-tree plugging code for out-of-tree drivers. > > The question is, do we really need such flexibility for so many Nova VIF types? > > I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER are good examples: > Nova shouldn't know too many details about the switch backend; it should > only care about the VIF itself. How the VIF is plugged into the switch > belongs to the Neutron half. > > However, I'm not saying we should move the existing VIF drivers out; those open > backends have been used widely. But from now on the tap and vhostuser > modes should be encouraged: one common VIF driver serving many long-tail > backends. Yes, I really think this is a key point. When we introduced the VIF type mechanism we never intended for there to be so many different VIF types created. There is a very small, finite number of possible ways to configure the libvirt guest XML, and it was intended that the VIF types pretty much mirror that. This would have given us about 8 distinct VIF types maximum. I think the reason for the larger-than-expected number of VIF types is that the drivers are being written to require arbitrary tools to be invoked in the plug & unplug methods. It would really be better if those could be accomplished in the Neutron code rather than the Nova code, via a host agent run & provided by the Neutron mechanism. This would let us have a very small number of VIF types and so avoid the entire problem that this thread is bringing up. Failing that though, I could see a way to accomplish a similar thing without a Neutron-launched agent. If one of the VIF type binding parameters were the name of a script, we could run that script on plug & unplug.
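Sketched in code, that dispatch might look something like this on the Nova side; this is a rough illustration only, and the vif_plug_script binding parameter is hypothetical (nothing like it exists in Nova today):

```python
import subprocess

# Rough sketch of the "plug script" dispatch suggested above. The
# 'vif_plug_script' key in the port binding details is hypothetical.
def run_vif_script(vif, action):
    """Invoke the Neutron-provided plug/unplug script for this VIF, if any."""
    script = vif["details"].get("vif_plug_script")
    if script is None:
        return False  # fall back to the built-in handling for this vif_type
    # e.g. /usr/bin/neutron-midonet-vif-plug plug <vif-id> <device-name>
    subprocess.check_call([script, action, vif["id"], vif["devname"]])
    return True
```

The point is that the set of VIF types stays fixed while the per-mechanism behaviour lives entirely in the Neutron-shipped script.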
So we'd have a finite number of VIF types, and each new Neutron mechanism would merely have to provide a script to invoke; e.g. consider the existing midonet & iovisor VIF types as an example. Both of them use the libvirt "ethernet" config, but have different things running in their plug methods. If we had a mechanism for associating a "plug script" with a vif type, we could use a single VIF type for both. e.g. iovisor port binding info would contain

  vif_type=ethernet
  vif_plug_script=/usr/bin/neutron-iovisor-vif-plug

while midonet would contain

  vif_type=ethernet
  vif_plug_script=/usr/bin/neutron-midonet-vif-plug

And so you see implementing a new Neutron mechanism in this way would not require *any* changes in Nova whatsoever. The work would be entirely self-contained within the scope of Neutron. It is simply a packaging task to get the vif script installed on the compute hosts, so that Nova can execute it. This is essentially providing a flexible VIF plugin system for Nova, without having to have it plug directly into the Nova codebase with the API & RPC stability constraints that implies.

Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

From akilesh1597 at gmail.com Thu Dec 11 10:43:51 2014
From: akilesh1597 at gmail.com (Akilesh K)
Date: Thu, 11 Dec 2014 16:13:51 +0530
Subject: [openstack-dev] [neutron][sriov] PciDeviceRequestFailed error
In-Reply-To: <3bef26e6.f22e.14a281154ac.Coremail.ayshihanzhang@126.com>
References: <3bef26e6.f22e.14a281154ac.Coremail.ayshihanzhang@126.com>
Message-ID:

Hey guys, sorry for the delayed reply. The problem was with the whitelist. I had whitelisted the id of the physical function instead of the virtual function.
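For anyone else hitting this: the whitelist entry has to match the virtual function's vendor/product id, not the physical function's. A sketch of a working entry follows; the ids below are illustrative (e.g. an Intel 82576 VF), so check `lspci -nn` on your compute node for the real ones:

```ini
# nova.conf on the compute node: whitelist the *virtual* function's ids,
# not the physical function's
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ca", "physical_network": "physnet1"}
```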
On Mon, Dec 8, 2014 at 9:33 AM, shihanzhang wrote: > I think the problem is in nova, can you show your > "pci_passthrough_whitelist" in nova.conf? > > > > > > > At 2014-12-04 18:26:21, "Akilesh K" wrote: > > Hi, > I am using neutron-plugin-sriov-agent. > > I have configured pci_whitelist in nova.conf > > I have configured ml2_conf_sriov.ini. > > But when I launch instance I get the exception in subject. > > On further checking with the help of some forum messages, I discovered > that pci_stats are empty. > mysql> select hypervisor_hostname,pci_stats from compute_nodes; > +---------------------+-----------+ > | hypervisor_hostname | pci_stats | > +---------------------+-----------+ > | openstack | [] | > +---------------------+-----------+ > 1 row in set (0.00 sec) > > > Further to this I found that PciDeviceStats.pools is an empty list too. > > Can anyone tell me what I am missing. > > > Thank you, > Ageeleshwar K > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Thu Dec 11 10:50:49 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 11 Dec 2014 11:50:49 +0100 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: <54897709.5090305@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 +100. I vote -1 there and would like to point out that we *must* keep history during the split, and split from u/s code base, not random repositories. If you don't know how to achieve this, ask oslo people, they did it plenty of times when graduating libraries from oslo-incubator. /Ihar On 10/12/14 19:18, Cedric OLLIVIER wrote: > > > 2014-12-09 18:32 GMT+01:00 Armando M. 
>: > > > By the way, if Kyle can do it in his teeny tiny time that he has > left after his PTL duties, then anyone can do it! :) > > https://review.openstack.org/#/c/140191/ > > Fully cloning Dave Tucker's repository [1] and the outdated fork of > the ODL ML2 MechanismDriver included raises some questions (e.g. > [2]). I wish the next patch set removes some files. At least it > should take the mainstream work into account (e.g. [3]) . > > [1] https://github.com/dave-tucker/odl-neutron-drivers [2] > https://review.openstack.org/#/c/113330/ [3] > https://review.openstack.org/#/c/96459/ > > > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUiXcIAAoJEC5aWaUY1u57dBMH/17unffokpb0uxqewPYrPNMI ukDzG4dW8mIP3yfbVNsHQXe6gWj/kj/SkBWJrO13BusTu8hrr+DmOmmfF/42s3vY E+6EppQDoUjR+QINBwE46nU+E1w9hIHyAZYbSBtaZQ32c8aQbmHmF+rgoeEQq349 PfpPLRI6MamFWRQMXSgF11VBTg8vbz21PXnN3KbHbUgzI/RS2SELv4SWmPgKZCEl l1K5J1/Vnz2roJn4pr/cfc7vnUIeAB5a9AuBHC6o+6Je2RDy79n+oBodC27kmmIx lVGdypoxZ9tF3yfRM9nngjkOtozNzZzaceH0Sc/5JR4uvNReVN4exzkX5fDH+SM= =dfe/ -----END PGP SIGNATURE----- From ihrachys at redhat.com Thu Dec 11 10:53:29 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 11 Dec 2014 11:53:29 +0100 Subject: [openstack-dev] Lack of quota - security bug or not? 
In-Reply-To: <20141210211219.GA2497@yuggoth.org> References: <5488A255.9030302@gmail.com> <5488AE71.1080304@gmail.com> <20141210210539.GZ2497@yuggoth.org> <5488B617.3080604@gmail.com> <20141210211219.GA2497@yuggoth.org> Message-ID: <548977A9.3090202@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 10/12/14 22:12, Jeremy Stanley wrote: > On 2014-12-10 16:07:35 -0500 (-0500), Jay Pipes wrote: >> On 12/10/2014 04:05 PM, Jeremy Stanley wrote: >>> I think the bigger question is whether the lack of a quota >>> implementation for everything a tenant could ever possibly >>> create is something we should have reported in secret, worked >>> under embargo, backported to supported stable branches, and >>> announced via high-profile security advisories once fixed. >> >> Sure, fine. > > Any tips for how to implement new quota features in a way that the > patches won't violate our stable backport policies? > If we consider it a security issue worth CVE, then security concerns generally beat stability concerns. We'll obviously need to document the change in default behaviour in release notes though, and maybe provide a documented way to disable the change for stable releases (I suspect we already have a way to disable specific quotas, but we should make sure it's the case and we provide operators commands ready to be executed to achieve this). 
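For the record, neutron already exposes per-resource quota knobs in the `[quotas]` section of neutron.conf, where a negative value means unlimited, so the operator-facing "disable" instructions could be as simple as the following sketch (option names as in the current tree; values illustrative):

```ini
[quotas]
# revert newly quota'd resources to the old unlimited behaviour
quota_security_group = -1
quota_security_group_rule = -1
```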
/Ihar

From bob.ball at citrix.com Thu Dec 11 10:55:19 2014
From: bob.ball at citrix.com (Bob Ball)
Date: Thu, 11 Dec 2014 10:55:19 +0000
Subject: [openstack-dev] [Neutron] XenAPI questions
Message-ID:

Hi Yamamoto,

XenAPI and Neutron do work well together, and we have a private CI that is running Neutron jobs. As it's not currently the public CI it's harder to access logs.

We're working on trying to move the existing XenServer CI from a nova-network base to a neutron base, at which point the logs will of course be publicly accessible and tested against any changes, thus making it easy to answer questions such as the below.

Bob

> -----Original Message-----
> From: YAMAMOTO Takashi [mailto:yamamoto at valinux.co.jp]
> Sent: 11 December 2014 03:17
> To: openstack-dev at lists.openstack.org
> Subject: [openstack-dev] [Neutron] XenAPI questions
>
> hi,
>
> i have questions for XenAPI folks:
>
> - what's the status of XenAPI support in neutron?
> - is there any CI covering it? i want to look at logs.
> - is it possible to write a small program which runs with the xen
> rootwrap and proxies OpenFlow channel between domains?
> (cf. https://review.openstack.org/#/c/138980/)
>
> thank you.
> > YAMAMOTO Takashi

From gkotton at vmware.com Thu Dec 11 11:23:39 2014
From: gkotton at vmware.com (Gary Kotton)
Date: Thu, 11 Dec 2014 11:23:39 +0000
Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition
In-Reply-To: <54897709.5090305@redhat.com>
References: <54897709.5090305@redhat.com>
Message-ID:

On 12/11/14, 12:50 PM, "Ihar Hrachyshka" wrote:

>+100. I vote -1 there and would like to point out that we *must* keep
>history during the split, and split from u/s code base, not random
>repositories. If you don't know how to achieve this, ask oslo people,
>they did it plenty of times when graduating libraries from oslo-incubator.
>/Ihar
>
>On 10/12/14 19:18, Cedric OLLIVIER wrote:
>> 2014-12-09 18:32 GMT+01:00 Armando M.:
>>
>> By the way, if Kyle can do it in his teeny tiny time that he has
>> left after his PTL duties, then anyone can do it! :)
>>
>> https://review.openstack.org/#/c/140191/

This patch loses the recent hacking changes that we have made. This is a slight example to try and highlight the problem that we may incur as a community.

>> Fully cloning Dave Tucker's repository [1] and the outdated fork of
>> the ODL ML2 MechanismDriver included raises some questions (e.g.
>> [2]). I wish the next patch set removes some files. At least it
>> should take the mainstream work into account (e.g. [3]) .
>> >> [1] https://github.com/dave-tucker/odl-neutron-drivers [2] >> https://review.openstack.org/#/c/113330/ [3] >> https://review.openstack.org/#/c/96459/ >> >> >> _______________________________________________ OpenStack-dev >> mailing list OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >-----BEGIN PGP SIGNATURE----- >Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > >iQEcBAEBCgAGBQJUiXcIAAoJEC5aWaUY1u57dBMH/17unffokpb0uxqewPYrPNMI >ukDzG4dW8mIP3yfbVNsHQXe6gWj/kj/SkBWJrO13BusTu8hrr+DmOmmfF/42s3vY >E+6EppQDoUjR+QINBwE46nU+E1w9hIHyAZYbSBtaZQ32c8aQbmHmF+rgoeEQq349 >PfpPLRI6MamFWRQMXSgF11VBTg8vbz21PXnN3KbHbUgzI/RS2SELv4SWmPgKZCEl >l1K5J1/Vnz2roJn4pr/cfc7vnUIeAB5a9AuBHC6o+6Je2RDy79n+oBodC27kmmIx >lVGdypoxZ9tF3yfRM9nngjkOtozNzZzaceH0Sc/5JR4uvNReVN4exzkX5fDH+SM= >=dfe/ >-----END PGP SIGNATURE----- > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From george.shuklin at gmail.com Thu Dec 11 11:51:25 2014 From: george.shuklin at gmail.com (George Shuklin) Date: Thu, 11 Dec 2014 13:51:25 +0200 Subject: [openstack-dev] Lack of quota - security bug or not? In-Reply-To: <5488AE71.1080304@gmail.com> References: <5488A255.9030302@gmail.com> <5488AE71.1080304@gmail.com> Message-ID: <5489853D.1000804@gmail.com> On 12/10/2014 10:34 PM, Jay Pipes wrote: > On 12/10/2014 02:43 PM, George Shuklin wrote: >> I have some small discussion in launchpad: is lack of a quota for >> unprivileged user counted as security bug (or at least as a bug)? >> >> If user can create 100500 objects in database via normal API and ops >> have no way to restrict this, is it OK for Openstack or not? > > That would be a major security bug. Please do file one and we'll get > on it immediately. > (private bug at that moment) https://bugs.launchpad.net/ossa/+bug/1401170 There is discussion about this. 
Quote: Jeremy Stanley (fungi): Traditionally we've not considered this sort of exploit a security vulnerability. The lack of built-in quota for particular kinds of database entries isn't necessarily a design flaw, but even if it can/should be fixed it's likely not going to get addressed in stable backports, is not something for which we would issue a security advisory, and so doesn't need to be kept under secret embargo. Does anyone else disagree? If anyone have access to OSSA tracker, please say your opinion in that bug. From akamyshnikova at mirantis.com Thu Dec 11 12:22:36 2014 From: akamyshnikova at mirantis.com (Anna Kamyshnikova) Date: Thu, 11 Dec 2014 16:22:36 +0400 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group Message-ID: Hello everyone! In neutron there is a rather old bug [1] about adding uniqueness for security group name and tenant id. I found this idea reasonable and started working on fix for this bug [2]. I think it is good to add a uniqueconstraint because: 1) In nova there is such constraint for security groups https://github.com/openstack/nova/blob/stable/juno/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py#L1155-L1157. So I think that it is rather disruptive that it is impossible to create security group with the same name in nova, but possible in neutron. 2) Users get confused having security groups with the same name. In comment for proposed change Assaf Muller and Maru Newby object for such solution and suggested another option, so I think we need more eyes on this change. I would like to ask you to share your thoughts on this topic. [1] - https://bugs.launchpad.net/neutron/+bug/1194579 [2] - https://review.openstack.org/135006 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Thu Dec 11 13:00:27 2014 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 11 Dec 2014 14:00:27 +0100 Subject: [openstack-dev] [neutron] Services are now split out and neutron is open for commits! In-Reply-To: References: Message-ID: <5489956B.8030301@openstack.org> Kyle Mestery wrote: > Folks, just a heads up that we have completed splitting out the services > (FWaaS, LBaaS, and VPNaaS) into separate repositores. [1] [2] [3]. This > was all done in accordance with the spec approved here [4]. Thanks to > all involved, but a special thanks to Doug and Anita, as well as infra. > Without all of their work and help, this wouldn't have been possible! Congrats! That's a good example where having an in-person sprint really facilitates getting things done in a reasonable amount of time -- just having a set of interested people up at the same time and focused on the same priorities helps! Now let's see if we manage to publish those all for kilo-1 next week :) -- Thierry Carrez (ttx) From thierry at openstack.org Thu Dec 11 13:16:00 2014 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 11 Dec 2014 14:16:00 +0100 Subject: [openstack-dev] Lack of quota - security bug or not? In-Reply-To: <5489853D.1000804@gmail.com> References: <5488A255.9030302@gmail.com> <5488AE71.1080304@gmail.com> <5489853D.1000804@gmail.com> Message-ID: <54899910.1060106@openstack.org> George Shuklin wrote: > > > On 12/10/2014 10:34 PM, Jay Pipes wrote: >> On 12/10/2014 02:43 PM, George Shuklin wrote: >>> I have some small discussion in launchpad: is lack of a quota for >>> unprivileged user counted as security bug (or at least as a bug)? >>> >>> If user can create 100500 objects in database via normal API and ops >>> have no way to restrict this, is it OK for Openstack or not? >> >> That would be a major security bug. Please do file one and we'll get >> on it immediately. 
>> > > (private bug at that moment) https://bugs.launchpad.net/ossa/+bug/1401170 > > There is discussion about this. Quote: > > Jeremy Stanley (fungi): > Traditionally we've not considered this sort of exploit a security > vulnerability. The lack of built-in quota for particular kinds of > database entries isn't necessarily a design flaw, but even if it > can/should be fixed it's likely not going to get addressed in stable > backports, is not something for which we would issue a security > advisory, and so doesn't need to be kept under secret embargo. Does > anyone else disagree? > > If anyone have access to OSSA tracker, please say your opinion in that bug. It also depends a lot on the details. Is there amplification ? Is there a cost associated ? I bet most public cloud providers would be fine with a user creating and paying for running 100500 instances, and that user would certainly end up creating at least 100500 objects in database via normal API. So this is really a per-report call, which is why we usually discuss them all separately. 
-- Thierry Carrez (ttx) From visnusaran.murugan at hp.com Thu Dec 11 13:26:58 2014 From: visnusaran.murugan at hp.com (Murugan, Visnusaran) Date: Thu, 11 Dec 2014 13:26:58 +0000 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <54888721.50404@redhat.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> Message-ID: <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> > -----Original Message----- > From: Zane Bitter [mailto:zbitter at redhat.com] > Sent: Wednesday, December 10, 2014 11:17 PM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > showdown > > You really need to get a real email client with quoting support ;) Apologies :) I screwed up my mail client's configuration. > > On 10/12/14 06:42, Murugan, Visnusaran wrote: > > Well, we still have to persist the dependencies of each version of a > resource _somehow_, because otherwise we can't know how to clean them > up in the correct order. But what I think you meant to say is that this > approach doesn't require it to be persisted in a separate table where the > rows are marked as traversed as we work through the graph. > > > > [Murugan, Visnusaran] > > In case of rollback where we have to cleanup earlier version of resources, > we could get the order from old template. We'd prefer not to have a graph > table. > > In theory you could get it by keeping old templates around. But that means > keeping a lot of templates, and it will be hard to keep track of when you want > to delete them. 
It also means that when starting an update you'll need to > load every existing previous version of the template in order to calculate the > dependencies. It also leaves the dependencies in an ambiguous state when a > resource fails, and although that can be worked around it will be a giant pain > to implement. > Agree that looking to all templates for a delete is not good. But baring Complexity, we feel we could achieve it by way of having an update and a delete stream for a stack update operation. I will elaborate in detail in the etherpad sometime tomorrow :) > I agree that I'd prefer not to have a graph table. After trying a couple of > different things I decided to store the dependencies in the Resource table, > where we can read or write them virtually for free because it turns out that > we are always reading or updating the Resource itself at exactly the same > time anyway. > Not sure how this will work in an update scenario when a resource does not change and its dependencies do. Also taking care of deleting resources in order will be an issue. This implies that there will be different versions of a resource which will even complicate further. > >> This approach reduces DB queries by waiting for completion notification > on a topic. The drawback I see is that delete stack stream will be huge as it > will have the entire graph. We can always dump such data in > ResourceLock.data Json and pass a simple flag "load_stream_from_db" to > converge RPC call as a workaround for delete operation. > > > > This seems to be essentially equivalent to my 'SyncPoint' proposal[1], with > the key difference that the data is stored in-memory in a Heat engine rather > than the database. > > > > I suspect it's probably a mistake to move it in-memory for similar > > reasons to the argument Clint made against synchronising the marking off > of dependencies in-memory. 
The database can handle that and the problem > of making the DB robust against failures of a single machine has already been > solved by someone else. If we do it in-memory we are just creating a single > point of failure for not much gain. (I guess you could argue it doesn't matter, > since if any Heat engine dies during the traversal then we'll have to kick off > another one anyway, but it does limit our options if that changes in the > future.) [Murugan, Visnusaran] Resource completes, removes itself from > resource_lock and notifies engine. Engine will acquire parent lock and initiate > parent only if all its children are satisfied (no child entry in resource_lock). > This will come in place of Aggregator. > > Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly what I did. > The three differences I can see are: > > 1) I think you are proposing to create all of the sync points at the start of the > traversal, rather than on an as-needed basis. This is probably a good idea. I > didn't consider it because of the way my prototype evolved, but there's now > no reason I can see not to do this. > If we could move the data to the Resource table itself then we could even > get it for free from an efficiency point of view. +1. But we will need engine_id to be stored somewhere for recovery purpose (easy to be queried format). Sync points are created as-needed. Single resource is enough to restart that entire stream. I think there is a disconnect in our understanding. I will detail it as well in the etherpad. > 2) You're using a single list from which items are removed, rather than two > lists (one static, and one to which items are added) that get compared. > Assuming (1) then this is probably a good idea too. Yeah. We have a single list per active stream which work by removing Complete/satisfied resources from it. > 3) You're suggesting to notify the engine unconditionally and let the engine > decide if the list is empty. 
That's probably not a good idea - not only does it > require extra reads, it introduces a race condition that you then have to solve > (it can be solved, it's just more work). > Since the update to remove a child from the list is atomic, it's best to just > trigger the engine only if the list is now empty. > No. Notify only if stream has something to be processed. The newer Approach based on db lock will be that the last resource will initiate its parent. This is opposite to what our Aggregator model had suggested. > > It's not clear to me how the 'streams' differ in practical terms from > > just passing a serialisation of the Dependencies object, other than > > being incomprehensible to me ;). The current Dependencies > > implementation > > (1) is a very generic implementation of a DAG, (2) works and has plenty of > unit tests, (3) has, with I think one exception, a pretty straightforward API, > (4) has a very simple serialisation, returned by the edges() method, which > can be passed back into the constructor to recreate it, and (5) has an API that > is to some extent relied upon by resources, and so won't likely be removed > outright in any event. > > Whatever code we need to handle dependencies ought to just build on > this existing implementation. > > [Murugan, Visnusaran] Our thought was to reduce payload size > (template/graph). Just planning for worst case scenario (million resource > stack) We could always dump them in ResourceLock.data to be loaded by > Worker. > > If there's a smaller representation of a graph than a list of edges then I don't > know what it is. The proposed stream structure certainly isn't it, unless you > mean as an alternative to storing the entire graph once for each resource. 
A > better alternative is to store it once centrally - in my current implementation > it is passed down through the trigger messages, but since only one traversal > can be in progress at a time it could just as easily be stored in the Stack table > of the database at the slight cost of an extra write. > Agree that edge is the smallest representation of a graph. But it does not give us a complete picture without doing a DB lookup. Our assumption was to store streams in IN_PROGRESS resource_lock.data column. This could be in resource table instead. > I'm not opposed to doing that, BTW. In fact, I'm really interested in your input > on how that might help make recovery from failure more robust. I know > Anant mentioned that not storing enough data to recover when a node dies > was his big concern with my current approach. > With streams, We feel recovery will be easier. All we need is a trigger :) > I can see that by both creating all the sync points at the start of the traversal > and storing the dependency graph in the database instead of letting it flow > through the RPC messages, we would be able to resume a traversal where it > left off, though I'm not sure what that buys us. > > And I guess what you're suggesting is that by having an explicit lock with the > engine ID specified, we can detect when a resource is stuck in IN_PROGRESS > due to an engine going down? That's actually pretty interesting. > Yeah :) > > Based on our call on Thursday, I think you're taking the idea of the Lock > table too literally. The point of referring to locks is that we can use the same > concepts as the Lock table relies on to do atomic updates on a particular row > of the database, and we can use those atomic updates to prevent race > conditions when implementing SyncPoints/Aggregators/whatever you want > to call them. 
It's not that we'd actually use the Lock table itself, which > implements a mutex and therefore offers only a much slower and more > stateful way of doing what we want (lock mutex, change data, unlock > mutex). > > [Murugan, Visnusaran] Are you suggesting something like a select-for- > update in resource table itself without having a lock table? > > Yes, that's exactly what I was suggesting. DB is always good for sync. But we need to be careful not to overdo it. Will update etherpad by tomorrow. > > cheers, > Zane. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From robert.clark at hp.com Thu Dec 11 13:30:33 2014 From: robert.clark at hp.com (Clark, Robert Graham) Date: Thu, 11 Dec 2014 13:30:33 +0000 Subject: [openstack-dev] Lack of quota - security bug or not? In-Reply-To: <54899910.1060106@openstack.org> References: <5488A255.9030302@gmail.com> <5488AE71.1080304@gmail.com> <5489853D.1000804@gmail.com> <54899910.1060106@openstack.org> Message-ID: On 11/12/2014 13:16, "Thierry Carrez" wrote: >George Shuklin wrote: >> >> >> On 12/10/2014 10:34 PM, Jay Pipes wrote: >>> On 12/10/2014 02:43 PM, George Shuklin wrote: >>>> I have some small discussion in launchpad: is lack of a quota for >>>> unprivileged user counted as security bug (or at least as a bug)? >>>> >>>> If user can create 100500 objects in database via normal API and ops >>>> have no way to restrict this, is it OK for Openstack or not? >>> >>> That would be a major security bug. Please do file one and we'll get >>> on it immediately. >>> >> >> (private bug at that moment) >>https://bugs.launchpad.net/ossa/+bug/1401170 >> >> There is discussion about this. Quote: >> >> Jeremy Stanley (fungi): >> Traditionally we've not considered this sort of exploit a security >> vulnerability. 
>> The lack of built-in quota for particular kinds of database entries
>> isn't necessarily a design flaw, but even if it can/should be fixed
>> it's likely not going to get addressed in stable backports, is not
>> something for which we would issue a security advisory, and so doesn't
>> need to be kept under secret embargo. Does anyone else disagree?
>>
>> If anyone has access to the OSSA tracker, please say your opinion in that
>> bug.
>
>It also depends a lot on the details. Is there amplification? Is there
>a cost associated? I bet most public cloud providers would be fine with
>a user creating and paying for running 100500 instances, and that user
>would certainly end up creating at least 100500 objects in database via
>normal API.
>
>So this is really a per-report call, which is why we usually discuss
>them all separately.
>
>--
>Thierry Carrez (ttx)

Most public cloud providers would not be in any way happy with a new customer spinning up anything like that number of instances. Fraud and abuse are major concerns for public cloud providers. Automated checks take time. Imagine someone using a stolen but not yet cancelled credit card spinning up 1000's of instances. The card checks out OK when the user signs up but has been cancelled by the time the billing cycle closes - a massive loss to the cloud provider in at least three ways: direct lost revenue from that customer, the loss of capacity which possibly stopped other customers bringing business to the platform, and finally the likelihood that the account was set up for malicious purposes, either internet facing or against the cloud infrastructure itself.

Please add me to the bug if you'd like to discuss further.
-Rob From jaypipes at gmail.com Thu Dec 11 13:43:46 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 11 Dec 2014 08:43:46 -0500 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: References: Message-ID: <54899F92.2060900@gmail.com> On 12/11/2014 07:22 AM, Anna Kamyshnikova wrote: > Hello everyone! > > In neutron there is a rather old bug [1] about adding uniqueness for > security group name and tenant id. I found this idea reasonable and > started working on fix for this bug [2]. I think it is good to add a > uniqueconstraint because: > > 1) In nova there is such constraint for security groups > https://github.com/openstack/nova/blob/stable/juno/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py#L1155-L1157. > So I think that it is rather disruptive that it is impossible to create > security group with the same name in nova, but possible in neutron. > 2) Users get confused having security groups with the same name. > > In comment for proposed change Assaf Muller and Maru Newby object for > such solution and suggested another option, so I think we need more eyes > on this change. > > I would like to ask you to share your thoughts on this topic. > [1] - https://bugs.launchpad.net/neutron/+bug/1194579 > [2] - https://review.openstack.org/135006 I'm generally in favor of making name attributes opaque, utf-8 strings that are entirely user-defined and have no constraints on them. I consider the name to be just a tag that the user places on some resource. It is the resource's ID that is unique. I do realize that Nova takes a different approach to *some* resources, including the security group name. End of the day, it's probably just a personal preference whether names should be unique to a tenant/user or not. 
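For reference, the constraint being debated amounts to something like the following; the schema here is illustrative only, not Neutron's actual model or migration:

```python
import sqlite3

# Duplicate names are rejected within a tenant, while the same name in
# another tenant remains fine; the id stays the unique handle either way.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE securitygroups ("
             " id INTEGER PRIMARY KEY, tenant_id TEXT, name TEXT,"
             " UNIQUE (tenant_id, name))")
conn.execute("INSERT INTO securitygroups (tenant_id, name) VALUES ('t1', 'web')")
conn.execute("INSERT INTO securitygroups (tenant_id, name) VALUES ('t2', 'web')")  # fine
try:
    conn.execute("INSERT INTO securitygroups (tenant_id, name) VALUES ('t1', 'web')")
except sqlite3.IntegrityError:
    print("duplicate name within tenant rejected")
```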
Maru had asked me my opinion on whether names should be unique and I answered my personal opinion that no, they should not be, and if Neutron needed to ensure that there was one and only one default security group for a tenant, that a way to accomplish such a thing in a race-free way, without use of SELECT FOR UPDATE, was to use the approach I put into the pastebin on the review above.

Best,
-jay

From dmakogon at mirantis.com Thu Dec 11 13:46:47 2014
From: dmakogon at mirantis.com (Denis Makogon)
Date: Thu, 11 Dec 2014 15:46:47 +0200
Subject: [openstack-dev] [project-config][infra] Advanced way to run specific jobs against code changes
Message-ID:

Good day, Stackers.

I'd like to raise a question about implementing custom pipelines for Zuul. For those of you who are pretty familiar with project-config and infra itself, it won't be news that for now the Zuul layout supports only a few pipeline types [1]. Most OpenStack projects maintain more than one type of driver (for Nova - virt drivers, Trove - datastore drivers, Cinder - volume backends, etc.). And, as can be seen, the existing Jenkins check jobs do not utilize infra resources wisely. This is a real problem - just remember the end of every release: the number of check/recheck jobs is huge.

So, how can we utilize resources more wisely and run only the needed check jobs? Like we've been doing for unstable new check jobs - putting them into the "experimental" pipeline. So why can't we provide the ability for projects to define their own pipelines? For example, as a code reviewer, I see that a patch touches specific functionality of Driver A, and I know that the project's testing infrastructure provides an ability to examine the specific workflow for Driver A. Then it seems more than valid to post a comment on the review like "check driver-a". As you can see, I want to ask gerrit to trigger a custom pipeline for a given project.

Let me describe a more concrete example from the "real world".
In Trove we maintain 5 different drivers for different datastores, and it doesn't look like a good thing to run all check jobs against code that doesn't actually touch any of the existing datastore drivers (this is what we have right now [2]).

Now here comes my proposal. I'd like to extend the existing Zuul pipelines to support any needed check job (see the TripleO example, [3]). But, as I can see, there are possible problems with such an approach, so I also have an alternative proposal to the one above: the only other way to deal with this is to use REGEX "files" for job definitions (for example, the requirements check job [4]). In this case we'd still maintain only one pipeline, "experimental", for all second-priority jobs.

To make a small summary, two ways were proposed:
- Pipeline(s) per project. Pros: a reviewer can trigger a specific pipeline himself. Cons: spamming status/zuul.
- REGEX "files" per additional job.

Sorry, but I'm not able to describe all the pros/cons for each of the proposals, so if you know them, please help me figure them out. All thoughts/suggestions are welcome.

Kind regards,
Denis M.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Kevin.Fox at pnnl.gov Thu Dec 11 14:20:38 2014
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Thu, 11 Dec 2014 14:20:38 +0000
Subject: [openstack-dev] [Ironic] Some questions about Ironic service
In-Reply-To:
References: <98B730463BF8F84A885ABF3A8F6149516B8D8B97@SZXEMA501-MBS.china.huawei.com>,
Message-ID: <1A3C52DFCD06494D8528644858247BF017816F01@EX10MBOX03.pnnl.gov>

I would hope yes to all, but there is a lot of hard work there. Some things there require Neutron to configure physical switches. Some require a guest agent in the image to get Cinder working, or Cinder-controlled hardware. And all require developers interested enough in making it happen. No time frame on any of it.
Thanks,
Kevin
________________________________
From: xianchaobo
Sent: Thursday, December 11, 2014 1:07:54 AM
To: openstack-dev at lists.openstack.org
Cc: Luohao (brian)
Subject: Re: [openstack-dev] [Ironic] Some questions about Ironic service

Hi, Fox Kevin M

Thanks for your help. Also, I want to know whether these features will be implemented in Ironic. Do we have a plan to implement them?

Thanks,
Xianchaobo

From: Fox, Kevin M [mailto:Kevin.Fox at pnnl.gov]
Sent: Tuesday, December 09, 2014 5:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Luohao (brian)
Subject: Re: [openstack-dev] [Ironic] Some questions about Ironic service

No to questions 1, 3, and 4. Yes to 2, but very minimally.
________________________________
From: xianchaobo
Sent: Monday, December 08, 2014 10:29:50 PM
To: openstack-dev at lists.openstack.org
Cc: Luohao (brian)
Subject: [openstack-dev] [Ironic] Some questions about Ironic service

Hello, all

I'm trying to install and configure the Ironic service, and a few things confuse me. I create two Neutron networks, a public network and a private network. The private network is used to deploy physical machines; the public network is used to provide floating IPs.

(1) Can the private network type be VLAN or VXLAN? (In the install guide, the network type is flat.)
(2) Can the network of deployed physical machines be managed by Neutron?
(3) Can different tenants have their own networks to manage physical machines?
(4) Does Ironic provide some mechanism for deployed physical machines to use storage such as shared storage or Cinder volumes?

Thanks,
XianChaobo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dtantsur at redhat.com Thu Dec 11 14:35:00 2014
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Thu, 11 Dec 2014 15:35:00 +0100
Subject: [openstack-dev] [Ironic] ironic-discoverd status update
Message-ID: <5489AB94.2020603@redhat.com>

Hi all!
As you know, I actively promote the ironic-discoverd project [1] as one of the means to do hardware inspection for Ironic (see e.g. the spec [2]), so I decided it's worth giving some updates to the community from time to time. This email is purely informative; you may safely skip it if you're not interested.

Background
==========

The discoverd project (I usually skip the "ironic-" part when talking about it) solves the problem of populating information about a node in the Ironic database without the help of any vendor-specific tool. This information usually includes Nova scheduling properties (CPU, RAM, disk size) and MACs for ports. Introspection is done by booting a ramdisk on a node, collecting data there and posting it back to the discoverd HTTP API. Thus discoverd actually consists of 2 components: the service [1] and the ramdisk [3]. The service handles 2 major tasks:

* Processing data posted by the ramdisk, i.e. finding the node in the Ironic database and updating node properties with new data.
* Managing iptables so that the default PXE environment for introspection does not interfere with Neutron.

The project was born from a series of patches to Ironic itself, after we discovered that this change was going to be too intrusive. Discoverd was actively tested as part of Instack [4], and its RPM is part of Juno RDO. After the Paris summit, we agreed on bringing it closer to the Ironic upstream, and now discoverd is hosted on StackForge and tracks bugs on Launchpad.

Future
======

The basic feature of discoverd - supplying Ironic with the properties required for scheduling - is pretty much finished as of the latest stable series 0.2. However, more features are planned for release 1.0.0 this January [5]. They go beyond the bare minimum of finding out CPU, RAM, disk size and NIC MACs.

Pluggability
~~~~~~~~~~~~

An interesting feature of discoverd is support for plugins, which I prefer to call hooks.
It's possible to hook into the introspection data processing chain in 2 places:

* Before any data processing. This opens an opportunity to adapt discoverd to ramdisks that have a different data format. The only requirement is that the ramdisk posts a JSON object.
* After a node is found in the Ironic database and ports are created for MACs, but before any actual data update. This gives an opportunity to alter which properties discoverd is going to update. Actually, even the default logic of updating Node.properties is contained in a plugin - see SchedulerHook in ironic_discoverd/plugins/standard.py [6].

This pluggability opens wide opportunities for integrating with 3rd-party ramdisks and CMDBs (which, as we know, Ironic is not ;).

Enrolling
~~~~~~~~~

Some people have found it limiting that the introspection requires power credentials (IPMI user name and password) to be already set. The recent set of patches [7] introduces the possibility to request manual power-on of the machine and to update the IPMI credentials via the ramdisk to the expected values. Note that support for this feature in the reference ramdisk [3] is not ready yet. Also note that this scenario is only possible when using discoverd directly via its API, not via the Ironic API as in [2].

Get Involved
============

Discoverd terribly lacks reviews. Our team is very small and self-approving is not a rare case. I'm not even against fast-tracking any existing Ironic core to a discoverd core after a couple of meaningful reviews :) And of course patches are welcome, especially plugins for integration with existing systems doing similar things, and CMDBs. Patches are accepted via the usual Gerrit workflow. Ideas are accepted as Launchpad blueprints (we do not follow the Gerrit spec process right now). Finally, please comment on the Ironic spec [2]; I'd like to know what you think.
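[Editor's note] To make the two hook points above concrete, here is a minimal sketch of what such a hook chain could look like. The method names (`before_processing`, `before_update`) and the `run_hooks` driver are illustrative assumptions for this sketch, not necessarily the actual ironic-discoverd plugin API:

```python
# Illustrative sketch of discoverd-style processing hooks. The hook
# method names and the run_hooks driver are assumptions for
# illustration, not necessarily the real ironic-discoverd API.

class ExampleMemoryHook(object):
    """Derive a scheduling property from raw introspection data."""

    def before_processing(self, introspection_data):
        # First hook point: runs before any processing, so a ramdisk
        # with a different data format can be normalized here.
        if 'memory_kb' in introspection_data:
            introspection_data['memory_mb'] = (
                introspection_data['memory_kb'] // 1024)

    def before_update(self, introspection_data, node_patch):
        # Second hook point: the node has been found and ports created,
        # but the update has not been applied yet; alter the patch.
        node_patch.append({
            'op': 'add',
            'path': '/properties/memory_mb',
            'value': str(introspection_data.get('memory_mb', 0)),
        })


def run_hooks(hooks, introspection_data):
    """Drive a chain of hooks and return the resulting node patch."""
    patch = []
    for hook in hooks:
        hook.before_processing(introspection_data)
    for hook in hooks:
        hook.before_update(introspection_data, patch)
    return patch
```

The default scheduling properties update would then itself be just one hook in the chain, which is what makes 3rd-party ramdisk formats and CMDB integrations possible without touching the core service.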
References ========== [1] https://pypi.python.org/pypi/ironic-discoverd [2] https://review.openstack.org/#/c/135605/ [3] https://github.com/openstack/diskimage-builder/tree/master/elements/ironic-discoverd-ramdisk [4] https://github.com/agroup/instack-undercloud/ [5] https://bugs.launchpad.net/ironic-discoverd/+milestone/1.0.0 [6] https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discoverd/plugins/standard.py [7] https://blueprints.launchpad.net/ironic-discoverd/+spec/setup-ipmi-credentials From berrange at redhat.com Thu Dec 11 14:43:19 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Thu, 11 Dec 2014 14:43:19 +0000 Subject: [openstack-dev] [Nova] question about "Get Guest Info" row in HypervisorSupportMatrix In-Reply-To: <20141209153935.GM29167@redhat.com> References: <7997383.8LhO9nnzxZ@dblinov.sw.ru> <20141209153935.GM29167@redhat.com> Message-ID: <20141211144319.GM23831@redhat.com> On Tue, Dec 09, 2014 at 03:39:35PM +0000, Daniel P. Berrange wrote: > On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote: > > Hello! > > > > There is a feature in HypervisorSupportMatrix > > (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called "Get Guest > > Info". Does anybody know, what does it mean? I haven't found anything like > > this neither in nova api nor in horizon and nova command line. > > I've pretty much no idea what the intention was for that field. 
> I've been working on formally documenting all those things, but draw a blank
> for that
>
> FYI:
>
> https://review.openstack.org/#/c/136380/1/doc/hypervisor-support.ini

It is now possible to auto-generate nova docs showing the support matrix in a more friendly fashion:

http://docs-draft.openstack.org/80/136380/2/check/gate-nova-docs/94c33ba/doc/build/html/support-matrix.html

Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

From george.shuklin at gmail.com Thu Dec 11 14:48:26 2014
From: george.shuklin at gmail.com (George Shuklin)
Date: Thu, 11 Dec 2014 16:48:26 +0200
Subject: [openstack-dev] Lack of quota - security bug or not?
In-Reply-To: <54899910.1060106@openstack.org>
References: <5488A255.9030302@gmail.com> <5488AE71.1080304@gmail.com> <5489853D.1000804@gmail.com> <54899910.1060106@openstack.org>
Message-ID: <5489AEBA.5030308@gmail.com>

On 12/11/2014 03:16 PM, Thierry Carrez wrote:
> George Shuklin wrote:
>> On 12/10/2014 10:34 PM, Jay Pipes wrote:
>>> On 12/10/2014 02:43 PM, George Shuklin wrote:
>>>> I have a small discussion going on in Launchpad: is the lack of a quota
>>>> for an unprivileged user counted as a security bug (or at least as a bug)?
>>>>
>>>> If a user can create 100500 objects in the database via the normal API and ops
>>>> have no way to restrict this, is that OK for OpenStack or not?
>>> That would be a major security bug. Please do file one and we'll get
>>> on it immediately.
>> (private bug at that moment) https://bugs.launchpad.net/ossa/+bug/1401170
>>
>> There is discussion about this. Quote:
>>
>> Jeremy Stanley (fungi):
>> Traditionally we've not considered this sort of exploit a security
>> vulnerability.
>> The lack of built-in quota for particular kinds of
>> database entries isn't necessarily a design flaw, but even if it
>> can/should be fixed it's likely not going to get addressed in stable
>> backports, is not something for which we would issue a security
>> advisory, and so doesn't need to be kept under secret embargo. Does
>> anyone else disagree?
>>
>> If anyone has access to the OSSA tracker, please state your opinion in that bug.
> It also depends a lot on the details. Is there amplification? Is there
> a cost associated? I bet most public cloud providers would be fine with
> a user creating and paying for running 100500 instances, and that user
> would certainly end up creating at least 100500 objects in the database via
> the normal API.
>
> So this is really a per-report call, which is why we usually discuss
> them all separately.

No one is going to be happy if a single user can grab unlimited resources (like ten /16 nets of white IPs). The whole idea of quotas is to give ops the freedom and power to restrict users to levels of consumption the infrastructure is comfortable with, and every op decides where that level is for their own infrastructure. On a busy cloud it is really hard to detect a malicious user before the problem happens, and it's really hard to clean up afterwards (10 minutes for each data query after 15 minutes of lazy attack - that is serious, I think).

From t.trifonov at gmail.com Thu Dec 11 14:53:16 2014
From: t.trifonov at gmail.com (Tihomir Trifonov)
Date: Thu, 11 Dec 2014 16:53:16 +0200
Subject: [openstack-dev] [horizon] REST and Django
In-Reply-To:
References: <547DD08A.7000402@redhat.com>
Message-ID:

> *"Client just needs to know which URL to hit in order to invoke a certain
> API, and does not need to know the procedure name or parameters ordering."*

That's where the difference is. I think the client has to know the procedure name and parameters. Otherwise we have a translation factory pattern that converts one naming convention to another.
And you won't be able to call any service API if there is no code in the middleware to translate it to the service API procedure name and parameters. To avoid this, we can use a transparent proxy model - a direct mapping of a client call to the service API naming - which can be done if the client invokes the methods with the names used in the service API, so that the middleware just passes parameters and does not translate. Instead of:

updating user data: => =>

we may use:

=> =>

The idea here is that if we have a Keystone 4.0 client, we just have to add it to the clients [] list and nothing more is required at the middleware level - just create the frontend code that uses the new Keystone 4.0 methods. Otherwise we will have to add all the new/different signatures of 4.0 against 2.0/3.0 in the middleware in order to use Keystone 4.0.

There is also a great example of using a pluggable/new feature in Horizon. Do you remember the volume types support patch? The patch was pending in Gerrit for a few months - first waiting for the Cinder support for volume types to go upstream, then waiting a few more weeks for review. I am not sure, but as far as I remember, the Horizon patch even missed a release milestone and was introduced in the next release. If we have a transparent middleware, this will no longer be an issue. As long as someone has written the frontend modules (which should be easy to add and customize), and they install the required version of the service API, they will not need an updated Horizon to start using the feature. Maybe I am not the right person to give examples here, but how many of you have had some kind of Horizon customization locally merged/patched in your local distros/setups until the patch was pushed upstream?

I will say it again: Nova, Keystone, Cinder, Glance etc. already have stable public APIs. Why do we want to add the translation middleware and introduce another level of REST API?
This layer will often hide new features added to the service APIs and will delay their appearance in Horizon. That's simply not needed. I believe it is possible to just wrap the authentication in the middleware REST layer, but not to translate anything as RPC methods/parameters.

And one more example:

    @rest_utils.ajax()
    def put(self, request, id):
        """Update a single project.

        The POST data should be an application/json object containing the
        parameters to update: "name" (string), "description" (string),
        "domain_id" (string) and "enabled" (boolean, defaults to true).
        Additional, undefined parameters may also be provided, but you'll
        have to look deep into keystone to figure out what they might be.

        This method returns HTTP 204 (no content) on success.
        """
        project = api.keystone.tenant_get(request, id)
        kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)
        api.keystone.tenant_update(request, project, **kwargs)

Do we really need the lines:

    project = api.keystone.tenant_get(request, id)
    kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)

Since we update the project on the client, it is obvious that we already fetched the project data. So we can simply send:

    POST /keystone/3.0/tenant_update
    Content-Type: application/json

    {"id": cached.id, "domain_id": cached.domain_id, "name": "new name",
     "description": "new description", "enabled": cached.enabled}

Fewer requests, faster application.

On Wed, Dec 10, 2014 at 8:39 PM, Thai Q Tran wrote:

> I think we're arguing for the same thing, but maybe with a slightly different
> approach. I think we can both agree that a middle layer is required,
> whether we intend to use it as a proxy or as REST endpoints. Regardless of the
> approach, the client needs to relay what API it wants to invoke, and you
> can do that either via RPC or REST. I personally prefer the REST approach
> because it shields the client.
> Client just needs to know which URL to hit
> in order to invoke a certain API, and does not need to know the procedure
> name or parameters ordering. Having said all of that, I do believe we
> should keep it as thin as possible. I do like the idea of having separate
> classes for different API versions. What we have today is a thin REST layer
> that acts like a proxy. You hit a certain URL, and the middle layer
> forwards the API invocation. The only exception to this rule is support for
> batch deletions.
>
> -----Tihomir Trifonov wrote: -----
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> From: Tihomir Trifonov
> Date: 12/10/2014 03:04AM
> Subject: Re: [openstack-dev] [horizon] REST and Django
>
> Richard, thanks for the reply,
>
> I agree that the given example is not real REST. But we already have the
> REST APIs - those of Keystone, Nova, Cinder, Glance, Neutron etc. So
> what do we plan to do here? Add a new REST layer to communicate with another
> REST API? Do we really need a Frontend-REST-REST architecture? My opinion is
> that we don't need another REST layer, as we are currently trying to move away
> from the Django layer, which is the same thing - another processing layer.
> Although we call it a REST proxy or whatever, it doesn't need to be real
> REST, just an aggregation proxy that combines and forwards some
> requests while adding minimal processing overhead. What makes sense to me
> is to keep the authentication in this layer as it is now - push a cookie to
> the frontend, but have the REST layer extract the auth tokens from the
> session storage and prepare the auth context for the REST API request to OS
> services. This way we will not expose the tokens to the JS frontend, and
> will have strict control over the authentication. The frontend will just
> send data requests; they will be wrapped with the auth context and forwarded.
> Regarding the existing issues with versions in the API - for me the
> existing approach is wrong. All these fixes were made as workarounds. What
> should have been done is to create abstractions for each version and to use
> a separate class for each version. This was partially done for the
> keystoneclient in api/keystone.py, but not for the forms/views, where we
> still have if-else for versions. What I suggest here is to have
> different (concrete) views/forms for each version, and to use them according
> to the context. If the Keystone backend is v2.0, then in the frontend use
> a keystone2() object; otherwise use a keystone3() object. This of course needs
> some more coding, but is much cleaner in terms of customization and
> testing. For me the current hacks with 'if keystone.version == 3.0' are
> wrong at many levels. And this can be solved now. *The problem till now
> was that we had one frontend that had to be backed by different versions of
> backend components*. *Now we can have different frontends that map to
> a specific backend*. That's how I understand the power of Angular with its
> views and directives. That's where I see the real benefit of using a
> full-featured frontend. Also imagine how easy it will then be to deprecate a
> component version, compared to what we need to do now for the same.
>
> Otherwise we just rewrite the current Django middleware as another
> DjangoRest middleware and don't change anything - we don't fix the problems,
> we just move them to another place.
>
> I still think that in Paris we talked about a new generation of the
> Dashboard, a different approach to building the frontend for OpenStack.
> What I heard there from users/operators of Horizon was that it was
> extremely hard to add customizations and new features to the Dashboard, as
> all these needed to go through upstream changes and to wait until the next
> release cycle. Do we still want to address these concerns, and how?
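[Editor's note] A minimal sketch of the per-version abstraction described above — one concrete adapter class per backend version, selected once, so views never branch on version numbers inline. The class and method names here are hypothetical illustrations, not Horizon's actual API:

```python
# Hypothetical sketch: one adapter class per Keystone API version,
# chosen once from the backend's reported version, so views never
# contain 'if keystone.version == 3.0' branches themselves.

class KeystoneV2(object):
    version = '2.0'

    def tenant_update(self, project_id, **kwargs):
        # v2 goes through the tenants manager.
        return ('tenants.update', project_id, kwargs)


class KeystoneV3(object):
    version = '3.0'

    def tenant_update(self, project_id, **kwargs):
        # v3 renamed tenants to projects.
        return ('projects.update', project_id, kwargs)


_ADAPTERS = {'2.0': KeystoneV2, '3.0': KeystoneV3}


def keystone_adapter(backend_version):
    """Pick the adapter matching the deployed Keystone backend."""
    try:
        return _ADAPTERS[backend_version]()
    except KeyError:
        raise ValueError('unsupported keystone version: %s' % backend_version)
```

Deprecating a backend version then means deleting one class from the registry, rather than hunting down version conditionals scattered through forms and views.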
Please, correct me if I got things wrong.

> On Wed, Dec 10, 2014 at 11:56 AM, Richard Jones wrote:
>
>> Sorry I didn't respond to this earlier today, I had intended to.
>>
>> What you're describing isn't REST, and the principles of REST are what
>> have been guiding the design of the new API so far. I see a lot of value in
>> using REST approaches, mostly around clarity of the interface.
>>
>> While the idea of a very thin proxy seemed like a great idea at one
>> point, my conversations at the summit convinced me that there was value in
>> both using the client interfaces present in the openstack_dashboard/api
>> code base (since they abstract away many issues in the APIs, including
>> across versions) and also value in us being able to clean up (for example,
>> using "project_id" rather than "project" in the user API we've already
>> implemented) and extend those interfaces (to allow batched operations).
>>
>> We want to be careful about what we expose in Horizon to the JS clients
>> through this API. That necessitates some amount of code in Horizon. About
>> half of the current API for keystone represents that control (the other half
>> is docstrings :)
>>
>> Richard
>>
>> On Tue Dec 09 2014 at 9:37:47 PM Tihomir Trifonov wrote:
>>
>>> Sorry for the late reply, just a few thoughts on the matter.
>>>
>>> IMO the REST middleware should be as thin as possible. And I mean thin
>>> in terms of processing - it should not do pre/post processing of the
>>> requests, but just unpack/pack. So here is an example:
>>>
>>> instead of making AJAX calls that contain instructions:
>>>
>>>> POST --json --data {"action": "delete", "data": [ {"name": "item1"},
>>>> {"name": "item2"}, {"name": "item3"} ]}
>>>
>>> I think a better approach is just to pack/unpack batch commands, and
>>> leave execution to the frontend/backend and not the middleware:
>>>
>>>> POST --json --data {"batch": [
>>>>     {"action": "delete", "payload": {"name": "item1"}},
>>>>     {"action": "delete", "payload": {"name": "item2"}},
>>>>     {"action": "delete", "payload": {"name": "item3"}}
>>>> ]}
>>>
>>> The idea is that the middleware should not know the actual data. It
>>> should ideally just unpack the data:
>>>
>>>> responses = []
>>>> for cmd in request.POST['batch']:
>>>>     responses.append(getattr(controller, cmd['action'])(**cmd['payload']))
>>>> return responses
>>>
>>> and the frontend (JS) will just send batches of simple commands, and
>>> will receive a list of responses for each command in the batch. The error
>>> handling will be done in the frontend (JS) as well.
>>>
>>> For the more complex example of 'put()' where we have dependent objects:
>>>
>>>> project = api.keystone.tenant_get(request, id)
>>>> kwargs = self._tenant_kwargs_from_DATA(request.DATA, enabled=None)
>>>> api.keystone.tenant_update(request, project, **kwargs)
>>>
>>> In practice the project data should already be present in the
>>> frontend (assuming that we already loaded it to render the project
>>> form/view), so:
>>>
>>>> POST --json --data {"batch": [
>>>>     {"action": "tenant_update", "payload": {"project": js_project_object.id,
>>>>      "name": "some name", "prop1": "some prop", "prop2": "other prop, etc."}}
>>>> ]}
>>>
>>> So in general we don't need to recreate the full state on each REST
>>> call, if we make the frontend a full-featured application.
>>> This way, the
>>> frontend will construct the object, hold the cached value, and
>>> just send the needed requests as single ones or in batches, receive
>>> the response from the API backend, and render the results. The whole
>>> processing logic will be held in the frontend (JS), while the middleware
>>> will just act as a proxy (un/packer). This way we will maintain the logic
>>> only in the frontend, and will not need to duplicate some of it in the
>>> middleware.
>>>
>>> On Tue, Dec 2, 2014 at 4:45 PM, Adam Young wrote:
>>>> On 12/02/2014 12:39 AM, Richard Jones wrote:
>>>> On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran wrote:
>>>>
>>>>> I agree that keeping the API layer thin would be ideal. I should add
>>>>> that having discrete API calls would allow dynamic population of tables.
>>>>> However, I will make a case where it *might* be necessary to add
>>>>> additional APIs. Consider that you want to delete 3 items in a given table.
>>>>>
>>>>> If you do this on the client side, you would need to perform: n * (1
>>>>> API request + 1 AJAX request)
>>>>> If you have some logic on the server side that batches delete actions: n
>>>>> * (1 API request) + 1 AJAX request
>>>>>
>>>>> Consider the following:
>>>>> n = 1, client = 2 trips, server = 2 trips
>>>>> n = 3, client = 6 trips, server = 4 trips
>>>>> n = 10, client = 20 trips, server = 11 trips
>>>>> n = 100, client = 200 trips, server = 101 trips
>>>>>
>>>>> As you can see, this does not scale very well... something to
>>>>> consider...
>>>>>
>>>> This is not something Horizon can fix. Horizon can make matters
>>>> worse, but cannot make things better.
>>>>
>>>> If you want to delete 3 users, Horizon still needs to make 3 distinct
>>>> calls to Keystone.
>>>>
>>>> To fix this, we need either batch calls or a standard way to do
>>>> multiples of the same operation.
>>>>
>>>> The unified API effort is the right place to drive this.
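[Editor's note] The thin unpack/dispatch loop Tihomir quotes earlier in this thread can be fleshed out into a runnable sketch. The controller here is a stand-in object, and per-command error capture is an assumption about how failures might be reported back to the JS frontend:

```python
# Runnable sketch of the thin batch dispatcher discussed above: unpack
# each command, dispatch by action name, and collect one result (or
# error) per command. FakeController is a stand-in for illustration.

class FakeController(object):
    def __init__(self):
        self.deleted = []

    def delete(self, name):
        self.deleted.append(name)
        return {'deleted': name}


def dispatch_batch(controller, batch):
    """Run each {'action': ..., 'payload': {...}} command in order."""
    responses = []
    for cmd in batch:
        try:
            handler = getattr(controller, cmd['action'])
            responses.append({'ok': True, 'result': handler(**cmd['payload'])})
        except Exception as exc:  # report each failure per-command
            responses.append({'ok': False, 'error': str(exc)})
    return responses
```

The middleware stays ignorant of the payload's meaning: one failed command does not abort the batch, and the frontend gets back a list of per-command outcomes to render or retry as it sees fit.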
>>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> Yep, though in the above cases the client is still going to be >>>> hanging, waiting for those server-backend calls, with no feedback until >>>> it's all done. I would hope that the client-server call overhead is >>>> minimal, but I guess that's probably wishful thinking when in the land of >>>> random Internet users hitting some provider's Horizon :) >>>> >>>> So yeah, having mulled it over myself I agree that it's useful to >>>> have batch operations implemented in the POST handler, the most common >>>> operation being DELETE. >>>> >>>> Maybe one day we could transition to a batch call with user feedback >>>> using a websocket connection. >>>> >>>> >>>> Richard >>>> >>>>> Richard Jones ---11/27/2014 05:38:53 PM---On Fri Nov 28 2014 at >>>>> 5:58:00 AM Tripp, Travis S wrote: >>>>> >>>>> From: Richard Jones >>>>> To: "Tripp, Travis S" , OpenStack List < >>>>> openstack-dev at lists.openstack.org> >>>>> Date: 11/27/2014 05:38 PM >>>>> Subject: Re: [openstack-dev] [horizon] REST and Django >>>>> >>>>> ------------------------------ >>>>> >>>>> >>>>> >>>>> >>>>> On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S < >>>>> *travis.tripp at hp.com* > wrote: >>>>> >>>>> Hi Richard, >>>>> >>>>> You are right, we should put this out on the main ML, so copying >>>>> thread out to there. ML: FYI that this started after some impromptu IRC >>>>> discussions about a specific patch led into an impromptu google hangout >>>>> discussion with all the people on the thread below. >>>>> >>>>> >>>>> Thanks Travis! >>>>> >>>>> >>>>> >>>>> As I mentioned in the review[1], Thai and I were mainly discussing >>>>> the possible performance implications of network hops from client to >>>>> horizon server and whether or not any aggregation should occur server side. 
>>>>> In other words, some views require several APIs to be queried before any >>>>> data can displayed and it would eliminate some extra network requests from >>>>> client to server if some of the data was first collected on the server side >>>>> across service APIs. For example, the launch instance wizard will need to >>>>> collect data from quite a few APIs before even the first step is displayed >>>>> (I?ve listed those out in the blueprint [2]). >>>>> >>>>> The flip side to that (as you also pointed out) is that if we keep >>>>> the API?s fine grained then the wizard will be able to optimize in one >>>>> place the calls for data as it is needed. For example, the first step may >>>>> only need half of the API calls. It also could lead to perceived >>>>> performance increases just due to the wizard making a call for different >>>>> data independently and displaying it as soon as it can. >>>>> >>>>> >>>>> Indeed, looking at the current launch wizard code it seems like you >>>>> wouldn't need to load all that data for the wizard to be displayed, since >>>>> only some subset of it would be necessary to display any given panel of the >>>>> wizard. >>>>> >>>>> >>>>> >>>>> I tend to lean towards your POV and starting with discrete API >>>>> calls and letting the client optimize calls. If there are performance >>>>> problems or other reasons then doing data aggregation on the server side >>>>> could be considered at that point. >>>>> >>>>> >>>>> I'm glad to hear it. I'm a fan of optimising when necessary, and not >>>>> beforehand :) >>>>> >>>>> >>>>> >>>>> Of course if anybody is able to do some performance testing >>>>> between the two approaches then that could affect the direction taken. >>>>> >>>>> >>>>> I would certainly like to see us take some measurements when >>>>> performance issues pop up. 
Optimising without solid metrics is bad idea :) >>>>> >>>>> >>>>> Richard >>>>> >>>>> >>>>> >>>>> [1] >>>>> *https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py* >>>>> >>>>> [2] >>>>> *https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign* >>>>> >>>>> >>>>> -Travis >>>>> >>>>> *From: *Richard Jones <*r1chardj0n3s at gmail.com* >>>>> > >>>>> * Date: *Wednesday, November 26, 2014 at 11:55 PM >>>>> * To: *Travis Tripp <*travis.tripp at hp.com* >, >>>>> Thai Q Tran/Silicon Valley/IBM <*tqtran at us.ibm.com* >>>>> >, David Lyle <*dklyle0 at gmail.com* >>>>> >, Maxime Vidori <*maxime.vidori at enovance.com* >>>>> >, "Wroblewski, Szymon" < >>>>> *szymon.wroblewski at intel.com* >, >>>>> "Wood, Matthew David (HP Cloud - Horizon)" <*matt.wood at hp.com* >>>>> >, "Chen, Shaoquan" <*sean.chen2 at hp.com* >>>>> >, "Farina, Matt (HP Cloud)" < >>>>> *matthew.farina at hp.com* >, Cindy Lu/Silicon >>>>> Valley/IBM <*clu at us.ibm.com* >, Justin >>>>> Pomeroy/Rochester/IBM <*jpomero at us.ibm.com* >, >>>>> Neill Cox <*neill.cox at ingenious.com.au* >>>>> > >>>>> * Subject: *Re: REST and Django >>>>> >>>>> I'm not sure whether this is the appropriate place to discuss >>>>> this, or whether I should be posting to the list under [Horizon] but I >>>>> think we need to have a clear idea of what goes in the REST API and what >>>>> goes in the client (angular) code. >>>>> >>>>> In my mind, the thinner the REST API the better. Indeed if we can >>>>> get away with proxying requests through without touching any *client code, >>>>> that would be great. >>>>> >>>>> Coding additional logic into the REST API means that a developer >>>>> would need to look in two places, instead of one, to determine what was >>>>> happening for a particular call. If we keep it thin then the API presented >>>>> to the client developer is very, very similar to the API presented by the >>>>> services. Minimum surprise. >>>>> >>>>> Your thoughts? 
>>>>> >>>>> >>>>> Richard >>>>> >>>>> >>>>> On Wed Nov 26 2014 at 2:40:52 PM Richard Jones < >>>>> *r1chardj0n3s at gmail.com* > wrote: >>>>> >>>>> >>>>> Thanks for the great summary, Travis. >>>>> >>>>> I've completed the work I pledged this morning, so now the REST >>>>> API change set has: >>>>> >>>>> - no rest framework dependency >>>>> - AJAX scaffolding in openstack_dashboard.api.rest.utils >>>>> - code in openstack_dashboard/api/rest/ >>>>> - renamed the API from "identity" to "keystone" to be consistent >>>>> - added a sample of testing, mostly for my own sanity to check >>>>> things were working >>>>> >>>>> *https://review.openstack.org/#/c/136676* >>>>> >>>>> >>>>> >>>>> Richard >>>>> >>>>> On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S < >>>>> *travis.tripp at hp.com* > wrote: >>>>> >>>>> >>>>> Hello all, >>>>> >>>>> Great discussion on the REST urls today! I think that we are >>>>> on track to come to a common REST API usage pattern. To provide quick >>>>> summary: >>>>> >>>>> We all agreed that going to a straight REST pattern rather >>>>> than through tables was a good idea. We discussed using direct get / post >>>>> in Django views like what Max originally used[1][2] and Thai also >>>>> started[3] with the identity table rework or to go with djangorestframework >>>>> [5] like what Richard was prototyping with[4]. >>>>> >>>>> The main things we would use from Django Rest Framework were >>>>> built in JSON serialization (avoid boilerplate), better exception handling, >>>>> and some request wrapping. However, we all weren?t sure about the need for >>>>> a full new framework just for that. At the end of the conversation, we >>>>> decided that it was a cleaner approach, but Richard would see if he could >>>>> provide some utility code to do that much for us without requiring the full >>>>> framework. David voiced that he doesn?t want us building out a whole >>>>> framework on our own either. 
>>>>>
>>>>> So, Richard will do some investigation during his day today and get back to us. Whatever the case, we'll get a patch in horizon for the base dependency (framework or Richard's utilities) that both Thai's work and the launch instance work are dependent upon. We'll build REST style APIs using the same pattern. We will likely put the REST APIs in horizon/openstack_dashboard/api/rest/.
>>>>>
>>>>> [1] https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/keypair.py
>>>>> [2] https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/launch.py
>>>>> [3] https://review.openstack.org/#/c/133767/8/openstack_dashboard/dashboards/identity/users/views.py
>>>>> [4] https://review.openstack.org/#/c/136676/4/openstack_dashboard/rest_api/identity.py
>>>>> [5] http://www.django-rest-framework.org/
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Travis
>>>>> _______________________________________________
>>>>> OpenStack-dev mailing list
>>>>> OpenStack-dev at lists.openstack.org
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> --
>>> Regards,
>>> Tihomir Trifonov
--
Regards,
Tihomir Trifonov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mark at mcclain.xyz Thu Dec 11 14:59:37 2014
From: mark at mcclain.xyz (Mark McClain)
Date: Thu, 11 Dec 2014 09:59:37 -0500
Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group
In-Reply-To: <54899F92.2060900@gmail.com>
References: <54899F92.2060900@gmail.com>
Message-ID: 

> On Dec 11, 2014, at 8:43 AM, Jay Pipes wrote:
>
> I'm generally in favor of making name attributes opaque, utf-8 strings that are entirely user-defined and have no constraints on them. I consider the name to be just a tag that the user places on some resource. It is the resource's ID that is unique.
>
> I do realize that Nova takes a different approach to *some* resources, including the security group name.
>
> End of the day, it's probably just a personal preference whether names should be unique to a tenant/user or not.
>
> Maru had asked me my opinion on whether names should be unique and I answered my personal opinion that no, they should not be, and if Neutron needed to ensure that there was one and only one default security group for a tenant, that a way to accomplish such a thing in a race-free way, without use of SELECT FOR UPDATE, was to use the approach I put into the pastebin on the review above.
>

I agree with Jay. We should not care about how a user names the resource. There are other ways to prevent this race and Jay's suggestion is a good one.

mark
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dbogun at mirantis.com Thu Dec 11 15:09:12 2014
From: dbogun at mirantis.com (Dmitry Bogun)
Date: Thu, 11 Dec 2014 17:09:12 +0200
Subject: [openstack-dev] [nova] - filter out fields in keypair object.
Message-ID: <592038CB-6AC6-44D9-BC93-3BD02F2DE48B@mirantis.com>

Hi.

Why do we filter out some fields from the keypair object in "create" and "list" operations? In module nova.api.openstack.compute.plugins.v3.keypairs:

class KeypairController(wsgi.Controller):
    # ...
    def _filter_keypair(self, keypair, **attrs):
        clean = {
            'name': keypair.name,
            'public_key': keypair.public_key,
            'fingerprint': keypair.fingerprint,
        }
        for attr in attrs:
            clean[attr] = keypair[attr]
        return clean

We have the method _filter_keypair. This method is used to build the response in the "create" and "index" methods. Why do we need it?

PS I need the user_id field in the "list/index" response in Horizon. The only way to get it now is to use the "get/show" method for each returned object.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From davanum at gmail.com Thu Dec 11 15:10:01 2014
From: davanum at gmail.com (Davanum Srinivas)
Date: Thu, 11 Dec 2014 07:10:01 -0800
Subject: [openstack-dev] [oslo] deprecation 'pattern' library??
In-Reply-To: 
References: 
Message-ID: 

Surprisingly "deprecator" is still available on pypi

On Thu, Dec 11, 2014 at 2:04 AM, Julien Danjou wrote:
> On Wed, Dec 10 2014, Joshua Harlow wrote:
>
> > [...]
>
>> Or in general any other comments/ideas about providing such a deprecation
>> pattern library?
>
> +1
>
>> * debtcollector
>
> made me think of "loanshark" :)
>
> --
> Julien Danjou
> -- Free Software hacker
> -- http://julien.danjou.info
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Davanum Srinivas :: https://twitter.com/dims

From maxime.leroy at 6wind.com Thu Dec 11 15:15:00 2014
From: maxime.leroy at 6wind.com (Maxime Leroy)
Date: Thu, 11 Dec 2014 16:15:00 +0100
Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver
In-Reply-To: <20141211104137.GD23831@redhat.com>
References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> <20141211104137.GD23831@redhat.com>
Message-ID: 

On Thu, Dec 11, 2014 at 11:41 AM, Daniel P. Berrange wrote:
> On Thu, Dec 11, 2014 at 09:37:31AM +0800, henry hly wrote:
>> On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells wrote:
>> > On 10 December 2014 at 01:31, Daniel P. Berrange wrote:
>> >>
>> >> [..]
>> The question is, do we really need such flexibility for so many nova vif types?
>>
>> I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is a good example;
>> nova shouldn't know too many details about the switch backend, it should
>> only care about the VIF itself; how the VIF is plugged into the switch
>> belongs to the Neutron half.
>>
>> However I'm not saying to move existing vif drivers out, those open
>> backends have been used widely. But from now on the tap and vhostuser
>> mode should be encouraged: one common vif driver to many long-tail
>> backends.
>
> Yes, I really think this is a key point.
> When we introduced the VIF type mechanism we never intended for there to be so many different VIF types created. There is a very small, finite number of possible ways to configure the libvirt guest XML and it was intended that the VIF types pretty much mirror that. This would have given us about 8 distinct VIF types maximum.
>
> I think the reason for the larger than expected number of VIF types is that the drivers are being written to require some arbitrary tools to be invoked in the plug & unplug methods. It would really be better if those could be accomplished in the Neutron code than the Nova code, via a host agent run & provided by the Neutron mechanism. This would let us have a very small number of VIF types and so avoid the entire problem that this thread is bringing up.
>
> Failing that though, I could see a way to accomplish a similar thing without a Neutron launched agent. If one of the VIF type binding parameters were the name of a script, we could run that script on plug & unplug. So we'd have a finite number of VIF types, and each new Neutron mechanism would merely have to provide a script to invoke.
>
> eg consider the existing midonet & iovisor VIF types as an example. Both of them use the libvirt "ethernet" config, but have different things running in their plug methods. If we had a mechanism for associating a "plug script" with a vif type, we could use a single VIF type for both.
>
> eg iovisor port binding info would contain
>
> vif_type=ethernet
> vif_plug_script=/usr/bin/neutron-iovisor-vif-plug
>
> while midonet would contain
>
> vif_type=ethernet
> vif_plug_script=/usr/bin/neutron-midonet-vif-plug

Having fewer VIF types and using scripts to plug/unplug the vif in nova is a good idea. So, +1 for the idea.

If you want, I can propose a new spec for this. Do you think we have enough time to approve this new spec before the 18th December?
Anyway I think we still need to have a vif_driver plugin mechanism: for example, if your external l2/ml2 plugin needs a specific type of nic (e.g. a new method get_config to provide specific parameters to libvirt for the nic) that is not supported in the nova tree. Maybe we can find another way to support it? Or, are we going to accept a new VIF_TYPE in nova if it's only used by an external ml2/l2 plugin?

Maxime

From iwallis at gmail.com Thu Dec 11 15:24:31 2014
From: iwallis at gmail.com (Ivan Wallis)
Date: Thu, 11 Dec 2014 07:24:31 -0800
Subject: [openstack-dev] [barbican] PKCS#11 configuration
Message-ID: 

Hi,

I am trying to configure Barbican with an external HSM that has a PKCS#11 provider and just wondering if someone can point me in the right direction on how to configure this type of environment?

Regards,
Ivan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From berrange at redhat.com Thu Dec 11 15:24:53 2014
From: berrange at redhat.com (Daniel P. Berrange)
Date: Thu, 11 Dec 2014 15:24:53 +0000
Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver
In-Reply-To: 
References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> <20141211104137.GD23831@redhat.com>
Message-ID: <20141211152452.GO23831@redhat.com>

On Thu, Dec 11, 2014 at 04:15:00PM +0100, Maxime Leroy wrote:
> On Thu, Dec 11, 2014 at 11:41 AM, Daniel P. Berrange wrote:
> > On Thu, Dec 11, 2014 at 09:37:31AM +0800, henry hly wrote:
> >> On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells wrote:
> >> > On 10 December 2014 at 01:31, Daniel P. Berrange wrote:
> >> >>
> >> >> [..]
> >> The question is, do we really need such flexibility for so many nova vif types?
> >>
> >> I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is a good example;
> >> nova shouldn't know too many details about the switch backend, it should
> >> only care about the VIF itself; how the VIF is plugged into the switch
> >> belongs to the Neutron half.
> >>
> >> However I'm not saying to move existing vif drivers out, those open
> >> backends have been used widely. But from now on the tap and vhostuser
> >> mode should be encouraged: one common vif driver to many long-tail
> >> backends.
> >
> > Yes, I really think this is a key point. When we introduced the VIF type
> > mechanism we never intended for there to be so many different VIF types
> > created. There is a very small, finite number of possible ways to configure
> > the libvirt guest XML and it was intended that the VIF types pretty much
> > mirror that. This would have given us about 8 distinct VIF types maximum.
> >
> > I think the reason for the larger than expected number of VIF types is
> > that the drivers are being written to require some arbitrary tools to
> > be invoked in the plug & unplug methods. It would really be better if
> > those could be accomplished in the Neutron code than the Nova code, via
> > a host agent run & provided by the Neutron mechanism. This would let
> > us have a very small number of VIF types and so avoid the entire problem
> > that this thread is bringing up.
> >
> > Failing that though, I could see a way to accomplish a similar thing
> > without a Neutron launched agent. If one of the VIF type binding
> > parameters were the name of a script, we could run that script on
> > plug & unplug. So we'd have a finite number of VIF types, and each
> > new Neutron mechanism would merely have to provide a script to invoke.
> >
> > eg consider the existing midonet & iovisor VIF types as an example.
> > Both of them use the libvirt "ethernet" config, but have different
> > things running in their plug methods. If we had a mechanism for
> > associating a "plug script" with a vif type, we could use a single
> > VIF type for both.
> >
> > eg iovisor port binding info would contain
> >
> > vif_type=ethernet
> > vif_plug_script=/usr/bin/neutron-iovisor-vif-plug
> >
> > while midonet would contain
> >
> > vif_type=ethernet
> > vif_plug_script=/usr/bin/neutron-midonet-vif-plug
>
> Having fewer VIF types and using scripts to plug/unplug the vif in
> nova is a good idea. So, +1 for the idea.
>
> If you want, I can propose a new spec for this. Do you think we have
> enough time to approve this new spec before the 18th December?
>
> Anyway I think we still need to have a vif_driver plugin mechanism:
> for example, if your external l2/ml2 plugin needs a specific type of
> nic (e.g. a new method get_config to provide specific parameters to
> libvirt for the nic) that is not supported in the nova tree.

As I said above, there's a really small finite set of libvirt configs we need to care about. We don't need to have a plugin system for that. It is no real burden to support them in tree.

Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

From mestery at mestery.com Thu Dec 11 15:29:11 2014
From: mestery at mestery.com (Kyle Mestery)
Date: Thu, 11 Dec 2014 08:29:11 -0700
Subject: [openstack-dev] [neutron] mid-cycle update
In-Reply-To: 
References: 
Message-ID: 

On Wed, Dec 10, 2014 at 5:41 PM, Michael Still wrote:
> On Thu, Dec 11, 2014 at 10:14 AM, Kyle Mestery wrote:
> > The Neutron mid-cycle [1] is now complete, I wanted to let everyone know how
> > it went. Thanks to all who attended, we got a lot done. I admit to being
> > skeptical of mid-cycles, especially given the cross project meeting a month
> > back on the topic. But this particular one was very useful. We had defined
> > tasks to complete, and we made a lot of progress! What we accomplished was:
> >
> > 1. We finished splitting out neutron advanced services and got things
> > working again post-split.
> > 2. We had a team refactoring the L3 agent who now have a batch of commits to
> > merge post-services-split.
> > 3. We worked on refactoring the core API and WSGI layer, and produced
> > multiple specs on this topic and some POC code.
> > 4. We had someone working on IPv6 tempest tests for the gate who made good
> > progress here.
> > 5. We had multiple people working on plugin decomposition who are close to
> > getting this working.
>
> This all sounds like good work. Did you manage to progress the
> nova-network to neutron migration tasks as well?
>

I forgot to mention that, but yes, there was some work done on that as well. I'll follow up with Oleg on this. Michael, I think it makes sense for us to discuss this in an upcoming Neutron meeting as well. I'll figure out a time and let you know. Having some nova folks there would be good.

Thanks,
Kyle

> > Overall, it was a great sprint! Thanks to Adobe for hosting, Utah is a
> > beautiful state.
> >
> > Looking forward to the rest of Kilo!
> >
> > Kyle
> >
> > [1] https://wiki.openstack.org/wiki/Sprints/NeutronKiloSprint
>
> Thanks,
> Michael
>
> --
> Rackspace Australia
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andrew.laski at rackspace.com Thu Dec 11 15:30:01 2014
From: andrew.laski at rackspace.com (Andrew Laski)
Date: Thu, 11 Dec 2014 10:30:01 -0500
Subject: [openstack-dev] [nova] Kilo specs review day
In-Reply-To: 
References: 
Message-ID: <5489B879.7030006@rackspace.com>

On 12/10/2014 04:41 PM, Michael Still wrote:
> Hi,
>
> at the design summit we said that we would not approve specifications
> after the kilo-1 deadline, which is 18 December. Unfortunately, we've
> had a lot of specifications proposed this cycle (166 to my count), and
> haven't kept up with the review workload.
>
> Therefore, I propose that Friday this week be a specs review day. We
> need to burn down the queue of specs needing review, as well as
> abandoning those which aren't getting regular updates based on our
> review comments.
>
> I'd appreciate nova-specs-core doing reviews on Friday, but it's always
> super helpful when non-cores review as well. A +1 for a developer or
> operator gives nova-specs-core a good signal of what might be ready to
> approve, and that helps us optimize our review time.
>
> For reference, the specs to review may be found at:
>
> https://review.openstack.org/#/q/project:openstack/nova-specs+status:open,n,z
>
> Thanks heaps,
> Michael
>

It will be nice to have a good push before we hit the deadline. I would like to remind priority owners to update their list of any outstanding specs at https://etherpad.openstack.org/p/kilo-nova-priorities-tracking so they can be targeted during the review day.

From mark at mcclain.xyz Thu Dec 11 15:48:58 2014
From: mark at mcclain.xyz (Mark McClain)
Date: Thu, 11 Dec 2014 10:48:58 -0500
Subject: [openstack-dev] [neutron] mid-cycle update
In-Reply-To: 
References: 
Message-ID: <5E64971E-34B7-4624-9DEB-E2CC9364CBD6@mcclain.xyz>

> On Dec 11, 2014, at 10:29 AM, Kyle Mestery wrote:
>
> On Wed, Dec 10, 2014 at 5:41 PM, Michael Still wrote:
> > This all sounds like good work.
> > Did you manage to progress the nova-network to neutron migration tasks as well?
>
> I forgot to mention that, but yes, there was some work done on that as well. I'll follow up with Oleg on this. Michael, I think it makes sense for us to discuss this in an upcoming Neutron meeting as well. I'll figure out a time and let you know. Having some nova folks there would be good.
>

Oleg and I sat down at the mid-cycle and worked through the design spec more. He's working through the spec draft to validate a few bits of the spec to make sure they're doable.

mark
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sean at dague.net Thu Dec 11 15:59:46 2014
From: sean at dague.net (Sean Dague)
Date: Thu, 11 Dec 2014 10:59:46 -0500
Subject: [openstack-dev] [nova] meeting location change
Message-ID: <5489BF72.8070503@dague.net>

In today's early Nova meeting (UTC 1400), we realized that there no longer is a conflicting meeting in #openstack-meeting. I've adjusted the location here - https://wiki.openstack.org/wiki/Meetings#Nova_team_Meeting

Also added a formula based on date "+%U" to let you figure out if it's an early or late week.

-Sean

--
Sean Dague
http://dague.net

From gessau at cisco.com Thu Dec 11 16:00:59 2014
From: gessau at cisco.com (Henry Gessau)
Date: Thu, 11 Dec 2014 09:00:59 -0700
Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group
In-Reply-To: 
References: <54899F92.2060900@gmail.com>
Message-ID: <5489BFBB.50802@cisco.com>
>> >> I do realize that Nova takes a different approach to *some* resources, >> including the security group name. >> >> End of the day, it's probably just a personal preference whether names >> should be unique to a tenant/user or not. >> >> Maru had asked me my opinion on whether names should be unique and I >> answered my personal opinion that no, they should not be, and if Neutron >> needed to ensure that there was one and only one default security group for >> a tenant, that a way to accomplish such a thing in a race-free way, without >> use of SELECT FOR UPDATE, was to use the approach I put into the pastebin on >> the review above. >> > > I agree with Jay. We should not care about how a user names the resource. > There other ways to prevent this race and Jay?s suggestion is a good one. However we should open a bug against Horizon because the user experience there is terrible with duplicate security group names. From dkranz at redhat.com Thu Dec 11 16:35:52 2014 From: dkranz at redhat.com (David Kranz) Date: Thu, 11 Dec 2014 11:35:52 -0500 Subject: [openstack-dev] Reason for mem/vcpu ratio in default flavors Message-ID: <5489C7E8.2010103@redhat.com> Perhaps this is a historical question, but I was wondering how the default OpenStack flavor size ratio of 2/1 was determined? According to http://aws.amazon.com/ec2/instance-types/, ec2 defines the flavors for General Purpose (M3) at about 3.7/1, with Compute Intensive (C3) at about 1.9/1 and Memory Intensive (R3) at about 7.6/1. -David From jbernard at tuxion.com Thu Dec 11 16:36:43 2014 From: jbernard at tuxion.com (Jon Bernard) Date: Thu, 11 Dec 2014 11:36:43 -0500 Subject: [openstack-dev] [nova][cinder][infra] Ceph CI status update Message-ID: <20141211163643.GA10911@helmut> Heya, quick Ceph CI status update. Once the test_volume_boot_pattern was marked as skipped, only the revert_resize test was failing. I have submitted a patch to nova for this [1], and that yields an all green ceph ci run [2]. 
So at the moment, and with my revert patch, we're in good shape. I will fix up that patch today so that it can be properly reviewed and hopefully merged. From there I'll submit a patch to infra to move the job to the check queue as non-voting, and we can go from there. [1] https://review.openstack.org/#/c/139693/ [2] http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html Cheers, -- Jon From adidenko at mirantis.com Thu Dec 11 16:38:11 2014 From: adidenko at mirantis.com (Aleksandr Didenko) Date: Thu, 11 Dec 2014 18:38:11 +0200 Subject: [openstack-dev] [Fuel] We lost some commits during upstream puppet manifests merge In-Reply-To: <4B80B327-69EF-4513-B400-2561B75B181E@mirantis.com> References: <4B80B327-69EF-4513-B400-2561B75B181E@mirantis.com> Message-ID: > Also I?m just wondering how do we keep upstream modules in our repo? They are not submodules, so how is it organized? Currently, we don't have any automatic tracking system for changes we apply to the community/upstream modules, that could help us to re-apply them during the sync. Only git or diff comparison between original module and our copy. But that should not be a problem when we finish current sync and switch to the new contribution workflow described in the doc, Vladimir has mentioned in the initial email [1]. Also, in the nearest future we're planning to add unit tests (rake spec) and puppet noop tests into our CI. I think we should combine noop tests with "regression" testing by using 'rake spec'. But this time I mean RSpec tests for puppet host, not for specific classes as I suggested in the previous email. Such tests would compile a complete catalog using our 'site.pp' for specific astute.yaml settings and it will check that needed puppet resources present in the catalog and have needed attributes. Here's a draft - [2]. It checks catalog compilation for a controller node and runs few checks for 'keystone' class and keystone cache driver settings. 
Since all the test logic is outside of our puppet modules directory, it won't be affected by any further upstream syncs or changes we apply in our modules. So in case some commit removes anything critical that is covered by "regression"/noop tests, then it will get '-1' from CI and attract our attention :) [1] http://docs.mirantis.com/fuel-dev/develop/module_structure.html#contributing-to-existing-fuel-library-modules [2] https://review.openstack.org/141022 Regards, Aleksandr On Fri, Nov 21, 2014 at 8:07 PM, Tomasz Napierala wrote: > > > On 21 Nov 2014, at 17:15, Aleksandr Didenko > wrote: > > > > Hi, > > > > following our docs/workflow plus writing rspec tests for every new > option we add/modify in our manifests could help with regressions. For > example: > > ? we add new keystone config option in openstack::keystone class - > keystone_config {'cache/backend': value => 'keystone.cache.memcache_pool';} > > ? we create new test for openstack::keystone class, something like > this: > > ? should > contain_keystone_config("cache/backend").with_value('keystone.cache.memcache_pool') > > So with such test, if for some reason we lose > keystone_config("cache/backend") option, 'rake spec' would alert us about > it right away and we'll get "-1" from CI. Of course we should also > implement 'rake spec' CI gate for this. > > > > But from the other hand, if someone changes option in manifests and > updates rspec tests accordingly, then such commit will pass 'rake spec' > test and we can still lose some specific option. > > > > > We should speed up development of some modular testing framework that > will check that corresponding change affects only particular pieces. > > > > Such test would not catch this particular regressions with > "keystone_config {'cache/backend': value => > 'keystone.cache.memcache_pool';}", because even with regression (i.e. > dogpile backend) keystone was working OK. 
It has passed several BVTs and > custom system tests, because 'dogpile' cache backend was working just fine > while all memcached servers are up and running. So it looks like we need > some kind of tests that will ensure that particular config options (or > particular puppet resources) have some particular values (like "backend = > keystone.cache.memcache_pool" in [cache] block of keystone.conf). > > > > So I would go with rspec testing for specific resources but I would > write them in 'openstack' module. Those tests should check that needed > (nova/cinder/keystone/glance)_config resources have needed values in the > puppet catalog. Since we're not going to sync 'openstack' module with the > upstream, such tests will remain intact until we change them, and they > won't be affected by other modules sync/merge (keystone, cinder, nova, etc). > > I totally agree, but we need to remember to introduce tests in separate > commits, otherwise loosing commit ID we would also lose tests ;) > > Also I?m just wondering how do we keep upstream modules in our repo? They > are not submodules, so how is it organized? > > Regards, > -- > Tomasz 'Zen' Napierala > Sr. OpenStack Engineer > tnapierala at mirantis.com > > > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alee at redhat.com Thu Dec 11 16:55:33 2014 From: alee at redhat.com (Ade Lee) Date: Thu, 11 Dec 2014 11:55:33 -0500 Subject: [openstack-dev] [barbican] PKCS#11 configuration In-Reply-To: References: Message-ID: <1418316933.5473.11.camel@aleeredhat.laptop> Which HSM do you have? 
On Thu, 2014-12-11 at 07:24 -0800, Ivan Wallis wrote: > Hi, > > > I am trying to configure Barbican with an external HSM that has a > PKCS#11 provider and just wondering if someone can point me in the > right direction on how to configure this type of environment? > > > Regards, > Ivan > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From anteaya at anteaya.info Thu Dec 11 17:03:28 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Thu, 11 Dec 2014 10:03:28 -0700 Subject: [openstack-dev] [nova][cinder][infra] Ceph CI status update In-Reply-To: <20141211163643.GA10911@helmut> References: <20141211163643.GA10911@helmut> Message-ID: <5489CE60.2040102@anteaya.info> On 12/11/2014 09:36 AM, Jon Bernard wrote: > Heya, quick Ceph CI status update. Once the test_volume_boot_pattern > was marked as skipped, only the revert_resize test was failing. I have > submitted a patch to nova for this [1], and that yields an all green > ceph ci run [2]. So at the moment, and with my revert patch, we're in > good shape. > > I will fix up that patch today so that it can be properly reviewed and > hopefully merged. From there I'll submit a patch to infra to move the > job to the check queue as non-voting, and we can go from there. > > [1] https://review.openstack.org/#/c/139693/ > [2] http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html > > Cheers, > Please add the name of your CI account to this table: https://wiki.openstack.org/wiki/ThirdPartySystems As outlined in the third party CI requirements: http://ci.openstack.org/third_party.html#requirements Please post system status updates to your individual CI wikipage that is linked to this table. The mailing list is not the place to post status updates for third party CI systems. 
If you have questions about any of the above, please attend one of the two third party meetings and ask any and all questions until you are satisfied. https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting Thank you, Anita. From anteaya at anteaya.info Thu Dec 11 17:03:28 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Thu, 11 Dec 2014 10:03:28 -0700 Subject: [openstack-dev] [nova][cinder][infra] Ceph CI status update In-Reply-To: <20141211163643.GA10911@helmut> References: <20141211163643.GA10911@helmut> Message-ID: <5489CE60.1050701@anteaya.info> On 12/11/2014 09:36 AM, Jon Bernard wrote: > Heya, quick Ceph CI status update. Once the test_volume_boot_pattern > was marked as skipped, only the revert_resize test was failing. I have > submitted a patch to nova for this [1], and that yields an all green > ceph ci run [2]. So at the moment, and with my revert patch, we're in > good shape. > > I will fix up that patch today so that it can be properly reviewed and > hopefully merged. From there I'll submit a patch to infra to move the > job to the check queue as non-voting, and we can go from there. > > [1] https://review.openstack.org/#/c/139693/ > [2] http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html > > Cheers, > Please add the name of your CI account to this table: https://wiki.openstack.org/wiki/ThirdPartySystems As outlined in the third party CI requirements: http://ci.openstack.org/third_party.html#requirements Please post system status updates to your individual CI wikipage that is linked to this table. The mailing list is not the place to post status updates for third party CI systems. If you have questions about any of the above, please attend one of the two third party meetings and ask any and all questions until you are satisfied. https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting Thank you, Anita. 
From dklyle0 at gmail.com Thu Dec 11 17:09:49 2014 From: dklyle0 at gmail.com (David Lyle) Date: Thu, 11 Dec 2014 10:09:49 -0700 Subject: [openstack-dev] [Horizon] Proposal to add IPMI meters from Ceilometer in Horizon In-Reply-To: References: Message-ID: Please submit the blueprint and set the target for the milestone you are targeting. That will add it to the blueprint review process for Horizon. Seems like a minor change, so at this time, I don't foresee any issues with approving it. David On Thu, Dec 11, 2014 at 12:34 AM, Xin, Xiaohui wrote: > Hi, > > In Juno Release, the IPMI meters in Ceilometer have been implemented. > > We know that most of the meters implemented in Ceilometer can be observed > in Horizon side. > > User admin can use the "Admin" dashboard -> "System" Panel Group -> > "Resource Usage" Panel to show the "Resources Usage Overview". > > There are a lot of Ceilometer Metrics there now, each metric can be > metered. > > Since IPMI meters have already been there, we'd like to add such Metric > items for it in Horizon to get metered information. > > > > Is there anyone who opposes this proposal? If not, we'd like to add a > blueprint in Horizon for it soon. > > > > Thanks > > Xiaohui > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dklyle0 at gmail.com Thu Dec 11 17:22:34 2014 From: dklyle0 at gmail.com (David Lyle) Date: Thu, 11 Dec 2014 10:22:34 -0700 Subject: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard In-Reply-To: References: <547DD08A.7000402@redhat.com> Message-ID: I'm probably not understanding the nuance of the question but moving the _scripts.html file to openstack_dashboard creates some circular dependencies, does it not?
templates/base.html in the horizon side of the repo includes _scripts.html and ensures that the javascript needed by the existing horizon framework is present. _conf.html seems like a better candidate for moving as it's more closely tied to the application code. David On Wed, Dec 10, 2014 at 7:20 PM, Thai Q Tran wrote: > Sorry for duplicate mail, forgot the subject. > > -----Thai Q Tran/Silicon Valley/IBM wrote: ----- > To: "OpenStack Development Mailing List \(not for usage questions\)" < > openstack-dev at lists.openstack.org> > From: Thai Q Tran/Silicon Valley/IBM > Date: 12/10/2014 03:37PM > Subject: Moving _conf and _scripts to dashboard > > The way we are structuring our javascripts today is complicated. All of > our static javascripts reside in /horizon/static and are imported through > _conf.html and _scripts.html. Notice that there are already some panel > specific javascripts like: horizon.images.js, horizon.instances.js, > horizon.users.js. They do not belong in horizon. They belong in > openstack_dashboard because they are specific to a panel. > > Why am I raising this issue now? In Angular, we need controllers written > in javascript for each panel. As we angularize more and more panels, we > need to store them in a way that makes sense. To me, it makes sense for us to > move _conf and _scripts to openstack_dashboard. Or if this is not possible, > then provide a mechanism to override them in openstack_dashboard. > > Thoughts? > Thai > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pieter.c.kruithof-jr at hp.com Thu Dec 11 17:52:49 2014 From: pieter.c.kruithof-jr at hp.com (Kruithof, Piet) Date: Thu, 11 Dec 2014 17:52:49 +0000 Subject: [openstack-dev] We are looking for ten individuals to participate in a usability study sponsored by the Horizon team Message-ID: We are looking for ten individuals to participate in a usability study sponsored by the Horizon team. The purpose of the study is to evaluate the proposed Launch Instance workflow. The study will be moderated remotely via Google Hangouts. Participant description: individuals who use cloud services as a consumer (IaaS, PaaS, SaaS, etc). In this study, we are not interested in admins or operators. Time to complete study: ~1.5 hours Requirements: Please complete the following survey to be considered https://docs.google.com/spreadsheet/embeddedform?formkey=dHl2Qi1QVzdDeVNXUEIyR2h3LUttcGc6MA **Participants will be entered into a drawing for an HP 10 Tablet. ** Feel free to forward the link to anyone else who might be interested – experience with Horizon is not a requirement. College students are welcome to participate. As always, the results will be shared with the community. Thanks, Piet Kruithof Sr. UX Architect –
HP Helion Cloud From jaypipes at gmail.com Thu Dec 11 17:58:21 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 11 Dec 2014 12:58:21 -0500 Subject: [openstack-dev] =?windows-1252?q?=5Ball=5D_=5Btc=5D_=5BPTL=5D_Cas?= =?windows-1252?q?cading_vs=2E_Cells_=96_summit_recap_and_move_forward?= In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> Message-ID: <5489DB3D.8020402@gmail.com> On 12/11/2014 04:02 AM, joehuang wrote: > [joehuang] The major challenge for VDF use case is cross OpenStack > networking for tenants. The tenant's VM/Volume may be allocated in > different data centers geographically, but virtual network > (L2/L3/FW/VPN/LB) should be built for each tenant automatically and > isolated between tenants. Keystone federation can help authorization > automation, but the cross OpenStack network automation challenge is > still there. Using prosperity orchestration layer can solve the > automation issue, but VDF don't like prosperity API in the > north-bound, because no ecosystem is available. And other issues, for > example, how to distribute image, also cannot be solved by Keystone > federation. What is "prosperity orchestration layer" and "prosperity API"? > [joehuang] This is the ETSI requirement and use cases specification > for NFV. ETSI is the home of the Industry Specification Group for > NFV. 
In Figure 14 (virtualization of EPC) of this document, you can > see that the operator's cloud including many data centers to provide > connection service to end user by inter-connected VNFs. The > requirements listed in > (https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about > the requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW > etc) to run over cloud, eg. migrate the traditional telco. APP from > prosperity hardware to cloud. Not all NFV requirements have been > covered yet. Forgive me there are so many telco terms here. What is "prosperity hardware"? Thanks, -jay From morgan.fainberg at gmail.com Thu Dec 11 18:05:35 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Thu, 11 Dec 2014 12:05:35 -0600 Subject: [openstack-dev] [Keystone][python-keystoneclient][pycadf] Abandoning of inactive reviews Message-ID: This is a notification that at the start of next week, all projects under the Identity Program are going to see a cleanup of old/lingering open reviews. I will be reviewing all reviews. If there is a negative score (this would be, -1 or -2 from jenkins, -1 or -2 from a code reviewer, or -1 workflow) and the review has not seen an update in over 60 days (more than "rebase", commenting/responding to comments is an update) I will be administratively abandoning the change. This will include reviews on: Keystone Keystone-specs python-keystoneclient keystonemiddleware pycadf python-keystoneclient-kerberos python-keystoneclient-federation Please take a look at your open reviews and get an update/response to negative scores to keep reviews active. You will always be able to un-abandon a review (as the author) or ask a Keystone-core member to unabandon a change. Cheers, Morgan Fainberg -- Morgan Fainberg -------------- next part -------------- An HTML attachment was scrubbed...
URL: From carl at ecbaldwin.net Thu Dec 11 18:21:21 2014 From: carl at ecbaldwin.net (Carl Baldwin) Date: Thu, 11 Dec 2014 11:21:21 -0700 Subject: [openstack-dev] [neutron] mid-cycle update In-Reply-To: References: Message-ID: We also spent a half day progressing the Ipam work and made a plan to move forward. Carl On Dec 10, 2014 4:16 PM, "Kyle Mestery" wrote: > The Neutron mid-cycle [1] is now complete, I wanted to let everyone know > how it went. Thanks to all who attended, we got a lot done. I admit to > being skeptical of mid-cycles, especially given the cross project meeting a > month back on the topic. But this particular one was very useful. We had > defined tasks to complete, and we made a lot of progress! What we > accomplished was: > > 1. We finished splitting out neutron advanced services and get things > working again post-split. > 2. We had a team refactoring the L3 agent who now have a batch of commits > to merge post services-split. > 3. We worked on refactoring the core API and WSGI layer, and produced > multiple specs on this topic and some POC code. > 4. We had someone working on IPV6 tempest tests for the gate who made good > progress here. > 5. We had multiple people working on plugin decomposition who are close to > getting this working. > > Overall, it was a great sprint! Thanks to Adobe for hosting, Utah is a > beautiful state. > > Looking forward to the rest of Kilo! > > Kyle > > [1] https://wiki.openstack.org/wiki/Sprints/NeutronKiloSprint > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From slukjanov at mirantis.com Thu Dec 11 18:30:18 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Thu, 11 Dec 2014 22:30:18 +0400 Subject: [openstack-dev] [sahara] team meeting Dec 11 1800 UTC In-Reply-To: References: Message-ID: http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-12-11-17.59.html http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-12-11-17.59.log.html On Wed, Dec 10, 2014 at 6:21 PM, Sergey Lukjanov wrote: > Hi folks, > > We'll be having the Sahara team meeting as usual in > #openstack-meeting-alt channel. > > Agenda: > https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings > > > http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20141211T18 > > -- > Sincerely yours, > Sergey Lukjanov > Sahara Technical Lead > (OpenStack Data Processing) > Principal Software Engineer > Mirantis Inc. > -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe.gordon0 at gmail.com Thu Dec 11 19:26:58 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Thu, 11 Dec 2014 11:26:58 -0800 Subject: [openstack-dev] [nova] Kilo specs review day In-Reply-To: <5489B879.7030006@rackspace.com> References: <5489B879.7030006@rackspace.com> Message-ID: On Thu, Dec 11, 2014 at 7:30 AM, Andrew Laski wrote: > > On 12/10/2014 04:41 PM, Michael Still wrote: > >> Hi, >> >> at the design summit we said that we would not approve specifications >> after the kilo-1 deadline, which is 18 December. Unfortunately, we've >> had a lot of specifications proposed this cycle (166 to my count), and >> haven't kept up with the review workload. >> >> Therefore, I propose that Friday this week be a specs review day.
We >> need to burn down the queue of specs needing review, as well as >> abandoning those which aren't getting regular updates based on our >> review comments. >> >> I'd appreciate nova-specs-core doing reviews on Friday, but it's always >> super helpful when non-cores review as well. A +1 for a developer or >> operator gives nova-specs-core a good signal of what might be ready to >> approve, and that helps us optimize our review time. >> >> For reference, the specs to review may be found at: >> >> https://review.openstack.org/#/q/project:openstack/nova- >> specs+status:open,n,z >> >> Thanks heaps, >> Michael >> >> > It will be nice to have a good push before we hit the deadline. > > I would like to remind priority owners to update their list of any > outstanding specs at https://etherpad.openstack. > org/p/kilo-nova-priorities-tracking so they can be targeted during the > review day. In preparation, I put together a nova-specs dashboard: https://review.openstack.org/141137 https://review.openstack.org/#/dashboard/?foreach=project%3A%5Eopenstack%2Fnova-specs+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%252cjenkins+NOT+label%3ACode-Review%3E%3D-2%252cself+branch%3Amaster&title=Nova+Specs&&Your+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself&Needs+final+%2B2=label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Anova-specs-core+label%3ACode-Review%3C%3D-1%29+limit%3A100&Passed+Jenkins%2C+Positive+Nova-Core+Feedback=NOT+label%3ACode-Review%3E%3D2+%28reviewerin%3Anova-core+label%3ACode-Review%3E%3D1%29+NOT%28reviewerin%3Anova-core+label%3ACode-Review%3C%3D-1%29+limit%3A100&Passed+Jenkins%2C+No+Positive+Nova-Core+Feedback%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Anova-core+label%3ACode-Review%3E%3D1%29+limit%3A100&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+7+days%29=NOT+label%3ACode-Review%3C%3D2+age%3A7d&Some+negative+feedback%2C+mig
ht+still+be+worth+commenting=label%3ACode-Review%3D-1+NOT+label%3ACode-Review%3D-2+limit%3A100&Dead+Specs=label%3ACode-Review%3C%3D-2 > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m4d.coder at gmail.com Thu Dec 11 19:36:59 2014 From: m4d.coder at gmail.com (W Chan) Date: Thu, 11 Dec 2014 11:36:59 -0800 Subject: [openstack-dev] [Mistral] Action context passed to all action executions by default Message-ID: Renat, Here's the blueprint. https://blueprints.launchpad.net/mistral/+spec/mistral-runtime-context I'm proposing to add *args and **kwargs to the __init__ methods of all actions. The action context can be passed as a dict in the kwargs. The global context and the env context can be provided here as well. Maybe put all these different context under a kwarg called context? For example, ctx = { "env": {...}, "global": {...}, "runtime": { "execution_id": ..., "task_id": ..., ... } } action = SomeMistralAction(context=ctx) WDYT? Winson -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andrew.laski at rackspace.com Thu Dec 11 19:55:59 2014 From: andrew.laski at rackspace.com (Andrew Laski) Date: Thu, 11 Dec 2014 14:55:59 -0500 Subject: [openstack-dev] =?windows-1252?q?=5Ball=5D_=5Btc=5D_=5BPTL=5D_Cas?= =?windows-1252?q?cading_vs=2E_Cells_=96_summit_recap_and_move_forward?= In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> Message-ID: <5489F6CF.7000602@rackspace.com> On 12/11/2014 04:02 AM, joehuang wrote: > Hello, Russell, > > Many thanks for your reply. See inline comments. > > -----Original Message----- > From: Russell Bryant [mailto:rbryant at redhat.com] > Sent: Thursday, December 11, 2014 5:22 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward > >>> On Fri, Dec 5, 2014 at 8:23 AM, joehuang wrote: >>>> Dear all & TC & PTL, >>>> >>>> In the 40 minutes cross-project summit session "Approaches for >>>> scaling out"[1], almost 100 people attended the meeting, and the >>>> conclusion is that cells can not cover the use cases and >>>> requirements which the OpenStack cascading solution[2] aim to >>>> address, the background including use cases and requirements is also >>>> described in the mail. >> I must admit that this was not the reaction I came away from the discussion with.
>> There was a lot of confusion, and as we started looking closer, many (or perhaps most) >> people speaking up in the room did not agree that the requirements being stated are >> things we want to try to satisfy. > [joehuang] Could you pls. confirm your opinion: 1) cells can not cover the use cases and requirements which the OpenStack cascading solution aim to address. 2) Need further discussion whether to satisfy the use cases and requirements. Correct, cells does not cover all of the use cases that cascading aims to address. But it was expressed that the use cases that are not covered may not be cases that we want addressed. > On 12/05/2014 06:47 PM, joehuang wrote: >>>> Hello, Davanum, >>>> >>>> Thanks for your reply. >>>> >>>> Cells can't meet the demand for the use cases and requirements described in the mail. >> You're right that cells doesn't solve all of the requirements you're discussing. >> Cells addresses scale in a region. My impression from the summit session >> and other discussions is that the scale issues addressed by cells are considered >> a priority, while the "global API" bits are not. > [joehuang] Agree cells is in the first class priority. > >>>> 1. Use cases >>>> a). Vodafone use case[4](OpenStack summit speech video from 9'02" >>>> to 12'30" ), establishing globally addressable tenants which result >>>> in efficient services deployment. >> Keystone has been working on federated identity. >> That part makes sense, and is already well under way. > [joehuang] The major challenge for VDF use case is cross OpenStack networking for tenants. The tenant's VM/Volume may be allocated in different data centers geographically, but virtual network (L2/L3/FW/VPN/LB) should be built for each tenant automatically and isolated between tenants. Keystone federation can help authorization automation, but the cross OpenStack network automation challenge is still there. 
> Using prosperity orchestration layer can solve the automation issue, but VDF don't like prosperity API in the north-bound, because no ecosystem is available. And other issues, for example, how to distribute image, also cannot be solved by Keystone federation. > >>>> b). Telefonica use case[5], create virtual DC( data center) cross >>>> multiple physical DCs with seamless experience. >> If we're talking about multiple DCs that are effectively local to each other >> with high bandwidth and low latency, that's one conversation. >> My impression is that you want to provide a single OpenStack API on top of >> globally distributed DCs. I honestly don't see that as a problem we should >> be trying to tackle. I'd rather continue to focus on making OpenStack work >> *really* well split into regions. >> I think some people are trying to use cells in a geographically distributed way, >> as well. I'm not sure that's a well understood or supported thing, though. >> Perhaps the folks working on the new version of cells can comment further. > [joehuang] 1) Splited region way cannot provide cross OpenStack networking automation for tenant. 2) exactly, the motivation for cascading is "single OpenStack API on top of globally distributed DCs". Of course, cascading can also be used for DCs close to each other with high bandwidth and low latency. 3) Folks comment from cells are welcome. > . Cells can handle a single API on top of globally distributed DCs. I have spoken with a group that is doing exactly that. But it requires that the API is a trusted part of the OpenStack deployments in those distributed DCs. > >>>> c). ETSI NFV use cases[6], especially use case #1, #2, #3, #5, #6, >>>> 8#. For NFV cloud, it's in nature the cloud will be distributed but >>>> inter-connected in many data centers. >> I'm afraid I don't understand this one. In many conversations about NFV, I haven't heard this before. > [joehuang] This is the ETSI requirement and use cases specification for NFV.
ETSI is the home of the Industry Specification Group for > NFV. In Figure 14 (virtualization of EPC) of this document, you can > see that the operator's cloud including many data centers to provide > connection service to end user by inter-connected VNFs. The > requirements listed in > (https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about > the requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW > etc) to run over cloud, eg. migrate the traditional telco. APP from > prosperity hardware to cloud. Not all NFV requirements have been > covered yet. Forgive me there are so many telco terms here. > >>>> 2.requirements >>>> a). The operator has multiple sites cloud; each site can use one or >>>> multiple vendor's OpenStack distributions. >> Is this a technical problem, or is a business problem of vendors not >> wanting to support a mixed environment that you're trying to work >> around with a technical solution? > [joehuang] Pls. refer to VDF use case, the multi-vendor policy has been stated very clearly: 1) Local relationships: Operating Companies also have long standing relationships to their own choice of vendors; 2) Multi-Vendor: Each site can use one or multiple vendors which leads to better use of local resources and capabilities. Technical solution must be provided for multi-vendor integration and verification, It's usually ETSI standard in the past for mobile network. But how to do that in multi-vendor's cloud infrastructure? Cascading provide a way to use OpenStack API as the integration interface. >
The cloud operators want the ecosystem friendly global >>> open API for the mutli-site cloud for global access. >> I guess the question is, do we see a "global API" as something we want >> to accomplish. What you're talking about is huge, and I'm not even sure >> how you would expect it to work in some cases (like networking). > [joehuang] Yes, the most challenge part is networking. In the PoC, L2 networking cross OpenStack is to leverage the L2 population mechanism.The L2proxy for DC1 in the cascading layer will detect the new VM1(in DC1)'s port is up, and then ML2 L2 population will be activated, the VM1's tunneling endpoint- host IP or L2GW IP in DC1, will be populated to L2proxy for DC2, and L2proxy for DC2 will create a external port in the DC2 Neutron with the VM1's tunneling endpoint- host IP or L2GW IP in DC1. The external port will be attached to the L2GW or only external port created, L2 population(if not L2GW used) inside DC2 can be activated to notify all VMs located in DC2 for the same L2 network. For L3 networking finished in the PoC is to use extra route over GRE to serve local VLAN/VxLAN networks located in different DCs. Of course, other L3 networking method can be developed, for example, through VPN service. There are 4 or 5 BPs talking about edge network gateway to connect OpenStack tenant network to outside network, all these technologies can be leveraged to do cross OpenStack networking for different scenario. To experience the cross OpenStack networking, please try PoC source code: https://github.com/stackforge/tricircle > >> In any case, to be as clear as possible, I'm not convinced this is something >> we should be working on. I'm going to need to see much more >> overwhelming support for the idea before helping to figure out any further steps. > [joehuang] If you or any other have any doubts, please feel free to ignite a discussion thread. 
For time difference reason, we (working in China) are not able to join most of IRC meeting, so mail-list is a good way for discussion. > > Russell Bryant > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > Best Regards > > Chaoyi Huang ( joehuang ) > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mikal at stillhq.com Thu Dec 11 19:57:18 2014 From: mikal at stillhq.com (Michael Still) Date: Fri, 12 Dec 2014 06:57:18 +1100 Subject: [openstack-dev] [nova] Kilo specs review day In-Reply-To: References: <5489B879.7030006@rackspace.com> Message-ID: The dashboard is really cool, although I had to fix the spelling error... Michael On Fri, Dec 12, 2014 at 6:26 AM, Joe Gordon wrote: > > > On Thu, Dec 11, 2014 at 7:30 AM, Andrew Laski > wrote: >> >> >> On 12/10/2014 04:41 PM, Michael Still wrote: >>> >>> Hi, >>> >>> at the design summit we said that we would not approve specifications >>> after the kilo-1 deadline, which is 18 December. Unfortunately, we've >>> had a lot of specifications proposed this cycle (166 to my count), and >>> haven't kept up with the review workload. >>> >>> Therefore, I propose that Friday this week be a specs review day. We >>> need to burn down the queue of specs needing review, as well as >>> abandoning those which aren't getting regular updates based on our >>> review comments. >>> >>> I'd appreciate nova-specs-core doing reviews on Friday, but it's always >>> super helpful when non-cores review as well. A +1 for a developer or >>> operator gives nova-specs-core a good signal of what might be ready to >>> approve, and that helps us optimize our review time.
>>> >>> For reference, the specs to review may be found at: >>> >>> >>> https://review.openstack.org/#/q/project:openstack/nova-specs+status:open,n,z >>> >>> Thanks heaps, >>> Michael >>> >> >> It will be nice to have a good push before we hit the deadline. >> >> I would like to remind priority owners to update their list of any >> outstanding specs at >> https://etherpad.openstack.org/p/kilo-nova-priorities-tracking so they can >> be targeted during the review day. > > > > In preparation, I put together a nova-specs dashboard: > > https://review.openstack.org/141137 > > https://review.openstack.org/#/dashboard/?foreach=project%3A%5Eopenstack%2Fnova-specs+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%252cjenkins+NOT+label%3ACode-Review%3E%3D-2%252cself+branch%3Amaster&title=Nova+Specs&&Your+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself&Needs+final+%2B2=label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Anova-specs-core+label%3ACode-Review%3C%3D-1%29+limit%3A100&Passed+Jenkins%2C+Positive+Nova-Core+Feedback=NOT+label%3ACode-Review%3E%3D2+%28reviewerin%3Anova-core+label%3ACode-Review%3E%3D1%29+NOT%28reviewerin%3Anova-core+label%3ACode-Review%3C%3D-1%29+limit%3A100&Passed+Jenkins%2C+No+Positive+Nova-Core+Feedback%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Anova-core+label%3ACode-Review%3E%3D1%29+limit%3A100&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+7+days%29=NOT+label%3ACode-Review%3C%3D2+age%3A7d&Some+negative+feedback%2C+might+still+be+worth+commenting=label%3ACode-Review%3D-1+NOT+label%3ACode-Review%3D-2+limit%3A100&Dead+Specs=label%3ACode-Review%3C%3D-2 > >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev 
mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Rackspace Australia From tqtran at us.ibm.com Thu Dec 11 20:22:42 2014 From: tqtran at us.ibm.com (Thai Q Tran) Date: Thu, 11 Dec 2014 13:22:42 -0700 Subject: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard In-Reply-To: References: , <547DD08A.7000402@redhat.com> Message-ID: An HTML attachment was scrubbed... URL: From vishvananda at gmail.com Thu Dec 11 20:47:47 2014 From: vishvananda at gmail.com (Vishvananda Ishaya) Date: Thu, 11 Dec 2014 12:47:47 -0800 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: <20141211104137.GD23831@redhat.com> References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> <20141211104137.GD23831@redhat.com> Message-ID: On Dec 11, 2014, at 2:41 AM, Daniel P. Berrange wrote: > On Thu, Dec 11, 2014 at 09:37:31AM +0800, henry hly wrote: >> On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells wrote: >>> On 10 December 2014 at 01:31, Daniel P. Berrange >>> wrote: >>>> >>>> >>>> So the problem of Nova review bandwidth is a constant problem across all >>>> areas of the code. We need to solve this problem for the team as a whole >>>> in a much broader fashion than just for people writing VIF drivers. The >>>> VIF drivers are really small pieces of code that should be straightforward >>>> to review & get merged in any release cycle in which they are proposed. >>>> I think we need to make sure that we focus our energy on doing this and >>>> not ignoring the problem by breaking stuff off out of tree. >>> >>> >>> The problem is that we effectively prevent running an out of tree Neutron >>> driver (which *is* perfectly legitimate) if it uses a VIF plugging mechanism >>> that isn't in Nova, as we can't use out of tree code and we won't accept in >>> code ones for out of tree drivers. >> >> The question is, do we really need such flexibility for so many nova vif types? >> >> I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is a good example, >> nova shouldn't know too many details about switch backend, it should >> only care about the VIF itself, how the VIF is plugged to switch >> belongs to Neutron half. >> >> However I'm not saying to move existing vif driver out, those open >> backend have been used widely. But from now on the tap and vhostuser >> mode should be encouraged: one common vif driver to many long-tail >> backend. > > Yes, I really think this is a key point. When we introduced the VIF type > mechanism we never intended for there to be so many different VIF types > created. There is a very small, finite number of possible ways to configure > the libvirt guest XML and it was intended that the VIF types pretty much > mirror that. This would have given us about 8 distinct VIF types maximum. > > I think the reason for the larger than expected number of VIF types, is > that the drivers are being written to require some arbitrary tools to > be invoked in the plug & unplug methods. It would really be better if > those could be accomplished in the Neutron code than the Nova code, via > a host agent run & provided by the Neutron mechanism. This would let > us have a very small number of VIF types and so avoid the entire problem > that this thread is bringing up. > > Failing that though, I could see a way to accomplish a similar thing > without a Neutron launched agent. If one of the VIF type binding > parameters were the name of a script, we could run that script on > plug & unplug. So we'd have a finite number of VIF types, and each > new Neutron mechanism would merely have to provide a script to invoke > > eg consider the existing midonet & iovisor VIF types as an example. > Both of them use the libvirt "ethernet" config, but have different > things running in their plug methods. If we had a mechanism for > associating a "plug script" with a vif type, we could use a single > VIF type for both. > > eg iovisor port binding info would contain > >
>> >> The question is, do we really need such flexibility for so many nova vif types? >> >> I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is good example, >> nova shouldn't known too much details about switch backend, it should >> only care about the VIF itself, how the VIF is plugged to switch >> belongs to Neutron half. >> >> However I'm not saying to move existing vif driver out, those open >> backend have been used widely. But from now on the tap and vhostuser >> mode should be encouraged: one common vif driver to many long-tail >> backend. > > Yes, I really think this is a key point. When we introduced the VIF type > mechanism we never intended for there to be soo many different VIF types > created. There is a very small, finite number of possible ways to configure > the libvirt guest XML and it was intended that the VIF types pretty much > mirror that. This would have given us about 8 distinct VIF type maximum. > > I think the reason for the larger than expected number of VIF types, is > that the drivers are being written to require some arbitrary tools to > be invoked in the plug & unplug methods. It would really be better if > those could be accomplished in the Neutron code than the Nova code, via > a host agent run & provided by the Neutron mechanism. This would let > us have a very small number of VIF types and so avoid the entire problem > that this thread is bringing up. > > Failing that though, I could see a way to accomplish a similar thing > without a Neutron launched agent. If one of the VIF type binding > parameters were the name of a script, we could run that script on > plug & unplug. So we'd have a finite number of VIF types, and each > new Neutron mechanism would merely have to provide a script to invoke > > eg consider the existing midonet & iovisor VIF types as an example. > Both of them use the libvirt "ethernet" config, but have different > things running in their plug methods. 
If we had a mechanism for > associating a "plug script" with a vif type, we could use a single > VIF type for both. > > e.g. the iovisor port binding info would contain
>
>     vif_type=ethernet
>     vif_plug_script=/usr/bin/neutron-iovisor-vif-plug
>
> while midonet would contain
>
>     vif_type=ethernet
>     vif_plug_script=/usr/bin/neutron-midonet-vif-plug

+1 This is a great suggestion.

Vish

> > > And so you see implementing a new Neutron mechanism in this way would > not require *any* changes in Nova whatsoever. The work would be entirely > self-contained within the scope of Neutron. It is simply a packaging > task to get the vif script installed on the compute hosts, so that Nova > can execute it. > > This is essentially providing a flexible VIF plugin system for Nova, > without having to have it plug directly into the Nova codebase with > the API & RPC stability constraints that implies. > > Regards, > Daniel > -- > |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| > |: http://libvirt.org -o- http://virt-manager.org :| > |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| > |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed...
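The plug-script idea above can be sketched in a few lines. This is illustrative only: vif_plug_script is the hypothetical binding key from Daniel's proposal (not an existing Nova or Neutron field), the helper name is made up, and a real implementation would run the script through rootwrap rather than invoking it directly:

```python
import subprocess

def plug_vif(binding, instance_id, action="plug"):
    """Dispatch plug/unplug to the script named in the port binding.

    Nova only needs to understand the generic 'ethernet' vif_type; the
    backend-specific behaviour lives in the Neutron-provided script.
    """
    if binding.get("vif_type") != "ethernet":
        raise ValueError("unsupported vif_type: %r" % binding.get("vif_type"))
    # e.g. /usr/bin/neutron-midonet-vif-plug plug instance-0001
    result = subprocess.run(
        [binding["vif_plug_script"], action, instance_id],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

# /bin/echo stands in for a real plug script here:
binding = {"vif_type": "ethernet", "vif_plug_script": "/bin/echo"}
print(plug_vif(binding, "instance-0001"))  # -> plug instance-0001
```

Packaging-wise this matches Daniel's point: adding a new mechanism driver would then mean shipping a new script to the compute hosts, with no Nova code change.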
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vishvananda at gmail.com Thu Dec 11 21:01:27 2014 From: vishvananda at gmail.com (Vishvananda Ishaya) Date: Thu, 11 Dec 2014 13:01:27 -0800 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <5489BFBB.50802@cisco.com> References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> Message-ID: On Dec 11, 2014, at 8:00 AM, Henry Gessau wrote: > On Thu, Dec 11, 2014, Mark McClain wrote: >> >>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >> > wrote: >>> >>> I'm generally in favor of making name attributes opaque, utf-8 strings that >>> are entirely user-defined and have no constraints on them. I consider the >>> name to be just a tag that the user places on some resource. It is the >>> resource's ID that is unique. >>> >>> I do realize that Nova takes a different approach to *some* resources, >>> including the security group name. >>> >>> End of the day, it's probably just a personal preference whether names >>> should be unique to a tenant/user or not. >>> >>> Maru had asked me my opinion on whether names should be unique and I >>> answered my personal opinion that no, they should not be, and if Neutron >>> needed to ensure that there was one and only one default security group for >>> a tenant, that a way to accomplish such a thing in a race-free way, without >>> use of SELECT FOR UPDATE, was to use the approach I put into the pastebin on >>> the review above. >>> >> >> I agree with Jay. We should not care about how a user names the resource. >> There other ways to prevent this race and Jay?s suggestion is a good one. > > However we should open a bug against Horizon because the user experience there > is terrible with duplicate security group names. 
The reason security group names are unique is that the ec2 api supports source rule specifications by tenant_id (user_id in amazon) and name, so not enforcing uniqueness means that invocation in the ec2 api will either fail or be non-deterministic in some way. Vish -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vishvananda at gmail.com Thu Dec 11 21:02:46 2014 From: vishvananda at gmail.com (Vishvananda Ishaya) Date: Thu, 11 Dec 2014 13:02:46 -0800 Subject: [openstack-dev] Reason for mem/vcpu ratio in default flavors In-Reply-To: <5489C7E8.2010103@redhat.com> References: <5489C7E8.2010103@redhat.com> Message-ID: <65891C8D-C4E6-4463-AAE7-17CEFFF75AF5@gmail.com> Probably just a historical artifact of values that we thought were reasonable for our machines at NASA. Vish On Dec 11, 2014, at 8:35 AM, David Kranz wrote: > Perhaps this is a historical question, but I was wondering how the default OpenStack flavor size ratio of 2/1 was determined? According to http://aws.amazon.com/ec2/instance-types/, ec2 defines the flavors for General Purpose (M3) at about 3.7/1, with Compute Intensive (C3) at about 1.9/1 and Memory Intensive (R3) at about 7.6/1. > > -David > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jaypipes at gmail.com Thu Dec 11 21:04:05 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 11 Dec 2014 16:04:05 -0500 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> Message-ID: <548A06C5.2060900@gmail.com> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: > > On Dec 11, 2014, at 8:00 AM, Henry Gessau wrote: > >> On Thu, Dec 11, 2014, Mark McClain wrote: >>> >>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >>> > wrote: >>>> >>>> I'm generally in favor of making name attributes opaque, utf-8 strings that >>>> are entirely user-defined and have no constraints on them. I consider the >>>> name to be just a tag that the user places on some resource. It is the >>>> resource's ID that is unique. >>>> >>>> I do realize that Nova takes a different approach to *some* resources, >>>> including the security group name. >>>> >>>> End of the day, it's probably just a personal preference whether names >>>> should be unique to a tenant/user or not. >>>> >>>> Maru had asked me my opinion on whether names should be unique and I >>>> answered my personal opinion that no, they should not be, and if Neutron >>>> needed to ensure that there was one and only one default security group for >>>> a tenant, that a way to accomplish such a thing in a race-free way, without >>>> use of SELECT FOR UPDATE, was to use the approach I put into the pastebin on >>>> the review above. >>>> >>> >>> I agree with Jay. We should not care about how a user names the resource. >>> There other ways to prevent this race and Jay?s suggestion is a good one. >> >> However we should open a bug against Horizon because the user experience there >> is terrible with duplicate security group names. 
> > The reason security group names are unique is that the ec2 api supports source > rule specifications by tenant_id (user_id in amazon) and name, so not enforcing > uniqueness means that invocation in the ec2 api will either fail or be > non-deterministic in some way. So we should couple our API evolution to EC2 API then? -jay From harlowja at outlook.com Thu Dec 11 21:07:42 2014 From: harlowja at outlook.com (Joshua Harlow) Date: Thu, 11 Dec 2014 13:07:42 -0800 Subject: [openstack-dev] [oslo] deprecation 'pattern' library?? In-Reply-To: References: Message-ID: Ya, I too was surprised by the general lack of this kind of library on pypi. One would think, u know, that people deprecate stuff, but maybe this isn't the norm for python... Why deprecate when u can just make v2.0 ;) -Josh Davanum Srinivas wrote: > Surprisingly "deprecator" is still available on pypi > > On Thu, Dec 11, 2014 at 2:04 AM, Julien Danjou wrote: >> On Wed, Dec 10 2014, Joshua Harlow wrote: >> >> >> [...] >> >>> Or in general any other comments/ideas about providing such a deprecation >>> pattern library?
>> +1 >> >>> * debtcollector >> made me think of "loanshark" :)" >> >> -- >> Julien Danjou >> -- Free Software hacker >> -- http://julien.danjou.info >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > From vishvananda at gmail.com Thu Dec 11 21:07:52 2014 From: vishvananda at gmail.com (Vishvananda Ishaya) Date: Thu, 11 Dec 2014 13:07:52 -0800 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <548A06C5.2060900@gmail.com> References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> Message-ID: <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> On Dec 11, 2014, at 1:04 PM, Jay Pipes wrote: > On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: >> >> On Dec 11, 2014, at 8:00 AM, Henry Gessau wrote: >> >>> On Thu, Dec 11, 2014, Mark McClain wrote: >>>> >>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >>>> > wrote: >>>>> >>>>> I'm generally in favor of making name attributes opaque, utf-8 strings that >>>>> are entirely user-defined and have no constraints on them. I consider the >>>>> name to be just a tag that the user places on some resource. It is the >>>>> resource's ID that is unique. >>>>> >>>>> I do realize that Nova takes a different approach to *some* resources, >>>>> including the security group name. >>>>> >>>>> End of the day, it's probably just a personal preference whether names >>>>> should be unique to a tenant/user or not. 
>>>>> >>>>> Maru had asked me my opinion on whether names should be unique and I >>>>> answered my personal opinion that no, they should not be, and if Neutron >>>>> needed to ensure that there was one and only one default security group for >>>>> a tenant, that a way to accomplish such a thing in a race-free way, without >>>>> use of SELECT FOR UPDATE, was to use the approach I put into the pastebin on >>>>> the review above. >>>>> >>>> >>>> I agree with Jay. We should not care about how a user names the resource. >>>> There other ways to prevent this race and Jay?s suggestion is a good one. >>> >>> However we should open a bug against Horizon because the user experience there >>> is terrible with duplicate security group names. >> >> The reason security group names are unique is that the ec2 api supports source >> rule specifications by tenant_id (user_id in amazon) and name, so not enforcing >> uniqueness means that invocation in the ec2 api will either fail or be >> non-deterministic in some way. > > So we should couple our API evolution to EC2 API then? > > -jay No I was just pointing out the historical reason for uniqueness, and hopefully encouraging someone to find the best behavior for the ec2 api if we are going to keep the incompatibility there. Also I personally feel the ux is better with unique names, but it is only a slight preference. Vish -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jaypipes at gmail.com Thu Dec 11 21:16:34 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 11 Dec 2014 16:16:34 -0500 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> Message-ID: <548A09B2.3040909@gmail.com> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: > On Dec 11, 2014, at 1:04 PM, Jay Pipes wrote: >> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: >>> >>> On Dec 11, 2014, at 8:00 AM, Henry Gessau wrote: >>> >>>> On Thu, Dec 11, 2014, Mark McClain wrote: >>>>> >>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >>>>> > wrote: >>>>>> >>>>>> I'm generally in favor of making name attributes opaque, utf-8 strings that >>>>>> are entirely user-defined and have no constraints on them. I consider the >>>>>> name to be just a tag that the user places on some resource. It is the >>>>>> resource's ID that is unique. >>>>>> >>>>>> I do realize that Nova takes a different approach to *some* resources, >>>>>> including the security group name. >>>>>> >>>>>> End of the day, it's probably just a personal preference whether names >>>>>> should be unique to a tenant/user or not. >>>>>> >>>>>> Maru had asked me my opinion on whether names should be unique and I >>>>>> answered my personal opinion that no, they should not be, and if Neutron >>>>>> needed to ensure that there was one and only one default security group for >>>>>> a tenant, that a way to accomplish such a thing in a race-free way, without >>>>>> use of SELECT FOR UPDATE, was to use the approach I put into the pastebin on >>>>>> the review above. >>>>>> >>>>> >>>>> I agree with Jay. We should not care about how a user names the resource. 
>>>>> There are other ways to prevent this race and Jay's suggestion is a good one. >>>> >>>> However we should open a bug against Horizon because the user experience there >>>> is terrible with duplicate security group names. >>> >>> The reason security group names are unique is that the ec2 api >>> supports source >>> rule specifications by tenant_id (user_id in amazon) and name, so >>> not enforcing >>> uniqueness means that invocation in the ec2 api will either fail or be >>> non-deterministic in some way. >> >> So we should couple our API evolution to EC2 API then? >> >> -jay > > No I was just pointing out the historical reason for uniqueness, and hopefully > encouraging someone to find the best behavior for the ec2 api if we are going > to keep the incompatibility there. Also I personally feel the ux is better > with unique names, but it is only a slight preference. Sorry for snapping, you made a fair point. -jay From sean at dague.net Thu Dec 11 22:27:00 2014 From: sean at dague.net (Sean Dague) Date: Thu, 11 Dec 2014 17:27:00 -0500 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <548A09B2.3040909@gmail.com> References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> Message-ID: <548A1A34.40105@dague.net> On 12/11/2014 04:16 PM, Jay Pipes wrote: > On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: >> On Dec 11, 2014, at 1:04 PM, Jay Pipes wrote: >>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: >>>> >>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau wrote: >>>> >>>>> On Thu, Dec 11, 2014, Mark McClain wrote: >>>>>> >>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >>>>>> > wrote: >>>>>>> >>>>>>> I'm generally in favor of making name attributes opaque, utf-8 >>>>>>> strings that >>>>>>> are entirely user-defined and have no constraints on them.
I >>>>>>> consider the >>>>>>> name to be just a tag that the user places on some resource. It >>>>>>> is the >>>>>>> resource's ID that is unique. >>>>>>> >>>>>>> I do realize that Nova takes a different approach to *some* >>>>>>> resources, >>>>>>> including the security group name. >>>>>>> >>>>>>> End of the day, it's probably just a personal preference whether >>>>>>> names >>>>>>> should be unique to a tenant/user or not. >>>>>>> >>>>>>> Maru had asked me my opinion on whether names should be unique and I >>>>>>> answered my personal opinion that no, they should not be, and if >>>>>>> Neutron >>>>>>> needed to ensure that there was one and only one default security >>>>>>> group for >>>>>>> a tenant, that a way to accomplish such a thing in a race-free >>>>>>> way, without >>>>>>> use of SELECT FOR UPDATE, was to use the approach I put into the >>>>>>> pastebin on >>>>>>> the review above. >>>>>>> >>>>>> >>>>>> I agree with Jay. We should not care about how a user names the >>>>>> resource. >>>>>> There other ways to prevent this race and Jay?s suggestion is a >>>>>> good one. >>>>> >>>>> However we should open a bug against Horizon because the user >>>>> experience there >>>>> is terrible with duplicate security group names. >>>> >>>> The reason security group names are unique is that the ec2 api >>>> supports source >>>> rule specifications by tenant_id (user_id in amazon) and name, so >>>> not enforcing >>>> uniqueness means that invocation in the ec2 api will either fail or be >>>> non-deterministic in some way. >>> >>> So we should couple our API evolution to EC2 API then? >>> >>> -jay >> >> No I was just pointing out the historical reason for uniqueness, and >> hopefully >> encouraging someone to find the best behavior for the ec2 api if we >> are going >> to keep the incompatibility there. Also I personally feel the ux is >> better >> with unique names, but it is only a slight preference. > > Sorry for snapping, you made a fair point. 
Yeh, honestly, I agree with Vish. I do feel that the UX of that constraint is useful. Otherwise you get into having to show people UUIDs in a lot more places. While those are good for consistency, they are kind of terrible to show to people. -Sean -- Sean Dague http://dague.net From mgagne at iweb.com Thu Dec 11 23:05:01 2014 From: mgagne at iweb.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=) Date: Thu, 11 Dec 2014 18:05:01 -0500 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <54899F92.2060900@gmail.com> References: <54899F92.2060900@gmail.com> Message-ID: <548A231D.6070302@iweb.com> On 2014-12-11 8:43 AM, Jay Pipes wrote: > > I'm generally in favor of making name attributes opaque, utf-8 strings > that are entirely user-defined and have no constraints on them. I > consider the name to be just a tag that the user places on some > resource. It is the resource's ID that is unique. > > I do realize that Nova takes a different approach to *some* resources, > including the security group name. > > End of the day, it's probably just a personal preference whether names > should be unique to a tenant/user or not. > We recently had an issue in production where a user had 2 "default" security groups (for reasons we have yet to identify). For the sake of completeness, we are running Nova+Neutron Icehouse. When no security group is provided, Nova will default to the "default" security group. However, due to the fact that 2 security groups had the same name, nova-compute got confused, put the instance in ERROR state and logged this traceback [1]: NoUniqueMatch: Multiple security groups found matching 'default'. Use an ID to be more specific. I do understand that people might wish to create security groups with the same name. However I think the following things are very wrong: - the instance request should be blocked before it ends up on a compute node with nova-compute.
It shouldn't be the job of nova-compute to find out issues about duplicated names. It should be the job of nova-api. Don't waste your time scheduling and spawning an instance that will never spawn with success. - From an end user perspective, this means "nova boot" returns no error and it's only later that the user is informed of the confusion with security group names. - Why does it have to crash with a traceback? IMO, traceback means "we didn't think about this use case, here is more information on how to find the source". As an operator, I don't care about the traceback if it's a known limitation of Nova/Neutron. Don't pollute my logs with "normal exceptions". (Log rationalization anyone?) Whatever comes out of this discussion about security group name uniqueness, I would like those points to be addressed as I feel those aren't great users/operators experience. [1] http://paste.openstack.org/show/149618/ -- Mathieu From joe.gordon0 at gmail.com Fri Dec 12 00:17:20 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Thu, 11 Dec 2014 16:17:20 -0800 Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> Message-ID: On Thu, Dec 11, 2014 at 1:02 AM, joehuang wrote: > Hello, Russell, > > Many thanks for your reply. 
See inline comments. > > -----Original Message----- > From: Russell Bryant [mailto:rbryant at redhat.com] > Sent: Thursday, December 11, 2014 5:22 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit > recap and move forward > > >> On Fri, Dec 5, 2014 at 8:23 AM, joehuang wrote: > >>> Dear all & TC & PTL, > >>> > >>> In the 40 minutes cross-project summit session "Approaches for > >>> scaling out"[1], almost 100 peoples attended the meeting, and the > >>> conclusion is that cells can not cover the use cases and > >>> requirements which the OpenStack cascading solution[2] aim to > >>> address, the background including use cases and requirements is also > >>> described in the mail. > > >I must admit that this was not the reaction I came away with the > discussion with. > >There was a lot of confusion, and as we started looking closer, many (or > perhaps most) > >people speaking up in the room did not agree that the requirements being > stated are > >things we want to try to satisfy. > > [joehuang] Could you pls. confirm your opinion: 1) cells can not cover the > use cases and requirements which the OpenStack cascading solution aim to > address. 2) Need further discussion whether to satisfy the use cases and > requirements. > > On 12/05/2014 06:47 PM, joehuang wrote: > >>> Hello, Davanum, > >>> > >>> Thanks for your reply. > >>> > >>> Cells can't meet the demand for the use cases and requirements > described in the mail. > > >You're right that cells doesn't solve all of the requirements you're > discussing. > >Cells addresses scale in a region. My impression from the summit session > > and other discussions is that the scale issues addressed by cells are > considered > > a priority, while the "global API" bits are not. > > [joehuang] Agree cells is in the first class priority. > > >>> 1. Use cases > >>> a). 
Vodafone use case[4](OpenStack summit speech video from 9'02" > >>> to 12'30" ), establishing globally addressable tenants which result > >>> in efficient services deployment. > > > Keystone has been working on federated identity. >That part makes sense, and is already well under way. > > [joehuang] The major challenge for the VDF use case is cross OpenStack > networking for tenants. The tenant's VM/Volume may be allocated in > different data centers geographically, but the virtual network > (L2/L3/FW/VPN/LB) should be built for each tenant automatically and > isolated between tenants. Keystone federation can help authorization > automation, but the cross OpenStack network automation challenge is still > there. > Using a proprietary orchestration layer can solve the automation issue, but > VDF doesn't like a proprietary API in the north-bound, because no ecosystem is > available. And other issues, for example, how to distribute images, also > cannot be solved by Keystone federation. > > >>> b). Telefonica use case[5], create virtual DC( data center) cross > >>> multiple physical DCs with seamless experience. > > >If we're talking about multiple DCs that are effectively local to each > other > >with high bandwidth and low latency, that's one conversation. > >My impression is that you want to provide a single OpenStack API on top of > >globally distributed DCs. I honestly don't see that as a problem we > should > >be trying to tackle. I'd rather continue to focus on making OpenStack > work > >*really* well split into regions. > > I think some people are trying to use cells in a geographically > distributed way, > > as well. I'm not sure that's a well understood or supported thing, > though. > > Perhaps the folks working on the new version of cells can comment > further. > > [joehuang] 1) The split-region way cannot provide cross OpenStack networking > automation for tenants. 2) Exactly, the motivation for cascading is "single > OpenStack API on top of globally distributed DCs".
Of course, cascading can > also be used for DCs close to each other with high bandwidth and low > latency. 3) Comments from the cells folks are welcome. > > >>> c). ETSI NFV use cases[6], especially use case #1, #2, #3, #5, #6, > >>> 8#. For NFV cloud, the cloud will by nature be distributed but > >>> inter-connected in many data centers. > > >I'm afraid I don't understand this one. In many conversations about NFV, > I haven't heard this before. > > [joehuang] This is the ETSI requirement and use cases specification for > NFV. ETSI is the home of the Industry Specification Group for NFV. In > Figure 14 (virtualization of EPC) of this document, you can see that the > operator's cloud includes many data centers to provide connection service > to end users by inter-connected VNFs. The requirements listed in ( > https://wiki.openstack.org/wiki/TelcoWorkingGroup) are mainly about the > requirements from specific VNFs (like IMS, SBC, MME, HSS, S/P GW etc.) to run > over cloud, e.g. migrating traditional telco apps from proprietary hardware > to cloud. Not all NFV requirements have been covered yet. Forgive me there > are so many telco terms here.
But how to do that in a multi-vendor cloud > infrastructure? Cascading provides a way to use the OpenStack API as the > integration interface. How would something like flavors work across multiple vendors? The OpenStack API doesn't have any hard coded names and sizes for flavors. So a flavor such as m1.tiny may actually be very different vendor to vendor. > > >> b). Each site with its own requirements and upgrade schedule while >> maintaining standard OpenStack API c). The multi-site cloud must >> provide unified resource management with a global open API exposed, for >> example create virtual DC cross multiple physical DCs with seamless >> experience. > >> Although a proprietary orchestration layer could be developed for the >> multi-site cloud, it would be a proprietary API in the north bound >> interface. The cloud operators want an ecosystem-friendly global >> open API for the multi-site cloud for global access. > >I guess the question is, do we see a "global API" as something we want >to accomplish. What you're talking about is huge, and I'm not even sure >how you would expect it to work in some cases (like networking). > > [joehuang] Yes, the most challenging part is networking. In the PoC, L2 > networking cross OpenStack leverages the L2 population mechanism. The > L2proxy for DC1 in the cascading layer will detect that the new VM1(in DC1)'s > port is up, and then ML2 L2 population will be activated; the VM1's > tunneling endpoint (host IP or L2GW IP in DC1) will be populated to the L2proxy > for DC2, and the L2proxy for DC2 will create an external port in the DC2 Neutron > with VM1's tunneling endpoint (host IP or L2GW IP in DC1). The external > port will be attached to the L2GW, or with only the external port created, L2 > population (if no L2GW is used) inside DC2 can be activated to notify all VMs > located in DC2 on the same L2 network.
For L3 networking finished in the > PoC is to use extra route over GRE to serve local VLAN/VxLAN networks > located in different DCs. Of course, other L3 networking method can be > developed, for example, through VPN service. There are 4 or 5 BPs talking > about edge network gateway to connect OpenStack tenant network to outside > network, all these technologies can be leveraged to do cross OpenStack > networking for different scenario. To experience the cross OpenStack > networking, please try PoC source code: > https://github.com/stackforge/tricircle > >In any case, to be as clear as possible, I'm not convinced this is > something > >we should be working on. I'm going to need to see much more > >overwhelming support for the idea before helping to figure out any > further steps. > > [joehuang] If you or any other have any doubts, please feel free to ignite > a discussion thread. For time difference reason, we (working in China) are > not able to join most of IRC meeting, so mail-list is a good way for > discussion. > > Russell Bryant > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > Best Regards > > Chaoyi Huang ( joehuang ) > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From joe.gordon0 at gmail.com Fri Dec 12 00:20:50 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Thu, 11 Dec 2014 16:20:50 -0800 Subject: [openstack-dev] [NFV][Telco] pxe-boot In-Reply-To: <548869F5.1040801@dektech.com.au> References: <20141125223355.Horde.8UWcgskXSBVvU-RJrCgbtw7@mail.dektech.com.au> <693765041.19659732.1417044729676.JavaMail.zimbra@redhat.com> <547ED500.1070008@dektech.com.au> <548869F5.1040801@dektech.com.au> Message-ID: On Wed, Dec 10, 2014 at 7:42 AM, Pasquale Porreca < pasquale.porreca at dektech.com.au> wrote: > Well, one of the main reason to choose an open source product is to avoid > vendor lock-in. I think it is not > advisable to embed in the software running in an instance a call to > OpenStack specific services. > I'm sorry I don't follow the logic here, can you elaborate. > > On 12/10/14 00:20, Joe Gordon wrote: > > > On Wed, Dec 3, 2014 at 1:16 AM, Pasquale Porreca < > pasquale.porreca at dektech.com.au> wrote: > >> The use case we were thinking about is a Network Function (e.g. IMS >> Nodes) implementation in which the high availability is based on OpenSAF. >> In this scenario there is an Active/Standby cluster of 2 System Controllers >> (SC) plus several Payloads (PL) that boot from network, controlled by the >> SC. The logic of which service to deploy on each payload is inside the SC. >> >> In OpenStack both SCs and PLs will be instances running in the cloud, >> anyway the PLs should still boot from network under the control of the SC. >> In fact to use Glance to store the image for the PLs and keep the control >> of the PLs in the SC, the SC should trigger the boot of the PLs with >> requests to Nova/Glance, but an application running inside an instance >> should not directly interact with a cloud infrastructure service like >> Glance or Nova. >> > > Why not? This is a fairly common practice. 
> > > -- > Pasquale Porreca > > DEK Technologies > Via dei Castelli Romani, 22 > 00040 Pomezia (Roma) > > Mobile +39 3394823805 > Skype paskporr > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clint at fewbar.com Fri Dec 12 00:23:53 2014 From: clint at fewbar.com (Clint Byrum) Date: Thu, 11 Dec 2014 16:23:53 -0800 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <54899F92.2060900@gmail.com> References: <54899F92.2060900@gmail.com> Message-ID: <1418342649-sup-7892@fewbar.com> Excerpts from Jay Pipes's message of 2014-12-11 05:43:46 -0800: > On 12/11/2014 07:22 AM, Anna Kamyshnikova wrote: > > Hello everyone! > > > > In neutron there is a rather old bug [1] about adding uniqueness for > > security group name and tenant id. I found this idea reasonable and > > started working on fix for this bug [2]. I think it is good to add a > > uniqueconstraint because: > > > > 1) In nova there is such constraint for security groups > > https://github.com/openstack/nova/blob/stable/juno/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py#L1155-L1157. > > So I think that it is rather disruptive that it is impossible to create > > security group with the same name in nova, but possible in neutron. > > 2) Users get confused having security groups with the same name. > > > > In comment for proposed change Assaf Muller and Maru Newby object for > > such solution and suggested another option, so I think we need more eyes > > on this change. > > > > I would like to ask you to share your thoughts on this topic. 
> > [1] - https://bugs.launchpad.net/neutron/+bug/1194579
> > [2] - https://review.openstack.org/135006

> I'm generally in favor of making name attributes opaque, utf-8 strings
> that are entirely user-defined and have no constraints on them. I
> consider the name to be just a tag that the user places on some
> resource. It is the resource's ID that is unique.
>
> I do realize that Nova takes a different approach to *some* resources,
> including the security group name.
>
> End of the day, it's probably just a personal preference whether names
> should be unique to a tenant/user or not.

The problem with this approach is that it requires the user to have an external mechanism to achieve idempotency. By allowing an opaque string that the user submits to you to be guaranteed to be unique, you allow the user to write "dumber" code around creation in an unreliable fashion. So instead of an external mechanism, the user can just write:

    while True:
        try:
            item = clientlib.find(name='foo')[0]
            break
        except NotFound:
            try:
                item = clientlib.create(name='foo')
                break
            except UniqueConflict:
                item = clientlib.find(name='foo')[0]
                break

You can keep retrying forever because you know only one thing with that name will ever exist. Without unique names, you have to write weird stuff like this to do a retry:

    while len(clientlib.find(name='foo')) < 1:
        try:
            item = clientlib.create(name='foo')
        except Error:
            pass  # create failed; loop and try again

    list = clientlib.searchfor(name='foo')
    for found_item in list:
        if found_item.id != item.id:
            clientlib.delete(found_item.id)

Name can certainly remain not-unique and free-form, but don't discount the value of a unique value that the user specifies.
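Clint's fragments above use a hypothetical client library. Here is a self-contained sketch of the first pattern - everything in it (`FakeClient`, `NotFound`, `UniqueConflict`) is a stand-in, not a real OpenStack client - showing why a server-side unique constraint on the name makes the naive retry loop idempotent even when two callers race:

```python
class NotFound(Exception):
    pass

class UniqueConflict(Exception):
    pass

class FakeClient:
    """Stand-in for a service that enforces name uniqueness per tenant."""
    def __init__(self):
        self._by_name = {}   # name -> id
        self._next_id = 0

    def find(self, name):
        if name not in self._by_name:
            raise NotFound(name)
        return [self._by_name[name]]

    def create(self, name):
        if name in self._by_name:
            raise UniqueConflict(name)
        self._next_id += 1
        self._by_name[name] = self._next_id
        return self._next_id

def ensure(client, name):
    """Idempotent create: safe to repeat and safe under races, because
    at most one item with this name can ever exist."""
    while True:
        try:
            return client.find(name)[0]
        except NotFound:
            try:
                return client.create(name)
            except UniqueConflict:
                continue  # someone else won the race; loop and find theirs

client = FakeClient()
assert ensure(client, 'foo') == ensure(client, 'foo')  # same item both times
```

Without the unique constraint there is no simple equivalent of `ensure()`: the caller has to search for duplicates after the fact and delete the losers, which is exactly the "weird stuff" in the second fragment.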
From rochelle.grober at huawei.com Fri Dec 12 00:32:26 2014 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Fri, 12 Dec 2014 00:32:26 +0000 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <548A231D.6070302@iweb.com> References: <54899F92.2060900@gmail.com> <548A231D.6070302@iweb.com> Message-ID:

First, I agree that it's much friendlier to have unique security group names and not have to use UUIDs, since when there is a need for more than a "default", the Tenant admin will want to be able to easily track info related to it; plus in the GUI, if it allows a new one to be created, it should disallow "default", but should allow modification of that SG. Also, the GUI could easily suggest adding a number or letter to the end of the new name if the one suggested by the user is already in use. So, GUI, logs, and policy issues all rolled into this discussion.

Now to my cause.... Log rationalization! YES! So, I would classify this as a bug in the logging component of Nova. As Mathieu states, this is a known condition, so this should be an ERROR or perhaps WARN that includes which SG name is a duplicate (the NoUniqueMatch statement does this, identifying 'default'), and the "Use an ID to be more specific" as a helpful pointer. If a bug has not been filed yet, could you file one, with the pointer to the paste file and tag it "log" or "log impact"? And I'd love it if you put me on the list of people who should be informed (rockyg). Thanks for considering the enduser(s) impact of non-unique names. --Rocky

-----Original Message----- From: Mathieu Gagné
[mailto:mgagne at iweb.com] Sent: Thursday, December 11, 2014 3:05 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

On 2014-12-11 8:43 AM, Jay Pipes wrote:
> I'm generally in favor of making name attributes opaque, utf-8 strings
> that are entirely user-defined and have no constraints on them. I
> consider the name to be just a tag that the user places on some
> resource. It is the resource's ID that is unique.
>
> I do realize that Nova takes a different approach to *some* resources,
> including the security group name.
>
> End of the day, it's probably just a personal preference whether names
> should be unique to a tenant/user or not.

We recently had an issue in production where a user had 2 "default" security groups (for reasons we have yet to identify). For the sake of completeness, we are running Nova+Neutron Icehouse. When no security group is provided, Nova will default to the "default" security group. However, due to the fact that 2 security groups had the same name, nova-compute got confused, put the instance in ERROR state and logged this traceback [1]:

    NoUniqueMatch: Multiple security groups found matching 'default'. Use an ID to be more specific.

I do understand that people might wish to create security groups with the same name. However I think the following things are very wrong:

- the instance request should be blocked before it ends up on a compute node with nova-compute. It shouldn't be the job of nova-compute to find out issues about duplicated names. It should be the job of nova-api. Don't waste your time scheduling and spawning an instance that will never spawn with success.
- From an end user perspective, this means "nova boot" returns no error and it's only later that the user is informed of the confusion with security group names.
- Why does it have to crash with a traceback?
IMO, traceback means "we didn't think about this use case, here is more information on how to find the source". As an operator, I don't care about the traceback if it's a known limitation of Nova/Neutron. Don't pollute my logs with "normal exceptions". (Log rationalization anyone?) Whatever comes out of this discussion about security group name uniqueness, I would like those points to be addressed as I feel those aren't great users/operators experience. [1] http://paste.openstack.org/show/149618/ -- Mathieu _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Fri Dec 12 00:59:53 2014 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 11 Dec 2014 19:59:53 -0500 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <5489363B.2060008@hp.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <5489363B.2060008@hp.com> Message-ID: <548A3E09.2040408@redhat.com> On 11/12/14 01:14, Anant Patil wrote: > On 04-Dec-14 10:49, Zane Bitter wrote: >> On 01/12/14 02:02, Anant Patil wrote: >>> On GitHub:https://github.com/anantpatil/heat-convergence-poc >> >> I'm trying to review this code at the moment, and finding some stuff I >> don't understand: >> >> https://github.com/anantpatil/heat-convergence-poc/blob/master/heat/engine/stack.py#L911-L916 >> >> This appears to loop through all of the resources *prior* to kicking off >> any actual updates to check if the resource will change. This is >> impossible to do in general, since a resource may obtain a property >> value from an attribute of another resource and there is no way to know >> whether an update to said other resource would cause a change in the >> attribute value. >> >> In addition, no attempt to catch UpdateReplace is made. 
Although that >> looks like a simple fix, I'm now worried about the level to which this >> code has been tested. >> > We were working on new branch and as we discussed on Skype, we have > handled all these cases. Please have a look at our current branch: > https://github.com/anantpatil/heat-convergence-poc/tree/graph-version > > When a new resource is taken for convergence, its children are loaded > and the resource definition is re-parsed. The frozen resource definition > will have all the "get_attr" resolved. > >> >> I'm also trying to wrap my head around how resources are cleaned up in >> dependency order. If I understand correctly, you store in the >> ResourceGraph table the dependencies between various resource names in >> the current template (presumably there could also be some left around >> from previous templates too?). For each resource name there may be a >> number of rows in the Resource table, each with an incrementing version. >> As far as I can tell though, there's nowhere that the dependency graph >> for _previous_ templates is persisted? So if the dependency order >> changes in the template we have no way of knowing the correct order to >> clean up in any more? (There's not even a mechanism to associate a >> resource version with a particular template, which might be one avenue >> by which to recover the dependencies.) >> >> I think this is an important case we need to be able to handle, so I >> added a scenario to my test framework to exercise it and discovered that >> my implementation was also buggy. Here's the fix: >> https://github.com/zaneb/heat-convergence-prototype/commit/786f367210ca0acf9eb22bea78fd9d51941b0e40 >> > > Thanks for pointing this out Zane. We too had a buggy implementation for > handling inverted dependency. I had a hard look at our algorithm where > we were continuously merging the edges from new template into the edges > from previous updates. 
It was an optimized way of traversing the graph > in both forward and reverse order without missing any resources. But, > when the dependencies are inverted, this wouldn't work. > > We have changed our algorithm. The changes in edges are noted down in > DB, only the delta of edges from previous template is calculated and > kept. At any given point of time, the graph table has all the edges from > current template and delta from previous templates. Each edge has > template ID associated with it. The thing is, the cleanup dependencies aren't really about the template. The real resources really depend on other real resources. You can't delete a Volume before its VolumeAttachment, not because it says so in the template but because it will fail if you try. The template can give us a rough guide in advance to what those dependencies will be, but if that's all we keep then we are discarding information. There may be multiple versions of a resource corresponding to one template version. Even worse, the actual dependencies of a resource change on a smaller time scale than an entire stack update (this is the reason the current implementation updates the template one resource at a time as we go). Given that our Resource entries in the DB are in 1:1 correspondence with actual resources (we create a new one whenever we need to replace the underlying resource), I found it makes the most conceptual and practical sense to store the requirements in the resource itself, and update them at the time they actually change in the real world (bonus: introduces no new locking issues and no extra DB writes).
I settled on this after a legitimate attempt at trying other options, but they didn't work out: https://github.com/zaneb/heat-convergence-prototype/commit/a62958342e8583f74e2aca90f6239ad457ba984d

> For resource clean up, we start from the first template (template which
> was completed and updates were made on top of it, empty template
> otherwise), and move towards the current template in the order in which
> the updates were issued, and for each template the graph (edges if found
> for the template) is traversed in reverse order and resources are
> cleaned-up.

I'm pretty sure this is backwards - you'll need to clean up newer resources first because they may reference resources from older templates. Also if you have a stubborn old resource that won't delete you don't want that to block cleanups of anything newer. You're also serialising some of the work unnecessarily because you've discarded the information about dependencies that cross template versions, forcing you to clean up only one template version at a time.

> The process ends up with current template being traversed in reverse
> order and resources being cleaned up. All the update-replaced resources
> from the older templates (older updates in concurrent updates) are
> cleaned up in the order in which they are supposed to be.
>
> Resources are now tied to template, they have a template_id instead of
> version. As we traverse the graph, we know which template we are working
> on, and can take the relevant action on resource.
>
> For rollback, another update is issued with the last completed template
> (it is designed to have an empty template as last completed template by
> default). The current template being worked on becomes predecessor for
> the new incoming template. In case of rollback, the last completed
> template becomes incoming new template, the current becomes the new
> template's predecessor and the successor of last completed template will
> have no predecessor.
> All these changes are available in the 'graph-version' branch. (The
> branch name is a misnomer though!)
>
> I think it is simpler to think about stack and concurrent updates when
> we associate resources and edges with template, and stack with current
> template and its predecessors (if any).

It doesn't seem simple to me because it's trying to reconstruct reality from a lossy version of history. The simplest way to think about it, in my opinion, is this:

- When updating resources, respect their dependencies as given in the template.
- When checking resources to clean up, respect their actual, current real-world dependencies, and check replacement resources before the resources that they replaced.
- Don't check a resource for clean up until it has been updated to the latest template.

> I also think that we should decouple Resource from Stack. This is really
> a hindrance when workers work on individual resources. The resource
> should be abstracted enough from stack for the worker to work on the
> resource alone. The worker should load the required resource plug-in and
> start converging.

I think that's a worthy goal, and it would be really nice if we could load a Resource completely independently of its Stack, and I know this has always been a design goal of yours (hence you're caching the resource definition in the Resource rather than getting it from the template). That said, I am convinced it's an unachievable goal, and I believe we should give up on it.

- We'll always need to load _some_ central thing (e.g. to find out if the current traversal is still the valid one), so it might as well be the Stack.
- Existing plugin abominations like HARestarter expect a working Stack object to be provided so it can go hunting for other resources.

I think the best we can do is try to make heat.engine.stack.Stack as lazy as possible so that it only does extra work when strictly required, and just accept that the stack will always be loaded from the database.
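The clean-up rules Zane lists can be sketched as a toy topological sort. This is not Heat's implementation - the resource names and the two edge kinds (`requires` and `replaces`) are made up for the example - but it shows how "check dependents before dependencies, and replacements before the versions they replaced" yields a safe clean-up order:

```python
def cleanup_order(nodes, requires, replaces):
    """Return an order in which to check resources for clean-up.

    requires: id -> ids it depends on (a dependent must be checked before
              what it depends on, e.g. a VolumeAttachment before its Volume).
    replaces: id -> id of the older version it replaced, if any.
    """
    # after[x] = nodes that must be checked after x
    after = {n: set() for n in nodes}
    for rid, deps in requires.items():
        after[rid] |= set(deps)        # dependent before dependency
    for new, old in replaces.items():
        after[new].add(old)            # replacement before replaced

    seen, order = set(), []
    def visit(n):                      # DFS postorder, then reverse
        if n in seen:
            return
        seen.add(n)
        for m in after[n]:
            visit(m)
        order.append(n)
    for n in nodes:
        visit(n)
    return order[::-1]

# Example: an attachment depends on a volume and on the new server
# version, which replaced an older server version.
nodes = ['volume', 'attachment', 'server_v2', 'server_v1']
requires = {'attachment': {'volume', 'server_v2'}}
replaces = {'server_v2': 'server_v1'}
order = cleanup_order(nodes, requires, replaces)
assert order.index('attachment') < order.index('volume')
assert order.index('server_v2') < order.index('server_v1')
```

The sketch assumes the edges form a DAG (it does no cycle detection), which matches the discussion: the edges come from real-world dependencies, which cannot be circular.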
I am also strongly in favour of treating the idea of caching the unresolved resource definition in the Resource table as a straight performance optimisation that is completely separate to the convergence work. It's going to be inevitably ugly because there is no template-format-independent way to serialise a resource definition (while resource definition objects themselves are designed to be inherently template-format-independent). Once phase 1 is complete we can decide whether it's worth it based on measuring the actual performance improvement. (Note that we _already_ store the _resolved_ properties of the resource, which is what the observer will be comparing against for phase 2, so there should be no reason for the observer to need to load the stack.)

> The README.rst is really helpful for bringing up the minimal devstack
> and test the PoC. It also has some notes on design.
> [snip]
>
> Zane, I have few questions:
> 1. Our current implementation is based on notifications from worker so
> that the engine can take up next set of tasks. I don't see this in your
> case. I think we should be doing this. It gels well with observer
> notification mechanism. When the observer comes, it would send a
> converge notification. Both, the provisioning of stack and the
> continuous observation, happens with notifications (async message
> passing). I see that the workers in your case pick up the parent when/if
> it is done and schedules it or updates the sync point.

I'm not quite sure what you're asking here, so forgive me if I'm misunderstanding. What I think you're saying is that where my prototype propagates notifications thus:

    worker -> worker -> worker

(where -> is an async message) you would prefer it to do:

    worker -> engine -> worker -> worker

Is that right? To me the distinction seems somewhat academic, given that we've decided that the engine and the worker will be the same process.
I don't see a disadvantage to doing right away stuff that we know needs to be done right away. Obviously we should factor the code out tidily into a separate method where we can _also_ expose it as a notification that can be triggered by the continuous observer. You mentioned above that you thought the workers should not ever load the Stack, and I think that's probably the reason you favour this approach: the 'worker' would always load just the Resource and the 'engine' (even though they're really the same) would always load just the Stack, right? However, as I mentioned above, I think we'll want/have to load the Stack in the worker anyway, so eliminating the extra asynchronous call eliminates the performance penalty for having to do so. > 2. The dependency graph travels everywhere. IMHO, we can keep the graph > in DB and let the workers work on a resource, and engine decide which > one to be scheduled next by looking at the graph. There wouldn't be a > need for a lock here, in the engine, the DB transactions should take > care of concurrent DB updates. Our new PoC follows this model. I'm fine with keeping the graph in the DB instead of having it flow with the notifications. > 3. The request ID is passed down to check_*_complete. Would the check > method be interrupted if new request arrives? IMHO, the check method > should not get interrupted. It should return back when the resource has > reached a concrete state, either failed or completed. I agree, it should not be interrupted. I've started to think of phase 1 and phase 2 like this: 1) Make locks more granular: stack-level lock becomes resource-level 2) Get rid of locks altogether So in phase 1 we'll lock the resources and like you said, it will return back when it has reached a concrete state. 
In phase 2 we'll be able to just update the goal state for the resource and the observe/converge process will be able to automagically find the best way to that state regardless of whether it was in the middle of transitioning to another state or not. Or something. But that's for the future :) > 4. Lot of synchronization issues which we faced in our PoC cannot be > encountered with the framework. How do we evaluate what happens when > synchronization issues are encountered (like stack lock kind of issues > which we are replacing with DB transaction). Right, yeah, this is obviously the big known limitation of the simulator. I don't have a better answer other than to Think Very Hard about it. Designing software means solving for hundreds of constraints - too many for a human to hold in their brain at the same time. The purpose of prototyping is to fix enough of the responses to those constraints in a concrete form to allow reasoning about the remaining ones to become tractable. If you fix solutions for *all* of the constraints, then what you've built is by definition not a prototype but the final product. One technique available to us is to encapsulate the parts of the algorithm that are subject to synchronisation issues behind abstractions that offer stronger guarantees. Then in order to have confidence in the design we need only satisfy ourselves that we have analysed the guarantees correctly and that a concrete implementation offering those same guarantees is possible. For example, the SyncPoints are shown to work under the assumption that they are not subject to race conditions, and the SyncPoint code is small enough that we can easily see that it can be implemented in an atomic fashion using the same DB primitives already proven to work by StackLock. Therefore we can have a very high confidence (but not proof) that the overall algorithm will work when implemented in the final product. 
Having Thought Very Hard about it, I'm as confident as I can be that I'm not relying on any synchronisation properties that can't be implemented using select-for-update on a single database row. There will of course be surprises at implementation time, but I hope that won't be one of them and anticipate that any changes required to the plan will be localised and not wide-ranging. (This is in contrast BTW to my centralised-graph branch, linked above, where it became very obvious that it would require some sort of external locking - so there is reason to think that this process can reveal architectural problems related to synchronisation where they are present.) cheers, Zane. From zbitter at redhat.com Fri Dec 12 01:07:04 2014 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 11 Dec 2014 20:07:04 -0500 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> Message-ID: <548A3FB8.9030007@redhat.com> On 11/12/14 08:26, Murugan, Visnusaran wrote: >>> [Murugan, Visnusaran] >>> In case of rollback where we have to cleanup earlier version of resources, >> we could get the order from old template. We'd prefer not to have a graph >> table. >> >> In theory you could get it by keeping old templates around. But that means >> keeping a lot of templates, and it will be hard to keep track of when you want >> to delete them. 
It also means that when starting an update you'll need to >> load every existing previous version of the template in order to calculate the >> dependencies. It also leaves the dependencies in an ambiguous state when a >> resource fails, and although that can be worked around it will be a giant pain >> to implement. >> > > Agree that looking to all templates for a delete is not good. But barring > complexity, we feel we could achieve it by way of having an update and a > delete stream for a stack update operation. I will elaborate in detail in the > etherpad sometime tomorrow :) > >> I agree that I'd prefer not to have a graph table. After trying a couple of >> different things I decided to store the dependencies in the Resource table, >> where we can read or write them virtually for free because it turns out that >> we are always reading or updating the Resource itself at exactly the same >> time anyway. >> > > Not sure how this will work in an update scenario when a resource does not change > and its dependencies do. We'll always update the requirements, even when the properties don't change. > Also taking care of deleting resources in order will > be an issue. It works fine. > This implies that there will be different versions of a resource which > will even complicate further. No it doesn't, other than the different versions we already have due to UpdateReplace. >>>> This approach reduces DB queries by waiting for completion notification >> on a topic. The drawback I see is that delete stack stream will be huge as it >> will have the entire graph. We can always dump such data in >> ResourceLock.data Json and pass a simple flag "load_stream_from_db" to >> converge RPC call as a workaround for delete operation. >>> >>> This seems to be essentially equivalent to my 'SyncPoint' proposal[1], with >> the key difference that the data is stored in-memory in a Heat engine rather >> than the database.
>>> >>> I suspect it's probably a mistake to move it in-memory for similar >>> reasons to the argument Clint made against synchronising the marking off >> of dependencies in-memory. The database can handle that and the problem >> of making the DB robust against failures of a single machine has already been >> solved by someone else. If we do it in-memory we are just creating a single >> point of failure for not much gain. (I guess you could argue it doesn't matter, >> since if any Heat engine dies during the traversal then we'll have to kick off >> another one anyway, but it does limit our options if that changes in the >> future.) [Murugan, Visnusaran] Resource completes, removes itself from >> resource_lock and notifies engine. Engine will acquire parent lock and initiate >> parent only if all its children are satisfied (no child entry in resource_lock). >> This will come in place of Aggregator. >> >> Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly what I did. >> The three differences I can see are: >> >> 1) I think you are proposing to create all of the sync points at the start of the >> traversal, rather than on an as-needed basis. This is probably a good idea. I >> didn't consider it because of the way my prototype evolved, but there's now >> no reason I can see not to do this. >> If we could move the data to the Resource table itself then we could even >> get it for free from an efficiency point of view. > > +1. But we will need engine_id to be stored somewhere for recovery purpose (easy to be queried format). Yeah, so I'm starting to think you're right, maybe the/a Lock table is the right thing to use there. We could probably do it within the resource table using the same select-for-update to set the engine_id, but I agree that we might be starting to jam too much into that one table. > Sync points are created as-needed. Single resource is enough to restart that entire stream. > I think there is a disconnect in our understanding. 
I will detail it as well in the etherpad. OK, that would be good. >> 2) You're using a single list from which items are removed, rather than two >> lists (one static, and one to which items are added) that get compared. >> Assuming (1) then this is probably a good idea too. > > Yeah. We have a single list per active stream which work by removing > Complete/satisfied resources from it. I went to change this and then remembered why I did it this way: the sync point is also storing data about the resources that are triggering it. Part of this is the RefID and attributes, and we could replace that by storing that data in the Resource itself and querying it rather than having it passed in via the notification. But the other part is the ID/key of those resources, which we _need_ to know in order to update the requirements in case one of them has been replaced and thus the graph doesn't reflect it yet. (Or, for that matter, we need it to know where to go looking for the RefId and/or attributes if they're in the DB.) So we have to store some data, we can't just remove items from the required list (although we could do that as well). >> 3) You're suggesting to notify the engine unconditionally and let the engine >> decide if the list is empty. That's probably not a good idea - not only does it >> require extra reads, it introduces a race condition that you then have to solve >> (it can be solved, it's just more work). >> Since the update to remove a child from the list is atomic, it's best to just >> trigger the engine only if the list is now empty. >> > > No. Notify only if stream has something to be processed. The newer > Approach based on db lock will be that the last resource will initiate its parent. > This is opposite to what our Aggregator model had suggested. OK, I think we're on the same page on this one then. 
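The "since the update to remove a child from the list is atomic, trigger the engine only if the list is now empty" pattern the thread converges on can be sketched in a few lines. This is not Heat code: a bare counter stands in for the SyncPoint's child list, and SQLite stands in for the real database. The point is only that the decrement and the emptiness check happen in one transaction, so exactly one finishing child observes the transition to zero and triggers the parent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sync_point (parent TEXT PRIMARY KEY, remaining INTEGER)")
# The parent resource waits on three children.
conn.execute("INSERT INTO sync_point VALUES ('parent', 3)")
conn.commit()

triggered = []  # stand-in for sending the converge notification

def child_done(parent):
    # Decrement and re-read inside a single transaction -- the moral
    # equivalent of the select-for-update being discussed above.
    with conn:
        conn.execute(
            "UPDATE sync_point SET remaining = remaining - 1"
            " WHERE parent = ? AND remaining > 0", (parent,))
        (left,) = conn.execute(
            "SELECT remaining FROM sync_point WHERE parent = ?",
            (parent,)).fetchone()
    if left == 0:
        triggered.append(parent)  # only the last child gets here

for _ in range(3):
    child_done('parent')
```

Because the check rides in the same transaction as the decrement, no caller ever needs to poll, and no two callers can both see `remaining == 0` - which is the race the unconditional-notify design would otherwise have to solve.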
>>> It's not clear to me how the 'streams' differ in practical terms from >>> just passing a serialisation of the Dependencies object, other than >>> being incomprehensible to me ;). The current Dependencies >>> implementation >>> (1) is a very generic implementation of a DAG, (2) works and has plenty of >> unit tests, (3) has, with I think one exception, a pretty straightforward API, >> (4) has a very simple serialisation, returned by the edges() method, which >> can be passed back into the constructor to recreate it, and (5) has an API that >> is to some extent relied upon by resources, and so won't likely be removed >> outright in any event. >>> Whatever code we need to handle dependencies ought to just build on >> this existing implementation. >>> [Murugan, Visnusaran] Our thought was to reduce payload size >> (template/graph). Just planning for worst case scenario (million resource >> stack) We could always dump them in ResourceLock.data to be loaded by >> Worker. >> >> If there's a smaller representation of a graph than a list of edges then I don't >> know what it is. The proposed stream structure certainly isn't it, unless you >> mean as an alternative to storing the entire graph once for each resource. A >> better alternative is to store it once centrally - in my current implementation >> it is passed down through the trigger messages, but since only one traversal >> can be in progress at a time it could just as easily be stored in the Stack table >> of the database at the slight cost of an extra write. >> > > Agree that edge is the smallest representation of a graph. But it does not give > us a complete picture without doing a DB lookup. Our assumption was to store > streams in IN_PROGRESS resource_lock.data column. This could be in resource table > instead. That's true, but I think in practice at any point where we need to look at this we will always have already loaded the Stack from the DB for some other reason, so we actually can get it for free. 
(See detailed discussion in my reply to Anant.) >> I'm not opposed to doing that, BTW. In fact, I'm really interested in your input >> on how that might help make recovery from failure more robust. I know >> Anant mentioned that not storing enough data to recover when a node dies >> was his big concern with my current approach. >> > > With streams, We feel recovery will be easier. All we need is a trigger :) > >> I can see that by both creating all the sync points at the start of the traversal >> and storing the dependency graph in the database instead of letting it flow >> through the RPC messages, we would be able to resume a traversal where it >> left off, though I'm not sure what that buys us. >> >> And I guess what you're suggesting is that by having an explicit lock with the >> engine ID specified, we can detect when a resource is stuck in IN_PROGRESS >> due to an engine going down? That's actually pretty interesting. >> > > Yeah :) > >>> Based on our call on Thursday, I think you're taking the idea of the Lock >> table too literally. The point of referring to locks is that we can use the same >> concepts as the Lock table relies on to do atomic updates on a particular row >> of the database, and we can use those atomic updates to prevent race >> conditions when implementing SyncPoints/Aggregators/whatever you want >> to call them. It's not that we'd actually use the Lock table itself, which >> implements a mutex and therefore offers only a much slower and more >> stateful way of doing what we want (lock mutex, change data, unlock >> mutex). >>> [Murugan, Visnusaran] Are you suggesting something like a select-for- >> update in resource table itself without having a lock table? >> >> Yes, that's exactly what I was suggesting. > > DB is always good for sync. But we need to be careful not to overdo it. Yeah, I see what you mean now, it's starting to _feel_ like there'd be too many things mixed together in the Resource table. 
Are you aware of some concrete harm that might cause though? What happens if we overdo it? Is select-for-update on a huge row more expensive than the whole overhead of manipulating the Lock? Just trying to figure out if intuition is leading me astray here. > Will update etherpad by tomorrow. OK, thanks. cheers, Zane. From rochelle.grober at huawei.com Fri Dec 12 01:14:48 2014 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Fri, 12 Dec 2014 01:14:48 +0000 Subject: [openstack-dev] [Cross Project][Ops][Log] Logging Working Group: need to establish our regular meeting(s) Message-ID: Hi guys, I apologize for taking so long to get to this, but I think once we start meeting, our momentum will build. I'm cross posting this to dev and operators so anyone who is interested can participate. I've set up a Doodle to vote on the first set of times (these happen to work for me and for a dev in Europe, but if we don't get enough positive votes, we'll try again). I will also cross post the meeting schedule and add it to the meetings wiki page when we have decided on the meeting. I've stuck with Monday, Wednesday and Thursday to keep the meetings during the work week. I know we are close to the holidays, so I'll give this a week to get votes unless I get heavy turnout early. The doodle poll: https://doodle.com/7tkwu65s8b7vt5ex I'm working on summarizing the etherpads from the summit and will be posting that as a document either on a wiki page or a google doc. The first session came out with some possible Kilo actions that would help logging. 
I think our first meeting should focus on:

Agenda:
* Logging bugs against logging - how to, and how to advertise to the rest of the Operators group
* Working with devs: possible Kilo dev help, getting info from devs on what/when to review specs/code
* Where and in what form we document our progress, information, etc.
* Where to focus efforts on Standards (docs, specs, review lists, project liaisons, etc.)
* Review progress (bugs, specs, docs, whatever)

So, please vote on the doodle and please let's start the discussion. I will post this separately to dev and operators so that the operators can discuss this without spamming the developers until we have something they would want to comment on.

--Rocky Grober

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xiaohui.xin at intel.com Fri Dec 12 01:24:25 2014
From: xiaohui.xin at intel.com (Xin, Xiaohui)
Date: Fri, 12 Dec 2014 01:24:25 +0000
Subject: [openstack-dev] [Horizon] Proposal to add IPMI meters from Ceilometer in Horizon
In-Reply-To:
References:
Message-ID:

Got it. Thanks! We will soon add the blueprint for IPMI meters in Horizon.

Thanks
Xiaohui

From: David Lyle [mailto:dklyle0 at gmail.com]
Sent: Friday, December 12, 2014 1:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Horizon] Proposal to add IPMI meters from Ceilometer in Horizon

Please submit the blueprint and set the target for the milestone you are targeting. That will add it to the blueprint review process for Horizon. Seems like a minor change, so at this time, I don't foresee any issues with approving it.

David

On Thu, Dec 11, 2014 at 12:34 AM, Xin, Xiaohui > wrote:
Hi,
In the Juno release, the IPMI meters in Ceilometer have been implemented. We know that most of the meters implemented in Ceilometer can be observed on the Horizon side. An admin user can use the 'Admin' dashboard -> 'System' Panel Group -> 'Resource Usage' Panel to show the 'Resources Usage Overview'.
There are a lot of Ceilometer Metrics there now, and each metric can be metered. Since the IPMI meters are already there, we'd like to add such Metric items for them in Horizon to get metered information. Is there anyone who opposes this proposal? If not, we'd like to add a blueprint in Horizon for it soon.

Thanks
Xiaohui

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From joehuang at huawei.com Fri Dec 12 01:42:51 2014
From: joehuang at huawei.com (joehuang)
Date: Fri, 12 Dec 2014 01:42:51 +0000
Subject: [openstack-dev] =?windows-1252?q?=5Ball=5D_=5Btc=5D_=5BPTL=5D_Cas?= =?windows-1252?q?cading_vs=2E_Cells_=96_summit_recap_and_move_forward?=
In-Reply-To: <5489DB3D.8020402@gmail.com>
References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489DB3D.8020402@gmail.com>
Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD457@szxema505-mbs.china.huawei.com>

Hi, Jay,

Good question, see inline comments, pls.

Best Regards
Chaoyi Huang ( Joe Huang )

-----Original Message-----
From: Jay Pipes [mailto:jaypipes at gmail.com]
Sent: Friday, December 12, 2014 1:58 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells -
summit recap and move forward On 12/11/2014 04:02 AM, joehuang wrote: >> [joehuang] The major challenge for VDF use case is cross OpenStack >> networking for tenants. The tenant's VM/Volume may be allocated in >> different data centers geographically, but virtual network >> (L2/L3/FW/VPN/LB) should be built for each tenant automatically and >> isolated between tenants. Keystone federation can help authorization >> automation, but the cross OpenStack network automation challenge is >> still there. Using prosperity orchestration layer can solve the >> automation issue, but VDF don't like prosperity API in the >> north-bound, because no ecosystem is available. And other issues, for >> example, how to distribute image, also cannot be solved by Keystone >> federation. >What is "prosperity orchestration layer" and "prosperity API"? [joehuang] suppose that there are two OpenStack instances in the cloud, and vendor A developed an orchestration layer called CMPa (cloud management platform a), vendor B's orchestration layer CMPb. CMPa will define boot VM interface as CreateVM( Num, NameList, VMTemplate), CMPb may like to define boot VM interface as bootVM( Name, projectID, flavorID, volumeSize, location, networkID). After the customer asked more and more function to the cloud, the API set of CMPa will be quite different from that of CMPb, and different from OpenStack API. Now, all apps which consume OpenStack API like Heat, will not be able to run above the prosperity software CMPa/CMPb. All OpenStack API APPs ecosystem will be lost in the customer's cloud. >> [joehuang] This is the ETSI requirement and use cases specification >> for NFV. ETSI is the home of the Industry Specification Group for NFV. >> In Figure 14 (virtualization of EPC) of this document, you can see >> that the operator's cloud including many data centers to provide >> connection service to end user by inter-connected VNFs. 
The >> requirements listed in >> (https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about >> the requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW >> etc) to run over cloud, eg. migrate the traditional telco. APP from >> prosperity hardware to cloud. Not all NFV requirements have been >> covered yet. Forgive me there are so many telco terms here. >What is "prosperity hardware"? [joehuang] For example, Huawei's IMS can only run over Huawei's ATCA hardware, even you bought Nokia ATCA, the IMS from Huawei will not be able to work over Nokia ATCA. The telco APP is sold with hardware together. (More comments on ETSI: ETSI is also the standard organization for GSM, 3G, 4G.) Thanks, -jay _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From harlowja at outlook.com Fri Dec 12 01:47:21 2014 From: harlowja at outlook.com (Joshua Harlow) Date: Thu, 11 Dec 2014 17:47:21 -0800 Subject: [openstack-dev] =?windows-1252?q?=5Ball=5D_=5Btc=5D_=5BPTL=5D_Cas?= =?windows-1252?q?cading_vs=2E_Cells_=96_summit_recap_and_move_forward?= In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD457@szxema505-mbs.china.huawei.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489DB3D.8020402@gmail.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD457@szxema505-mbs.china.huawei.com> 
Message-ID: So I think u mean 'proprietary'? http://www.merriam-webster.com/dictionary/proprietary -Josh joehuang wrote: > Hi, Jay, > > Good question, see inline comments, pls. > > Best Regards > Chaoyi Huang ( Joe Huang ) > > -----Original Message----- > From: Jay Pipes [mailto:jaypipes at gmail.com] > Sent: Friday, December 12, 2014 1:58 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells ? summit recap and move forward > > On 12/11/2014 04:02 AM, joehuang wrote: >>> [joehuang] The major challenge for VDF use case is cross OpenStack >>> networking for tenants. The tenant's VM/Volume may be allocated in >>> different data centers geographically, but virtual network >>> (L2/L3/FW/VPN/LB) should be built for each tenant automatically and >>> isolated between tenants. Keystone federation can help authorization >>> automation, but the cross OpenStack network automation challenge is >>> still there. Using prosperity orchestration layer can solve the >>> automation issue, but VDF don't like prosperity API in the >>> north-bound, because no ecosystem is available. And other issues, for >>> example, how to distribute image, also cannot be solved by Keystone >>> federation. > >> What is "prosperity orchestration layer" and "prosperity API"? > > [joehuang] suppose that there are two OpenStack instances in the cloud, and vendor A developed an orchestration layer called CMPa (cloud management platform a), vendor B's orchestration layer CMPb. CMPa will define boot VM interface as CreateVM( Num, NameList, VMTemplate), CMPb may like to define boot VM interface as bootVM( Name, projectID, flavorID, volumeSize, location, networkID). After the customer asked more and more function to the cloud, the API set of CMPa will be quite different from that of CMPb, and different from OpenStack API. Now, all apps which consume OpenStack API like Heat, will not be able to run above the prosperity software CMPa/CMPb. 
All OpenStack API APPs ecosystem will be lost in the customer's cloud. > >>> [joehuang] This is the ETSI requirement and use cases specification >>> for NFV. ETSI is the home of the Industry Specification Group for NFV. >>> In Figure 14 (virtualization of EPC) of this document, you can see >>> that the operator's cloud including many data centers to provide >>> connection service to end user by inter-connected VNFs. The >>> requirements listed in >>> (https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about >>> the requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW >>> etc) to run over cloud, eg. migrate the traditional telco. APP from >>> prosperity hardware to cloud. Not all NFV requirements have been >>> covered yet. Forgive me there are so many telco terms here. > >> What is "prosperity hardware"? > > [joehuang] For example, Huawei's IMS can only run over Huawei's ATCA hardware, even you bought Nokia ATCA, the IMS from Huawei will not be able to work over Nokia ATCA. The telco APP is sold with hardware together. (More comments on ETSI: ETSI is also the standard organization for GSM, 3G, 4G.) > > Thanks, > -jay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From travis.tripp at hp.com Fri Dec 12 02:02:30 2014 From: travis.tripp at hp.com (Tripp, Travis S) Date: Fri, 12 Dec 2014 02:02:30 +0000 Subject: [openstack-dev] [horizon] REST and Django In-Reply-To: References: <547DD08A.7000402@redhat.com> Message-ID: Tihomir, Your comments in the patch were the actually the clearest to me about ease of customizing without requiring upstream changes and really made me think more about your points. 
Here are a couple of bullet points for consideration.

* Will we take on auto-discovery of API extensions in two spots (python for legacy and JS for new)?
* As teams move towards deprecating / modifying APIs, who will be responsible for ensuring the JS libraries stay up to date and keep tabs on every single project? Right now in Glance, for example, they are working on some fixes to the v2 API (soon to become v2.3) that will allow them to deprecate v1 so that Nova can migrate from v1. Part of this includes making simultaneous improvements to the client library so that the switch can happen more transparently to client users. This testing and maintenance the service team already takes on.
* The service API documentation almost always lags (helped by specs now), and the service team takes on the burden of exposing a programmatic way to access the service which is tested and easily consumable via the python clients, which removes some guesswork from using the service.
* This is going to be an incremental approach with legacy support requirements anyway, I think. So, incorporating python side changes won't just go away.
* A tangent that needs to be considered IMO, since I'm working on some elastic search things right now: which of these would be better if we introduce a server side caching mechanism or a new source of data such as elastic search to improve performance?
* Would the client just be able to handle changing whether or not it used cache with a header, and in either case the server side appropriately uses the cache? (e.g. Cache-Control: no-cache)

I'm not sure I fully understood your example about Cinder. Was it the cinderclient that held up delivery of that horizon support, or the cinder API, or both? If the API isn't in, then it would hold up delivery of the feature in any case. If it is just about delivering new functionality, all that would be required in Richard's approach is to drop in a new file of decorated classes / functions from his utility with the APIs you want.

None of the API calls have anything to do with how your view actually replaces the upstream view. These are all just about accessing the data.

Finally, I mentioned the following in the patch related to your example below about the client making two calls to do an update, but wanted to mention it here to see if it is an approach that was purposeful (I don't know the history):

">> Do we really need the lines:
>> project = api.keystone.tenant_get(request, id)
>> kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)"

I agree that if you already have all the data it is really bad to have to do another call. I do think there is room for discussing the reasoning, though. As far as I can tell, they do this so that if you are updating an entity, you have to be very specific about the fields you are changing. I actually see this as potentially a protectionary measure against data loss and a sometimes very nice to have feature. It perhaps was intended to *help* guard against race conditions *sometimes*. Here's an example: Admin user Joe has a Domain open and stares at it for 15 minutes while he updates the description. Admin user Bob is asked to go ahead and enable it. He opens the record, edits it, and then saves it. Joe finishes perfecting the description and saves it. Doing this would mean that the Domain is enabled and the description gets updated. Last man in still wins if he updates the same fields, but if they update different fields then both of their changes will take effect without them stomping on each other. Whether that is good or bad may depend on the situation.

From: Tihomir Trifonov >
Reply-To: OpenStack List >
Date: Thursday, December 11, 2014 at 7:53 AM
To: OpenStack List >
Subject: Re: [openstack-dev] [horizon] REST and Django

"Client just needs to know which URL to hit in order to invoke a certain API, and does not need to know the procedure name or parameters ordering."

That's where the difference is. I think the client has to know the procedure name and parameters. Otherwise we have a translation factory pattern that converts one naming convention to another. And you won't be able to call any service API if there is no code in the middleware to translate it to the service API procedure name and parameters. To avoid this - we can use a transparent proxy model - direct mapping of a client call to service API naming, which can be done if the client invokes the methods with the names in the service API, so that the middleware will just pass parameters, and will not translate. Instead of: updating user data: => => we may use: => =>

The idea here is that if we have a keystone 4.0 client, we will have to just add it to the clients [] list and nothing more is required at the middleware level. Just create the frontend code to use the new Keystone 4.0 methods. Otherwise we will have to add all new/different signatures of 4.0 against 2.0/3.0 in the middleware in order to use Keystone 4.0.

There is also a great example of using a pluggable/new feature in Horizon. Do you remember the volume types support patch? The patch was pending in Gerrit for a few months - first waiting for the cinder support for volume types to go upstream, then waiting a few more weeks for review. I am not sure, but as far as I remember, the Horizon patch even missed a release milestone and was introduced in the next release. If we have a transparent middleware - this will no longer be an issue. As long as someone has written the frontend modules (which should be easy to add and customize), and they install the required version of the service API - they will not need an updated Horizon to start using the feature.

Maybe I am not the right person to give examples here, but how many of you had some kind of Horizon customization being locally merged/patched in your local distros/setups, until the patch was pushed upstream?

I will say it again. Nova, Keystone, Cinder, Glance etc. already have stable public APIs. Why do we want to add the translation middleware and to introduce another level of REST API? This layer will often hide new features added to the service APIs and will delay their appearance in Horizon. That's simply not needed. I believe it is possible to just wrap the authentication in the middleware REST, but not to translate anything as RPC methods/parameters.

And one more example:

    @rest_utils.ajax()
    def put(self, request, id):
        """Update a single project.

        The POST data should be an application/json object containing the
        parameters to update: "name" (string), "description" (string),
        "domain_id" (string) and "enabled" (boolean, defaults to true).
        Additional, undefined parameters may also be provided, but you'll
        have to look deep into keystone to figure out what they might be.

        This method returns HTTP 204 (no content) on success.
        """
        project = api.keystone.tenant_get(request, id)
        kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)
        api.keystone.tenant_update(request, project, **kwargs)

Do we really need the lines:

    project = api.keystone.tenant_get(request, id)
    kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)

Since we update the project on the client, it is obvious that we already fetched the project data. So we can simply send:

    POST /keystone/3.0/tenant_update
    Content-Type: application/json

    {"id": cached.id, "domain_id": cached.domain_id, "name": "new name", "description": "new description", "enabled": cached.enabled}

Fewer requests, faster application.

On Wed, Dec 10, 2014 at 8:39 PM, Thai Q Tran > wrote:
I think we're arguing for the same thing, but maybe a slightly different approach. I think we can both agree that a middle layer is required, whether we intend to use it as a proxy or REST endpoints. Regardless of the approach, the client needs to relay what API it wants to invoke, and you can do that either via RPC or REST. I personally prefer the REST approach because it shields the client.
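A minimal sketch of the field-guarding behaviour discussed in this thread: only the fields actually named in the request body are changed, so concurrent edits to different fields both survive. The helper and field names below are hypothetical illustrations, not Horizon's real _tenant_kwargs_from_DATA:

```python
def partial_update(stored, changes, allowed=("name", "description", "enabled")):
    """Merge only the fields the caller sent; everything else keeps its
    stored value, so two admins editing different fields don't stomp on
    each other (last writer still wins on the *same* field)."""
    updated = dict(stored)
    for field in allowed:
        if field in changes:
            updated[field] = changes[field]
    return updated

domain = {"name": "dom1", "description": "old text", "enabled": False}
after_bob = partial_update(domain, {"enabled": True})          # Bob enables it
after_joe = partial_update(after_bob, {"description": "new"})  # Joe saves later
print(after_joe)  # {'name': 'dom1', 'description': 'new', 'enabled': True}
```

Sending the whole cached object instead, as in the tenant_update example above, makes the last save overwrite every field, including ones the saver never touched.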
Client just needs to know which URL to hit in order to invoke a certain API, and does not need to know the procedure name or parameters ordering. Having said all of that, I do believe we should keep it as thin as possible. I do like the idea of having separate classes for different API versions. What we have today is a thin REST layer that acts like a proxy. You hit a certain URL, and the middle layer forwards the API invocation. The only exception to this rule is support for batch deletions.

-----Tihomir Trifonov > wrote: -----
To: "OpenStack Development Mailing List (not for usage questions)" >
From: Tihomir Trifonov >
Date: 12/10/2014 03:04AM
Subject: Re: [openstack-dev] [horizon] REST and Django

Richard, thanks for the reply, I agree that the given example is not a real REST. But we already have the REST API - that's the Keystone, Nova, Cinder, Glance, Neutron etc. APIs. So what do we plan to do here? To add a new REST layer to communicate with another REST API? Do we really need a Frontend-REST-REST architecture? My opinion is that we don't need another REST layer, as we are currently trying to go away from the Django layer, which is the same - another processing layer. Although we call it a REST proxy or whatever - it doesn't need to be a real REST, but just an aggregation proxy that combines and forwards some requests while adding minimal processing overhead. What makes sense for me is to keep the authentication in this layer as it is now - push a cookie to the frontend, but the REST layer will extract the auth tokens from the session storage and prepare the auth context for the REST API request to the OS services. This way we will not expose the tokens to the JS frontend, and will have strict control over the authentication. The frontend will just send data requests, and they will be wrapped with auth context and forwarded. Regarding the existing issues with versions in the API - for me the existing approach is wrong. All these fixes were made as workarounds.
What should have been done is to create abstractions for each version and to use a separate class for each version. This was partially done for the keystoneclient in api/keystone.py, but not for the forms/views, where we still have if-else for versions. What I suggest here is to have different (concrete) views/forms for each version, and to use them according to the context. If the Keystone backend is v2.0 - then in the frontend use a keystone2() object, otherwise use a keystone3() object. This of course needs some more coding, but is much cleaner in terms of customization and testing. For me the current hacks with 'if keystone.version == 3.0' are wrong at many levels. And this can be solved now. The problem till now was that we had one frontend that had to be backed by different versions of backend components. Now we can have different frontends that map to a specific backend. That's how I understand the power of Angular with its views and directives. That's where I see the real benefit of using a full-featured frontend. Also imagine how much easier it will then be to deprecate a component version, compared to what we need to do now for the same. Otherwise we just rewrite the current Django middleware as another DjangoRest middleware and don't change anything; we don't fix the problems, we just move them to another place. I still think that in Paris we talked about a new generation of the Dashboard, a different approach to building the frontend for OpenStack. What I've heard there from users/operators of Horizon was that it was extremely hard to add customizations and new features to the Dashboard, as all these needed to go through upstream changes and to wait until the next release cycle to get them. Do we still want to address these concerns, and how? Please, correct me if I got things wrong.

On Wed, Dec 10, 2014 at 11:56 AM, Richard Jones > wrote:
Sorry I didn't respond to this earlier today, I had intended to.
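The per-version abstraction Tihomir describes could look like the sketch below: one concrete class per API version, chosen once by a small factory, instead of 'if keystone.version == 3.0' checks scattered through the views. The class names, method names, and request paths here are invented for illustration, not the real keystoneclient API:

```python
class KeystoneBase:
    version = None
    def user_list(self, domain=None):
        raise NotImplementedError

class KeystoneV2(KeystoneBase):
    version = "2.0"
    def user_list(self, domain=None):
        return "GET /v2.0/users"        # v2 has no domain concept

class KeystoneV3(KeystoneBase):
    version = "3"
    def user_list(self, domain=None):
        path = "GET /v3/users"
        if domain:
            path += "?domain_id=%s" % domain
        return path

# register concrete classes by version; adding a hypothetical v4 backend
# means adding one class here, with no edits to the calling views
_BACKENDS = {cls.version: cls for cls in (KeystoneV2, KeystoneV3)}

def keystone_for(version):
    """Pick the concrete class once, at the edge, instead of branching
    on the version inside every form and view."""
    return _BACKENDS[version]()

print(keystone_for("3").user_list(domain="default"))
```

Deprecating a version then means deleting one class and one registry entry, rather than hunting down every version check.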
What you're describing isn't REST, and the principles of REST are what have been guiding the design of the new API so far. I see a lot of value in using REST approaches, mostly around clarity of the interface. While the idea of a very thin proxy seemed like a great idea at one point, my conversations at the summit convinced me that there was value both in using the client interfaces present in the openstack_dashboard/api code base (since they abstract away many issues in the apis, including across versions) and also value in us being able to clean up (for example, using "project_id" rather than "project" in the user API we've already implemented) and extend those interfaces (to allow batched operations). We want to be careful about what we expose in Horizon to the JS clients through this API. That necessitates some amount of code in Horizon. About half of the current API for keystone represents that control (the other half is docstrings :)

Richard

On Tue Dec 09 2014 at 9:37:47 PM Tihomir Trifonov > wrote:
Sorry for the late reply, just a few thoughts on the matter. IMO the REST middleware should be as thin as possible. And I mean thin in terms of processing - it should not do pre/post processing of the requests, but just unpack/pack. So here is an example: instead of making AJAX calls that contain instructions:

    POST --json --data {"action": "delete", "data": [{"name": "item1"}, {"name": "item2"}, {"name": "item3"}]}

I think a better approach is just to pack/unpack batch commands, and leave execution to the frontend/backend and not the middleware:

    POST --json --data {"batch": [
        {"action": "delete", "payload": {"name": "item1"}},
        {"action": "delete", "payload": {"name": "item2"}},
        {"action": "delete", "payload": {"name": "item3"}}
    ]}

The idea is that the middleware should not know the actual data. It should ideally just unpack the data:

    responses = []
    for cmd in request.POST['batch']:
        responses.append(getattr(controller, cmd['action'])(**cmd['payload']))
    return responses

and the frontend (JS) will just send batches of simple commands, and will receive a list of responses for each command in the batch. The error handling will be done in the frontend (JS) as well.

For the more complex example of 'put()' where we have dependent objects:

    project = api.keystone.tenant_get(request, id)
    kwargs = self._tenant_kwargs_from_DATA(request.DATA, enabled=None)
    api.keystone.tenant_update(request, project, **kwargs)

In practice the project data should already be present in the frontend (assuming that we already loaded it to render the project form/view), so:

    POST --json --data {"batch": [
        {"action": "tenant_update", "payload": {"project": js_project_object.id, "name": "some name", "prop1": "some prop", "prop2": "other prop, etc."}}
    ]}

So in general we don't need to recreate the full state on each REST call, if we make the frontend a full-featured application. This way - the frontend will construct the object, will hold the cached value, and will just send the needed requests as single ones or in batches, will receive the response from the API backend, and will render the results. The whole processing logic will be held in the frontend (JS), while the middleware will just act as a proxy (un/packer). This way we will maintain just the logic in the frontend, and will not need to duplicate some logic in the middleware.

On Tue, Dec 2, 2014 at 4:45 PM, Adam Young > wrote:
On 12/02/2014 12:39 AM, Richard Jones wrote:
On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran > wrote:
I agree that keeping the API layer thin would be ideal. I should add that having discrete API calls would allow dynamic population of the table. However, I will make a case where it might be necessary to add additional APIs. Consider that you want to delete 3 items in a given table.
If you do this on the client side, you would need to perform: n * (1 API request + 1 AJAX request)
If you have some logic on the server side that batches delete actions: n * (1 API request) + 1 AJAX request

Consider the following:
n = 1, client = 2 trips, server = 2 trips
n = 3, client = 6 trips, server = 4 trips
n = 10, client = 20 trips, server = 11 trips
n = 100, client = 200 trips, server = 101 trips

As you can see, this does not scale very well... something to consider...

This is not something Horizon can fix. Horizon can make matters worse, but cannot make things better. If you want to delete 3 users, Horizon still needs to make 3 distinct calls to Keystone. To fix this, we need either batch calls or a standard way to do multiples of the same operation. The unified API effort is the right place to drive this.

Yep, though in the above cases the client is still going to be hanging, waiting for those server-backend calls, with no feedback until it's all done. I would hope that the client-server call overhead is minimal, but I guess that's probably wishful thinking when in the land of random Internet users hitting some provider's Horizon :) So yeah, having mulled it over myself I agree that it's useful to have batch operations implemented in the POST handler, the most common operation being DELETE. Maybe one day we could transition to a batch call with user feedback using a websocket connection.

Richard

From: Richard Jones >
To: "Tripp, Travis S" >, OpenStack List >
Date: 11/27/2014 05:38 PM
Subject: Re: [openstack-dev] [horizon] REST and Django
________________________________

On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S > wrote:
Hi Richard, You are right, we should put this out on the main ML, so copying thread out to there.
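Thai's round-trip counts earlier in this thread reduce to a one-line formula: the n service API calls are unavoidable either way, and the browser-to-Horizon AJAX requests add either n more (client-side loop) or 1 (server-side batch):

```python
def trips(n, batched):
    """Total round trips to delete n items: n service API calls, plus
    n AJAX requests when the client loops, or 1 when the server batches."""
    return n + (1 if batched else n)

for n in (1, 3, 10, 100):
    print("n=%d: client=%d trips, server=%d trips"
          % (n, trips(n, batched=False), trips(n, batched=True)))
```

For n=100 this gives client=200 and server=101, matching the table above: batching halves the worst case but the n backend calls still dominate.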
ML: FYI that this started after some impromptu IRC discussions about a specific patch led into an impromptu google hangout discussion with all the people on the thread below.

Thanks Travis!

As I mentioned in the review[1], Thai and I were mainly discussing the possible performance implications of network hops from client to horizon server and whether or not any aggregation should occur server side. In other words, some views require several APIs to be queried before any data can be displayed, and it would eliminate some extra network requests from client to server if some of the data was first collected on the server side across service APIs. For example, the launch instance wizard will need to collect data from quite a few APIs before even the first step is displayed (I've listed those out in the blueprint [2]). The flip side to that (as you also pointed out) is that if we keep the APIs fine grained then the wizard will be able to optimize in one place the calls for data as it is needed. For example, the first step may only need half of the API calls. It also could lead to perceived performance increases just due to the wizard making a call for different data independently and displaying it as soon as it can.

Indeed, looking at the current launch wizard code it seems like you wouldn't need to load all that data for the wizard to be displayed, since only some subset of it would be necessary to display any given panel of the wizard.

I tend to lean towards your POV and starting with discrete API calls and letting the client optimize calls. If there are performance problems or other reasons then doing data aggregation on the server side could be considered at that point.

I'm glad to hear it. I'm a fan of optimising when necessary, and not beforehand :) Of course if anybody is able to do some performance testing between the two approaches then that could affect the direction taken. I would certainly like to see us take some measurements when performance issues pop up.
Optimising without solid metrics is a bad idea :)

Richard

[1] https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py
[2] https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign

-Travis

From: Richard Jones >
Date: Wednesday, November 26, 2014 at 11:55 PM
To: Travis Tripp >, Thai Q Tran/Silicon Valley/IBM >, David Lyle >, Maxime Vidori >, "Wroblewski, Szymon" >, "Wood, Matthew David (HP Cloud - Horizon)" >, "Chen, Shaoquan" >, "Farina, Matt (HP Cloud)" >, Cindy Lu/Silicon Valley/IBM >, Justin Pomeroy/Rochester/IBM >, Neill Cox >
Subject: Re: REST and Django

I'm not sure whether this is the appropriate place to discuss this, or whether I should be posting to the list under [Horizon], but I think we need to have a clear idea of what goes in the REST API and what goes in the client (angular) code. In my mind, the thinner the REST API the better. Indeed if we can get away with proxying requests through without touching any *client code, that would be great. Coding additional logic into the REST API means that a developer would need to look in two places, instead of one, to determine what was happening for a particular call. If we keep it thin then the API presented to the client developer is very, very similar to the API presented by the services. Minimum surprise.

Your thoughts?

Richard

On Wed Nov 26 2014 at 2:40:52 PM Richard Jones > wrote:
Thanks for the great summary, Travis. I've completed the work I pledged this morning, so now the REST API change set has:
- no rest framework dependency
- AJAX scaffolding in openstack_dashboard.api.rest.utils
- code in openstack_dashboard/api/rest/
- renamed the API from "identity" to "keystone" to be consistent
- added a sample of testing, mostly for my own sanity to check things were working

https://review.openstack.org/#/c/136676

Richard

On Wed Nov 26 at 12:18:25 PM Tripp, Travis S > wrote:
Hello all,

Great discussion on the REST urls today!
I think that we are on track to come to a common REST API usage pattern. To provide a quick summary: We all agreed that going to a straight REST pattern rather than through tables was a good idea. We discussed using direct get / post in Django views like what Max originally used[1][2] and Thai also started[3] with the identity table rework, or to go with djangorestframework [5] like what Richard was prototyping with[4]. The main things we would use from Django Rest Framework were built-in JSON serialization (avoid boilerplate), better exception handling, and some request wrapping. However, we all weren't sure about the need for a full new framework just for that. At the end of the conversation, we decided that it was a cleaner approach, but Richard would see if he could provide some utility code to do that much for us without requiring the full framework. David voiced that he doesn't want us building out a whole framework on our own either. So, Richard will do some investigation during his day today and get back to us. Whatever the case, we'll get a patch in horizon for the base dependency (framework or Richard's utilities) that both Thai's work and the launch instance work is dependent upon. We'll build REST-style APIs using the same pattern. We will likely put the REST APIs in horizon/openstack_dashboard/api/rest/.
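A rough, framework-free sketch of the kind of utility code being discussed (built-in JSON serialization plus exception handling, without pulling in djangorestframework) might look like the following. The decorator name, return convention, and the sample view are illustrative only, not Horizon's actual openstack_dashboard.api.rest.utils API:

```python
import functools
import json


def ajax(view):
    """Wrap a view-style callable so it speaks JSON.

    The wrapped callable may return any JSON-serializable value, or a
    (status, value) tuple; uncaught exceptions become a 500 with the
    message in the body. This mirrors the "thin utility instead of a
    full framework" idea from the thread.
    """
    @functools.wraps(view)
    def wrapper(request, *args, **kwargs):
        try:
            result = view(request, *args, **kwargs)
        except Exception as exc:  # report the failure, don't hide it
            return 500, json.dumps({"error": str(exc)})
        if isinstance(result, tuple):
            status, data = result
        else:
            status, data = 200, result
        return status, json.dumps(data)
    return wrapper


@ajax
def get_user(request, user_id):
    # A real Horizon view would call e.g. api.keystone.user_get(...);
    # this stub only demonstrates the success and error paths.
    if user_id != "u1":
        raise LookupError("no such user: %s" % user_id)
    return {"id": user_id, "name": "demo"}
```

The point of the sketch is that serialization and error handling live in one small decorator, so individual views stay one or two lines of API-call code.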
[1] https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/keypair.py [2] https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/launch.py [3] https://review.openstack.org/#/c/133767/8/openstack_dashboard/dashboards/identity/users/views.py [4] https://review.openstack.org/#/c/136676/4/openstack_dashboard/rest_api/identity.py [5] http://www.django-rest-framework.org/ Thanks, Travis _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Regards, Tihomir Trifonov -------------- next part -------------- A non-text attachment was
scrubbed... Name: Image.994faa67a8e28335_0.0.1.1.gif Type: image/gif Size: 105 bytes Desc: Image.994faa67a8e28335_0.0.1.1.gif URL: From joehuang at huawei.com Fri Dec 12 02:17:01 2014 From: joehuang at huawei.com (joehuang) Date: Fri, 12 Dec 2014 02:17:01 +0000 Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward In-Reply-To: <5489F6CF.7000602@rackspace.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD479@szxema505-mbs.china.huawei.com> Hello, Andrew, Thanks for your confirmation. See inline comments, pls. -----Original Message----- From: Andrew Laski [mailto:andrew.laski at rackspace.com] Sent: Friday, December 12, 2014 3:56 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward On 12/11/2014 04:02 AM, joehuang wrote: > Hello, Russell, > > Many thanks for your reply. See inline comments. > > -----Original Message----- > From: Russell Bryant [mailto:rbryant at redhat.com] > Sent: Thursday, December 11, 2014 5:22 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells -
> summit recap and move forward > >>> On Fri, Dec 5, 2014 at 8:23 AM, joehuang wrote: >>>> Dear all & TC & PTL, >>>> >>>> In the 40 minutes cross-project summit session "Approaches for >>>> scaling out"[1], almost 100 peoples attended the meeting, and the >>>> conclusion is that cells can not cover the use cases and >>>> requirements which the OpenStack cascading solution[2] aim to >>>> address, the background including use cases and requirements is >>>> also described in the mail. >> I must admit that this was not the reaction I came away with the discussion with. >> There was a lot of confusion, and as we started looking closer, many >> (or perhaps most) people speaking up in the room did not agree that >> the requirements being stated are things we want to try to satisfy. >> [joehuang] Could you pls. confirm your opinion: 1) cells can not cover the use cases and requirements which the OpenStack cascading solution aim to address. 2) Need further discussion whether to satisfy the use cases and requirements. >Correct, cells does not cover all of the use cases that cascading aims to address. But it was expressed that the use cases that are not covered may not be cases that we want addressed. [joehuang] Ok, Need further discussion to address the cases or not. > On 12/05/2014 06:47 PM, joehuang wrote: >>>> Hello, Davanum, >>>> >>>> Thanks for your reply. >>>> >>>> Cells can't meet the demand for the use cases and requirements described in the mail. >> You're right that cells doesn't solve all of the requirements you're discussing. >> Cells addresses scale in a region. My impression from the summit >> session and other discussions is that the scale issues addressed by >> cells are considered a priority, while the "global API" bits are not. > [joehuang] Agree cells is in the first class priority. > >>>> 1. Use cases >>>> a).
Vodafone use case[4](OpenStack summit speech video from 9'02" >>>> to 12'30" ), establishing globally addressable tenants which result >>>> in efficient services deployment. >> Keystone has been working on federated identity. >> That part makes sense, and is already well under way. > [joehuang] The major challenge for VDF use case is cross OpenStack networking for tenants. The tenant's VM/Volume may be allocated in different data centers geographically, but virtual network (L2/L3/FW/VPN/LB) should be built for each tenant automatically and isolated between tenants. Keystone federation can help authorization automation, but the cross OpenStack network automation challenge is still there. > Using prosperity orchestration layer can solve the automation issue, but VDF don't like prosperity API in the north-bound, because no ecosystem is available. And other issues, for example, how to distribute image, also cannot be solved by Keystone federation. > >>>> b). Telefonica use case[5], create virtual DC( data center) cross >>>> multiple physical DCs with seamless experience. >> If we're talking about multiple DCs that are effectively local to >> each other with high bandwidth and low latency, that's one conversation. >> My impression is that you want to provide a single OpenStack API on >> top of globally distributed DCs. I honestly don't see that as a >> problem we should be trying to tackle. I'd rather continue to focus >> on making OpenStack work >> *really* well split into regions. >> I think some people are trying to use cells in a geographically >> distributed way, as well. I'm not sure that's a well understood or supported thing, though. >> Perhaps the folks working on the new version of cells can comment further. >> [joehuang] 1) Splited region way cannot provide cross OpenStack networking automation for tenant. 2) exactly, the motivation for cascading is "single OpenStack API on top of globally distributed DCs". 
Of course, cascading can also be used for DCs close to each other with high bandwidth and low latency. 3) Folks comment from cells are welcome. >> . >Cells can handle a single API on top of globally distributed DCs. I have spoken with a group that is doing exactly that. But it requires that the API is a trusted part of the OpenStack deployments in those distributed DCs. [joehuang] Could you pls. make it more clear for the deployment mode of cells when used for globally distributed DCs with single API. Do you mean cinder/neutron/glance/ceilometer will be shared by all cells, and use RPC for inter-dc communication, and only support one vendor's OpenStack distribution? How to do the cross data center integration and troubleshooting with RPC if the driver/agent/backend(storage/network/server) from different vendor. > >>>> c). ETSI NFV use cases[6], especially use case #1, #2, #3, #5, #6, >>>> 8#. For NFV cloud, it's in nature the cloud will be distributed but >>>> inter-connected in many data centers. >> I'm afraid I don't understand this one. In many conversations about NFV, I haven't heard this before. > [joehuang] This is the ETSI requirement and use cases specification for NFV. ETSI is the home of the Industry Specification Group for NFV. In Figure 14 (virtualization of EPC) of this document, you can see that the operator's cloud including many data centers to provide connection service to end user by inter-connected VNFs. The requirements listed in (https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about the requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW etc) to run over cloud, eg. migrate the traditional telco. APP from prosperity hardware to cloud. Not all NFV requirements have been covered yet. Forgive me there are so many telco terms here. > >>>> 2.requirements >>>> a). The operator has multiple sites cloud; each site can use one or >>>> multiple vendor's OpenStack distributions.
>> Is this a technical problem, or is a business problem of vendors not >> wanting to support a mixed environment that you're trying to work >> around with a technical solution? > [joehuang] Pls. refer to VDF use case, the multi-vendor policy has been stated very clearly: 1) Local relationships: Operating Companies also have long standing relationships to their own choice of vendors; 2) Multi-Vendor :Each site can use one or multiple vendors which leads to better use of local resources and capabilities. Technical solution must be provided for multi-vendor integration and verification, It's usually ETSI standard in the past for mobile network. But how to do that in multi-vendor's cloud infrastructure? Cascading provide a way to use OpenStack API as the integration interface. > >>> b). Each site with its own requirements and upgrade schedule while >>> maintaining standard OpenStack API c). The multi-site cloud must >>> provide unified resource management with global Open API exposed, >>> for example create virtual DC cross multiple physical DCs with >>> seamless experience. >>> Although a prosperity orchestration layer could be developed for the >>> multi-site cloud, but it's prosperity API in the north bound >>> interface. The cloud operators want the ecosystem friendly global >>> open API for the multi-site cloud for global access. >> I guess the question is, do we see a "global API" as something we >> want to accomplish. What you're talking about is huge, and I'm not >> even sure how you would expect it to work in some cases (like networking). > [joehuang] Yes, the most challenge part is networking.
In the PoC, L2 > networking cross OpenStack is to leverage the L2 population > mechanism.The L2proxy for DC1 in the cascading layer will detect the > new VM1(in DC1)'s port is up, and then ML2 L2 population will be > activated, the VM1's tunneling endpoint- host IP or L2GW IP in DC1, > will be populated to L2proxy for DC2, and L2proxy for DC2 will create > a external port in the DC2 Neutron with the VM1's tunneling endpoint- > host IP or L2GW IP in DC1. The external port will be attached to the > L2GW or only external port created, L2 population(if not L2GW used) > inside DC2 can be activated to notify all VMs located in DC2 for the > same L2 network. For L3 networking finished in the PoC is to use extra > route over GRE to serve local VLAN/VxLAN networks located in different > DCs. Of course, other L3 networking method can be developed, for > example, through VPN service. There are 4 or 5 BPs talking about edge > network gateway to connect OpenStack tenant network to outside > network, all these technologies can be leveraged to do cross OpenStack > networking for different scenario. To experience the cross OpenStack > networking, please try PoC source code: > https://github.com/stackforge/tricircle > >> In any case, to be as clear as possible, I'm not convinced this is >> something we should be working on. I'm going to need to see much >> more overwhelming support for the idea before helping to figure out any further steps. > [joehuang] If you or any other have any doubts, please feel free to ignite a discussion thread. For time difference reason, we (working in China) are not able to join most of IRC meeting, so mail-list is a good way for discussion. 
> > Russell Bryant > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > Best Regards > > Chaoyi Huang ( joehuang ) > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Best Regards Chaoyi Huang ( joehuang ) From joehuang at huawei.com Fri Dec 12 02:25:37 2014 From: joehuang at huawei.com (joehuang) Date: Fri, 12 Dec 2014 02:25:37 +0000 Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward In-Reply-To: References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD490@szxema505-mbs.china.huawei.com> Hello, Joe Thank you for your good question. Question: How would something like flavors work across multiple vendors. The OpenStack API doesn't have any hard coded names and sizes for flavors. So a flavor such as m1.tiny may actually be very different vendor to vendor. Answer: The flavor is defined by Cloud Operator from the cascading OpenStack. 
And Nova-proxy ( which is the driver for "Nova as hypervisor" ) will sync the flavor to the cascaded OpenStack when it was first used in the cascaded OpenStack. If flavor was changed before a new VM is booted, the changed flavor will also be updated to the cascaded OpenStack just before the new VM booted request. Through this synchronization mechanism, all flavor used in multi-vendor's cascaded OpenStack will be kept the same as what used in the cascading level, provide a consistent view for flavor. Best Regards Chaoyi Huang ( joehuang ) From: Joe Gordon [mailto:joe.gordon0 at gmail.com] Sent: Friday, December 12, 2014 8:17 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward On Thu, Dec 11, 2014 at 1:02 AM, joehuang > wrote: Hello, Russell, Many thanks for your reply. See inline comments. -----Original Message----- From: Russell Bryant [mailto:rbryant at redhat.com] Sent: Thursday, December 11, 2014 5:22 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward >> On Fri, Dec 5, 2014 at 8:23 AM, joehuang > wrote: >>> Dear all & TC & PTL, >>> >>> In the 40 minutes cross-project summit session "Approaches for >>> scaling out"[1], almost 100 peoples attended the meeting, and the >>> conclusion is that cells can not cover the use cases and >>> requirements which the OpenStack cascading solution[2] aim to >>> address, the background including use cases and requirements is also >>> described in the mail. >I must admit that this was not the reaction I came away with the discussion with. >There was a lot of confusion, and as we started looking closer, many (or perhaps most) >people speaking up in the room did not agree that the requirements being stated are >things we want to try to satisfy. [joehuang] Could you pls. 
confirm your opinion: 1) cells can not cover the use cases and requirements which the OpenStack cascading solution aim to address. 2) Need further discussion whether to satisfy the use cases and requirements. On 12/05/2014 06:47 PM, joehuang wrote: >>> Hello, Davanum, >>> >>> Thanks for your reply. >>> >>> Cells can't meet the demand for the use cases and requirements described in the mail. >You're right that cells doesn't solve all of the requirements you're discussing. >Cells addresses scale in a region. My impression from the summit session > and other discussions is that the scale issues addressed by cells are considered > a priority, while the "global API" bits are not. [joehuang] Agree cells is in the first class priority. >>> 1. Use cases >>> a). Vodafone use case[4](OpenStack summit speech video from 9'02" >>> to 12'30" ), establishing globally addressable tenants which result >>> in efficient services deployment. > Keystone has been working on federated identity. >That part makes sense, and is already well under way. [joehuang] The major challenge for VDF use case is cross OpenStack networking for tenants. The tenant's VM/Volume may be allocated in different data centers geographically, but virtual network (L2/L3/FW/VPN/LB) should be built for each tenant automatically and isolated between tenants. Keystone federation can help authorization automation, but the cross OpenStack network automation challenge is still there. Using prosperity orchestration layer can solve the automation issue, but VDF don't like prosperity API in the north-bound, because no ecosystem is available. And other issues, for example, how to distribute image, also cannot be solved by Keystone federation. >>> b). Telefonica use case[5], create virtual DC( data center) cross >>> multiple physical DCs with seamless experience. >If we're talking about multiple DCs that are effectively local to each other >with high bandwidth and low latency, that's one conversation. 
>My impression is that you want to provide a single OpenStack API on top of >globally distributed DCs. I honestly don't see that as a problem we should >be trying to tackle. I'd rather continue to focus on making OpenStack work >*really* well split into regions. > I think some people are trying to use cells in a geographically distributed way, > as well. I'm not sure that's a well understood or supported thing, though. > Perhaps the folks working on the new version of cells can comment further. [joehuang] 1) Splited region way cannot provide cross OpenStack networking automation for tenant. 2) exactly, the motivation for cascading is "single OpenStack API on top of globally distributed DCs". Of course, cascading can also be used for DCs close to each other with high bandwidth and low latency. 3) Folks comment from cells are welcome. . >>> c). ETSI NFV use cases[6], especially use case #1, #2, #3, #5, #6, >>> 8#. For NFV cloud, it's in nature the cloud will be distributed but >>> inter-connected in many data centers. >I'm afraid I don't understand this one. In many conversations about NFV, I haven't heard this before. [joehuang] This is the ETSI requirement and use cases specification for NFV. ETSI is the home of the Industry Specification Group for NFV. In Figure 14 (virtualization of EPC) of this document, you can see that the operator's cloud including many data centers to provide connection service to end user by inter-connected VNFs. The requirements listed in (https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about the requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW etc) to run over cloud, eg. migrate the traditional telco. APP from prosperity hardware to cloud. Not all NFV requirements have been covered yet. Forgive me there are so many telco terms here. >> >>> 2.requirements >>> a). The operator has multiple sites cloud; each site can use one or >>> multiple vendor's OpenStack distributions. 
>Is this a technical problem, or is a business problem of vendors not >wanting to support a mixed environment that you're trying to work >around with a technical solution? [joehuang] Pls. refer to VDF use case, the multi-vendor policy has been stated very clearly: 1) Local relationships: Operating Companies also have long standing relationships to their own choice of vendors; 2) Multi-Vendor :Each site can use one or multiple vendors which leads to better use of local resources and capabilities. Technical solution must be provided for multi-vendor integration and verification, It's usually ETSI standard in the past for mobile network. But how to do that in multi-vendor's cloud infrastructure? Cascading provide a way to use OpenStack API as the integration interface. How would something like flavors work across multiple vendors. The OpenStack API doesn't have any hard coded names and sizes for flavors. So a flavor such as m1.tiny may actually be very different vendor to vendor. >> b). Each site with its own requirements and upgrade schedule while >> maintaining standard OpenStack API c). The multi-site cloud must >> provide unified resource management with global Open API exposed, for >> example create virtual DC cross multiple physical DCs with seamless >> experience. >> Although a prosperity orchestration layer could be developed for the >> multi-site cloud, but it's prosperity API in the north bound >> interface. The cloud operators want the ecosystem friendly global >> open API for the mutli-site cloud for global access. >I guess the question is, do we see a "global API" as something we want >to accomplish. What you're talking about is huge, and I'm not even sure >how you would expect it to work in some cases (like networking). [joehuang] Yes, the most challenge part is networking. 
In the PoC, L2 networking cross OpenStack is to leverage the L2 population mechanism.The L2proxy for DC1 in the cascading layer will detect the new VM1(in DC1)'s port is up, and then ML2 L2 population will be activated, the VM1's tunneling endpoint- host IP or L2GW IP in DC1, will be populated to L2proxy for DC2, and L2proxy for DC2 will create a external port in the DC2 Neutron with the VM1's tunneling endpoint- host IP or L2GW IP in DC1. The external port will be attached to the L2GW or only external port created, L2 population(if not L2GW used) inside DC2 can be activated to notify all VMs located in DC2 for the same L2 network. For L3 networking finished in the PoC is to use extra route over GRE to serve local VLAN/VxLAN networks located in different DCs. Of course, other L3 networking method can be developed, for example, through VPN service. There are 4 or 5 BPs talking about edge network gateway to connect OpenStack tenant network to outside network, all these technologies can be leveraged to do cross OpenStack networking for different scenario. To experience the cross OpenStack networking, please try PoC source code: https://github.com/stackforge/tricircle >In any case, to be as clear as possible, I'm not convinced this is something >we should be working on. I'm going to need to see much more >overwhelming support for the idea before helping to figure out any further steps. [joehuang] If you or any other have any doubts, please feel free to ignite a discussion thread. For time difference reason, we (working in China) are not able to join most of IRC meeting, so mail-list is a good way for discussion. 
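As an aside on the flavor question raised earlier in this thread, the sync-on-first-use behaviour joehuang describes (the cascading layer owns the flavor definitions, and Nova-proxy pushes a definition down to a cascaded OpenStack just before it is first used, re-pushing if it changed) can be sketched roughly as follows. The class and method names are hypothetical, not the actual Nova-proxy code:

```python
class FlavorSyncProxy:
    """Sketch of sync-on-first-use flavor handling in a cascading layer.

    The cascading OpenStack is the single source of truth for flavors;
    each cascaded site only ever sees a copy, refreshed lazily at boot
    time, so m1.tiny means the same thing on every vendor's site.
    """

    def __init__(self, cascading_flavors):
        self.cascading = cascading_flavors  # name -> spec dict (source of truth)
        self.cascaded = {}                  # site -> {name: spec copy}

    def ensure_flavor(self, site, name):
        spec = self.cascading[name]
        site_flavors = self.cascaded.setdefault(site, {})
        if site_flavors.get(name) != spec:
            # Push (create or update) the flavor definition to the
            # cascaded site; a real proxy would call that site's Nova API.
            site_flavors[name] = dict(spec)
        return site_flavors[name]

    def boot_vm(self, site, flavor_name):
        # Sync happens just before the boot request is forwarded.
        flavor = self.ensure_flavor(site, flavor_name)
        return {"site": site, "flavor": flavor}
```

The comparison against the cascading copy is what gives the "if flavor was changed before a new VM is booted, the changed flavor will also be updated" behaviour described above.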
Russell Bryant _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Best Regards Chaoyi Huang ( joehuang ) _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From henry4hly at gmail.com Fri Dec 12 02:28:55 2014 From: henry4hly at gmail.com (henry hly) Date: Fri, 12 Dec 2014 10:28:55 +0800 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: <20141211152452.GO23831@redhat.com> References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> <20141211104137.GD23831@redhat.com> <20141211152452.GO23831@redhat.com> Message-ID: +100! So, for the vif-type-vhostuser, a general script path name replace the vif-detail "vhost_user_ovs_plug", because it's not the responsibility of nova to understand it. On Thu, Dec 11, 2014 at 11:24 PM, Daniel P. Berrange wrote: > On Thu, Dec 11, 2014 at 04:15:00PM +0100, Maxime Leroy wrote: >> On Thu, Dec 11, 2014 at 11:41 AM, Daniel P. Berrange >> wrote: >> > On Thu, Dec 11, 2014 at 09:37:31AM +0800, henry hly wrote: >> >> On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells wrote: >> >> > On 10 December 2014 at 01:31, Daniel P. Berrange >> >> > wrote: >> >> >> >> >> >> >> [..] >> >> The question is, do we really need such flexibility for so many nova vif types? >> >> >> >> I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is good example, >> >> nova shouldn't known too much details about switch backend, it should >> >> only care about the VIF itself, how the VIF is plugged to switch >> >> belongs to Neutron half. >> >> >> >> However I'm not saying to move existing vif driver out, those open >> >> backend have been used widely. 
But from now on the tap and vhostuser >> >> mode should be encouraged: one common vif driver to many long-tail >> >> backend. >> > >> > Yes, I really think this is a key point. When we introduced the VIF type >> > mechanism we never intended for there to be soo many different VIF types >> > created. There is a very small, finite number of possible ways to configure >> > the libvirt guest XML and it was intended that the VIF types pretty much >> > mirror that. This would have given us about 8 distinct VIF type maximum. >> > >> > I think the reason for the larger than expected number of VIF types, is >> > that the drivers are being written to require some arbitrary tools to >> > be invoked in the plug & unplug methods. It would really be better if >> > those could be accomplished in the Neutron code than the Nova code, via >> > a host agent run & provided by the Neutron mechanism. This would let >> > us have a very small number of VIF types and so avoid the entire problem >> > that this thread is bringing up. >> > >> > Failing that though, I could see a way to accomplish a similar thing >> > without a Neutron launched agent. If one of the VIF type binding >> > parameters were the name of a script, we could run that script on >> > plug & unplug. So we'd have a finite number of VIF types, and each >> > new Neutron mechanism would merely have to provide a script to invoke >> > >> > eg consider the existing midonet & iovisor VIF types as an example. >> > Both of them use the libvirt "ethernet" config, but have different >> > things running in their plug methods. If we had a mechanism for >> > associating a "plug script" with a vif type, we could use a single >> > VIF type for both. 
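A minimal sketch of how such a plug-script dispatch could look on the Nova side: if the Neutron port binding supplies a script path, Nova invokes it instead of carrying per-backend plug logic in-tree. The binding key, script paths, and argument convention are illustrative only, not an agreed interface:

```python
import subprocess


def plug_vif(instance_id, vif):
    """Dispatch VIF plugging to a Neutron-provided script, if any.

    Returns the command that would be run, or None when the binding
    carries no script and the built-in plug for this vif_type applies.
    """
    details = vif.get("details", {})
    script = details.get("vif_plug_script")
    if script is None:
        return None  # fall back to the in-tree plug for this vif_type
    cmd = [script, "plug", instance_id, vif["id"]]
    # A real implementation would execute this with appropriate
    # privileges, e.g. subprocess.check_call(cmd) via rootwrap.
    return cmd
```

With this shape, midonet and iovisor would share one vif_type and differ only in the script their mechanism driver advertises.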
>> > >> > eg iovisor port binding info would contain >> > >> > vif_type=ethernet >> > vif_plug_script=/usr/bin/neutron-iovisor-vif-plug >> > >> > while midonet would contain >> > >> > vif_type=ethernet >> > vif_plug_script=/usr/bin/neutron-midonet-vif-plug >> > >> >> Having less VIF types, then using scripts to plug/unplug the vif in >> nova is a good idea. So, +1 for the idea. >> >> If you want, I can propose a new spec for this. Do you think we have >> enough time to approve this new spec before the 18th December? >> >> Anyway I think we still need to have a vif_driver plugin mechanism: >> For example, if your external l2/ml2 plugin needs a specific type of >> nic (i.e. a new method get_config to provide specific parameters to >> libvirt for the nic) that is not supported in the nova tree. > > As I said above, there's a really small finite set of libvirt configs > we need to care about. We don't need to have a plugin system for that. > It is no real burden to support them in tree > > > Regards, > Daniel > -- > |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| > |: http://libvirt.org -o- http://virt-manager.org :| > |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| > |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From joehuang at huawei.com Fri Dec 12 02:29:37 2014 From: joehuang at huawei.com (joehuang) Date: Fri, 12 Dec 2014 02:29:37 +0000 Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward In-Reply-To: References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com>
<891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489DB3D.8020402@gmail.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD457@szxema505-mbs.china.huawei.com> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD49F@szxema505-mbs.china.huawei.com> Hello, Joshua, Sorry, my fault. You are right. I owe you two dollars. Best regards Chaoyi Huang ( joehuang ) -----Original Message----- From: Joshua Harlow [mailto:harlowja at outlook.com] Sent: Friday, December 12, 2014 9:47 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward So I think u mean 'proprietary'? http://www.merriam-webster.com/dictionary/proprietary -Josh joehuang wrote: > Hi, Jay, > > Good question, see inline comments, pls. > > Best Regards > Chaoyi Huang ( Joe Huang ) > > -----Original Message----- > From: Jay Pipes [mailto:jaypipes at gmail.com] > Sent: Friday, December 12, 2014 1:58 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - > summit recap and move forward > > On 12/11/2014 04:02 AM, joehuang wrote: >>> [joehuang] The major challenge for VDF use case is cross OpenStack >>> networking for tenants. The tenant's VM/Volume may be allocated in >>> different data centers geographically, but virtual network >>> (L2/L3/FW/VPN/LB) should be built for each tenant automatically and >>> isolated between tenants. Keystone federation can help authorization >>> automation, but the cross OpenStack network automation challenge is >>> still there.
Using prosperity orchestration layer can solve the >>> automation issue, but VDF don't like prosperity API in the >>> north-bound, because no ecosystem is available. And other issues, >>> for example, how to distribute image, also cannot be solved by >>> Keystone federation. > >> What is "prosperity orchestration layer" and "prosperity API"? > > [joehuang] suppose that there are two OpenStack instances in the cloud, and vendor A developed an orchestration layer called CMPa (cloud management platform a), vendor B's orchestration layer CMPb. CMPa will define boot VM interface as CreateVM( Num, NameList, VMTemplate), CMPb may like to define boot VM interface as bootVM( Name, projectID, flavorID, volumeSize, location, networkID). After the customer asked more and more function to the cloud, the API set of CMPa will be quite different from that of CMPb, and different from OpenStack API. Now, all apps which consume OpenStack API like Heat, will not be able to run above the prosperity software CMPa/CMPb. All OpenStack API APPs ecosystem will be lost in the customer's cloud. > >>> [joehuang] This is the ETSI requirement and use cases specification >>> for NFV. ETSI is the home of the Industry Specification Group for NFV. >>> In Figure 14 (virtualization of EPC) of this document, you can see >>> that the operator's cloud including many data centers to provide >>> connection service to end user by inter-connected VNFs. The >>> requirements listed in >>> (https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about >>> the requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW >>> etc) to run over cloud, eg. migrate the traditional telco. APP from >>> prosperity hardware to cloud. Not all NFV requirements have been >>> covered yet. Forgive me there are so many telco terms here. > >> What is "prosperity hardware"? 
> > [joehuang] For example, Huawei's IMS can only run over Huawei's ATCA > hardware, even you bought Nokia ATCA, the IMS from Huawei will not be > able to work over Nokia ATCA. The telco APP is sold with hardware > together. (More comments on ETSI: ETSI is also the standard > organization for GSM, 3G, 4G.) > > Thanks, > -jay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From r1chardj0n3s at gmail.com Fri Dec 12 02:30:03 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Fri, 12 Dec 2014 02:30:03 +0000 Subject: [openstack-dev] [horizon] REST and Django References: <547DD08A.7000402@redhat.com> Message-ID: On Fri Dec 12 2014 at 1:06:08 PM Tripp, Travis S wrote: > ?>>Do we really need the lines:? > > >> project = api.keystone.tenant_get(request, id) > >> kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None) > ? > I agree that if you already have all the data it is really bad to have to > do another call. I do think there is room for discussing the reasoning, > though. > As far as I can tell, they do this so that if you are updating an entity, > you have to be very specific about the fields you are changing. I actually > see this as potentially a protectionary measure against data loss and a > sometimes very nice to have feature. It perhaps was intended to *help* > guard against race conditions *sometimes*. 
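The fetch-then-update pattern Travis describes can be sketched in a few lines. This is illustrative only, not actual Horizon code: the store is a plain dict standing in for Keystone, and the function names are invented for the example.

```python
# Illustrative sketch of the "re-read, then update only named fields"
# pattern discussed above. A plain dict stands in for the Keystone backend;
# update_tenant() and the `allowed` whitelist are hypothetical names.

def update_tenant(store, tenant_id, changes,
                  allowed=("name", "description", "enabled")):
    """Re-read the current record, then apply only whitelisted fields."""
    current = dict(store[tenant_id])   # fresh read, as tenant_get() does
    updates = {k: v for k, v in changes.items() if k in allowed}
    current.update(updates)            # fields not mentioned survive intact
    store[tenant_id] = current
    return current
```

With this shape, the Joe/Bob scenario below works out: two admins updating different fields do not clobber each other, because each update touches only the fields it names.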
> Yep, it looks like I broke this API by implementing it the way I did, and I'll alter the API so that you pass both the "current" object (according to the client) and the parameters to alter. Thanks everyone for the great reviews! Richard From hs.chen at huawei.com Fri Dec 12 03:19:14 2014 From: hs.chen at huawei.com (Chenliang (L)) Date: Fri, 12 Dec 2014 03:19:14 +0000 Subject: [openstack-dev] [heat-docker]Does the heat-docker supports auto-scaling and monitoring the Docker container? Message-ID: Hi, We can now deploy Docker containers in an OpenStack environment using Heat, but I am confused on two points. Could someone tell me whether heat-docker supports monitoring the Docker containers in a stack, and if so, how to monitor them? Does it support auto-scaling the Docker containers? Best Regards, -- Liang Chen From joe.gordon0 at gmail.com Fri Dec 12 03:28:53 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Thu, 11 Dec 2014 19:28:53 -0800 Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs.
Cells - summit recap and move forward In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD490@szxema505-mbs.china.huawei.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD490@szxema505-mbs.china.huawei.com> Message-ID: On Thu, Dec 11, 2014 at 6:25 PM, joehuang wrote: > Hello, Joe > > > > Thank you for your good question. > > > > Question: > > How would something like flavors work across multiple vendors. The > OpenStack API doesn't have any hard coded names and sizes for flavors. So a > flavor such as m1.tiny may actually be very different vendor to vendor. > > > > Answer: > > The flavor is defined by Cloud Operator from the cascading OpenStack. And > Nova-proxy ( which is the driver for ?Nova as hypervisor? ) will sync the > flavor to the cascaded OpenStack when it was first used in the cascaded > OpenStack. If flavor was changed before a new VM is booted, the changed > flavor will also be updated to the cascaded OpenStack just before the new > VM booted request. Through this synchronization mechanism, all flavor used > in multi-vendor?s cascaded OpenStack will be kept the same as what used in > the cascading level, provide a consistent view for flavor. > I don't think this is sufficient. If the underlying hardware the between multiple vendors is different setting the same values for a flavor will result in different performance characteristics. 
For example, nova allows for setting VCPUs, but nova doesn't provide an easy way to define how powerful a VCPU is. Also flavors are commonly hardware dependent, take what rackspace offers: http://www.rackspace.com/cloud/public-pricing#cloud-servers Rackspace has "I/O Optimized" flavors * High-performance, RAID 10-protected SSD storage * Option of booting from Cloud Block Storage (additional charges apply for Cloud Block Storage) * Redundant 10-Gigabit networking * Disk I/O scales with the number of data disks up to ~80,000 4K random read IOPS and ~70,000 4K random write IOPS.* How would cascading support something like this? > > > Best Regards > > > > Chaoyi Huang ( joehuang ) > > > > *From:* Joe Gordon [mailto:joe.gordon0 at gmail.com] > *Sent:* Friday, December 12, 2014 8:17 AM > *To:* OpenStack Development Mailing List (not for usage questions) > > *Subject:* Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - > summit recap and move forward > > > > > > > > On Thu, Dec 11, 2014 at 1:02 AM, joehuang wrote: > > Hello, Russell, > > Many thanks for your reply. See inline comments. > > -----Original Message----- > From: Russell Bryant [mailto:rbryant at redhat.com] > Sent: Thursday, December 11, 2014 5:22 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells ? summit > recap and move forward > > >> On Fri, Dec 5, 2014 at 8:23 AM, joehuang wrote: > >>> Dear all & TC & PTL, > >>> > >>> In the 40 minutes cross-project summit session ?Approaches for > >>> scaling out?[1], almost 100 peoples attended the meeting, and the > >>> conclusion is that cells can not cover the use cases and > >>> requirements which the OpenStack cascading solution[2] aim to > >>> address, the background including use cases and requirements is also > >>> described in the mail. > > >I must admit that this was not the reaction I came away with the > discussion with. 
> >There was a lot of confusion, and as we started looking closer, many (or > perhaps most) > >people speaking up in the room did not agree that the requirements being > stated are > >things we want to try to satisfy. > > [joehuang] Could you pls. confirm your opinion: 1) cells can not cover the > use cases and requirements which the OpenStack cascading solution aim to > address. 2) Need further discussion whether to satisfy the use cases and > requirements. > > On 12/05/2014 06:47 PM, joehuang wrote: > >>> Hello, Davanum, > >>> > >>> Thanks for your reply. > >>> > >>> Cells can't meet the demand for the use cases and requirements > described in the mail. > > >You're right that cells doesn't solve all of the requirements you're > discussing. > >Cells addresses scale in a region. My impression from the summit session > > and other discussions is that the scale issues addressed by cells are > considered > > a priority, while the "global API" bits are not. > > [joehuang] Agree cells is in the first class priority. > > >>> 1. Use cases > >>> a). Vodafone use case[4](OpenStack summit speech video from 9'02" > >>> to 12'30" ), establishing globally addressable tenants which result > >>> in efficient services deployment. > > > Keystone has been working on federated identity. > >That part makes sense, and is already well under way. > > [joehuang] The major challenge for VDF use case is cross OpenStack > networking for tenants. The tenant's VM/Volume may be allocated in > different data centers geographically, but virtual network > (L2/L3/FW/VPN/LB) should be built for each tenant automatically and > isolated between tenants. Keystone federation can help authorization > automation, but the cross OpenStack network automation challenge is still > there. > Using prosperity orchestration layer can solve the automation issue, but > VDF don't like prosperity API in the north-bound, because no ecosystem is > available. 
And other issues, for example, how to distribute image, also > cannot be solved by Keystone federation. > > >>> b). Telefonica use case[5], create virtual DC( data center) cross > >>> multiple physical DCs with seamless experience. > > >If we're talking about multiple DCs that are effectively local to each > other > >with high bandwidth and low latency, that's one conversation. > >My impression is that you want to provide a single OpenStack API on top of > >globally distributed DCs. I honestly don't see that as a problem we > should > >be trying to tackle. I'd rather continue to focus on making OpenStack > work > >*really* well split into regions. > > I think some people are trying to use cells in a geographically > distributed way, > > as well. I'm not sure that's a well understood or supported thing, > though. > > Perhaps the folks working on the new version of cells can comment > further. > > [joehuang] 1) Splited region way cannot provide cross OpenStack networking > automation for tenant. 2) exactly, the motivation for cascading is "single > OpenStack API on top of globally distributed DCs". Of course, cascading can > also be used for DCs close to each other with high bandwidth and low > latency. 3) Folks comment from cells are welcome. > . > > >>> c). ETSI NFV use cases[6], especially use case #1, #2, #3, #5, #6, > >>> 8#. For NFV cloud, it?s in nature the cloud will be distributed but > >>> inter-connected in many data centers. > > >I'm afraid I don't understand this one. In many conversations about NFV, > I haven't heard this before. > > [joehuang] This is the ETSI requirement and use cases specification for > NFV. ETSI is the home of the Industry Specification Group for NFV. In > Figure 14 (virtualization of EPC) of this document, you can see that the > operator's cloud including many data centers to provide connection service > to end user by inter-connected VNFs. 
The requirements listed in ( > https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about the > requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW etc) to run > over cloud, eg. migrate the traditional telco. APP from prosperity hardware > to cloud. Not all NFV requirements have been covered yet. Forgive me there > are so many telco terms here. > > >> > >>> 2.requirements > >>> a). The operator has multiple sites cloud; each site can use one or > >>> multiple vendor?s OpenStack distributions. > > >Is this a technical problem, or is a business problem of vendors not > >wanting to support a mixed environment that you're trying to work > >around with a technical solution? > > [joehuang] Pls. refer to VDF use case, the multi-vendor policy has been > stated very clearly: 1) Local relationships: Operating Companies also have > long standing relationships to their own choice of vendors; 2) Multi-Vendor > :Each site can use one or multiple vendors which leads to better use of > local resources and capabilities. Technical solution must be provided for > multi-vendor integration and verification, It's usually ETSI standard in > the past for mobile network. But how to do that in multi-vendor's cloud > infrastructure? Cascading provide a way to use OpenStack API as the > integration interface. > > > > How would something like flavors work across multiple vendors. The > OpenStack API doesn't have any hard coded names and sizes for flavors. So a > flavor such as m1.tiny may actually be very different vendor to vendor. > > > > > >> b). Each site with its own requirements and upgrade schedule while > >> maintaining standard OpenStack API c). The multi-site cloud must > >> provide unified resource management with global Open API exposed, for > >> example create virtual DC cross multiple physical DCs with seamless > >> experience. 
> > >> Although a prosperity orchestration layer could be developed for the > >> multi-site cloud, but it's prosperity API in the north bound > >> interface. The cloud operators want the ecosystem friendly global > >> open API for the mutli-site cloud for global access. > > >I guess the question is, do we see a "global API" as something we want > >to accomplish. What you're talking about is huge, and I'm not even sure > >how you would expect it to work in some cases (like networking). > > [joehuang] Yes, the most challenge part is networking. In the PoC, L2 > networking cross OpenStack is to leverage the L2 population mechanism.The > L2proxy for DC1 in the cascading layer will detect the new VM1(in DC1)'s > port is up, and then ML2 L2 population will be activated, the VM1's > tunneling endpoint- host IP or L2GW IP in DC1, will be populated to L2proxy > for DC2, and L2proxy for DC2 will create a external port in the DC2 Neutron > with the VM1's tunneling endpoint- host IP or L2GW IP in DC1. The external > port will be attached to the L2GW or only external port created, L2 > population(if not L2GW used) inside DC2 can be activated to notify all VMs > located in DC2 for the same L2 network. For L3 networking finished in the > PoC is to use extra route over GRE to serve local VLAN/VxLAN networks > located in different DCs. Of course, other L3 networking method can be > developed, for example, through VPN service. There are 4 or 5 BPs talking > about edge network gateway to connect OpenStack tenant network to outside > network, all these technologies can be leveraged to do cross OpenStack > networking for different scenario. To experience the cross OpenStack > networking, please try PoC source code: > https://github.com/stackforge/tricircle > > >In any case, to be as clear as possible, I'm not convinced this is > something > >we should be working on. I'm going to need to see much more > >overwhelming support for the idea before helping to figure out any > further steps. 
> > [joehuang] If you or any other have any doubts, please feel free to ignite > a discussion thread. For time difference reason, we (working in China) are > not able to join most of IRC meeting, so mail-list is a good way for > discussion. > > Russell Bryant > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > Best Regards > > Chaoyi Huang ( joehuang ) > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Fri Dec 12 03:41:08 2014 From: dms at danplanet.com (Dan Smith) Date: Thu, 11 Dec 2014 19:41:08 -0800 Subject: [openstack-dev] =?windows-1252?q?=5Ball=5D_=5Btc=5D_=5BPTL=5D_Cas?= =?windows-1252?q?cading_vs=2E_Cells_=96_summit_recap_and_move_forward?= In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD479@szxema505-mbs.china.huawei.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> 
<5E7A3D1BF5FD014E86E5F971CF446EFF541FD479@szxema505-mbs.china.huawei.com> Message-ID: <548A63D4.6080901@danplanet.com> > [joehuang] Could you pls. make it more clear for the deployment mode > of cells when used for globally distributed DCs with single API. Do > you mean cinder/neutron/glance/ceilometer will be shared by all > cells, and use RPC for inter-dc communication, and only support one > vendor's OpenStack distribution? How to do the cross data center > integration and troubleshooting with RPC if the > driver/agent/backend(storage/network/server) from different vendor. Correct, cells only applies to single-vendor distributed deployments. In both its current and future forms, it uses private APIs for communication between the components, and thus isn't suited for a multi-vendor environment. Just MHO, but building functionality into existing or new components to allow deployments from multiple vendors to appear as a single API endpoint isn't something I have much interest in. --Dan From jay.lau.513 at gmail.com Fri Dec 12 03:59:01 2014 From: jay.lau.513 at gmail.com (Jay Lau) Date: Fri, 12 Dec 2014 11:59:01 +0800 Subject: [openstack-dev] [heat-docker]Does the heat-docker supports auto-scaling and monitoring the Docker container? In-Reply-To: References: Message-ID: So you are using the heat docker driver, not the nova docker driver, right? If you are using the nova docker driver, then the container is treated as a VM and you can do monitoring and auto-scaling with heat. But with the heat docker driver, it talks to the docker host directly, which you need to define in the Heat template, and there is no monitoring in that case.
Also, there is manual scaling: use "heat resource-signal" to scale up your stack manually. For auto-scaling, IMHO, you may want to integrate with some 3rd-party monitor and do some development work to reach this. Thanks. 2014-12-12 11:19 GMT+08:00 Chenliang (L) : > > Hi, > We can now deploy Docker containers in an OpenStack environment using > Heat, but I am confused. > Could someone tell me whether heat-docker supports monitoring the > Docker containers in a stack, and how to monitor them? > Does it support auto-scaling the Docker containers? > > > Best Regards, > -- Liang Chen > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanks, Jay From ryu at midokura.com Fri Dec 12 04:21:36 2014 From: ryu at midokura.com (Ryu Ishimoto) Date: Fri, 12 Dec 2014 13:21:36 +0900 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: <20141211104137.GD23831@redhat.com> References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> <20141211104137.GD23831@redhat.com> Message-ID: On Thu, Dec 11, 2014 at 7:41 PM, Daniel P. Berrange wrote: > > Yes, I really think this is a key point. When we introduced the VIF type > mechanism we never intended for there to be so many different VIF types > created. There is a very small, finite number of possible ways to configure > the libvirt guest XML and it was intended that the VIF types pretty much > mirror that. This would have given us about 8 distinct VIF types maximum. > > I think the reason for the larger than expected number of VIF types is > that the drivers are being written to require some arbitrary tools to > be invoked in the plug & unplug methods.
It would really be better if > those could be accomplished in the Neutron code than the Nova code, via > a host agent run & provided by the Neutron mechanism. This would let > us have a very small number of VIF types and so avoid the entire problem > that this thread is bringing up. > > Failing that though, I could see a way to accomplish a similar thing > without a Neutron launched agent. If one of the VIF type binding > parameters were the name of a script, we could run that script on > plug & unplug. So we'd have a finite number of VIF types, and each > new Neutron mechanism would merely have to provide a script to invoke > > eg consider the existing midonet & iovisor VIF types as an example. > Both of them use the libvirt "ethernet" config, but have different > things running in their plug methods. If we had a mechanism for > associating a "plug script" with a vif type, we could use a single > VIF type for both. > > eg iovisor port binding info would contain > > vif_type=ethernet > vif_plug_script=/usr/bin/neutron-iovisor-vif-plug > > while midonet would contain > > vif_type=ethernet > vif_plug_script=/usr/bin/neutron-midonet-vif-plug > > > And so you see implementing a new Neutron mechanism in this way would > not require *any* changes in Nova whatsoever. The work would be entirely > self-contained within the scope of Neutron. It is simply a packaging > task to get the vif script installed on the compute hosts, so that Nova > can execute it. > > This is essentially providing a flexible VIF plugin system for Nova, > without having to have it plug directly into the Nova codebase with > the API & RPC stability constraints that implies. > > +1 Port binding mechanism could vary among different networking technologies, which is not nova's concern, so this proposal makes sense. 
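The plug-script proposal above can be sketched on the Nova side in a few lines. This is a hypothetical sketch, not Nova code: the binding keys mirror Daniel's example, but the script's argument convention (device name, port UUID, MAC) is an assumption, since the thread explicitly leaves the calling convention open.

```python
# Hypothetical sketch of the proposal above: Nova reads a plug script path
# out of the Neutron port-binding details and execs it for vif_type=ethernet.
# The argument convention (devname, port id, MAC) is an assumption of this
# sketch; the thread leaves the script's interface undecided.
import subprocess

def plug_vif(binding, devname, port_id, mac_address):
    script = binding.get("vif_plug_script")
    if binding.get("vif_type") == "ethernet" and script:
        # e.g. /usr/bin/neutron-midonet-vif-plug tap0 <port-uuid> <mac>
        subprocess.check_call([script, devname, port_id, mac_address])
```

Under this scheme a new Neutron mechanism ships its own script and Nova never changes, which is the point Daniel makes; Ryu's open question of how Nova learns the arguments would have to be settled by fixing a convention like the one assumed here, or by passing the argument list in the binding details as well.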
Note that some vendors already provide port binding scripts that are currently executed directly from nova's vif.py ('mm-ctl' of midonet and 'ifc_ctl' for iovisor are two such examples), and this proposal makes it unnecessary to have these hard-coded in nova. The only question I have is, how would nova figure out the arguments for these scripts? Should nova dictate what they are? Ryu -------------- next part -------------- An HTML attachment was scrubbed... URL: From Abhijeet.Malawade at nttdata.com Fri Dec 12 05:54:04 2014 From: Abhijeet.Malawade at nttdata.com (Malawade, Abhijeet) Date: Fri, 12 Dec 2014 05:54:04 +0000 Subject: [openstack-dev] [python-cinderclient] Return request ID to caller Message-ID: <588CEB80D6885A41804FDE568C7D81BE56F3CAEE@MAIL703.KDS.KEANE.COM> HI, I want your thoughts on blueprint 'Log Request ID Mappings' for cross projects. BP: https://blueprints.launchpad.net/nova/+spec/log-request-id-mappings It will enable operators to get request id's mappings easily and will be useful in analysing logs effectively. For logging 'Request ID Mappings', client needs to return 'x-openstack-request-id' to the caller. Currently python-cinderclient do not return 'x-openstack-request-id' back to the caller. As of now, I could think of below two solutions to return 'request-id' back from cinder-client to the caller. 1. Return tuple containing response header and response body from all cinder-client methods. (response header contains 'x-openstack-request-id'). Advantages: A. In future, if the response headers are modified then it will be available to the caller without making any changes to the python-cinderclient code. Disadvantages: A. Affects all services using python-cinderclient library as the return type of each method is changed to tuple. B. Need to refactor all methods exposed by the python-cinderclient library. Also requires changes in the cross projects wherever python-cinderclient calls are being made. Ex. 
:- From Nova, you will need to call the cinder-client 'get' method like below :- resp_header, volume = cinderclient(context).volumes.get(volume_id) request_id = resp_header.get('x-openstack-request-id', None) Here cinder-client will return both the response header and the volume. From the response header, you can get 'x-openstack-request-id'. 2. The optional parameter 'return_req_id' of type list will be passed to each cinder-client method. If this parameter is passed, then cinder-client will append the 'x-openstack-request-id' received from the cinder api to this list. This is already implemented in glance-client (for the V1 api only). Blueprint : https://blueprints.launchpad.net/python-glanceclient/+spec/return-req-id Review link : https://review.openstack.org/#/c/68524/7 Advantages: A. Requires changes in the cross projects only at places wherever python-cinderclient calls are being made requiring 'x-openstack-request-id'. Disadvantages: A. Need to refactor all methods exposed by the python-cinderclient library. Ex. :-
If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. -------------- next part -------------- An HTML attachment was scrubbed... URL: From travis.tripp at hp.com Fri Dec 12 06:08:13 2014 From: travis.tripp at hp.com (Tripp, Travis S) Date: Fri, 12 Dec 2014 06:08:13 +0000 Subject: [openstack-dev] [horizon] REST and Django In-Reply-To: References: <547DD08A.7000402@redhat.com> Message-ID: I just re-read and I apologize for the hastily written email I previously sent. I?ll try to salvage it with a bit of a revision below (please ignore the previous email). On 12/11/14, 7:02 PM, "Tripp, Travis S" wrote (REVISED): >Tihomir, > >Your comments in the patch were very helpful for me to understand your >concerns about the ease of customizing without requiring upstream >changes. It also reminded me that I?ve also previously questioned the >python middleman. > >However, here are a couple of bullet points for Devil?s Advocate >consideration. > > > * Will we take on auto-discovery of API extensions in two spots >(python for legacy and JS for new)? > * The Horizon team will have to keep an even closer eye on every >single project and be ready to react if there are changes to the API that >break things. Right now in Glance, for example, they are working on some >fixes to the v2 API (soon to become v2.3) that will allow them to >deprecate v1 somewhat transparently to users of the client library. > * The service API documentation almost always lags (although, helped >by specs now) and the service team takes on the burden of exposing a >programmatic way to access the API. This is tested and easily consumable >via the python clients, which removes some guesswork from using the >service. > * This is going to be an incremental approach with legacy support >requirements anyway. So, incorporating python side changes won?t just go >away. 
> * Which approach would be better if we introduce a server side >caching mechanism or a new source of data such as elastic search to >improve performance? Would the client side code have to be changed >dramatically to take advantage of those improvements, or could it be done >transparently on the server side if we own the exposed API? > >I'm not sure I fully understood your example about Cinder. Was it the >cinder client that held up delivery of horizon support, the cinder API, or >both? If the API isn't in, then it would hold up delivery of the feature >in any case. There still would be timing pressures to react and build a >new view that supports it. For customization, with Richard's approach new >views could be supported by just dropping in a new REST API decorated >module with the APIs you want, including direct pass-through support if >desired to new APIs. Downstream customizations / upstream changes to >views seem like a bit of a related, but different issue to me as >long as there is an easy way to drop in new API support. > >Finally, regarding the client making two calls to do an update: > >>>Do we really need the lines: > >>> project = api.keystone.tenant_get(request, id) >>> kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None) > >I agree that if you already have all the data it may be bad to have to do >another call. I do think there is room for discussing the reasoning, >though. >As far as I can tell, they do this so that if you are updating an entity, >you have to be very specific about the fields you are changing. I >actually see this as potentially a protective measure against data >loss and sometimes a very nice-to-have feature. It perhaps was intended >to *help* guard against race conditions (no locking and no transactions >with many users simultaneously accessing the data). > >Here's an example: Admin user Joe has a Domain open and stares at it for >15 minutes while he updates just the description.
Admin user Bob is asked >to go ahead and enable it. He opens the record, edits it, and then saves >it. Joe finished perfecting the description and saves it. They could in >effect both edit the same domain independently. Last man in still wins if >he updates the same fields, but if they update different fields then both >of their changes will take affect without them stomping on each other. Or >maybe it is intended to encourage client users to compare their current >and previous to see if they should issue a warning if the data changed >between getting and updating the data. Or maybe like you said, it is just >overhead API calls. > >From: Tihomir Trifonov > >Reply-To: OpenStack List >g>> >Date: Thursday, December 11, 2014 at 7:53 AM >To: OpenStack List >g>> >Subject: Re: [openstack-dev] [horizon] REST and Django > >?? >Client just needs to know which URL to hit in order to invoke a certain >API, and does not need to know the procedure name or parameters ordering. > > >?That's where the difference is. I think the client has to know the >procedure name and parameters. Otherwise? we have a translation factory >pattern, that converts one naming convention to another. And you won't be >able to call any service API if there is no code in the middleware to >translate it to the service API procedure name and parameters. To avoid >this - we can use a transparent proxy model - direct mapping of a client >call to service API naming, which can be done if the client invokes the >methods with the names in the service API, so that the middleware will >just pass parameters, and will not translate. Instead of: > > >updating user data: > > => /keystone/update/ > => > >we may use: > > => > >=> > > >?The idea here is that if we have keystone 4.0 client, ?we will have to >just add it to the clients [] list and nothing more is required at the >middleware level. Just create the frontend code to use the new Keystone >4.0 methods. 
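The "transparent proxy" Tihomir argues for, where the middleware does no per-method translation and simply maps a URL path straight onto a client method, can be sketched as a tiny dispatcher. The names here are illustrative, not Horizon code: the point is that adding Keystone 4.0 support is just one new entry in the clients table.

```python
# Minimal sketch of the transparent-proxy idea: a path such as
# /keystone/3.0/tenant_update is dispatched to the matching client method
# with the JSON body as keyword arguments, with no translation layer in
# between. The dispatch() signature and clients registry are hypothetical.

def dispatch(clients, path, body):
    _, service, version, method = path.split("/")
    client = clients[(service, version)]
    return getattr(client, method)(**body)
```

The trade-off debated in the thread is visible here: nothing hides new service-API features from the frontend, but nothing guards or adapts them either, so the frontend must track each service's method names and parameters itself.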
Otherwise we will have to add all new/different signatures >of 4.0 against 2.0/3.0 in the middleware in order to use Keystone 4.0. > >There is also a great example of using a pluggable/new feature in >Horizon. Do you remember the volume types support patch? The patch was >pending in Gerrit for a few months - first waiting for the cinder support for >volume types to go upstream, then waiting a few more weeks for review. I am >not sure, but as far as I remember, the Horizon patch even missed a >release milestone and was introduced in the next release. > >If we have a transparent middleware - this will no longer be an issue. As >long as someone has written the frontend modules (which should be easy to >add and customize), and they install the required version of the service >API - they will not need an updated Horizon to start using the feature. >Maybe I am not the right person to give examples here, but how many of >you had some kind of Horizon customization locally merged/patched >in your local distros/setups until the patch was pushed upstream? > >I will say it again. Nova, Keystone, Cinder, Glance etc. already have >stable public APIs. Why do we want to add the translation middleware and >to introduce another level of REST API? This layer will often hide new >features added to the service APIs and will delay their appearance in >Horizon. That's simply not needed. I believe it is possible to just wrap >the authentication in the middleware REST, but not to translate anything >as RPC methods/parameters. > > >And one more example: > >@rest_utils.ajax() >def put(self, request, id): > """Update a single project. > > The POST data should be an application/json object containing the > parameters to update: "name" (string), "description" (string), > "domain_id" (string) and "enabled" (boolean, defaults to true). > Additional, undefined parameters may also be provided, but you'll >have > to look deep into keystone to figure out what they might be.
> > This method returns HTTP 204 (no content) on success. > """ > project = api.keystone.tenant_get(request, id) > kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None) > api.keystone.tenant_update(request, project, **kwargs) > >"Do we really need the lines: > >project = api.keystone.tenant_get(request, id) >kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)" > >Since we update the project on the client, it is obvious that we >already fetched the project data. So we can simply send: > > >POST /keystone/3.0/tenant_update > >Content-Type: application/json > >{"id": cached.id, "domain_id": cached.domain_id, >"name": "new name", "description": "new description", "enabled": >cached.enabled} > >Fewer requests, faster application. > > From rakhmerov at mirantis.com Fri Dec 12 06:16:19 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Fri, 12 Dec 2014 12:16:19 +0600 Subject: [openstack-dev] [Mistral] Action context passed to all action executions by default In-Reply-To: References: Message-ID: > Maybe put all these different contexts under a kwarg called context? > > For example, > > ctx = { > "env": {...}, > "global": {...}, > "runtime": { > "execution_id": ..., > "task_id": ..., > ... > } > } > > action = SomeMistralAction(context=ctx) IMO, that is a nice idea. I like it and would go further with it unless someone else has any other thoughts. Renat From harlowja at outlook.com Fri Dec 12 06:16:54 2014 From: harlowja at outlook.com (Joshua Harlow) Date: Thu, 11 Dec 2014 22:16:54 -0800 Subject: [openstack-dev] [oslo] deprecation 'pattern' library?? In-Reply-To: References: Message-ID: Filed spec @ https://review.openstack.org/#/c/141220/ Comments welcome :-) -Josh Joshua Harlow wrote: > Ya, > > I too was surprised by the general lack of this kind of library on pypi. > > One would think u know that people deprecate stuff, but maybe this isn't > the norm for python...
Why deprecate when u can just make v2.0 ;) > > -Josh > > Davanum Srinivas wrote: >> Surprisingly "deprecator" is still available on pypi >> >> On Thu, Dec 11, 2014 at 2:04 AM, Julien Danjou wrote: >>> On Wed, Dec 10 2014, Joshua Harlow wrote: >>> >>> >>> [?] >>> >>>> Or in general any other comments/ideas about providing such a >>>> deprecation >>>> pattern library? >>> +1 >>> >>>> * debtcollector >>> made me think of "loanshark" :)" >>> >>> -- >>> Julien Danjou >>> -- Free Software hacker >>> -- http://julien.danjou.info >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From henry4hly at gmail.com Fri Dec 12 06:29:25 2014 From: henry4hly at gmail.com (henry hly) Date: Fri, 12 Dec 2014 14:29:25 +0800 Subject: [openstack-dev] =?utf-8?b?W2FsbF0gW3RjXSBbUFRMXSBDYXNjYWRpbmcg?= =?utf-8?q?vs=2E_Cells_=E2=80=93_summit_recap_and_move_forward?= In-Reply-To: <548A63D4.6080901@danplanet.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD479@szxema505-mbs.china.huawei.com> 
<548A63D4.6080901@danplanet.com> Message-ID: On Fri, Dec 12, 2014 at 11:41 AM, Dan Smith wrote: >> [joehuang] Could you pls. make it clearer what the deployment mode >> of cells is when used for globally distributed DCs with a single API. Do >> you mean cinder/neutron/glance/ceilometer will be shared by all >> cells, and use RPC for inter-DC communication, and only support one >> vendor's OpenStack distribution? How to do cross data center >> integration and troubleshooting with RPC if the >> driver/agent/backend (storage/network/server) come from different vendors? > > Correct, cells only applies to single-vendor distributed deployments. In > both its current and future forms, it uses private APIs for > communication between the components, and thus isn't suited for a > multi-vendor environment. > > Just MHO, but building functionality into existing or new components to > allow deployments from multiple vendors to appear as a single API > endpoint isn't something I have much interest in. > > --Dan > Even with the same distribution, cells still face many challenges across multiple DCs connected over a WAN. Considering OAM, it's easier to manage autonomous systems connected through an external northbound interface across remote sites than a single monolithic system connected through internal RPC messages. Although cells did some separation and modularization (not to mention it still uses internal RPC across the WAN), it leaves out cinder, neutron, and ceilometer. Shall we wait for all these projects to re-factor into a cell-like hierarchical structure, or adopt a more loosely coupled way, distributing them into autonomous units each built on a whole OpenStack (except Keystone, which can handle multiple regions naturally)? As we can see, compared with cells, much less work is needed to build a cascading solution. No patch is needed except for Neutron (waiting for some upcoming features that did not land in Juno); nearly all the work lies in the proxy, which is in fact another kind of driver/agent.
Best Regards Henry > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From yamamoto at valinux.co.jp Fri Dec 12 06:43:10 2014 From: yamamoto at valinux.co.jp (YAMAMOTO Takashi) Date: Fri, 12 Dec 2014 15:43:10 +0900 (JST) Subject: [openstack-dev] [Neutron] XenAPI questions In-Reply-To: Your message of "Thu, 11 Dec 2014 10:55:19 +0000" References: Message-ID: <20141212064310.9998970A0B@kuma.localdomain> hi, good to hear. do you have any estimate when it will be available? will it cover dom0 side of the code found in neutron/plugins/openvswitch/agent/xenapi? YAMAMOTO Takashi > Hi Yamamoto, > > XenAPI and Neutron do work well together, and we have an private CI that is running Neutron jobs. As it's not currently the public CI it's harder to access logs. > We're working on trying to move the existing XenServer CI from a nova-network base to a neutron base, at which point the logs will of course be publically accessible and tested against any changes, thus making it easy to answer questions such as the below. > > Bob > >> -----Original Message----- >> From: YAMAMOTO Takashi [mailto:yamamoto at valinux.co.jp] >> Sent: 11 December 2014 03:17 >> To: openstack-dev at lists.openstack.org >> Subject: [openstack-dev] [Neutron] XenAPI questions >> >> hi, >> >> i have questions for XenAPI folks: >> >> - what's the status of XenAPI support in neutron? >> - is there any CI covering it? i want to look at logs. >> - is it possible to write a small program which runs with the xen >> rootwrap and proxies OpenFlow channel between domains? >> (cf. https://review.openstack.org/#/c/138980/) >> >> thank you. 
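The "small program which runs with the xen rootwrap and proxies OpenFlow channel between domains" asked about above is, at its core, a byte-level relay between two sockets. Purely as an illustration of that shape - none of the XenAPI, rootwrap, or OpenFlow specifics are shown, and the addresses are placeholders - a minimal sketch might look like:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass  # the other direction closed the sockets under us
    finally:
        for s in (src, dst):
            s.close()

def relay(listen_addr, upstream_addr):
    """Accept one connection and splice it to the upstream endpoint."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(listen_addr)
    server.listen(1)
    client, _ = server.accept()
    upstream = socket.create_connection(upstream_addr)
    # one thread per direction, so traffic flows both ways concurrently
    t = threading.Thread(target=pipe, args=(client, upstream))
    t.start()
    pipe(upstream, client)
    t.join()
```

A real dom0/domU proxy would additionally have to deal with how the channel endpoint is exposed in each domain; this only shows the forwarding loop itself.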
>> >> YAMAMOTO Takashi >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From joehuang at huawei.com Fri Dec 12 07:10:46 2014 From: joehuang at huawei.com (joehuang) Date: Fri, 12 Dec 2014 07:10:46 +0000 Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward In-Reply-To: References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD490@szxema505-mbs.china.huawei.com> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD5B9@szxema505-mbs.china.huawei.com> Hi Joe, Thank you to lead us to deep diving into cascading. My answer is listed below your question. > I don't think this is sufficient. If the underlying hardware the > between multiple vendors is different setting the same values for > a flavor will result in different performance characteristics. > For example, nova allows for setting VCPUs, but nova doesn't provide > an easy way to define how powerful a VCPU is. 
Also flavors are commonly > hardware dependent, take what rackspace offers: > http://www.rackspace.com/cloud/public-pricing#cloud-servers > Rackspace has "I/O Optimized" flavors > * High-performance, RAID 10-protected SSD storage > * Option of booting from Cloud Block Storage (additional charges apply > for Cloud Block Storage) > * Redundant 10-Gigabit networking > * Disk I/O scales with the number of data disks up to ~80,000 4K random > read IOPS and ~70,000 4K random write IOPS.* > How would cascading support something like this? [joehuang] Just a reminder that cascading works like a normal OpenStack: if something can be solved by one OpenStack instance, it should also be feasible in cascading, through the self-similar mechanism used (just treat the cascaded OpenStack as one huge compute node). The only difference between a cascading OpenStack and a normal OpenStack is the agent/driver processes running on the compute node / cinder-volume node / L2/L3 agent. Let me give an example of how the issues you mentioned can be solved in cascading. Suppose that we have one cascading OpenStack (OpenStack0) and two cascaded OpenStacks (OpenStack1, OpenStack2). For OpenStack1: there are 5 compute nodes in OpenStack1 with "High-performance, RAID 10-protected SSD storage"; we can add these 5 nodes to host aggregate "SSD" with extra spec (Storage:SSD). There are another 5 nodes booting from Cloud Block Storage; we can add these 5 nodes to host aggregate "cloudstorage" with extra spec (Storage:cloud). All these 10 nodes belong to AZ1 (availability zone 1). For OpenStack2: there are 5 compute nodes in OpenStack2 with "Redundant 10-Gigabit networking"; we can add these 5 nodes to host aggregate "SSD" with extra spec (Storage:SSD). There are another 5 nodes with random volume access with a QoS requirement; we can add these 5 nodes to host aggregate "randomio" with extra spec (IO:random). All these 10 nodes belong to AZ2 (availability zone 2).
We can define a volume QoS associated with volume type vol-type-random-qos. In the cascading OpenStack, add compute-node1 as the proxy node (proxy-node1) for the cascaded OpenStack1, and add compute-node2 as the proxy node (proxy-node2) for the cascaded OpenStack2. From the information described for the cascaded OpenStacks, add proxy-node1 to AZ1 and host aggregates "SSD" and "cloudstorage", and add proxy-node2 to AZ2 and host aggregates "SSD" and "randomio"; the cinder-proxy running on proxy-node2 will retrieve the volume type with its QoS information from the cascaded OpenStack2. After that, the tenant user or the cloud admin can define a flavor with extra specs which will be matched against the host-aggregate specs. In the cascading layer, you need to configure the corresponding scheduler filters. Now: if you boot a VM in AZ1 with flavor (Storage:SSD), the request will be scheduled to proxy-node1 and reassembled as a RESTful request to the cascaded OpenStack1, and the nodes that were added to the SSD host aggregate will be scheduled just as a normal OpenStack would do. If you boot a VM in AZ2 with flavor (Storage:SSD), the request will be scheduled to proxy-node2 and reassembled as a RESTful request to the cascaded OpenStack2, and the nodes that were added to the SSD host aggregate will be scheduled just as a normal OpenStack would do. But if you boot a VM in AZ2 with flavor (randomio), the request will be scheduled to proxy-node2 and reassembled as a RESTful request to the cascaded OpenStack2, and the nodes that were added to the randomio host aggregate will be scheduled just as a normal OpenStack would do. If you attach a volume created with the volume type "vol-type-random-qos" in AZ2 to VM2, the QoS for the VM's access to the volume will also take effect.
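The aggregate/extra-spec matching that drives this scheduling can be sketched in a few lines. This is only an illustration of the filtering idea (roughly what nova's aggregate extra-spec filtering does), not cascading code; the node names and metadata mirror the example above, and real aggregate metadata is richer (values merge into sets when a host sits in several aggregates):

```python
def host_passes(metadata, extra_specs):
    """A proxy node passes only if every flavor extra spec
    is satisfied by its aggregate metadata."""
    return all(metadata.get(k) == v for k, v in extra_specs.items())

# Proxy nodes as the cascading-level scheduler sees them (illustrative data).
proxy_nodes = {
    "proxy-node1": {"az": "AZ1", "Storage": "SSD"},
    "proxy-node2": {"az": "AZ2", "Storage": "SSD", "IO": "random"},
}

def schedule(az, extra_specs):
    """Pick the proxy nodes in the requested AZ whose metadata
    matches the flavor's extra specs."""
    return [name for name, meta in proxy_nodes.items()
            if meta["az"] == az and host_passes(meta, extra_specs)]
```

So a boot in AZ2 with an (IO:random) flavor lands on proxy-node2, which re-issues the request to the cascaded OpenStack2, where the same kind of filtering repeats over the real compute nodes.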
I just gave a relatively easy-to-understand example; more complicated use cases can also be handled using cascading's self-similar mechanism, which I call the FRACTAL (fantastic math) pattern (https://www.linkedin.com/pulse/20140729022031-23841540-openstack-cascading-and-fractal?trk=prof-post). Best regards Chaoyi Huang ( joehuang ) From: Joe Gordon [mailto:joe.gordon0 at gmail.com] Sent: Friday, December 12, 2014 11:29 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward On Thu, Dec 11, 2014 at 6:25 PM, joehuang > wrote: Hello, Joe Thank you for your good question. Question: How would something like flavors work across multiple vendors? The OpenStack API doesn't have any hard coded names and sizes for flavors. So a flavor such as m1.tiny may actually be very different vendor to vendor. Answer: The flavor is defined by the cloud operator in the cascading OpenStack. And Nova-proxy (which is the driver for "Nova as hypervisor") will sync the flavor to the cascaded OpenStack when it is first used in the cascaded OpenStack. If the flavor is changed before a new VM is booted, the changed flavor will also be updated in the cascaded OpenStack just before the new VM boot request. Through this synchronization mechanism, all flavors used in the multi-vendor cascaded OpenStacks will be kept the same as those used at the cascading level, providing a consistent view of flavors. I don't think this is sufficient. If the underlying hardware between multiple vendors is different, setting the same values for a flavor will result in different performance characteristics. For example, nova allows for setting VCPUs, but nova doesn't provide an easy way to define how powerful a VCPU is.
Also flavors are commonly hardware dependent, take what rackspace offers: http://www.rackspace.com/cloud/public-pricing#cloud-servers Rackspace has "I/O Optimized" flavors * High-performance, RAID 10-protected SSD storage * Option of booting from Cloud Block Storage (additional charges apply for Cloud Block Storage) * Redundant 10-Gigabit networking * Disk I/O scales with the number of data disks up to ~80,000 4K random read IOPS and ~70,000 4K random write IOPS.* How would cascading support something like this? Best Regards Chaoyi Huang ( joehuang ) From: Joe Gordon [mailto:joe.gordon0 at gmail.com] Sent: Friday, December 12, 2014 8:17 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward On Thu, Dec 11, 2014 at 1:02 AM, joehuang > wrote: Hello, Russell, Many thanks for your reply. See inline comments. -----Original Message----- From: Russell Bryant [mailto:rbryant at redhat.com] Sent: Thursday, December 11, 2014 5:22 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward >> On Fri, Dec 5, 2014 at 8:23 AM, joehuang > wrote: >>> Dear all & TC & PTL, >>> >>> In the 40-minute cross-project summit session "Approaches for >>> scaling out" [1], almost 100 people attended the meeting, and the >>> conclusion is that cells cannot cover the use cases and >>> requirements which the OpenStack cascading solution [2] aims to >>> address; the background, including use cases and requirements, is also >>> described in the mail. >I must admit that this was not the reaction I came away from the discussion with. >There was a lot of confusion, and as we started looking closer, many (or perhaps most) >people speaking up in the room did not agree that the requirements being stated are >things we want to try to satisfy. [joehuang] Could you pls.
confirm your opinion: 1) cells cannot cover the use cases and requirements which the OpenStack cascading solution aims to address. 2) Further discussion is needed on whether to satisfy the use cases and requirements. On 12/05/2014 06:47 PM, joehuang wrote: >>> Hello, Davanum, >>> >>> Thanks for your reply. >>> >>> Cells can't meet the demand for the use cases and requirements described in the mail. >You're right that cells doesn't solve all of the requirements you're discussing. >Cells addresses scale in a region. My impression from the summit session > and other discussions is that the scale issues addressed by cells are considered > a priority, while the "global API" bits are not. [joehuang] Agreed that cells is the first-class priority. >>> 1. Use cases >>> a). Vodafone use case [4] (OpenStack summit speech video from 9'02" >>> to 12'30"), establishing globally addressable tenants which result >>> in efficient services deployment. > Keystone has been working on federated identity. >That part makes sense, and is already well under way. [joehuang] The major challenge for the VDF use case is cross-OpenStack networking for tenants. The tenant's VMs/volumes may be allocated in geographically different data centers, but a virtual network (L2/L3/FW/VPN/LB) should be built for each tenant automatically and isolated between tenants. Keystone federation can help automate authorization, but the cross-OpenStack network automation challenge is still there. Using a proprietary orchestration layer can solve the automation issue, but VDF doesn't like a proprietary API in the north-bound, because no ecosystem is available. And other issues, for example how to distribute images, also cannot be solved by Keystone federation. >>> b). Telefonica use case [5], create a virtual DC (data center) across >>> multiple physical DCs with a seamless experience. >If we're talking about multiple DCs that are effectively local to each other >with high bandwidth and low latency, that's one conversation.
>My impression is that you want to provide a single OpenStack API on top of >globally distributed DCs. I honestly don't see that as a problem we should >be trying to tackle. I'd rather continue to focus on making OpenStack work >*really* well split into regions. > I think some people are trying to use cells in a geographically distributed way, > as well. I'm not sure that's a well understood or supported thing, though. > Perhaps the folks working on the new version of cells can comment further. [joehuang] 1) The split-region way cannot provide cross-OpenStack networking automation for tenants. 2) Exactly, the motivation for cascading is a "single OpenStack API on top of globally distributed DCs". Of course, cascading can also be used for DCs close to each other with high bandwidth and low latency. 3) Comments from the cells folks are welcome. >>> c). ETSI NFV use cases [6], especially use cases #1, #2, #3, #5, #6, >>> #8. For NFV cloud, it's in its nature that the cloud will be distributed but >>> inter-connected across many data centers. >I'm afraid I don't understand this one. In many conversations about NFV, I haven't heard this before. [joehuang] This is the ETSI requirement and use case specification for NFV. ETSI is the home of the Industry Specification Group for NFV. In Figure 14 (virtualization of EPC) of this document, you can see that the operator's cloud includes many data centers to provide connection services to end users through inter-connected VNFs. The requirements listed in (https://wiki.openstack.org/wiki/TelcoWorkingGroup) are mainly about the requirements for specific VNFs (like IMS, SBC, MME, HSS, S/P GW etc.) to run over the cloud, e.g. migrating traditional telco applications from proprietary hardware to the cloud. Not all NFV requirements have been covered yet. Forgive me, there are so many telco terms here. >> >>> 2. Requirements >>> a). The operator has a multi-site cloud; each site can use one or >>> multiple vendors' OpenStack distributions.
>Is this a technical problem, or is it a business problem of vendors not >wanting to support a mixed environment that you're trying to work >around with a technical solution? [joehuang] Pls. refer to the VDF use case; the multi-vendor policy has been stated very clearly: 1) Local relationships: Operating Companies also have long-standing relationships with their own choice of vendors; 2) Multi-vendor: each site can use one or multiple vendors, which leads to better use of local resources and capabilities. A technical solution must be provided for multi-vendor integration and verification. In the past, for mobile networks, that role was usually filled by ETSI standards. But how to do that in a multi-vendor cloud infrastructure? Cascading provides a way to use the OpenStack API as the integration interface. How would something like flavors work across multiple vendors? The OpenStack API doesn't have any hard coded names and sizes for flavors. So a flavor such as m1.tiny may actually be very different vendor to vendor. >> b). Each site with its own requirements and upgrade schedule while >> maintaining standard OpenStack API c). The multi-site cloud must >> provide unified resource management with a global open API exposed, for >> example creating a virtual DC across multiple physical DCs with a seamless >> experience. >> Although a proprietary orchestration layer could be developed for the >> multi-site cloud, it would be a proprietary API in the north-bound >> interface. The cloud operators want an ecosystem-friendly global >> open API for the multi-site cloud for global access. >I guess the question is, do we see a "global API" as something we want >to accomplish. What you're talking about is huge, and I'm not even sure >how you would expect it to work in some cases (like networking). [joehuang] Yes, the most challenging part is networking.
In the PoC, L2 networking across OpenStacks leverages the L2 population mechanism. The L2 proxy for DC1 in the cascading layer will detect that the new VM1's (in DC1) port is up, and then ML2 L2 population will be activated: VM1's tunneling endpoint - host IP or L2GW IP in DC1 - will be populated to the L2 proxy for DC2, and the L2 proxy for DC2 will create an external port in the DC2 Neutron carrying VM1's tunneling endpoint - host IP or L2GW IP in DC1. The external port will be attached to the L2GW, or only the external port will be created; L2 population (if no L2GW is used) inside DC2 can then be activated to notify all VMs located in DC2 on the same L2 network. For L3 networking, what was finished in the PoC is to use extra routes over GRE to serve local VLAN/VxLAN networks located in different DCs. Of course, other L3 networking methods can be developed, for example through a VPN service. There are 4 or 5 BPs talking about edge network gateways to connect OpenStack tenant networks to outside networks; all these technologies can be leveraged to do cross-OpenStack networking for different scenarios. To experience the cross-OpenStack networking, please try the PoC source code: https://github.com/stackforge/tricircle >In any case, to be as clear as possible, I'm not convinced this is something >we should be working on. I'm going to need to see much more >overwhelming support for the idea before helping to figure out any further steps. [joehuang] If you or anyone else has any doubts, please feel free to ignite a discussion thread. For time difference reasons, we (working in China) are not able to join most IRC meetings, so the mailing list is a good way for discussion.
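Schematically, the port-up handling described above reduces to an event fan-out. Every name below is a stand-in for the PoC's internals (see the tricircle source for the real thing), not a real Neutron API; it only shows the bookkeeping shape of "advertise one DC's tunnel endpoint to the L2 proxies of the other DCs":

```python
class L2Proxy:
    """Stand-in for the L2 proxy of one cascaded OpenStack (hypothetical)."""
    def __init__(self, dc_name):
        self.dc_name = dc_name
        self.external_ports = []  # ports created for remote endpoints

    def create_external_port(self, network_id, mac, remote_endpoint):
        # In the PoC this would be a Neutron call in the remote DC;
        # here we only record what would be created.
        self.external_ports.append((network_id, mac, remote_endpoint))
        self.trigger_local_l2_population()

    def trigger_local_l2_population(self):
        # Would notify local VMs (or the L2GW) about the new endpoint.
        pass

def on_port_up(event, peer_proxies):
    """When VM1's port comes up in DC1, advertise its tunneling endpoint
    (host IP or L2GW IP) to the L2 proxies of all other DCs on that network."""
    for proxy in peer_proxies:
        proxy.create_external_port(
            network_id=event["network_id"],
            mac=event["mac"],
            remote_endpoint=event["endpoint"],
        )
```

A port-up event for VM1 in DC1 would then leave DC2's proxy holding an external port that points back at DC1's tunnel endpoint.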
Russell Bryant _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Best Regards Chaoyi Huang ( joehuang ) _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From joehuang at huawei.com Fri Dec 12 07:13:43 2014 From: joehuang at huawei.com (joehuang) Date: Fri, 12 Dec 2014 07:13:43 +0000 Subject: [openstack-dev] =?windows-1252?q?=5Ball=5D_=5Btc=5D_=5BPTL=5D_Cas?= =?windows-1252?q?cading_vs=2E_Cells_=96_summit_recap_and_move_forward?= In-Reply-To: <548A63D4.6080901@danplanet.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD479@szxema505-mbs.china.huawei.com> <548A63D4.6080901@danplanet.com> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FD5C7@szxema505-mbs.china.huawei.com> Hello, Dan, > Correct, cells only applies to single-vendor distributed deployments. 
> In both its current and future forms, it uses private APIs for > communication between the components, and thus isn't suited for a multi-vendor environment. Thank you for your confirmation. My question is: what are the "private APIs", and which components are included in the "communication between the components"? Best Regards Chaoyi Huang ( joehuang ) -----Original Message----- From: Dan Smith [mailto:dms at danplanet.com] Sent: Friday, December 12, 2014 11:41 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward > [joehuang] Could you pls. make it more clear for the deployment mode > of cells when used for globally distributed DCs with single API. Do > you mean cinder/neutron/glance/ceilometer will be shared by all cells, > and use RPC for inter-dc communication, and only support one vendor's > OpenStack distribution? How to do the cross data center integration > and troubleshooting with RPC if the > driver/agent/backend (storage/network/server) from different vendor. Correct, cells only applies to single-vendor distributed deployments. In both its current and future forms, it uses private APIs for communication between the components, and thus isn't suited for a multi-vendor environment. Just MHO, but building functionality into existing or new components to allow deployments from multiple vendors to appear as a single API endpoint isn't something I have much interest in.
--Dan From sgordon at redhat.com Fri Dec 12 08:10:00 2014 From: sgordon at redhat.com (Steve Gordon) Date: Fri, 12 Dec 2014 03:10:00 -0500 (EST) Subject: [openstack-dev] =?utf-8?b?W2FsbF0gW3RjXSBbUFRMXSBDYXNjYWRpbmcg?= =?utf-8?q?vs=2E_Cells_=E2=80=93_summit_recap_and_move_forward?= In-Reply-To: References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD479@szxema505-mbs.china.huawei.com> <548A63D4.6080901@danplanet.com> Message-ID: <27026559.21276.1418371797564.JavaMail.sgordon@localhost.localdomain> ----- Original Message ----- > From: "henry hly" > To: "OpenStack Development Mailing List (not for usage questions)" > > On Fri, Dec 12, 2014 at 11:41 AM, Dan Smith > wrote: > >> [joehuang] Could you pls. make it more clear for the deployment > >> mode > >> of cells when used for globally distributed DCs with single API. > >> Do > >> you mean cinder/neutron/glance/ceilometer will be shared by all > >> cells, and use RPC for inter-dc communication, and only support > >> one > >> vendor's OpenStack distribution? How to do the cross data center > >> integration and troubleshooting with RPC if the > >> driver/agent/backend(storage/network/sever) from different vendor. > > > > Correct, cells only applies to single-vendor distributed > > deployments. In > > both its current and future forms, it uses private APIs for > > communication between the components, and thus isn't suited for a > > multi-vendor environment. > > > > Just MHO, but building functionality into existing or new > > components to > > allow deployments from multiple vendors to appear as a single API > > endpoint isn't something I have much interest in. 
> > --Dan > > > > Even with the same distribution, cells still face many challenges > across multiple DCs connected over a WAN. Considering OAM, it's easier > to > manage autonomous systems connected with an external northbound interface > across remote sites, than a single monolithic system connected with > internal RPC messages. The key question here is whether this is primarily the role of OpenStack or of an external cloud management platform, and I don't profess to know the answer. What do people use (workaround or otherwise) for these use cases *today*? Another question I have is, one of the stated use cases is for managing OpenStack clouds from multiple vendors - is the implication here that some of these have additional divergent API extensions, or is the concern solely the incompatibilities inherent in communicating using the RPC mechanisms? If there are divergent API extensions, how is that handled from a proxying point of view if not all underlying OpenStack clouds necessarily support them (I guess the same applies when using distributions without additional extensions but of different versions - e.g. Icehouse vs Juno, which I believe was also a targeted use case?)? > Although cells did some separation and modularization (not to mention it's > still internal RPC across the WAN), it leaves out cinder, neutron, > ceilometer. Shall we wait for all these projects to re-factor into a > cell-like hierarchical structure, or adopt a more loosely coupled way, to > distribute them into autonomous units on the basis of a whole > OpenStack (except Keystone, which can handle multiple regions > naturally)? Similarly though, is the intent with Cascading that each new project would have to also implement and provide a proxy for use in these deployments? One of the challenges with maintaining/supporting the existing Cells implementation has been that it's effectively its own thing and as a result it is often not considered when adding new functionality.
> As we can see, compared with Cell, much less work is needed to build > a > Cascading solution, No patch is needed except Neutron (waiting some > upcoming features not landed in Juno), nearly all work lies in the > proxy, which is in fact another kind of driver/agent. Right, but the proxies still appear to be a not insignificant amount of code - is the intent not that the proxies would eventually reside within the relevant projects? I've been assuming yes but I am wondering if this was an incorrect assumption on my part based on your comment. Thanks, Steve From dpkshetty at gmail.com Fri Dec 12 08:28:26 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Fri, 12 Dec 2014 13:58:26 +0530 Subject: [openstack-dev] [nova][cinder][infra] Ceph CI status update In-Reply-To: <5489CE60.2040102@anteaya.info> References: <20141211163643.GA10911@helmut> <5489CE60.2040102@anteaya.info> Message-ID: On Thu, Dec 11, 2014 at 10:33 PM, Anita Kuno wrote: > On 12/11/2014 09:36 AM, Jon Bernard wrote: > > Heya, quick Ceph CI status update. Once the test_volume_boot_pattern > > was marked as skipped, only the revert_resize test was failing. I have > > submitted a patch to nova for this [1], and that yields an all green > > ceph ci run [2]. So at the moment, and with my revert patch, we're in > > good shape. > > > > I will fix up that patch today so that it can be properly reviewed and > > hopefully merged. From there I'll submit a patch to infra to move the > > job to the check queue as non-voting, and we can go from there. 
> > > > [1] https://review.openstack.org/#/c/139693/ > > [2] > http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html > > > > Cheers, > > > Please add the name of your CI account to this table: > https://wiki.openstack.org/wiki/ThirdPartySystems > > As outlined in the third party CI requirements: > http://ci.openstack.org/third_party.html#requirements > > Please post system status updates to your individual CI wikipage that is > linked to this table. > How is posting status there different from here: https://wiki.openstack.org/wiki/Cinder/third-party-ci-status thanx, deepak -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp at jamezpolley.com Fri Dec 12 08:33:24 2014 From: jp at jamezpolley.com (James Polley) Date: Fri, 12 Dec 2014 09:33:24 +0100 Subject: [openstack-dev] [TripleO] CI report: 2014-12-5 - 2014-12-11 Message-ID: In the week since the last email we've had no major CI failures. This makes it very easy for me to write my first CI report. There was a brief period where all the Ubuntu tests failed while an update was rolling out to various mirrors. DerekH worked around this quickly by dropping in a DNS hack, which remains in place. A long-term fix for this problem probably involves setting up our own apt mirrors. check-tripleo-ironic-overcloud-precise-ha remains flaky, and hence non-voting. As always, more details can be found here (although this week there's nothing to see) https://etherpad.openstack.org/p/tripleo-ci-breakages -------------- next part -------------- An HTML attachment was scrubbed...
URL: From raodingyuan at chinacloud.com.cn Fri Dec 12 08:37:03 2014 From: raodingyuan at chinacloud.com.cn (Rao Dingyuan) Date: Fri, 12 Dec 2014 16:37:03 +0800 Subject: [openstack-dev] [Openstack] [Ceilometer] [API] Batch alarm creation Message-ID: <0b9601d015e6$ce3510f0$6a9f32d0$@chinacloud.com.cn> Hi Eoghan and folks, I'm thinking of adding an API to create multiple alarms in a batch. I think adding an API to create multiple alarms is a good option to solve the problem that once an *alarm target* (a vm or a new group of vms) is created, multiple requests will be fired because multiple alarms are to be created. In our current project, this requirement is especially urgent since our alarm target is one VM, and 6 alarms are to be created when one VM is created. What do you guys think? Best Regards, Kurt Rao ----- Original ----- From: Eoghan Glynn [mailto:eglynn at redhat.com] Sent: 3 December 2014 20:34 To: Rao Dingyuan Cc: openstack at lists.openstack.org Subject: Re: [Openstack] [Ceilometer] looking for alarm best practice - please help > Hi folks, > > > > I wonder if anyone could share some best practice regarding the > usage of ceilometer alarms. We are using the alarm > evaluation/notification of ceilometer and we don't feel very comfortable with > the way we use it. Below is our > problem: > > > > ============================ > > Scenario: > > When cpu usage or memory usage is above a certain threshold, alerts > should be displayed on the admin's web page. There should be 3 levels of > alerts according to the meter value, namely notice, warning and fatal. Notice > means the meter value is between 50% ~ 70%, warning means between 70% > ~ 85% and fatal means above 85%. > > For example: > > * when one vm's cpu usage is 72%, an alert message should be displayed > saying > "Warning: vm[d9b7018b-06c4-4fba-8221-37f67f6c6b8c] cpu usage is above 70%".
> > * when one vm's memory usage is 90%, another alert message should be > created saying "Fatal: vm[d9b7018b-06c4-4fba-8221-37f67f6c6b8c] memory > usage is above 85%" > > > > Our current Solution: > > We used ceilometer alarm evaluation/notification to implement this. To > distinguish which VM and which meter is above what value, we've > created one alarm for each VM for each condition. So, to monitor 1 VM, > 6 alarms will be created because there are 2 meters and for each meter there are 3 levels. > That means, if there are 100 VMs to be monitored, 600 alarms will be > created. > > > > Problems: > > * The first problem is, when the number of meters increases, the > number of alarms will be multiplied. For example, a customer may want > alerts on disk and network IO rates, and if we do that, there will be > 4*3=12 alarms for each VM. > > * The second problem is, when one VM is created, multiple alarms will > be created, meaning multiple HTTP requests will be fired. In the case > above, 6 HTTP requests will be needed once a VM is created. And this > number also increases as the number of meters goes up. One way of reducing both the number of alarms and the volume of notifications would be to group related VMs, if such a concept exists in your use-case. This is effectively how Heat autoscaling uses ceilometer, alarming on the average of some statistic over a set of instances (as opposed to triggering on individual instances). The VMs could be grouped by setting user-metadata of the form: nova boot ... --meta metering.my_server_group=foobar Any user-metadata prefixed with 'metering.' will be preserved by ceilometer in the resource_metadata.user_metadata stored for each sample, so that it can be used to select the statistics on which the alarm is based, e.g. ceilometer alarm-threshold-create --name cpu_high_foobar \ --description 'warning: foobar instance group running hot' \ --meter-name cpu_util --threshold 70.0 \ --comparison-operator gt --statistic avg \ ...
--query metadata.user_metadata.my_server_group=foobar This approach is of course predicated on there being some natural grouping relation between instances in your environment. Cheers, Eoghan > ============================= > > > > Does anyone have any suggestions? > > > > > > > > Best Regards! > > Kurt Rao > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From sgordon at redhat.com Fri Dec 12 08:54:07 2014 From: sgordon at redhat.com (Steve Gordon) Date: Fri, 12 Dec 2014 03:54:07 -0500 (EST) Subject: [openstack-dev] [NFV][Telco] pxe-boot In-Reply-To: <548869F5.1040801@dektech.com.au> References: <20141125223355.Horde.8UWcgskXSBVvU-RJrCgbtw7@mail.dektech.com.au> <693765041.19659732.1417044729676.JavaMail.zimbra@redhat.com> <547ED500.1070008@dektech.com.au> <548869F5.1040801@dektech.com.au> Message-ID: <6017200.21424.1418374444199.JavaMail.sgordon@localhost.localdomain> ----- Original Message ----- > From: "Pasquale Porreca" > To: openstack-dev at lists.openstack.org > > Well, one of the main reason to choose an open source product is to > avoid vendor lock-in. I think it is not > > advisable to embed in the software running in an instance a call to > OpenStack specific services. Possibly a stupid question, but even if PXE boot were supported, would the SC not still have to trigger the creation of the PL instance(s) via a call to Nova anyway (albeit with boot media coming from PXE instead of Glance)? -Steve > On 12/10/14 00:20, Joe Gordon wrote: > On Wed, Dec 3, 2014 at 1:16 AM, Pasquale Porreca < > pasquale.porreca at dektech.com.au > wrote: > > > The use case we were thinking about is a Network Function (e.g. IMS > Nodes) implementation in which the high availability is based on > OpenSAF.
In this scenario there is an Active/Standby cluster of 2 > System Controllers (SC) plus several Payloads (PL) that boot from > network, controlled by the SC. The logic of which service to deploy > on each payload is inside the SC. > > In OpenStack both SCs and PLs will be instances running in the cloud, > anyway the PLs should still boot from network under the control of > the SC. In fact to use Glance to store the image for the PLs and > keep the control of the PLs in the SC, the SC should trigger the > boot of the PLs with requests to Nova/Glance, but an application > running inside an instance should not directly interact with a cloud > infrastructure service like Glance or Nova. > > Why not? This is a fairly common practice. > -- > Pasquale Porreca > > DEK Technologies > Via dei Castelli Romani, 22 > 00040 Pomezia (Roma) > > Mobile +39 3394823805 > Skype paskporr > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Steve Gordon, RHCE Sr. 
Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform From henry4hly at gmail.com Fri Dec 12 08:55:35 2014 From: henry4hly at gmail.com (henry hly) Date: Fri, 12 Dec 2014 16:55:35 +0800 Subject: [openstack-dev] =?utf-8?b?W2FsbF0gW3RjXSBbUFRMXSBDYXNjYWRpbmcg?= =?utf-8?q?vs=2E_Cells_=E2=80=93_summit_recap_and_move_forward?= In-Reply-To: <27026559.21276.1418371797564.JavaMail.sgordon@localhost.localdomain> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD479@szxema505-mbs.china.huawei.com> <548A63D4.6080901@danplanet.com> <27026559.21276.1418371797564.JavaMail.sgordon@localhost.localdomain> Message-ID: On Fri, Dec 12, 2014 at 4:10 PM, Steve Gordon wrote: > ----- Original Message ----- >> From: "henry hly" >> To: "OpenStack Development Mailing List (not for usage questions)" >> >> On Fri, Dec 12, 2014 at 11:41 AM, Dan Smith >> wrote: >> >> [joehuang] Could you pls. make it more clear for the deployment >> >> mode >> >> of cells when used for globally distributed DCs with single API. >> >> Do >> >> you mean cinder/neutron/glance/ceilometer will be shared by all >> >> cells, and use RPC for inter-dc communication, and only support >> >> one >> >> vendor's OpenStack distribution? How to do the cross data center >> >> integration and troubleshooting with RPC if the >> >> driver/agent/backend(storage/network/sever) from different vendor. >> > >> > Correct, cells only applies to single-vendor distributed >> > deployments. In >> > both its current and future forms, it uses private APIs for >> > communication between the components, and thus isn't suited for a >> > multi-vendor environment. 
>> > >> > Just MHO, but building functionality into existing or new >> > components to >> > allow deployments from multiple vendors to appear as a single API >> > endpoint isn't something I have much interest in. >> > >> > --Dan >> > >> >> Even with the same distribution, cell still face many challenges >> across multiple DC connected with WAN. Considering OAM, it's easier >> to >> manage autonomous systems connected with external northband interface >> across remote sites, than a single monolithic system connected with >> internal RPC message. > > The key question here is this primarily the role of OpenStack or an external cloud management platform, and I don't profess to know the answer. What do people use (workaround or otherwise) for these use cases *today*? Another question I have is, one of the stated use cases is for managing OpenStack clouds from multiple vendors - is the implication here that some of these have additional divergent API extensions or is the concern solely the incompatibilities inherent in communicating using the RPC mechanisms? If there are divergent API extensions, how is that handled from a proxying point of view if not all underlying OpenStack clouds necessarily support it (I guess same applies when using distributions without additional extensions but of different versions - e.g. Icehouse vs Juno which I believe was also a targeted use case?)? It's not about divergent northband API extension. Services between Openstack projects are SOA based, this is a vertical splitting, so when building large and distributed system (whatever it is) with horizontal splitting, shouldn't we prefer clear and stable RESTful interface between these building blocks? > >> Although Cell did some separation and modulation (not to say it's >> still internal RPC across WAN), they leaves cinder, neutron, >> ceilometer. 
Shall we wait for all these projects to re-factor with >> Cell-like hierarchy structure, or adopt a more loose coupled way, to >> distribute them into autonomous units at the basis of the whole >> Openstack (except Keystone which can handle multiple region >> naturally)? > > Similarly though, is the intent with Cascading that each new project would have to also implement and provide a proxy for use in these deployments? One of the challenges with maintaining/supporting the existing Cells implementation has been that it's effectively it's own thing and as a result it is often not considered when adding new functionality. Yes, we need a new proxy, but the nova proxy is just a new type of virt driver, the neutron proxy a new type of agent, the cinder proxy a new type of volume store... They just utilize the existing standard driver/agent mechanisms, with no influence on other code in tree. > >> As we can see, compared with Cell, much less work is needed to build >> a >> Cascading solution, No patch is needed except Neutron (waiting some >> upcoming features not landed in Juno), nearly all work lies in the >> proxy, which is in fact another kind of driver/agent. > > Right, but the proxies still appear to be a not insignificant amount of code - is the intent not that the proxies would eventually reside within the relevant projects? I've been assuming yes but I am wondering if this was an incorrect assumption on my part based on your comment.
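[For readers unfamiliar with the Cascading design, henry's claim above - that the nova proxy is just another virt driver, one whose backend is a child cloud's northbound REST API rather than a hypervisor - can be sketched roughly like this. This is a minimal illustration only; the class names, method signature and endpoint are assumptions, not the actual Nova driver interface or the Cascading code.]

```python
class ComputeDriverBase:
    """Stand-in for the interface a Nova virt driver implements."""
    def spawn(self, instance):
        raise NotImplementedError


class CascadingComputeProxy(ComputeDriverBase):
    """A 'virt driver' that, instead of talking to a hypervisor, forwards
    the boot request to a child OpenStack region's public REST API."""

    def __init__(self, session, child_nova_url):
        self.session = session          # authenticated HTTP session (e.g. token-bearing)
        self.child_nova_url = child_nova_url

    def spawn(self, instance):
        # Translate the parent's boot request into an ordinary northbound
        # POST /servers call against the child cloud.
        body = {"server": {"name": instance["name"],
                           "imageRef": instance["image"],
                           "flavorRef": instance["flavor"]}}
        return self.session.post(self.child_nova_url + "/servers", json=body)


class FakeSession:
    """Test double standing in for a real HTTP session."""
    def __init__(self):
        self.calls = []

    def post(self, url, json=None):
        self.calls.append((url, json))
        return {"status": 202}


session = FakeSession()
driver = CascadingComputeProxy(session, "http://child-region:8774/v2.1")
resp = driver.spawn({"name": "vm1", "image": "img-1", "flavor": "m1.small"})
print(resp["status"])       # 202
print(session.calls[0][0])  # http://child-region:8774/v2.1/servers
```

[The point being made: nothing outside the driver boundary changes - the parent Nova schedules as usual, and the "hypervisor" the instance lands on just happens to be another OpenStack.]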
> > Thanks, > > Steve > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Cory.Benfield at metaswitch.com Fri Dec 12 09:07:25 2014 From: Cory.Benfield at metaswitch.com (Cory Benfield) Date: Fri, 12 Dec 2014 09:07:25 +0000 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <548A231D.6070302@iweb.com> References: <54899F92.2060900@gmail.com> <548A231D.6070302@iweb.com> Message-ID: On Thu, Dec 11, 2014 at 23:05:01, Mathieu Gagn? wrote: > When no security group is provided, Nova will default to the "default" > security group. However due to the fact 2 security groups had the same > name, nova-compute got confused, put the instance in ERROR state and > logged this traceback [1]: > > NoUniqueMatch: Multiple security groups found matching 'default'. > Use > an ID to be more specific. > We've hit this in our automated testing in the past as well. Similarly, we have no idea how we managed to achieve this, but it's clearly something that the APIs allow you to do. That feels unwise. > - the instance request should be blocked before it ends up on a compute > node with nova-compute. It shouldn't be the job of nova-compute to > find > out issues about duplicated names. It should be the job of nova-api. > Don't waste your time scheduling and spawning an instance that will > never spawn with success. > > - From an end user perspective, this means "nova boot" returns no error > and it's only later that the user is informed of the confusion with > security group names. > > - Why does it have to crash with a traceback? IMO, traceback means "we > didn't think about this use case, here is more information on how to > find the source". As an operator, I don't care about the traceback if > it's a known limitation of Nova/Neutron. Don't pollute my logs with > "normal exceptions". 
(Log rationalization anyone?) +1 to all of this. From ifzing at 126.com Fri Dec 12 09:36:18 2014 From: ifzing at 126.com (joejiang) Date: Fri, 12 Dec 2014 17:36:18 +0800 (CST) Subject: [openstack-dev] [Nova] RemoteError: Remote error: OperationalError (OperationalError) (1048, "Column 'instance_uuid' cannot be null") Message-ID: Hi folks, when I launch an instance using the cirros image in a new OpenStack environment (Juno on a CentOS 7 base), I get the following error log on the compute node. Has anybody met the same error? ---------------------------- 2014-12-12 17:16:52.481 12966 ERROR nova.compute.manager [-] [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Failed to allocate network(s) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Traceback (most recent call last): 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2190, in _build_resources
67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 189, in wrapper 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] ctxt, self, fn.__name__, args, kwargs) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 351, in object_action 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] objmethod=objmethod, args=args, kwargs=kwargs) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 152, in call 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] retry=self.retry) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] timeout=timeout, retry=retry) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 408, in send 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] retry=retry) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 399, in _send 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] raise result 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] RemoteError: Remote error: OperationalError (OperationalError) (1048, "Column 'instance_uuid' cannot be null") 'UPDATE instance_extra SET updated_at=%s, instance_uuid=%s WHERE instance_extra.id = %s' (datetime.datetime(2014, 12, 12, 9, 16, 52, 434376), None, 5L) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 400, in _object_dispatch\n return getattr(target, method)(context, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 204, in wrapper\n return fn(self, ctxt, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 500, in save\n columns_to_join=_expected_cols(expected_attrs))\n', u' File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 746, in instance_update_and_get_original\n columns_to_join=columns_to_join)\n', u' File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 143, in wrapper\n return f(*args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2289, in instance_update_and_get_original\n columns_to_join=columns_to_join)\n', u' File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2380, in _instance_update\n session.add(instance_ref)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 470, in __exit__\n self.rollback()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__\n compat.reraise(exc_type, exc_value, exc_tb)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 467, in __exit__\n self.commit()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 377, in commit\n self._prepare_impl()\n', u' File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 357, in _prepare_impl\n self.session.flush()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1919, in flush\n self._flush(objects)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2037, in _flush\n transaction.rollback(_capture_exception=True)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__\n compat.reraise(exc_type, exc_value, exc_tb)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2001, in _flush\n flush_context.execute()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 372, in execute\n rec.execute(self)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 526, in execute\n uow\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 60, in save_obj\n mapper, table, update)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 518, in _emit_update_statements\n execute(statement, params)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute\n return meth(self, multiparams, params)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 321, in _execute_on_connection\n return connection._execute_clauseelement(self, multiparams, params)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement\n compiled_sql, distilled_params\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context\n context)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1156, in _handle_dbapi_exception\n util.raise_from_cause(newraise, exc_info)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause\n 
reraise(type(exception), exception, tb=exc_tb)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context\n context)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute\n cursor.execute(statement, parameters)\n', u' File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute\n self.errorhandler(self, exc, value)\n', u' File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler\n raise errorclass, errorvalue\n', u'OperationalError: (OperationalError) (1048, "Column \'instance_uuid\' cannot be null") \'UPDATE instance_extra SET updated_at=%s, instance_uuid=%s WHERE instance_extra.id = %s\' (datetime.datetime(2014, 12, 12, 9, 16, 52, 434376), None, 5L)\n']. 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] 2014-12-12 17:16:52.515 12966 INFO nova.scheduler.client.report [-] Compute_service record updated for ('computenode.domain.com') 2014-12-12 17:16:52.517 12966 ERROR nova.compute.manager [-] [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Build of instance 67e215e0-2193-439d-89c4-be8c378df78d aborted: Failed to allocate the network(s), not rescheduling. 
2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Traceback (most recent call last): 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _do_build_and_run_instance 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] filter_properties) 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2129, in _build_and_run_instance 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] 'create.error', fault=e) 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__ 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] six.reraise(self.type_, self.value, self.tb) 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2102, in _build_and_run_instance 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] block_device_mapping) as resources: 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__ 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] return self.gen.next() 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 
2205, in _build_resources 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] reason=msg) 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] BuildAbortException: Build of instance 67e215e0-2193-439d-89c4-be8c378df78d aborted: Failed to allocate the network(s), not rescheduling. 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] 2014-12-12 17:16:52.566 12966 INFO nova.network.neutronv2.api [-] [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Unable to reset device ID for port None 2014-12-12 17:17:04.977 12966 WARNING nova.compute.manager [req-f9b96041-ff4c-4b3c-8a0e-bdedf79193d6 None] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor -------------- next part -------------- An HTML attachment was scrubbed... URL: From berrange at redhat.com Fri Dec 12 09:46:39 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Fri, 12 Dec 2014 09:46:39 +0000 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> <20141211104137.GD23831@redhat.com> Message-ID: <20141212094639.GE32050@redhat.com> On Fri, Dec 12, 2014 at 01:21:36PM +0900, Ryu Ishimoto wrote: > On Thu, Dec 11, 2014 at 7:41 PM, Daniel P. Berrange > wrote: > > > > > Yes, I really think this is a key point. When we introduced the VIF type > > mechanism we never intended for there to be soo many different VIF types > > created. There is a very small, finite number of possible ways to configure > > the libvirt guest XML and it was intended that the VIF types pretty much > > mirror that. This would have given us about 8 distinct VIF type maximum. 
> > > > I think the reason for the larger than expected number of VIF types, is > > that the drivers are being written to require some arbitrary tools to > > be invoked in the plug & unplug methods. It would really be better if > > those could be accomplished in the Neutron code than the Nova code, via > > a host agent run & provided by the Neutron mechanism. This would let > > us have a very small number of VIF types and so avoid the entire problem > > that this thread is bringing up. > > > > Failing that though, I could see a way to accomplish a similar thing > > without a Neutron launched agent. If one of the VIF type binding > > parameters were the name of a script, we could run that script on > > plug & unplug. So we'd have a finite number of VIF types, and each > > new Neutron mechanism would merely have to provide a script to invoke > > > > eg consider the existing midonet & iovisor VIF types as an example. > > Both of them use the libvirt "ethernet" config, but have different > > things running in their plug methods. If we had a mechanism for > > associating a "plug script" with a vif type, we could use a single > > VIF type for both. > > > > eg iovisor port binding info would contain > > > > vif_type=ethernet > > vif_plug_script=/usr/bin/neutron-iovisor-vif-plug > > > > while midonet would contain > > > > vif_type=ethernet > > vif_plug_script=/usr/bin/neutron-midonet-vif-plug > > > > > > And so you see implementing a new Neutron mechanism in this way would > > not require *any* changes in Nova whatsoever. The work would be entirely > > self-contained within the scope of Neutron. It is simply a packaging > > task to get the vif script installed on the compute hosts, so that Nova > > can execute it. > > > > This is essentially providing a flexible VIF plugin system for Nova, > > without having to have it plug directly into the Nova codebase with > > the API & RPC stability constraints that implies. 
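[Daniel's proposed plug-script contract could look something like the following sketch. This is purely illustrative: the function name and the VIF_* environment variable names are assumptions, since no standard set of arguments was defined in the thread.]

```shell
#!/bin/sh
# Hypothetical sketch of the vif_plug_script contract discussed above:
# Nova exports the VIF details from the port binding as environment
# variables, then executes the script named in the binding.

vif_script() {
    case "$VIF_OP" in
        plug)
            # A real mechanism-specific script would bind the tap device
            # here (e.g. via midonet's or iovisor's own CLI tooling).
            echo "plugging $VIF_DEVNAME (port $VIF_ID, mac $VIF_MAC)"
            ;;
        unplug)
            echo "unplugging $VIF_DEVNAME (port $VIF_ID)"
            ;;
        *)
            echo "unknown operation: $VIF_OP" >&2
            return 1
            ;;
    esac
}

# Nova's side of the contract: set the VIF details, then invoke the
# script on plug and again on unplug.
VIF_ID=port-1234; VIF_MAC=fa:16:3e:00:00:01; VIF_DEVNAME=tap1234
VIF_OP=plug
vif_script
VIF_OP=unplug
vif_script
```

[With something like this, Nova would need to know nothing about the mechanism beyond vif_type=ethernet and the script path carried in the binding details.]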
> > > > > +1 > > Port binding mechanism could vary among different networking technologies, > which is not nova's concern, so this proposal makes sense. Note that some > vendors already provide port binding scripts that are currently executed > directly from nova's vif.py ('mm-ctl' of midonet and 'ifc_ctl' for iovisor > are two such examples), and this proposal makes it unnecessary to have > these hard-coded in nova. The only question I have is, how would nova > figure out the arguments for these scripts? Should nova dictate what they > are? We could define some standard set of arguments & environment variables to pass the information from the VIF to the script in a standard way. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From pasquale.porreca at dektech.com.au Fri Dec 12 09:48:45 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Fri, 12 Dec 2014 10:48:45 +0100 Subject: [openstack-dev] [NFV][Telco] pxe-boot In-Reply-To: References: <20141125223355.Horde.8UWcgskXSBVvU-RJrCgbtw7@mail.dektech.com.au> <693765041.19659732.1417044729676.JavaMail.zimbra@redhat.com> <547ED500.1070008@dektech.com.au> <548869F5.1040801@dektech.com.au> Message-ID: <548AB9FD.8070005@dektech.com.au> >From my point of view it is not advisable to base some functionalities of the instances on direct calls to Openstack API. This for 2 main reasons, the first one: if the Openstack code changes (and we know Openstack code does change) it will be required to change the code of the software running in the instance too; the second one: if in the future one wants to pass to another cloud infrastructure it will be more difficult to achieve it. 
On 12/12/14 01:20, Joe Gordon wrote: > On Wed, Dec 10, 2014 at 7:42 AM, Pasquale Porreca < > pasquale.porreca at dektech.com.au> wrote: > >> > Well, one of the main reasons to choose an open source product is to avoid >> > vendor lock-in. I think it is not >> > advisable to embed in the software running in an instance a call to >> > OpenStack specific services. >> > > I'm sorry, I don't follow the logic here; can you elaborate? > > -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr From pasquale.porreca at dektech.com.au Fri Dec 12 09:58:57 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Fri, 12 Dec 2014 10:58:57 +0100 Subject: [openstack-dev] [NFV][Telco] pxe-boot In-Reply-To: <6017200.21424.1418374444199.JavaMail.sgordon@localhost.localdomain> References: <20141125223355.Horde.8UWcgskXSBVvU-RJrCgbtw7@mail.dektech.com.au> <693765041.19659732.1417044729676.JavaMail.zimbra@redhat.com> <547ED500.1070008@dektech.com.au> <548869F5.1040801@dektech.com.au> <6017200.21424.1418374444199.JavaMail.sgordon@localhost.localdomain> Message-ID: <548ABC61.2070502@dektech.com.au>
> Possibly a stupid question, but even if PXE boot was supported would the SC not still have to trigger the creation of the PL instance(s) via a call to Nova anyway (albeit with boot media coming from PXE instead of Glance)? -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr From visnusaran.murugan at hp.com Fri Dec 12 10:29:32 2014 From: visnusaran.murugan at hp.com (Murugan, Visnusaran) Date: Fri, 12 Dec 2014 10:29:32 +0000 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <548A3FB8.9030007@redhat.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> Message-ID: <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> > -----Original Message----- > From: Zane Bitter [mailto:zbitter at redhat.com] > Sent: Friday, December 12, 2014 6:37 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > showdown > > On 11/12/14 08:26, Murugan, Visnusaran wrote: > >>> [Murugan, Visnusaran] > >>> In case of rollback where we have to cleanup earlier version of > >>> resources, > >> we could get the order from old template. We'd prefer not to have a > >> graph table. > >> > >> In theory you could get it by keeping old templates around. But that > >> means keeping a lot of templates, and it will be hard to keep track > >> of when you want to delete them. 
It also means that when starting an > >> update you'll need to load every existing previous version of the > >> template in order to calculate the dependencies. It also leaves the > >> dependencies in an ambiguous state when a resource fails, and > >> although that can be worked around it will be a giant pain to implement. > >> > > > > Agree that looking to all templates for a delete is not good. But > > barring complexity, we feel we could achieve it by way of having an > > update and a delete stream for a stack update operation. I will > > elaborate in detail in the etherpad sometime tomorrow :) > > > >> I agree that I'd prefer not to have a graph table. After trying a > >> couple of different things I decided to store the dependencies in the > >> Resource table, where we can read or write them virtually for free > >> because it turns out that we are always reading or updating the > >> Resource itself at exactly the same time anyway. > >> > > > > Not sure how this will work in an update scenario when a resource does > > not change and its dependencies do. > > We'll always update the requirements, even when the properties don't > change. > Can you elaborate a bit on rollback? We had an approach with depends_on and needed_by columns in ResourceTable. But dropped it when we figured out we had too many DB operations for Update. > > Also taking care of deleting resources in order will be an issue. > > It works fine. > > > This implies that there will be different versions of a resource which > > will even complicate further. > > No it doesn't, other than the different versions we already have due to > UpdateReplace. > > >>>> This approach reduces DB queries by waiting for completion > >>>> notification > >> on a topic. The drawback I see is that delete stack stream will be huge as it will have the entire graph.
We can always dump such data > >> in ResourceLock.data Json and pass a simple flag > >> "load_stream_from_db" to converge RPC call as a workaround for delete > operation. > >>> > >>> This seems to be essentially equivalent to my 'SyncPoint' > >>> proposal[1], with > >> the key difference that the data is stored in-memory in a Heat engine > >> rather than the database. > >>> > >>> I suspect it's probably a mistake to move it in-memory for similar > >>> reasons to the argument Clint made against synchronising the marking > >>> off > >> of dependencies in-memory. The database can handle that and the > >> problem of making the DB robust against failures of a single machine > >> has already been solved by someone else. If we do it in-memory we are > >> just creating a single point of failure for not much gain. (I guess > >> you could argue it doesn't matter, since if any Heat engine dies > >> during the traversal then we'll have to kick off another one anyway, > >> but it does limit our options if that changes in the > >> future.) [Murugan, Visnusaran] Resource completes, removes itself > >> from resource_lock and notifies engine. Engine will acquire parent > >> lock and initiate parent only if all its children are satisfied (no child entry in > resource_lock). > >> This will come in place of Aggregator. > >> > >> Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly what I > did. > >> The three differences I can see are: > >> > >> 1) I think you are proposing to create all of the sync points at the > >> start of the traversal, rather than on an as-needed basis. This is > >> probably a good idea. I didn't consider it because of the way my > >> prototype evolved, but there's now no reason I can see not to do this. > >> If we could move the data to the Resource table itself then we could > >> even get it for free from an efficiency point of view. > > > > +1. But we will need engine_id to be stored somewhere for recovery > purpose (easy to be queried format). 
> > Yeah, so I'm starting to think you're right, maybe the/a Lock table is the right > thing to use there. We could probably do it within the resource table using > the same select-for-update to set the engine_id, but I agree that we might > be starting to jam too much into that one table. > yeah. Unrelated values in resource table. Upon resource completion we have to unset engine_id as well, as compared to dropping a row from the resource lock. Both are good. Having engine_id in resource_table will reduce db operations by half. We should go with just the resource table along with engine_id. > > Sync points are created as-needed. Single resource is enough to restart > that entire stream. > > I think there is a disconnect in our understanding. I will detail it as well in > the etherpad. > > OK, that would be good. > > >> 2) You're using a single list from which items are removed, rather > >> than two lists (one static, and one to which items are added) that get > compared. > >> Assuming (1) then this is probably a good idea too. > > > > Yeah. We have a single list per active stream which works by removing > > Complete/satisfied resources from it. > > I went to change this and then remembered why I did it this way: the sync > point is also storing data about the resources that are triggering it. Part of this > is the RefID and attributes, and we could replace that by storing that data in > the Resource itself and querying it rather than having it passed in via the > notification. But the other part is the ID/key of those resources, which we _need_ to know in order to update the requirements in case one of them has been replaced and thus the graph doesn't reflect it yet. (Or, for that > matter, we need it to know where to go looking for the RefId and/or > attributes if they're in the > DB.) So we have to store some data, we can't just remove items from the > required list (although we could do that as well).
> > >> 3) You're suggesting to notify the engine unconditionally and let the > >> engine decide if the list is empty. That's probably not a good idea - > >> not only does it require extra reads, it introduces a race condition > >> that you then have to solve (it can be solved, it's just more work). > >> Since the update to remove a child from the list is atomic, it's best > >> to just trigger the engine only if the list is now empty. > >> > > > > No. Notify only if stream has something to be processed. The newer > > Approach based on db lock will be that the last resource will initiate its > parent. > > This is opposite to what our Aggregator model had suggested. > > OK, I think we're on the same page on this one then. > Yeah. > >>> It's not clear to me how the 'streams' differ in practical terms > >>> from just passing a serialisation of the Dependencies object, other > >>> than being incomprehensible to me ;). The current Dependencies > >>> implementation > >>> (1) is a very generic implementation of a DAG, (2) works and has > >>> plenty of > >> unit tests, (3) has, with I think one exception, a pretty > >> straightforward API, > >> (4) has a very simple serialisation, returned by the edges() method, > >> which can be passed back into the constructor to recreate it, and (5) > >> has an API that is to some extent relied upon by resources, and so > >> won't likely be removed outright in any event. > >>> Whatever code we need to handle dependencies ought to just build on > >> this existing implementation. > >>> [Murugan, Visnusaran] Our thought was to reduce payload size > >> (template/graph). Just planning for worst case scenario (million > >> resource > >> stack) We could always dump them in ResourceLock.data to be loaded by > >> Worker. > >> > >> If there's a smaller representation of a graph than a list of edges > >> then I don't know what it is. 
The proposed stream structure certainly > >> isn't it, unless you mean as an alternative to storing the entire > >> graph once for each resource. A better alternative is to store it > >> once centrally - in my current implementation it is passed down > >> through the trigger messages, but since only one traversal can be in > >> progress at a time it could just as easily be stored in the Stack table of the > database at the slight cost of an extra write. > >> > > > > Agree that edge is the smallest representation of a graph. But it does > > not give us a complete picture without doing a DB lookup. Our > > assumption was to store streams in IN_PROGRESS resource_lock.data > > column. This could be in resource table instead. > > That's true, but I think in practice at any point where we need to look at this > we will always have already loaded the Stack from the DB for some other > reason, so we actually can get it for free. (See detailed discussion in my reply > to Anant.) > Aren't we planning to stop loading the stack with all resource objects in the future to address the scalability concerns we currently have? > >> I'm not opposed to doing that, BTW. In fact, I'm really interested in > >> your input on how that might help make recovery from failure more > >> robust. I know Anant mentioned that not storing enough data to > >> recover when a node dies was his big concern with my current approach. > >> > > > > With streams, we feel recovery will be easier. All we need is a > > trigger :) > > > >> I can see that by both creating all the sync points at the start of > >> the traversal and storing the dependency graph in the database > >> instead of letting it flow through the RPC messages, we would be able > >> to resume a traversal where it left off, though I'm not sure what that buys > us.
> >> > >> And I guess what you're suggesting is that by having an explicit lock > >> with the engine ID specified, we can detect when a resource is stuck > >> in IN_PROGRESS due to an engine going down? That's actually pretty > interesting. > >> > > > > Yeah :) > > > >>> Based on our call on Thursday, I think you're taking the idea of the > >>> Lock > >> table too literally. The point of referring to locks is that we can > >> use the same concepts as the Lock table relies on to do atomic > >> updates on a particular row of the database, and we can use those > >> atomic updates to prevent race conditions when implementing > >> SyncPoints/Aggregators/whatever you want to call them. It's not that > >> we'd actually use the Lock table itself, which implements a mutex and > >> therefore offers only a much slower and more stateful way of doing > >> what we want (lock mutex, change data, unlock mutex). > >>> [Murugan, Visnusaran] Are you suggesting something like a > >>> select-for-update in resource table itself without having a lock table? > >> > >> Yes, that's exactly what I was suggesting. > > > > DB is always good for sync. But we need to be careful not to overdo it. > > Yeah, I see what you mean now, it's starting to _feel_ like there'd be too > many things mixed together in the Resource table. Are you aware of some > concrete harm that might cause though? What happens if we overdo it? Is > select-for-update on a huge row more expensive than the whole overhead > of manipulating the Lock? > > Just trying to figure out if intuition is leading me astray here. > You are right. There should be no difference apart from a little bump in memory usage. But I think it should be fine. > > Will update etherpad by tomorrow. > > OK, thanks. > > cheers, > Zane.
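For what it's worth, the atomic claim being debated above (set engine_id on a resource row only if no other engine holds it, relying on the same guarantee select-for-update provides) can be sketched in a few lines. The table and column names below are illustrative, not Heat's actual schema.

```python
import sqlite3

# In-memory stand-in for the resource table discussed in the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resource (id TEXT PRIMARY KEY, engine_id TEXT)")
conn.execute("INSERT INTO resource VALUES ('res-1', NULL)")
conn.commit()

def claim(conn, resource_id, engine_id):
    """Atomically set engine_id iff the row is currently unclaimed.

    The WHERE clause turns the UPDATE into a compare-and-swap: for a
    given unclaimed row, exactly one engine can see rowcount == 1.
    """
    cur = conn.execute(
        "UPDATE resource SET engine_id = ? "
        "WHERE id = ? AND engine_id IS NULL",
        (engine_id, resource_id))
    conn.commit()
    return cur.rowcount == 1

print(claim(conn, "res-1", "engine-A"))  # True: this engine wins the claim
print(claim(conn, "res-1", "engine-B"))  # False: the row is already claimed
```

A stuck IN_PROGRESS resource is then detectable exactly as suggested: a row whose engine_id refers to an engine that is no longer alive.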
> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From visnusaran.murugan at hp.com Fri Dec 12 11:40:03 2014 From: visnusaran.murugan at hp.com (Murugan, Visnusaran) Date: Fri, 12 Dec 2014 11:40:03 +0000 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> Message-ID: <4641310AFBEE10419D0A020273367C140CA39B0F@G1W3645.americas.hpqcorp.net> Hi zaneb, Etherpad updated. https://etherpad.openstack.org/p/execution-stream-and-aggregator-based-convergence > -----Original Message----- > From: Murugan, Visnusaran > Sent: Friday, December 12, 2014 4:00 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > showdown > > > > > -----Original Message----- > > From: Zane Bitter [mailto:zbitter at redhat.com] > > Sent: Friday, December 12, 2014 6:37 AM > > To: openstack-dev at lists.openstack.org > > Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > > showdown > > > > On 11/12/14 08:26, Murugan, Visnusaran wrote: > > >>> [Murugan, Visnusaran] > > >>> In case of rollback where we have to cleanup earlier version of > > >>> resources, > > >> we could get the order from old template. 
> [snip]
Is select-for-update on a huge row more expensive than the whole > > overhead of manipulating the Lock? > > > > Just trying to figure out if intuition is leading me astray here. > > > > You are right. There should be no difference apart from a little bump in > memory usage. But I think it should be fine. > > > > Will update etherpad by tomorrow. > > > > OK, thanks. > > > > cheers, > > Zane. > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ihrachys at redhat.com Fri Dec 12 11:41:51 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Fri, 12 Dec 2014 12:41:51 +0100 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <548A231D.6070302@iweb.com> References: <54899F92.2060900@gmail.com> <548A231D.6070302@iweb.com> Message-ID: <548AD47F.1080401@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 12/12/14 00:05, Mathieu Gagné wrote: > We recently had an issue in production where a user had 2 > "default" security groups (for reasons we have yet to identify).
This is probably the result of the race condition that is discussed in the thread: https://bugs.launchpad.net/neutron/+bug/1194579 /Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUitR/AAoJEC5aWaUY1u57VggIALzdTLHnO7Fr8gKlWPS7Uu+o Su9KV41td8Epzs3pNsGYkH2Kz4T5obAneCORUiZl7boBpAJcnMm3Jt9K8YnTCVUy t4AbfIxSrTD7drHf3HoMoNEDrSntdnpTHoGpG+idNpFjc0kjBjm81W3y14Gab0k5 5Mw/jV8mdnB6aRs5Zhari50/04X8SZeDpQNgBHL5kY40CZ+sUtS4C8OKfj7OEAuW LNmkHgDAtwewbNdluntbSdLGVjyl/F9s+21HoajqBcGNhH8ZHpAr4hphMbZv8lBY iAD2tztxvkacYaGduBFh6bewxVNGaUJBWmmc2xqHAXXbDP3d9aOk5q0wHK3SPQY= =TDwc -----END PGP SIGNATURE----- From tnapierala at mirantis.com Fri Dec 12 11:54:29 2014 From: tnapierala at mirantis.com (Tomasz Napierala) Date: Fri, 12 Dec 2014 12:54:29 +0100 Subject: [openstack-dev] [Fuel] Hard Code Freeze for 6.0 In-Reply-To: References: Message-ID: <096BA77B-FAF6-46F3-9E83-FBEE72D898B4@mirantis.com> Hi, As with 5.1.x, please inform the list if you are raising the priority to Critical in any bugs targeted to 6.0. Regards, > On 09 Dec 2014, at 23:43, Mike Scherbakov wrote: > > Hi all, > I'm glad to announce that we've reached Hard Code Freeze (HCF) [1] criteria for the 6.0 milestone. > > stable/6.0 branches for our repos were created. > > Bug reporters, please do not forget to target both 6.1 (master) and 6.0 (stable/6.0) milestones from now on. If the fix is merged to master, it has to be backported to stable/6.0 to make it available in 6.0. Please ensure that you do NOT merge changes to the stable branch first. It always has to be a backport with the same Change-ID. Please see more on this at [2]. > > I hope the Fuel DevOps team can quickly update nightly builds [3] to reflect the changes.
> > [1] https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze > [2] https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series > [3] https://fuel-jenkins.mirantis.com/view/ISO/ > > Thanks, > -- > Mike Scherbakov > #mihgen > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Tomasz 'Zen' Napierala Sr. OpenStack Engineer tnapierala at mirantis.com From t.trifonov at gmail.com Fri Dec 12 12:51:35 2014 From: t.trifonov at gmail.com (Tihomir Trifonov) Date: Fri, 12 Dec 2014 14:51:35 +0200 Subject: [openstack-dev] [horizon] REST and Django In-Reply-To: References: <547DD08A.7000402@redhat.com> Message-ID: > > Here's an example: Admin user Joe has an Domain open and stares at it for > 15 minutes while he updates the description. Admin user Bob is asked to go > ahead and enable it. He opens the record, edits it, and then saves it. Joe > finished perfecting the description and saves it. Doing this action would > mean that the Domain is enabled and the description gets updated. Last man > in still wins if he updates the same fields, but if they update different > fields then both of their changes will take affect without them stomping on > each other. Whether that is good or bad may depend on the situation? That's a great example. I believe that all of the Openstack APIs support PATCH updates of arbitrary fields. This way - the frontend(AngularJS) can detect which fields are being modified, and to submit only these fields for update. If we however use a form with POST, although we will load the object before updating it, the middleware cannot find which fields are actually modified, and will update them all, which is more likely what PUT should do. Thus having full control in the frontend part, we can submit only changed fields. 
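The "submit only changed fields" idea above fits in a few lines: the client keeps the object as fetched, diffs it against the edited copy, and sends only the difference as the PATCH body. Plain Python for illustration — an AngularJS frontend would do the same comparison in JS:

```python
def changed_fields(fetched, edited):
    """Return only the fields whose values differ from the fetched copy."""
    return {k: v for k, v in edited.items() if fetched.get(k) != v}

# Joe fetched the domain, then only perfected the description:
fetched = {"name": "dom1", "description": "old", "enabled": False}
edited = {"name": "dom1", "description": "much better", "enabled": False}

patch_body = changed_fields(fetched, edited)
print(patch_body)  # {'description': 'much better'}
# A PATCH with this body cannot clobber a concurrent flip of "enabled"
# by another admin, whereas a PUT of the full edited object would.
```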
If however a service API doesn't support PATCH, it is actually a problem in the API and not in the client... The service API documentation almost always lags (although, helped by specs > now) and the service team takes on the burden of exposing a programmatic > way to access the API. This is tested and easily consumable via the > python clients, which removes some guesswork from using the service. True. But what if the service team modifies a method signature from let's say: def add_something(self, request, > ? field1, field2): > > to def add_something(self, request, > ? field1, field2, field3): > > and in the middleware we have the old signature: ?def add_something(self, request, > ? field1, field2): > we still need to modify the middleware to add the new field. If however the middleware is transparent and just passes **kwargs, it will pass through whatever the frontend sends. So we just need to update the frontend, which can be done using custom views, and not necessary going through an upstream change. My point is why do we need to hide some features of the backend service API behind a "firewall" what the middleware in fact is? On Fri, Dec 12, 2014 at 8:08 AM, Tripp, Travis S wrote: > > I just re-read and I apologize for the hastily written email I previously > sent. I?ll try to salvage it with a bit of a revision below (please ignore > the previous email). > > On 12/11/14, 7:02 PM, "Tripp, Travis S" wrote > (REVISED): > > >Tihomir, > > > >Your comments in the patch were very helpful for me to understand your > >concerns about the ease of customizing without requiring upstream > >changes. It also reminded me that I?ve also previously questioned the > >python middleman. > > > >However, here are a couple of bullet points for Devil?s Advocate > >consideration. > > > > > > * Will we take on auto-discovery of API extensions in two spots > >(python for legacy and JS for new)? 
> > * The Horizon team will have to keep an even closer eye on every >single project and be ready to react if there are changes to the API that >break things. Right now in Glance, for example, they are working on some >fixes to the v2 API (soon to become v2.3) that will allow them to >deprecate v1 somewhat transparently to users of the client library. > > * The service API documentation almost always lags (although, helped >by specs now) and the service team takes on the burden of exposing a >programmatic way to access the API. This is tested and easily consumable >via the python clients, which removes some guesswork from using the >service. > > * This is going to be an incremental approach with legacy support >requirements anyway. So, incorporating python side changes won't just go >away. > > * Which approach would be better if we introduce a server side >caching mechanism or a new source of data such as elastic search to >improve performance? Would the client side code have to be changed >dramatically to take advantage of those improvements or could it be done >transparently on the server side if we own the exposed API? > > >I'm not sure I fully understood your example about Cinder. Was it the >cinder client that held up delivery of horizon support, the cinder API or >both? If the API isn't in, then it would hold up delivery of the feature >in any case. There still would be timing pressures to react and build a >new view that supports it. For customization, with Richard's approach new >views could be supported by just dropping in a new REST API decorated >module with the APIs you want, including direct pass through support if >desired to new APIs. Downstream customizations / Upstream changes to >views seem like a related, but different issue to me as >long as there is an easy way to drop in new API support. 
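To make "dropping in a new REST API decorated module" concrete, a toy registry sketch — the decorator and registry names here are invented for illustration and are not Horizon's actual rest_utils machinery:

```python
# A downstream deployer ships one extra module; importing it is enough to
# make new endpoints routable, including direct pass-through to a service
# API that upstream Horizon does not yet know about.
URL_REGISTRY = {}

def rest_endpoint(url):
    def register(handler):
        URL_REGISTRY[url] = handler
        return handler
    return register

# --- contents of the hypothetical drop-in module ---
@rest_endpoint("/api/cinder/volume-types/")
def volume_types(request):
    # pass-through: a real handler would forward straight to the client lib
    return {"items": ["lvm", "ceph"]}

handler = URL_REGISTRY["/api/cinder/volume-types/"]
print(handler(None))  # {'items': ['lvm', 'ceph']}
```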
> > > >Finally, regarding the client making two calls to do an update: > > > >?>>Do we really need the lines:? > > > >>> project = api.keystone.tenant_get(request, id) > >>> kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None) > >? > >I agree that if you already have all the data it may be bad to have to do > >another call. I do think there is room for discussing the reasoning, > >though. > >As far as I can tell, they do this so that if you are updating an entity, > >you have to be very specific about the fields you are changing. I > >actually see this as potentially a protectionary measure against data > >loss and sometimes a very nice to have feature. It perhaps was intended > >to *help* guard against race conditions (no locking and no transactions > >with many users simultaneously accessing the data). > > > >Here's an example: Admin user Joe has a Domain open and stares at it for > >15 minutes while he updates just the description. Admin user Bob is asked > >to go ahead and enable it. He opens the record, edits it, and then saves > >it. Joe finished perfecting the description and saves it. They could in > >effect both edit the same domain independently. Last man in still wins if > >he updates the same fields, but if they update different fields then both > >of their changes will take affect without them stomping on each other. Or > >maybe it is intended to encourage client users to compare their current > >and previous to see if they should issue a warning if the data changed > >between getting and updating the data. Or maybe like you said, it is just > >overhead API calls. > > > > > > >From: Tihomir Trifonov >> > >Reply-To: OpenStack List > > openstack-dev at lists.openstack.or > >g>> > >Date: Thursday, December 11, 2014 at 7:53 AM > >To: OpenStack List > > openstack-dev at lists.openstack.or > >g>> > >Subject: Re: [openstack-dev] [horizon] REST and Django > > > >?? 
> >Client just needs to know which URL to hit in order to invoke a certain > >API, and does not need to know the procedure name or parameters ordering. > > > > > >?That's where the difference is. I think the client has to know the > >procedure name and parameters. Otherwise? we have a translation factory > >pattern, that converts one naming convention to another. And you won't be > >able to call any service API if there is no code in the middleware to > >translate it to the service API procedure name and parameters. To avoid > >this - we can use a transparent proxy model - direct mapping of a client > >call to service API naming, which can be done if the client invokes the > >methods with the names in the service API, so that the middleware will > >just pass parameters, and will not translate. Instead of: > > > > > >updating user data: > > > > => >/keystone/update/ > => > > > >we may use: > > > > => > > > >=> > > > > > >?The idea here is that if we have keystone 4.0 client, ?we will have to > >just add it to the clients [] list and nothing more is required at the > >middleware level. Just create the frontend code to use the new Keystone > >4.0 methods. Otherwise we will have to add all new/different signatures > >of 4.0 against 2.0/3.0 in the middleware in order to use Keystone 4.0. > > > >There is also a great example of using a pluggable/new feature in > >Horizon. Do you remember the volume types support patch? The patch was > >pending in Gerrit for few months - first waiting the cinder support for > >volume types to go upstream, then waiting few more weeks for review. I am > >not sure, but as far as I remember, the Horizon patch even missed a > >release milestone and was introduced in the next release. > > > >If we have a transparent middleware - this will be no more an issue. 
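The transparent mapping just described — `/keystone/<version>/<method>` forwarded verbatim — can be sketched as a dispatcher that never spells out per-method signatures, so a hypothetical Keystone 4.0 client needs no middleware change. Fake client classes stand in for the real python-keystoneclient:

```python
class FakeKeystoneV3:
    def tenant_update(self, **kwargs):
        return ("3.0", kwargs)

class FakeKeystoneV4:
    # different/extra fields -- the middleware never has to notice
    def tenant_update(self, **kwargs):
        return ("4.0", kwargs)

CLIENTS = {"3.0": FakeKeystoneV3(), "4.0": FakeKeystoneV4()}

def dispatch(version, method, **params):
    # /keystone/<version>/<method> -> client call; params pass straight through
    return getattr(CLIENTS[version], method)(**params)

print(dispatch("3.0", "tenant_update", id="t1", name="new name"))
print(dispatch("4.0", "tenant_update", id="t1", name="new name", tags=["x"]))
```

The trade-off debated in this thread still stands: such a proxy exposes new service features immediately, but gives the middleware no natural place to validate, cache, or adapt what passes through.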
As > >long as someone has written the frontend modules(which should be easy to > >add and customize), and they install the required version of the service > >API - they will not need updated Horizon to start using the feature. > >Maybe I am not the right person to give examples here, but how many of > >you had some kind of Horizon customization being locally merged/patched > >in your local distros/setups, until the patch is being pushed upstream? > > > >I will say it again. Nova, Keystone, Cinder, Glance etc. already have > >stable public APIs. Why do we want to add the translation middleware and > >to introduce another level of REST API? This layer will often hide new > >features, added to the service APIs and will delay their appearance in > >Horizon. That's simply not needed. I believe it is possible to just wrap > >the authentication in the middleware REST, but not to translate anything > >as RPC methods/parameters. > > > > > >?And one more example: > > > >?@rest_utils.ajax() > >def put(self, request, id): > > """Update a single project. > > > > The POST data should be an application/json object containing the > > parameters to update: "name" (string), "description" (string), > > "domain_id" (string) and "enabled" (boolean, defaults to true). > > Additional, undefined parameters may also be provided, but you'll > >have > > to look deep into keystone to figure out what they might be. > > > > This method returns HTTP 204 (no content) on success. > > """ > > project = api.keystone.tenant_get(request, id) > > kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None) > > api.keystone.tenant_update(request, project, **kwargs) > > > >?Do we really need the lines:? > > > >project = api.keystone.tenant_get(request, id) > >kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None) > >? > >? ?Since we update the project on the client, it is obvious that we > >already fetched the project data. 
So we can simply send: > > > > > >POST /keystone/3.0/tenant_update > > > >Content-Type: application/json > > > >{"id": cached.id, "domain_id": cached.domain_id, > >"name": "new name", "description": "new description", "enabled": > >cached.enabled} > > > >Fewer requests, faster application. > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Tihomir Trifonov -------------- next part -------------- An HTML attachment was scrubbed... URL: From flavio at redhat.com Fri Dec 12 13:55:32 2014 From: flavio at redhat.com (Flavio Percoco) Date: Fri, 12 Dec 2014 14:55:32 +0100 Subject: [openstack-dev] [oslo] deprecation 'pattern' library?? In-Reply-To: <5488C7C6.2000508@dague.net> References: <41F34BA2-6A3D-4AF9-B414-57B5526812D6@doughellmann.com> <5488C7C6.2000508@dague.net> Message-ID: <20141212135532.GF5166@redhat.com> On 10/12/14 17:23 -0500, Sean Dague wrote: >On 12/10/2014 04:00 PM, Doug Hellmann wrote: >> >> On Dec 10, 2014, at 3:26 PM, Joshua Harlow wrote: >> >>> Hi oslo folks (and others), >>> >>> I've recently put up a review for some common deprecation patterns: >>> >>> https://review.openstack.org/#/c/140119/ >>> >>> In summary, this is a common set of patterns that can be used by oslo libraries, other libraries... This is different from the versionutils one (which is more of a developer<->operator deprecation interaction) and is more focused on the developer <-> developer deprecation interaction (developers say using oslo libraries). >>> >>> Doug had the question about why not just put this out there on pypi with a useful name not so strongly connected to oslo; since that review is more of a common set of patterns that can be used by libraries outside openstack/oslo as well. 
There wasn't many/any similar libraries that I found (zope.deprecation is probably the closest) and twisted has something in-built to it that is something similar. So in order to avoid creating our own version of zope.deprecation in that review we might as well create a neat name that can be useful for oslo/openstack/elsewhere... >>> >>> Some ideas that were thrown around on IRC (check 'https://pypi.python.org/pypi/%s' % name for 404 to see if likely not registered): >>> >>> * debtcollector >> >> +1 >> >> I suspect we?ll want a minimal spec for the new lib, but let?s wait and hear what some of the other cores think. > >Not a core, but as someone that will be using it, that seems reasonable. > >The biggest issue with the deprecation patterns in projects is >aggressive cleaning tended to clean out all the deprecations at the >beginning of a cycle... and then all the deprecation assist code, as it >was unused.... sad panda. > >Having it in a common lib as a bunch of decorators would be great. >Especially if we can work out things like *not* spamming deprecation >load warnings on every worker start. Agreed with the above. (also, debtcollector would be my choice too) Flavio > > -Sean > >> >> Doug >> >>> * bagman >>> * deprecate >>> * deprecation >>> * baggage >>> >>> Any other neat names people can think about? >>> >>> Or in general any other comments/ideas about providing such a deprecation pattern library? 
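For a sense of what such a pattern library could offer, a minimal decorator sketch — the name and API here are hypothetical, not the eventual debtcollector interface:

```python
import functools
import warnings

def deprecated(replacement=None):
    """Mark a callable deprecated: warn on use, then delegate."""
    def decorator(func):
        msg = "%s is deprecated" % func.__name__
        if replacement:
            msg += "; use %s instead" % replacement
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated(replacement="new_helper")
def old_helper(x):
    return x * 2

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_helper(21)

print(result, str(caught[0].message))
```

A real library would also want once-per-call-site suppression, so that workers do not spam the same warning on every start — the concern raised above.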
>>> >>> -Josh >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > >-- >Sean Dague >http://dague.net > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From maxime.leroy at 6wind.com Fri Dec 12 14:05:28 2014 From: maxime.leroy at 6wind.com (Maxime Leroy) Date: Fri, 12 Dec 2014 15:05:28 +0100 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: <20141212094639.GE32050@redhat.com> References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> <20141211104137.GD23831@redhat.com> <20141212094639.GE32050@redhat.com> Message-ID: On Fri, Dec 12, 2014 at 10:46 AM, Daniel P. Berrange wrote: > On Fri, Dec 12, 2014 at 01:21:36PM +0900, Ryu Ishimoto wrote: >> On Thu, Dec 11, 2014 at 7:41 PM, Daniel P. Berrange >> wrote: >> [..] >> Port binding mechanism could vary among different networking technologies, >> which is not nova's concern, so this proposal makes sense. Note that some >> vendors already provide port binding scripts that are currently executed >> directly from nova's vif.py ('mm-ctl' of midonet and 'ifc_ctl' for iovisor >> are two such examples), and this proposal makes it unnecessary to have >> these hard-coded in nova. The only question I have is, how would nova >> figure out the arguments for these scripts? 
Should nova dictate what they >> are? > > We could define some standard set of arguments & environment variables > to pass the information from the VIF to the script in a standard way. > Many information are used by the plug/unplug method: vif_id, vif_address, ovs_interfaceid, firewall, net_id, tenant_id, vnic_type, instance_uuid... Not sure we can define a set of standard arguments. Maybe instead to use a script we should load some plug/unplug functions from a python module with importlib. So a vif_plug_module option instead to have a vif_plug_script ? There are several other problems to solve if we are going to use this vif_plug_script: - How to have the authorization to run this script (i.e. rootwrap)? - How to test plug/unplug function from these scripts? Now, we have unity tests in nova/tests/unit/virt/libvirt/test_vif.py for plug/unplug method. - How this script will be installed? -> should it be including in the L2 agent package? Some L2 switch doesn't have a L2 agent. Regards, Maxime From berrange at redhat.com Fri Dec 12 14:16:01 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Fri, 12 Dec 2014 14:16:01 +0000 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> <20141211104137.GD23831@redhat.com> <20141212094639.GE32050@redhat.com> Message-ID: <20141212141601.GN32050@redhat.com> On Fri, Dec 12, 2014 at 03:05:28PM +0100, Maxime Leroy wrote: > On Fri, Dec 12, 2014 at 10:46 AM, Daniel P. Berrange > wrote: > > On Fri, Dec 12, 2014 at 01:21:36PM +0900, Ryu Ishimoto wrote: > >> On Thu, Dec 11, 2014 at 7:41 PM, Daniel P. Berrange > >> wrote: > >> > [..] > >> Port binding mechanism could vary among different networking technologies, > >> which is not nova's concern, so this proposal makes sense. 
Note that some > >> vendors already provide port binding scripts that are currently executed > >> directly from nova's vif.py ('mm-ctl' of midonet and 'ifc_ctl' for iovisor > >> are two such examples), and this proposal makes it unnecessary to have > >> these hard-coded in nova. The only question I have is, how would nova > >> figure out the arguments for these scripts? Should nova dictate what they > >> are? > > > > We could define some standard set of arguments & environment variables > > to pass the information from the VIF to the script in a standard way. > > > > Many information are used by the plug/unplug method: vif_id, > vif_address, ovs_interfaceid, firewall, net_id, tenant_id, vnic_type, > instance_uuid... > > Not sure we can define a set of standard arguments. That's really not a problem. There will be some set of common info needed for all. Then for any particular vif type we know what extra specific fields are define in the port binding metadata. We'll just set env variables for each of those. > Maybe instead to use a script we should load some plug/unplug > functions from a python module with importlib. So a vif_plug_module > option instead to have a vif_plug_script ? No, we explicitly do *not* want any usage of the Nova python modules. That is all private internal Nova implementation detail that nothing is permitted to rely on - this is why the VIF plugin feature was removed in the first place. > There are several other problems to solve if we are going to use this > vif_plug_script: > > - How to have the authorization to run this script (i.e. rootwrap)? Yes, rootwrap. > - How to test plug/unplug function from these scripts? > Now, we have unity tests in nova/tests/unit/virt/libvirt/test_vif.py > for plug/unplug method. Integration and/or functional tests run for the VIF impl would exercise this code still. > - How this script will be installed? > -> should it be including in the L2 agent package? Some L2 switch > doesn't have a L2 agent. 
That's just a normal downstream packaging task which is easily handled by people doing that work. If there's no L2 agent package they can trivially just create a new package for the script(s) that need installing on the comput node. They would have to be doing exactly this anyway if you had the VIF plugin as a python module instead. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From ihrachys at redhat.com Fri Dec 12 14:27:21 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Fri, 12 Dec 2014 15:27:21 +0100 Subject: [openstack-dev] [all] [ha] potential issue with implicit async-compatible mysql drivers In-Reply-To: <783B3168-BD43-42CB-90EC-B55D10EAB5EA@redhat.com> References: <783B3168-BD43-42CB-90EC-B55D10EAB5EA@redhat.com> Message-ID: <548AFB49.8080802@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 Reading the latest comments at https://github.com/PyMySQL/PyMySQL/issues/275, it seems to me that the issue is not to be solved in drivers themselves but instead in libraries that arrange connections (sqlalchemy/oslo.db), correct? Will the proposed connection reopening help? /Ihar On 05/12/14 23:43, Mike Bayer wrote: > Hey list - > > I?m posting this here just to get some ideas on what might be > happening here, as it may or may not have some impact on Openstack > if and when we move to MySQL drivers that are async-patchable, like > MySQL-connector or PyMySQL. I had a user post this issue a few > days ago which I?ve since distilled into test cases for PyMySQL and > MySQL-connector separately. It uses gevent, not eventlet, so I?m > not really sure if this applies. But there?s plenty of very smart > people here so if anyone can shed some light on what is actually > happening here, that would help. 
> > The program essentially illustrates code that performs several > steps upon a connection, however if the greenlet is suddenly > killed, the state from the connection, while damaged, is still > being allowed to continue on in some way, and what?s > super-catastrophic here is that you see a transaction actually > being committed *without* all the statements proceeding on it. > > In my work with MySQL drivers, I?ve noted for years that they are > all very, very bad at dealing with concurrency-related issues. The > whole ?MySQL has gone away? and ?commands out of sync? errors are > ones that we?ve all just drowned in, and so often these are due to > the driver getting mixed up due to concurrent use of a connection. > However this one seems more insidious. Though at the same time, > the script has some complexity happening (like a simplistic > connection pool) and I?m not really sure where the core of the > issue lies. > > The script is at > https://gist.github.com/zzzeek/d196fa91c40cb515365e and also below. > If you run it for a few seconds, go over to your MySQL command line > and run this query: > > SELECT * FROM table_b WHERE a_id not in (SELECT id FROM table_a) > ORDER BY a_id DESC; > > and what you?ll see is tons of rows in table_b where the ?a_id? is > zero (because cursor.lastrowid fails), but the *rows are > committed*. If you read the segment of code that does this, it > should be impossible: > > connection = pool.get() rowid = execute_sql( connection, "INSERT > INTO table_a (data) VALUES (%s)", ("a",) ) > > gevent.sleep(random.random() * 0.2) try: execute_sql( connection, > "INSERT INTO table_b (a_id, data) VALUES (%s, %s)", (rowid, "b",) > ) connection.commit() pool.return_conn(connection) > > except Exception: connection.rollback() > pool.return_conn(connection) > > so if the gevent.sleep() throws a timeout error, somehow we are > getting thrown back in there, with the connection in an invalid > state, but not invalid enough to commit. 
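One defensive pattern implied by this report: when a greenlet timeout interrupts work mid-transaction, the pool should close and replace the connection rather than rollback and reuse it. A driver-free sketch of that idea (FakeConnection stands in for a real DBAPI connection; this is not SQLAlchemy's actual pool code):

```python
import collections

class FakeConnection:
    def __init__(self):
        self.closed = False
    def rollback(self):
        pass
    def close(self):
        self.closed = True

class InvalidatingPool:
    def __init__(self, size=2):
        self._checkedin = collections.deque(FakeConnection() for _ in range(size))
    def get(self):
        return self._checkedin.popleft()
    def return_conn(self, conn):
        conn.rollback()              # clean return: safe to reuse
        self._checkedin.append(conn)
    def invalidate(self, conn):
        conn.close()                 # interrupted mid-transaction: discard it
        self._checkedin.append(FakeConnection())  # and hand back a fresh one

pool = InvalidatingPool()
conn = pool.get()
try:
    raise TimeoutError("greenlet killed mid-transaction")
except TimeoutError:
    pool.invalidate(conn)

print(conn.closed)  # True -- the half-done transaction can never commit later
```

Invalidation achieves by construction what the "SELECT connection_id()" guard described next achieves by detection.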
> > If a simple check for ?SELECT connection_id()? is added, this query > fails and the whole issue is prevented. Additionally, if you put a > foreign key constraint on that b_table.a_id, then the issue is > prevented, and you see that the constraint violation is happening > all over the place within the commit() call. The connection is > being used such that its state just started after the > gevent.sleep() call. > > Now, there?s also a very rudimental connection pool here. That is > also part of what?s going on. If i try to run without the pool, > the whole script just runs out of connections, fast, which suggests > that this gevent timeout cleans itself up very, very badly. > However, SQLAlchemy?s pool works a lot like this one, so if folks > here can tell me if the connection pool is doing something bad, > then that?s key, because I need to make a comparable change in > SQLAlchemy?s pool. Otherwise I worry our eventlet use could have > big problems under high load. > > > > > > # -*- coding: utf-8 -*- import gevent.monkey > gevent.monkey.patch_all() > > import collections import threading import time import random > import sys > > import logging logging.basicConfig() log = > logging.getLogger('foo') log.setLevel(logging.DEBUG) > > #import pymysql as dbapi from mysql import connector as dbapi > > > class SimplePool(object): def __init__(self): self.checkedin = > collections.deque([ self._connect() for i in range(50) ]) > self.checkout_lock = threading.Lock() self.checkin_lock = > threading.Lock() > > def _connect(self): return dbapi.connect( user="scott", > passwd="tiger", host="localhost", db="test") > > def get(self): with self.checkout_lock: while not self.checkedin: > time.sleep(.1) return self.checkedin.pop() > > def return_conn(self, conn): try: conn.rollback() except: > log.error("Exception during rollback", exc_info=True) try: > conn.close() except: log.error("Exception during close", > exc_info=True) > > # recycle to a new connection conn = 
self._connect() with > self.checkin_lock: self.checkedin.append(conn) > > > def verify_connection_id(conn): cursor = conn.cursor() try: > cursor.execute("select connection_id()") row = cursor.fetchone() > return row[0] except: return None finally: cursor.close() > > > def execute_sql(conn, sql, params=()): cursor = conn.cursor() > cursor.execute(sql, params) lastrowid = cursor.lastrowid > cursor.close() return lastrowid > > > pool = SimplePool() > > # SELECT * FROM table_b WHERE a_id not in # (SELECT id FROM > table_a) ORDER BY a_id DESC; > > PREPARE_SQL = [ "DROP TABLE IF EXISTS table_b", "DROP TABLE IF > EXISTS table_a", """CREATE TABLE table_a ( id INT NOT NULL > AUTO_INCREMENT, data VARCHAR (256) NOT NULL, PRIMARY KEY (id) ) > engine='InnoDB'""", """CREATE TABLE table_b ( id INT NOT NULL > AUTO_INCREMENT, a_id INT NOT NULL, data VARCHAR (256) NOT NULL, -- > uncomment this to illustrate where the driver is attempting -- to > INSERT the row during ROLLBACK -- FOREIGN KEY (a_id) REFERENCES > table_a(id), PRIMARY KEY (id) ) engine='InnoDB' """] > > connection = pool.get() for sql in PREPARE_SQL: > execute_sql(connection, sql) connection.commit() > pool.return_conn(connection) print("Table prepared...") > > > def transaction_kill_worker(): while True: try: connection = None > with gevent.Timeout(0.1): connection = pool.get() rowid = > execute_sql( connection, "INSERT INTO table_a (data) VALUES (%s)", > ("a",)) gevent.sleep(random.random() * 0.2) > > try: execute_sql( connection, "INSERT INTO table_b (a_id, data) > VALUES (%s, %s)", (rowid, "b",)) > > # this version prevents the commit from # proceeding on a bad > connection # if verify_connection_id(connection): # > connection.commit() > > # this version does not. It will commit the # row for table_b > without the table_a being present. 
connection.commit() > > pool.return_conn(connection) except Exception: > connection.rollback() pool.return_conn(connection) > sys.stdout.write("$") except gevent.Timeout: # try to return the > connection anyway if connection is not None: > pool.return_conn(connection) sys.stdout.write("#") except > Exception: # logger.exception(e) sys.stdout.write("@") else: > sys.stdout.write(".") finally: if connection is not None: > pool.return_conn(connection) > > > def main(): for i in range(50): > gevent.spawn(transaction_kill_worker) > > gevent.sleep(3) > > while True: gevent.sleep(5) > > > if __name__ == "__main__": main() > > > > > > > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUivtJAAoJEC5aWaUY1u57nn8IAJP5zK/htG8EeoOSWZVV1ksA en+lQuIA09aNdkHSNS1b/lNOYwsF4X5SM0dU5Cs4LCsumC5jM9S/cNOn3sfpVooA vd31O1kKtd255YtnsKSmrOPiytoI69n2/65tVqgWLHpuXRSaj4HtqOEY/vOWMX6g BON50QUYwwxAZLNOPmEO7vUnJ3VYO6zquH2mQrA1Vg/LCm3+VaodEHOVCxieaJ/n iQPB4Vx1dkuP10HzWyjQW0j4kbUakqgkq/VHaiCYNC85HzPz6KJUOK/neZcBrWsZ RQcLae1dX1yGMXDd5hyJaoe3qUfjuvSZmV5jS3ok/x8rnKdmVl65PtUUlLfVOU0= =wJ55 -----END PGP SIGNATURE----- From rbryant at redhat.com Fri Dec 12 14:50:33 2014 From: rbryant at redhat.com (Russell Bryant) Date: Fri, 12 Dec 2014 07:50:33 -0700 Subject: [openstack-dev] =?windows-1252?q?=5Ball=5D_=5Btc=5D_=5BPTL=5D_Cas?= =?windows-1252?q?cading_vs=2E_Cells_=96_summit_recap_and_move_forward?= In-Reply-To: <5489F6CF.7000602@rackspace.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> 
<5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> Message-ID: <548B00B9.3090609@redhat.com> On 12/11/2014 12:55 PM, Andrew Laski wrote: > Cells can handle a single API on top of globally distributed DCs. I > have spoken with a group that is doing exactly that. But it requires > that the API is a trusted part of the OpenStack deployments in those > distributed DCs. And the way the rest of the components fit into that scenario is far from clear to me. Do you consider this more of a "if you can make it work, good for you", or something we should aim to be more generally supported over time? Personally, I see the globally distributed OpenStack under a single API case much more complex, and worth considering out of scope for the short to medium term, at least. For me, this discussion boils down to ... 1) Do we consider these use cases in scope at all? 2) If we consider it in scope, is it enough of a priority to warrant a cross-OpenStack push in the near term to work on it? 3) If yes to #2, how would we do it? Cascading, or something built around cells? I haven't worried about #3 much, because I consider #2 or maybe even #1 to be a show stopper here. -- Russell Bryant From dpyzhov at mirantis.com Fri Dec 12 14:55:30 2014 From: dpyzhov at mirantis.com (Dmitry Pyzhov) Date: Fri, 12 Dec 2014 18:55:30 +0400 Subject: [openstack-dev] [Fuel] Feature delivery rules and automated tests Message-ID: Guys, we've done a good job in 6.0. Most of the features were merged before feature freeze. Our QA were involved in testing even earlier. It was much better than before. We had a discussion with Anastasia. There were several bug reports for features yesterday, far beyond HCF. 
So we still have a long way to go before we are perfect. We should add one
rule: we need to have automated tests before HCF. Actually, we should have
results of these tests just after FF. It is quite challenging because we
have a short development cycle. So my proposal is to require full deployment
and a run of the automated tests for each feature before soft code freeze.
And it needs to be tracked in checklists and on feature syncups.

Your opinion?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From eli at mirantis.com Fri Dec 12 14:58:19 2014
From: eli at mirantis.com (Evgeniy L)
Date: Fri, 12 Dec 2014 18:58:19 +0400
Subject: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format
In-Reply-To: 
References: 
Message-ID: 

Hi,

I don't agree with many of your statements, but I would like to continue
the discussion about a really important topic, i.e. the UI flow. My
suggestion was to add groups; in metadata.yaml the plugin developer can
have a description of the groups which the plugin belongs to:

groups:
  - id: storage
    subgroup:
      - id: cinder

With this information we can show a new option on the UI (wizard). If the
option is selected, it means that the plugin is enabled; if the plugin
belongs to several groups, we can use an OR statement.

The main point is, for environment creation we must specify the ids of
plugins. Yet another reason for that is plugin multiversioning: we must
know exactly which plugin with which version is used for an environment,
and I don't see how "conditions" can help us with it.

Thanks,

On Wed, Dec 10, 2014 at 8:23 PM, Vitaly Kramskikh wrote:
> 2014-12-10 19:31 GMT+03:00 Evgeniy L :
>> On Wed, Dec 10, 2014 at 6:50 PM, Vitaly Kramskikh <
>> vkramskikh at mirantis.com> wrote:
>>> 2014-12-10 16:57 GMT+03:00 Evgeniy L :
>>>> Hi,
>>>>
>>>> First let me describe our plans for the nearest release.
We want
>>>> to deliver role as a simple plugin. It means that a plugin developer can
>>>> define his own role with yaml, and it should also work fine with our
>>>> current approach where the user can define several fields on the
>>>> settings tab.
>>>>
>>>> Also I would like to mention another thing which we should probably
>>>> discuss in a separate thread: how plugins should be implemented. We have
>>>> two types of plugins, simple and complicated. The definition of simple -
>>>> I can do everything I need with yaml; the definition of complicated -
>>>> probably I have to write some python code. It doesn't mean that this
>>>> python code should do absolutely everything it wants, but it means we
>>>> should implement a stable, documented interface where the plugin is
>>>> connected to the core.
>>>>
>>>> Now let's talk about the UI flow. Our current problem is how to get the
>>>> information whether a plugin is used in the environment or not. This
>>>> information is required for the backend which generates appropriate
>>>> tasks for the task executor; this information can also be used in the
>>>> future if we decide to implement a plugin deletion mechanism.
>>>>
>>>> I didn't come up with some new solution; as before, we have two options
>>>> to solve the problem:
>>>>
>>>> # 1
>>>>
>>>> Use the conditional language which is currently used on the UI; it will
>>>> look like Vitaly described in the example [1].
>>>> Plugin developer should:
>>>>
>>>> 1. describe at least one element for the UI, which he will be able to
>>>> use in a task
>>>>
>>>> 2. add a condition which is written in our own programming language
>>>>
>>>> Example of the condition for the LBaaS plugin:
>>>>
>>>> condition: settings:lbaas.metadata.enabled == true
>>>>
>>>> 3. add to metadata.yaml a condition which defines if the plugin is
>>>> enabled
>>>>
>>>> is_enabled: settings:lbaas.metadata.enabled == true
>>>>
>>>> This approach has good flexibility, but it also has problems:
>>>>
>>>> a.
It's complicated and not intuitive for plugin developer. >>>> >>> It is less complicated than python code >>> >> >> I'm not sure why are you talking about python code here, my point >> is we should not force developer to use this conditions in any language. >> >> But that's how current plugin-like stuff works. There are various tasks > which are run only if some checkboxes are set, so stuff like Ceph and > vCenter will need conditions to describe tasks. > >> Anyway I don't agree with the statement there are more people who know >> python than "fuel ui conditional language". >> >> >>> b. It doesn't cover case when the user installs 3rd party plugin >>>> which doesn't have any conditions (because of # a) and >>>> user doesn't have a way to disable it for environment if it >>>> breaks his configuration. >>>> >>> If plugin doesn't have conditions for tasks, then it has invalid >>> metadata. >>> >> >> Yep, and it's a problem of the platform, which provides a bad interface. >> > Why is it bad? It plugin writer doesn't provide plugin name or version, > then metadata is invalid also. It is plugin writer's fault that he didn't > write metadata properly. > >> >> >>> >>>> # 2 >>>> >>>> As we discussed from the very beginning after user selects a release he >>>> can >>>> choose a set of plugins which he wants to be enabled for environment. >>>> After that we can say that plugin is enabled for the environment and we >>>> send >>>> tasks related to this plugin to task executor. >>>> >>>> >> My approach also allows to eliminate "enableness" of plugins which >>>> will cause UX issues and issues like you described above. vCenter and Ceph >>>> also don't have "enabled" state. vCenter has hypervisor and storage, Ceph >>>> provides backends for Cinder and Glance which can be used simultaneously or >>>> only one of them can be used. >>>> >>>> Both of described plugins have enabled/disabled state, vCenter is >>>> enabled >>>> when vCenter is selected as hypervisor. 
Ceph is enabled when it's >>>> selected >>>> as a backend for Cinder or Glance. >>>> >>> Nope, Ceph for Volumes can be used without Ceph for Images. Both of >>> these plugins can also have some granular tasks which are enabled by >>> various checkboxes (like VMware vCenter for volumes). How would you >>> determine whether tasks which installs VMware vCenter for volumes should >>> run? >>> >> >> Why "nope"? I have "Cinder OR Glance". >> > Oh, I missed it. So there are 2 checkboxes, how would you determine > "enableness"? > >> It can be easily handled in deployment script. >> > I don't know much about the status of granular deployment blueprint, but > AFAIK that's what we are going to get rid of. > >> >> >>>> If you don't like the idea of having Ceph/vCenter checkboxes on the >>>> first page, >>>> I can suggest as an idea (research is required) to define groups like >>>> Storage Backend, >>>> Network Manager and we will allow plugin developer to embed his option >>>> in radiobutton >>>> field on wizard pages. But plugin developer should not describe >>>> conditions, he should >>>> just write that his plugin is a Storage Backend, Hypervisor or new >>>> Network Manager. >>>> And the plugins e.g. Zabbix, Nagios, which don't belong to any of this >>>> groups >>>> should be shown as checkboxes on the first page of the wizard. >>>> >>> Why don't you just ditch "enableness" of plugins and get rid of this >>> complex stuff? Can you explain why do you need to know if plugin is >>> "enabled"? Let me summarize my opinion on this: >>> >> >> I described why we need it many times. Also it looks like you skipped >> another option >> and I would like to see some more information why you don't like it and >> why it's >> a bad from UX stand point of view. >> > Yes, I skipped it. You said "research is required", so please do it, write > a proposal and then we will compare it with condition approach. You still > don't have your proposal, so there is nothing to compare and discuss. 
From > the first perspective it seems complex and restrictive. > >> >>> - You don't need to know whether plugin is enabled or not. You need >>> to know what tasks should be run and whether plugin is removable (anything >>> else?). These conditions can be described by the DSL. >>> >>> I do need to know if plugin is enabled to figure out if it's removable, >> in fact those are the same things. >> > So there is nothing else you need "enableness", right? If you "described > why we need it many times", I think you need to do it one more time (in > form of a list). If we need "enableness" just to determine whether the > plugin is removable, then it is not the reason to ruin our UX. > >> >>> - >>> - Explicitly asking the user to enable plugin for new environment >>> should be considered as a last resort solution because it significantly >>> impair our UX for inexperienced user. Just imagine: a new user which barely >>> knows about OpenStack chooses a name for the environment, OS release and >>> then he needs to choose plugins. Really? >>> >>> I really think that it's absolutely ok to show checkbox with LBaaS for >> the user who found the >> plugin, downloaded it on the master and installed it with CLI. >> >> And right now this user have to go to this settings tab with attempts to >> find this checkbox, >> also he may not find it for example because of incompatible release >> version, and it's clearly >> a bad UX. >> > I like how it is done in modern browsers - after upgrade of master node > there is notification about incompatible plugins, and in list of plugins > there is a message that plugin is incompatible. We need to design how we > will handle it. Currently it is definitely a bad UX because nothing is done > for this. > >> My proposal for "complex" plugin interface: there should be python >>> classes with exactly the same fields from yaml files: plugin name, version, >>> etc. 
But condition for cluster deletion and for tasks which are written in >>> DSL in case of "simple" yaml config should become methods which plugin >>> writer can make as complex as he wants. >>> >> Why do you want to use python to define plugin name, version etc? It's a >> static data which are >> used for installation, I don't think that in fuel client (or some other >> installation tool) we want >> to unpack the plugin and import this module to get the information which >> is required for installation. >> > It is just a proposal in which I try to solve problems which you see in my > approach. If you want a "complex" interface with arbitrary python code, > that's how I see it. All fields are the same here, the approach is the > same, just conditions are in python. And YAML config can be converted to > this class, and all other code won't need to handle 2 different interfaces > for plugins. > >> >>>> [1] >>>> https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf >>>> >>>> On Sun, Nov 30, 2014 at 3:12 PM, Vitaly Kramskikh < >>>> vkramskikh at mirantis.com> wrote: >>>> >>>>> >>>>> >>>>> 2014-11-28 23:20 GMT+04:00 Dmitriy Shulyak : >>>>> >>>>>> >>>>>>> - environment_config.yaml should contain exact config which will >>>>>>> be mixed into cluster_attributes. No need to implicitly generate any >>>>>>> controls like it is done now. >>>>>>> >>>>>>> Initially i had the same thoughts and wanted to use it the way it >>>>>> is, but now i completely agree with Evgeniy that additional DSL will cause >>>>>> a lot >>>>>> of problems with compatibility between versions and developer >>>>>> experience. >>>>>> >>>>> As far as I understand, you want to introduce another approach to >>>>> describe UI part or plugins? >>>>> >>>>>> We need to search for alternatives.. >>>>>> 1. for UI i would prefer separate tab for plugins, where user will be >>>>>> able to enable/disable plugin explicitly. 
>>>>>> >>>>> Of course, we need a separate page for plugin management. >>>>> >>>>>> Currently settings tab is overloaded. >>>>>> 2. on backend we need to validate plugins against certain env before >>>>>> enabling it, >>>>>> and for simple case we may expose some basic entities like >>>>>> network_mode. >>>>>> For case where you need complex logic - python code is far more >>>>>> flexible that new DSL. >>>>>> >>>>>>> >>>>>>> - metadata.yaml should also contain "is_removable" field. This >>>>>>> field is needed to determine whether it is possible to remove installed >>>>>>> plugin. It is impossible to remove plugins in the current implementation. >>>>>>> This field should contain an expression written in our DSL which we already >>>>>>> use in a few places. The LBaaS plugin also uses it to hide the checkbox if >>>>>>> Neutron is not used, so even simple plugins like this need to utilize it. >>>>>>> This field can also be autogenerated, for more complex plugins plugin >>>>>>> writer needs to fix it manually. For example, for Ceph it could look like >>>>>>> "settings:storage.volumes_ceph.value == false and >>>>>>> settings:storage.images_ceph.value == false". >>>>>>> >>>>>>> How checkbox will help? There is several cases of plugin removal.. >>>>>> >>>>> It is not a checkbox, this is condition that determines whether the >>>>> plugin is removable. It allows plugin developer specify when plguin can be >>>>> safely removed from Fuel if there are some environments which were created >>>>> after the plugin had been installed. >>>>> >>>>>> 1. Plugin is installed, but not enabled for any env - just remove the >>>>>> plugin >>>>>> 2. Plugin is installed, enabled and cluster deployed - forget about >>>>>> it for now.. >>>>>> 3. 
Plugin is installed and only enabled - we need to maintain state >>>>>> of db consistent after plugin is removed, it is problematic, but possible >>>>>> >>>>> My approach also allows to eliminate "enableness" of plugins which >>>>> will cause UX issues and issues like you described above. vCenter and Ceph >>>>> also don't have "enabled" state. vCenter has hypervisor and storage, Ceph >>>>> provides backends for Cinder and Glance which can be used simultaneously or >>>>> only one of them can be used. >>>>> >>>>>> My main point that plugin is enabled/disabled explicitly by user, >>>>>> after that we can decide ourselves can it be removed or not. >>>>>> >>>>>>> >>>>>>> - For every task in tasks.yaml there should be added new >>>>>>> "condition" field with an expression which determines whether the task >>>>>>> should be run. In the current implementation tasks are always run for >>>>>>> specified roles. For example, vCenter plugin can have a few tasks with >>>>>>> conditions like "settings:common.libvirt_type.value == 'vcenter'" or >>>>>>> "settings:storage.volumes_vmdk.value == true". Also, AFAIU, similar >>>>>>> approach will be used in implementation of Granular Deployment feature. >>>>>>> >>>>>>> I had some thoughts about using DSL, it seemed to me especially >>>>>> helpfull when you need to disable part of embedded into core functionality, >>>>>> like deploying with another hypervisor, or network dirver (contrail >>>>>> for example). And DSL wont cover all cases here, this quite similar to >>>>>> metadata.yaml, simple cases can be covered by some variables in tasks (like >>>>>> group, unique, etc), but complex is easier to test and describe in python. >>>>>> >>>>> Could you please provide example of such conditions? vCenter and Ceph >>>>> can be turned into plugins using this approach. >>>>> >>>>> Also, I'm not against python version of plugins. 
It could look like a
>>>>> python class with exactly the same fields form YAML files, but conditions
>>>>> will be written in python.
>>>>>
>>>>>> _______________________________________________
>>>>>> OpenStack-dev mailing list
>>>>>> OpenStack-dev at lists.openstack.org
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>> --
>>>>> Vitaly Kramskikh,
>>>>> Software Engineer,
>>>>> Mirantis, Inc.
>>>>> >>>>> _______________________________________________ >>>>> OpenStack-dev mailing list >>>>> OpenStack-dev at lists.openstack.org >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> >>> -- >>> Vitaly Kramskikh, >>> Software Engineer, >>> Mirantis, Inc. >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Vitaly Kramskikh, > Software Engineer, > Mirantis, Inc. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anton.massoud at ericsson.com Fri Dec 12 15:10:20 2014 From: anton.massoud at ericsson.com (Anton Massoud) Date: Fri, 12 Dec 2014 15:10:20 +0000 Subject: [openstack-dev] tempest - object_storage Message-ID: <28C561A029C4AD4CB43E4B382193FA7922419BB1@ESESSMB207.ericsson.se> Hi, We are running tempest on icehouse, in our tempest runs, we found that the below test cases are failing in random iterations but not always. 
tempest.api.object_storage.test_account_services.AccountTest.test_update_account_metadata_with_delete_matadata_key[gate,smoke]
tempest.api.object_storage.test_container_services.ContainerTest.test_list_container_contents_with_path[gate,smoke]
tempest.api.object_storage.test_object_expiry.ObjectExpiryTest.test_get_object_after_expiry_time[gate]
tempest.api.object_storage.test_object_expiry.ObjectExpiryTest.test_get_object_at_expiry_time[gate]
tempest.api.object_storage.test_object_services.ObjectTest.test_object_upload_in_segments[gate]
tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata[gate,smoke]
tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata_with_create_and_remove_metadata[gate,smoke]
tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata_with_remove_metadata
tempest.api.object_storage.test_account_services.AccountTest.test_update_account_metadata_with_create_and_delete_metadata[gate,smoke]
tempest.api.object_storage.test_container_services.ContainerTest.test_create_container[gate,smoke]
tempest.api.object_storage.test_container_services.ContainerTest.test_delete_container[gate,smoke]
tempest.api.object_storage.test_object_services.ObjectTest.test_copy_object_2d_way[gate,smoke]

Has anyone else experienced the same? Any idea what this may be caused by?

/Anton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rybrown at redhat.com Fri Dec 12 15:30:23 2014
From: rybrown at redhat.com (Ryan Brown)
Date: Fri, 12 Dec 2014 10:30:23 -0500
Subject: [openstack-dev] [Openstack] [Ceilometer] [API] Batch alarm creation
In-Reply-To: <0b9601d015e6$ce3510f0$6a9f32d0$@chinacloud.com.cn>
References: <0b9601d015e6$ce3510f0$6a9f32d0$@chinacloud.com.cn>
Message-ID: <548B0A0F.6030203@redhat.com>

On 12/12/2014 03:37 AM, Rao Dingyuan wrote:
> Hi Eoghan and folks,
>
> I'm thinking of adding an API to create multiple alarms in a batch.
>
> I think adding an API to create multiple alarms is a good option to
> solve the problem that once an *alarm target* (a vm or a new group of
> vms) is created, multiple requests will be fired because multiple alarms
> are to be created.
>
> In our current project, this requirement is especially urgent since our
> alarm target is one VM, and 6 alarms are to be created when one VM is
> created.
>
> What do you guys think?
>
> Best Regards,
> Kurt Rao

Allowing batch operations is definitely a good idea, though it may not be
a solution to all of the problems you outlined.

One way to batch object creations would be to give clients the option to
POST a collection of alarms instead of a single alarm. Currently your API
looks like[1]:

POST /v2/alarms
BODY: {
  "alarm_actions": ...
  ...
}

For batches you could modify your API to accept a body like:

{
  "alarms": [
    {"alarm_actions": ...},
    {"alarm_actions": ...},
    {"alarm_actions": ...},
    {"alarm_actions": ...}
  ]
}

It could (pretty easily) be a backwards-compatible change since the
schemata don't conflict, and you can even add some kind of a "batch":true
flag to make it explicit that the user wants to create a collection.

The API-WG has a spec[2] out right now explaining the rationale behind
collection representations.

[1]: http://docs.openstack.org/developer/ceilometer/webapi/v2.html#post--v2-alarms
[2]: https://review.openstack.org/#/c/133660/11/guidelines/representation_structure.rst,unified

> ----- Original -----
> From: Eoghan Glynn [mailto:eglynn at redhat.com]
> Sent: 2014-12-03 20:34
> To: Rao Dingyuan
> Cc: openstack at lists.openstack.org
> Subject: Re: [Openstack] [Ceilometer] looking for alarm best practice - please help
>
>> Hi folks,
>>
>> I wonder if anyone could share some best practice regarding the
>> usage of ceilometer alarms. We are using the alarm
>> evaluation/notification of ceilometer and we don't feel very good about
>> the way we use it.
>> Below is our problem:
>>
>> ============================
>>
>> Scenario:
>>
>> When cpu usage or memory usage is above a certain threshold, alerts
>> should be displayed on the admin's web page. There should be 3 levels
>> of alerts according to the meter value, namely notice, warning, fatal.
>> Notice means the meter value is between 50% ~ 70%, warning means
>> between 70% ~ 85% and fatal means above 85%.
>>
>> For example:
>>
>> * when one vm's cpu usage is 72%, an alert message should be displayed
>> saying "Warning: vm[d9b7018b-06c4-4fba-8221-37f67f6c6b8c] cpu usage is above 70%".
>>
>> * when one vm's memory usage is 90%, another alert message should be
>> created saying "Fatal: vm[d9b7018b-06c4-4fba-8221-37f67f6c6b8c] memory
>> usage is above 85%"
>>
>> Our current solution:
>>
>> We used ceilometer alarm evaluation/notification to implement this. To
>> distinguish which VM and which meter is above what value, we've
>> created one alarm for each VM for each condition. So, to monitor 1 VM,
>> 6 alarms will be created because there are 2 meters and for each meter
>> there are 3 levels. That means, if there are 100 VMs to be monitored,
>> 600 alarms will be created.
>>
>> Problems:
>>
>> * The first problem is, when the number of meters increases, the
>> number of alarms will be multiplied. For example, a customer may want
>> alerts on disk and network IO rates, and if we do that, there will be
>> 4*3=12 alarms for each VM.
>>
>> * The second problem is, when one VM is created, multiple alarms will
>> be created, meaning multiple HTTP requests will be fired. In the case
>> above, 6 HTTP requests will be needed once a VM is created. And this
>> number also increases as the number of meters goes up.

> One way of reducing both the number of alarms and the volume of
> notifications would be to group related VMs, if such a concept exists
> in your use-case.
>
> This is effectively how Heat autoscaling uses ceilometer, alarming on
> the average of some statistic over a set of instances (as opposed to
> triggering on individual instances).
>
> The VMs could be grouped by setting user-metadata of the form:
>
> nova boot ... --meta metering.my_server_group=foobar
>
> Any user-metadata prefixed with 'metering.' will be preserved by
> ceilometer in the resource_metadata.user_metadata stored for each
> sample, so that it can be used to select the statistics on which the
> alarm is based, e.g.
>
> ceilometer alarm-threshold-create --name cpu_high_foobar \
>   --description 'warning: foobar instance group running hot' \
>   --meter-name cpu_util --threshold 70.0 \
>   --comparison-operator gt --statistic avg \
>   ...
>   --query metadata.user_metadata.my_server_group=foobar
>
> This approach is of course predicated on there being some natural
> grouping relation between instances in your environment.
>
> Cheers,
> Eoghan
>
>> =============================
>>
>> Does anyone have any suggestions?
>>
>> Best Regards!
>>
>> Kurt Rao
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
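[As an illustration of the collection body sketched above: the 2-meters-by-3-levels case from the quoted thread could be assembled client-side along these lines. This is a rough sketch only - the batched "alarms" wrapper is the proposal under discussion, not an existing Ceilometer endpoint, and the exact field layout follows the v2 threshold-alarm schema as an assumption:]

```python
# Sketch: build the 6 alarm definitions (2 meters x 3 levels) for one VM
# and collect them into a single batched request body, instead of firing
# 6 separate POST requests. The batch endpoint itself is hypothetical.

LEVELS = [("notice", 50.0), ("warning", 70.0), ("fatal", 85.0)]
METERS = ["cpu_util", "memory_util"]

def build_alarm(vm_id, meter, level, threshold):
    """One Ceilometer-style threshold alarm for a single VM/meter/level."""
    return {
        "name": "%s_%s_%s" % (level, meter, vm_id),
        "type": "threshold",
        "threshold_rule": {
            "meter_name": meter,
            "threshold": threshold,
            "comparison_operator": "gt",
            "statistic": "avg",
            "query": [{"field": "resource_id", "op": "eq", "value": vm_id}],
        },
    }

def build_batch(vm_id):
    """Collect every alarm for one VM into a single POST body."""
    return {"alarms": [build_alarm(vm_id, meter, level, threshold)
                       for meter in METERS
                       for level, threshold in LEVELS]}

body = build_batch("d9b7018b-06c4-4fba-8221-37f67f6c6b8c")
print(len(body["alarms"]))  # prints 6 - one request instead of six
```

[Whether the server then creates the collection atomically or item-by-item is exactly the kind of detail the API-WG guideline linked above is meant to settle.]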
From andrew.laski at rackspace.com Fri Dec 12 16:06:36 2014
From: andrew.laski at rackspace.com (Andrew Laski)
Date: Fri, 12 Dec 2014 11:06:36 -0500
Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward
In-Reply-To: <548B00B9.3090609@redhat.com>
References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com>
 <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net>
 <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com>
 <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net>
 <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com>
 <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>,
 <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com>
 <5488B96B.2070207@redhat.com>
 <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com>
 <5489F6CF.7000602@rackspace.com> <548B00B9.3090609@redhat.com>
Message-ID: <548B128C.40504@rackspace.com>

On 12/12/2014 09:50 AM, Russell Bryant wrote:
> On 12/11/2014 12:55 PM, Andrew Laski wrote:
>> Cells can handle a single API on top of globally distributed DCs. I
>> have spoken with a group that is doing exactly that. But it requires
>> that the API is a trusted part of the OpenStack deployments in those
>> distributed DCs.
> And the way the rest of the components fit into that scenario is far
> from clear to me. Do you consider this more of a "if you can make it
> work, good for you", or something we should aim to be more generally
> supported over time? Personally, I see the globally distributed
> OpenStack under a single API case as much more complex, and worth
> considering out of scope for the short to medium term, at least.

I do consider this to be out of scope for cells, for at least the medium
term as you've said.
There is additional complexity in making that a supported configuration that is not being addressed in the cells effort. I am just making the statement that this is something cells could address if desired, and therefore doesn't need an additional solution. > For me, this discussion boils down to ... > > 1) Do we consider these use cases in scope at all? > > 2) If we consider it in scope, is it enough of a priority to warrant a > cross-OpenStack push in the near term to work on it? > > 3) If yes to #2, how would we do it? Cascading, or something built > around cells? > > I haven't worried about #3 much, because I consider #2 or maybe even #1 > to be a show stopper here. > From akamyshnikova at mirantis.com Fri Dec 12 16:18:58 2014 From: akamyshnikova at mirantis.com (Anna Kamyshnikova) Date: Fri, 12 Dec 2014 20:18:58 +0400 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <548AD47F.1080401@redhat.com> References: <54899F92.2060900@gmail.com> <548A231D.6070302@iweb.com> <548AD47F.1080401@redhat.com> Message-ID: Thanks everyone for sharing yours opinion! I will create a separate change with another option that was suggested. Yes, I'm currently working on this bug https://bugs.launchpad.net/neutron/+bug/1194579. On Fri, Dec 12, 2014 at 2:41 PM, Ihar Hrachyshka wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > On 12/12/14 00:05, Mathieu Gagn? wrote: > > We recently had an issue in production where a user had 2 > > "default" security groups (for reasons we have yet to identify). 
> > This is probably the result of the race condition that is discussed in > the thread: https://bugs.launchpad.net/neutron/+bug/1194579 > /Ihar > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > > iQEcBAEBCgAGBQJUitR/AAoJEC5aWaUY1u57VggIALzdTLHnO7Fr8gKlWPS7Uu+o > Su9KV41td8Epzs3pNsGYkH2Kz4T5obAneCORUiZl7boBpAJcnMm3Jt9K8YnTCVUy > t4AbfIxSrTD7drHf3HoMoNEDrSntdnpTHoGpG+idNpFjc0kjBjm81W3y14Gab0k5 > 5Mw/jV8mdnB6aRs5Zhari50/04X8SZeDpQNgBHL5kY40CZ+sUtS4C8OKfj7OEAuW > LNmkHgDAtwewbNdluntbSdLGVjyl/F9s+21HoajqBcGNhH8ZHpAr4hphMbZv8lBY > iAD2tztxvkacYaGduBFh6bewxVNGaUJBWmmc2xqHAXXbDP3d9aOk5q0wHK3SPQY= > =TDwc > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dklyle0 at gmail.com Fri Dec 12 17:20:19 2014 From: dklyle0 at gmail.com (David Lyle) Date: Fri, 12 Dec 2014 10:20:19 -0700 Subject: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard In-Reply-To: References: <547DD08A.7000402@redhat.com> Message-ID: Not entirely sure why they both exist either. So by move, you meant override (nuance). That's different and I have no issue with that. I'm also fine with attempting to consolidate _conf and _scripts. David On Thu, Dec 11, 2014 at 1:22 PM, Thai Q Tran wrote: > > It would not create a circular dependency, dashboard would depend on > horizon - not the latter. > Scripts that are library specific will live in horizon while scripts that > are panel specific will live in dashboard. > Let me draw a more concrete example. 
> > In Horizon > We know that _script and _conf are included in the base.html > We create a _script and _conf placeholder file for project overrides > (similar to _stylesheets and _header) > In Dashboard > We create a _script and _conf file with today's content > It overrides the _script and _conf file in horizon > Now we can include panel specific scripts without causing circular > dependency. > > In fact, I would like to go further and suggest that _script and _conf be > combine into a single file. > Not sure why we need two places to include scripts. > > > -----David Lyle wrote: ----- > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > From: David Lyle > Date: 12/11/2014 09:23AM > Subject: Re: [openstack-dev] [Horizon] Moving _conf and _scripts to > dashboard > > > I'm probably not understanding the nuance of the question but moving the > _scripts.html file to openstack_dashboard creates some circular > dependencies, does it not? templates/base.html in the horizon side of the > repo includes _scripts.html and insures that the javascript needed by the > existing horizon framework is present. > > _conf.html seems like a better candidate for moving as it's more closely > tied to the application code. > > David > > > On Wed, Dec 10, 2014 at 7:20 PM, Thai Q Tran wrote: > >> Sorry for duplicate mail, forgot the subject. >> >> -----Thai Q Tran/Silicon Valley/IBM wrote: ----- >> To: "OpenStack Development Mailing List \(not for usage questions\)" < >> openstack-dev at lists.openstack.org> >> From: Thai Q Tran/Silicon Valley/IBM >> Date: 12/10/2014 03:37PM >> Subject: Moving _conf and _scripts to dashboard >> >> The way we are structuring our javascripts today is complicated. All of >> our static javascripts reside in /horizon/static and are imported through >> _conf.html and _scripts.html. 
Notice that there are already some panel >> specific javascripts like: horizon.images.js, horizon.instances.js, >> horizon.users.js. They do not belong in horizon. They belong in >> openstack_dashboard because they are specific to a panel. >> >> Why am I raising this issue now? In Angular, we need controllers written >> in javascript for each panel. As we angularize more and more panels, we >> need to store them in a way that makes sense. To me, it makes sense for us to >> move _conf and _scripts to openstack_dashboard. Or if this is not possible, >> then provide a mechanism to override them in openstack_dashboard. >> >> Thoughts? >> Thai >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougw at a10networks.com Fri Dec 12 17:28:29 2014 From: dougw at a10networks.com (Doug Wiegley) Date: Fri, 12 Dec 2014 17:28:29 +0000 Subject: [openstack-dev] [neutron] Services are now split out and neutron is open for commits! In-Reply-To: References: Message-ID: Hi all, Neutron grenade jobs have been failing since late afternoon Thursday, due to split fallout. Armando has a fix, and it's working its way through the gate: https://review.openstack.org/#/c/141256/ Get your rechecks ready! 
Thanks, Doug From: Douglas Wiegley > Date: Wednesday, December 10, 2014 at 10:29 PM To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] [neutron] Services are now split out and neutron is open for commits! Hi all, I'd like to echo the thanks to all involved, and thanks for the patience during this period of transition. And a logistical note: if you have any outstanding reviews against the now missing files/directories (db/{loadbalancer,firewall,vpn}, services/, or tests/unit/services), you must re-submit your review against the new repos. Existing neutron reviews for service code will be summarily abandoned in the near future. Lbaas folks, hold off on re-submitting feature/lbaasv2 reviews. I'll have that branch merged in the morning, and ping in channel when it's ready for submissions. Finally, if any tempest lovers want to take a crack at splitting the tempest runs into four, perhaps using salv's reviews of splitting them in two as a guide, and then creating jenkins jobs, we need some help getting those going. Please ping me directly (IRC: dougwig). Thanks, doug From: Kyle Mestery > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, December 10, 2014 at 4:10 PM To: "OpenStack Development Mailing List (not for usage questions)" > Subject: [openstack-dev] [neutron] Services are now split out and neutron is open for commits! Folks, just a heads up that we have completed splitting out the services (FWaaS, LBaaS, and VPNaaS) into separate repositories. [1] [2] [3]. This was all done in accordance with the spec approved here [4]. Thanks to all involved, but a special thanks to Doug and Anita, as well as infra. Without all of their work and help, this wouldn't have been possible! Neutron and the services repositories are now open for merges again. 
We're going to be landing some major L3 agent refactoring across the 4 repositories in the next four days, look for Carl to be leading that work with the L3 team. In the meantime, please report any issues you have in launchpad [5] as bugs, and find people in #openstack-neutron or send an email. We've verified things come up and all the tempest and API tests for basic neutron work fine. In the coming week, we'll be getting all the tests working for the services repositories. Medium term, we need to also move all the advanced services tempest tests out of tempest and into the respective repositories. We also need to beef these tests up considerably, so if you want to help out on a critical project for Neutron, please let me know. Thanks! Kyle [1] http://git.openstack.org/cgit/openstack/neutron-fwaas [2] http://git.openstack.org/cgit/openstack/neutron-lbaas [3] http://git.openstack.org/cgit/openstack/neutron-vpnaas [4] http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst [5] https://bugs.launchpad.net/neutron -------------- next part -------------- An HTML attachment was scrubbed... URL: From mestery at mestery.com Fri Dec 12 17:33:44 2014 From: mestery at mestery.com (Kyle Mestery) Date: Fri, 12 Dec 2014 10:33:44 -0700 Subject: [openstack-dev] [neutron] Services are now split out and neutron is open for commits! In-Reply-To: References: Message-ID: This has merged now, FYI. On Fri, Dec 12, 2014 at 10:28 AM, Doug Wiegley wrote: > Hi all, > > Neutron grenade jobs have been failing since late afternoon Thursday, > due to split fallout. Armando has a fix, and it?s working it?s way through > the gate: > > https://review.openstack.org/#/c/141256/ > > Get your rechecks ready! 
> > Thanks, > Doug > > > From: Douglas Wiegley > Date: Wednesday, December 10, 2014 at 10:29 PM > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > Subject: Re: [openstack-dev] [neutron] Services are now split out and > neutron is open for commits! > > Hi all, > > I?d like to echo the thanks to all involved, and thanks for the patience > during this period of transition. > > And a logistical note: if you have any outstanding reviews against the > now missing files/directories (db/{loadbalancer,firewall,vpn}, services/, > or tests/unit/services), you must re-submit your review against the new > repos. Existing neutron reviews for service code will be summarily > abandoned in the near future. > > Lbaas folks, hold off on re-submitting feature/lbaasv2 reviews. I?ll > have that branch merged in the morning, and ping in channel when it?s ready > for submissions. > > Finally, if any tempest lovers want to take a crack at splitting the > tempest runs into four, perhaps using salv?s reviews of splitting them in > two as a guide, and then creating jenkins jobs, we need some help getting > those going. Please ping me directly (IRC: dougwig). > > Thanks, > doug > > > From: Kyle Mestery > Reply-To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > Date: Wednesday, December 10, 2014 at 4:10 PM > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > Subject: [openstack-dev] [neutron] Services are now split out and neutron > is open for commits! > > Folks, just a heads up that we have completed splitting out the > services (FWaaS, LBaaS, and VPNaaS) into separate repositores. [1] [2] [3]. > This was all done in accordance with the spec approved here [4]. Thanks to > all involved, but a special thanks to Doug and Anita, as well as infra. > Without all of their work and help, this wouldn't have been possible! 
> > Neutron and the services repositories are now open for merges again. We're > going to be landing some major L3 agent refactoring across the 4 > repositories in the next four days, look for Carl to be leading that work > with the L3 team. > > In the meantime, please report any issues you have in launchpad [5] as > bugs, and find people in #openstack-neutron or send an email. We've > verified things come up and all the tempest and API tests for basic neutron > work fine. > > In the coming week, we'll be getting all the tests working for the > services repositories. Medium term, we need to also move all the advanced > services tempest tests out of tempest and into the respective repositories. > We also need to beef these tests up considerably, so if you want to help > out on a critical project for Neutron, please let me know. > > Thanks! > Kyle > > [1] http://git.openstack.org/cgit/openstack/neutron-fwaas > [2] http://git.openstack.org/cgit/openstack/neutron-lbaas > [3] http://git.openstack.org/cgit/openstack/neutron-vpnaas > [4] > http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst > [5] https://bugs.launchpad.net/neutron > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stoner at redhat.com Fri Dec 12 18:04:05 2014 From: stoner at redhat.com (Sean Toner) Date: Fri, 12 Dec 2014 13:04:05 -0500 Subject: [openstack-dev] Questions regarding Functional Testing (Paris Summit) Message-ID: <2270469.sZVWXDMDUh@workdev-stoner.usersys.redhat.com> Hi everyone, I have been reading the etherpad from the Paris summit wrt to moving the functional tests into their respective projects (https://etherpad.openstack.org/p/kilo-crossproject-move-func-tests-to-projects). 
I am mostly interested in this from the nova project perspective. However, I still have a lot of questions. For example, is it permissible (or a good idea) to use the python-*clients as a library for the tasks? I know these were not allowed in Tempest, but I don't see why they couldn't be used here (especially since, AFAIK, there is no testing done on the SDK clients themselves). Another question is also about a difference between Tempest and these new functional tests. In nova's case, it would be very useful to actually utilize the libvirt library in order to touch the hypervisor itself. In Tempest, it's not allowed to do that. Would it make sense to be able to make calls to libvirt within a nova functional test? Basically, since Tempest was a "public" only library, there needs to be a different set of rules as to what can and can't be done. Even the definition of what exactly a functional test is should be more clearly stated. For example, I have been working on a project for some nova tests that also use the glance and keystone clients (since I am using the python SDK clients). I saw this quote from the etherpad: " Many "api" tests in Tempest require more than one service (eg, nova api tests require glance) Is this an API test or an integration test or a functional test? sounds to me like cross project integration tests +1+1" I would disagree that a functional test should belong to only one project. IMHO, a functional test is essentially a black box test that might span one or more projects, though the projects should be related. For example, I have worked on one of the new features where the config drive image property is set in the glance image itself, rather than specified during the nova boot call. I believe that's how a functional test can be defined. A black box test which may require looking "under the hood" in a way that Tempest does not allow. Has there been any other work or thoughts on how functional testing should be done? 
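The config-drive example above can be sketched end to end. The classes below are purely hypothetical stand-ins for python-glanceclient, python-novaclient and a libvirt connection (none of them are real SDK APIs); a real functional test would drive real clients against a deployment, but the shape of the cross-service, under-the-hood assertion is the same:

```python
# Hypothetical stand-ins only: FakeGlance/FakeNova/FakeHypervisor are
# not real client APIs.  The point is the test shape: set a property
# via one service's API, act via another, verify at hypervisor level.

class FakeGlance:
    def __init__(self):
        self.images = {}
    def image_create(self, name, **properties):
        self.images[name] = dict(properties)
        return name

class FakeHypervisor:            # stands in for a libvirt connection
    def __init__(self):
        self.domains = {}
    def define(self, name, config_drive):
        self.domains[name] = {"config_drive": config_drive}
    def lookup(self, name):
        return self.domains[name]

class FakeNova:
    def __init__(self, glance, hypervisor):
        self.glance, self.hypervisor = glance, hypervisor
    def boot(self, name, image):
        props = self.glance.images[image]
        # nova honours the image property instead of a boot-time flag
        config_drive = props.get("img_config_drive") == "mandatory"
        self.hypervisor.define(name, config_drive)
        return name

glance, hyp = FakeGlance(), FakeHypervisor()
nova = FakeNova(glance, hyp)

glance.image_create("cirros", img_config_drive="mandatory")
server = nova.boot("vm1", image="cirros")

# The "functional" assertion: API-driven action, hypervisor-level check,
# which Tempest's public-API-only rules would forbid.
assert hyp.lookup(server)["config_drive"] is True
```
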
Thanks, Sean Toner From tqtran at us.ibm.com Fri Dec 12 18:05:00 2014 From: tqtran at us.ibm.com (Thai Q Tran) Date: Fri, 12 Dec 2014 11:05:00 -0700 Subject: [openstack-dev] [horizon] REST and Django In-Reply-To: References: , <547DD08A.7000402@redhat.com> Message-ID: An HTML attachment was scrubbed... URL: From marun at redhat.com Fri Dec 12 18:05:40 2014 From: marun at redhat.com (Maru Newby) Date: Fri, 12 Dec 2014 10:05:40 -0800 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <548A1A34.40105@dague.net> References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> Message-ID: On Dec 11, 2014, at 2:27 PM, Sean Dague wrote: > On 12/11/2014 04:16 PM, Jay Pipes wrote: >> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: >>> On Dec 11, 2014, at 1:04 PM, Jay Pipes wrote: >>>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: >>>>> >>>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau wrote: >>>>> >>>>>> On Thu, Dec 11, 2014, Mark McClain wrote: >>>>>>> >>>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >>>>>>> > wrote: >>>>>>>> >>>>>>>> I'm generally in favor of making name attributes opaque, utf-8 >>>>>>>> strings that >>>>>>>> are entirely user-defined and have no constraints on them. I >>>>>>>> consider the >>>>>>>> name to be just a tag that the user places on some resource. It >>>>>>>> is the >>>>>>>> resource's ID that is unique. >>>>>>>> >>>>>>>> I do realize that Nova takes a different approach to *some* >>>>>>>> resources, >>>>>>>> including the security group name. >>>>>>>> >>>>>>>> End of the day, it's probably just a personal preference whether >>>>>>>> names >>>>>>>> should be unique to a tenant/user or not. 
>>>>>>>> >>>>>>>> Maru had asked me my opinion on whether names should be unique and I >>>>>>>> answered my personal opinion that no, they should not be, and if >>>>>>>> Neutron >>>>>>>> needed to ensure that there was one and only one default security >>>>>>>> group for >>>>>>>> a tenant, that a way to accomplish such a thing in a race-free >>>>>>>> way, without >>>>>>>> use of SELECT FOR UPDATE, was to use the approach I put into the >>>>>>>> pastebin on >>>>>>>> the review above. >>>>>>>> >>>>>>> >>>>>>> I agree with Jay. We should not care about how a user names the >>>>>>> resource. >>>>>>> There other ways to prevent this race and Jay?s suggestion is a >>>>>>> good one. >>>>>> >>>>>> However we should open a bug against Horizon because the user >>>>>> experience there >>>>>> is terrible with duplicate security group names. >>>>> >>>>> The reason security group names are unique is that the ec2 api >>>>> supports source >>>>> rule specifications by tenant_id (user_id in amazon) and name, so >>>>> not enforcing >>>>> uniqueness means that invocation in the ec2 api will either fail or be >>>>> non-deterministic in some way. >>>> >>>> So we should couple our API evolution to EC2 API then? >>>> >>>> -jay >>> >>> No I was just pointing out the historical reason for uniqueness, and >>> hopefully >>> encouraging someone to find the best behavior for the ec2 api if we >>> are going >>> to keep the incompatibility there. Also I personally feel the ux is >>> better >>> with unique names, but it is only a slight preference. >> >> Sorry for snapping, you made a fair point. > > Yeh, honestly, I agree with Vish. I do feel that the UX of that > constraint is useful. Otherwise you get into having to show people UUIDs > in a lot more places. While those are good for consistency, they are > kind of terrible to show to people. 
While there is a good case for the UX of unique names - it also makes orchestration via tools like puppet a heck of a lot simpler - the fact is that most OpenStack resources do not require unique names. That being the case, why would we want security groups to deviate from this convention? Maru From tsufiev at mirantis.com Fri Dec 12 18:09:28 2014 From: tsufiev at mirantis.com (Timur Sufiev) Date: Fri, 12 Dec 2014 22:09:28 +0400 Subject: [openstack-dev] FW: [horizon] [ux] Changing how the modals are closed in Horizon In-Reply-To: References: Message-ID: It seems to me that the consensus on keeping the simpler approach -- to make Bootstrap data-backdrop="static" as the default behavior -- has been reached. Am I right? On Thu, Dec 4, 2014 at 10:59 PM, Kruithof, Piet wrote: > > My preference would be "change the default behavior to 'static'" for the > following reasons: > > - There are plenty of ways to close the modal, so there's not really a > need for this feature. > - There are no visual cues, such as an 'X' or a Cancel button, that > selecting outside of the modal closes it. > - Downside is losing all of your data. > > My two cents... > > Begin forwarded message: > > From: "Rob Cresswell (rcresswe)" rcresswe at cisco.com>> > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org >> > Date: December 3, 2014 at 5:21:51 AM PST > Subject: Re: [openstack-dev] [horizon] [ux] Changing how the modals are > closed in Horizon > Reply-To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org >> > > +1 to changing the behaviour to 'static'. Modal inside a modal is > potentially slightly more useful, but looks messy and inconsistent, which I > think outweighs the functionality. > > Rob > > > On 2 Dec 2014, at 12:21, Timur Sufiev tsufiev at mirantis.com>> wrote: > > Hello, Horizoneers and UX-ers! 
> > The default behavior of modals in Horizon (defined in turn by Bootstrap > defaults) regarding their closing is to simply close the modal once user > clicks somewhere outside of it (on the backdrop element below and around > the modal). This is not very convenient for the modal forms containing a > lot of input - when it is closed without a warning all the data the user > has already provided is lost. Keeping this in mind, I've made a patch [1] > changing default Bootstrap 'modal_backdrop' parameter to 'static', which > means that forms are not closed once the user clicks on a backdrop, while > it's still possible to close them by pressing 'Esc' or clicking on the 'X' > link at the top right border of the form. Also the patch [1] allows to > customize this behavior (between 'true'-current one/'false' - no backdrop > element/'static') on a per-form basis. > > What I didn't know at the moment I was uploading my patch is that David > Lyle had been working on a similar solution [2] some time ago. It's a bit > more elaborate than mine: if the user has already filled some some inputs > in the form, then a confirmation dialog is shown, otherwise the form is > silently dismissed as it happens now. > > The whole point of writing about this in the ML is to gather opinions > which approach is better: > * stick to the current behavior; > * change the default behavior to 'static'; > * use the David's solution with confirmation dialog (once it'll be rebased > to the current codebase). > > What do you think? > > [1] https://review.openstack.org/#/c/113206/ > [2] https://review.openstack.org/#/c/23037/ > > P.S. I remember that I promised to write this email a week ago, but better > late than never :). 
> > -- > Timur Sufiev > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Timur Sufiev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbayer at redhat.com Fri Dec 12 18:16:21 2014 From: mbayer at redhat.com (Mike Bayer) Date: Fri, 12 Dec 2014 13:16:21 -0500 Subject: [openstack-dev] [all] [ha] potential issue with implicit async-compatible mysql drivers In-Reply-To: <548AFB49.8080802@redhat.com> References: <783B3168-BD43-42CB-90EC-B55D10EAB5EA@redhat.com> <548AFB49.8080802@redhat.com> Message-ID: > On Dec 12, 2014, at 9:27 AM, Ihar Hrachyshka wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > Reading the latest comments at > https://github.com/PyMySQL/PyMySQL/issues/275, it seems to me that the > issue is not to be solved in drivers themselves but instead in > libraries that arrange connections (sqlalchemy/oslo.db), correct? > > Will the proposed connection reopening help? disagree, this is absolutely a driver bug. I?ve re-read that last comment and now I see that the developer is suggesting that this condition not be flagged in any way, so I?ve responded. The connection should absolutely blow up and if it wants to refuse to be usable afterwards, that?s fine (it?s the same as MySQLdb ?commands out of sync?). It just has to *not* emit any further SQL as though nothing is wrong. 
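The behavior argued for here, fail loudly and then refuse all further SQL once the protocol state is unknown, can be sketched with a small guard wrapper. This is illustrative only, not how any particular driver or SQLAlchemy actually implements invalidation:

```python
# Sketch of "blow up and refuse to be usable afterwards": once any
# statement fails mid-protocol, the connection marks itself invalid
# and rejects every later statement, including COMMIT, instead of
# silently carrying on with damaged state.  Illustrative only.

class InvalidatedConnectionError(Exception):
    pass

class GuardedConnection:
    def __init__(self, raw_execute):
        self._execute = raw_execute   # callable standing in for the driver
        self._valid = True

    def execute(self, sql):
        if not self._valid:
            raise InvalidatedConnectionError(
                "connection state is unknown; refusing to emit SQL")
        try:
            return self._execute(sql)
        except Exception:
            self._valid = False       # anything interrupted mid-protocol
            raise

    def commit(self):
        # a commit is just more SQL as far as the guard is concerned
        return self.execute("COMMIT")

def flaky(sql):
    if sql == "BOOM":
        raise RuntimeError("interrupted mid-protocol")
    return "ok"

conn = GuardedConnection(flaky)
assert conn.execute("INSERT ...") == "ok"
try:
    conn.execute("BOOM")
except RuntimeError:
    pass

refused = False
try:
    conn.commit()                     # must NOT silently commit
except InvalidatedConnectionError:
    refused = True
assert refused
```

Under this contract the partial commit seen in the gevent script becomes impossible: the interrupted connection would raise on the later INSERT and on COMMIT, and the pool would recycle it.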
It doesn't matter much for PyMySQL anyway, I don't know that PyMySQL is up to par for openstack in any case (look at the entries in their changelog: https://github.com/PyMySQL/PyMySQL/blob/master/CHANGELOG "Several other bug fixes", "Many bug fixes" - really? is this an iphone app?) We really should be looking to get this fixed in MySQL-connector, which seems to have a similar issue. It's just so difficult to get responses from MySQL-connector that the PyMySQL thread is at least informative. > > /Ihar > > On 05/12/14 23:43, Mike Bayer wrote: >> Hey list - >> >> I'm posting this here just to get some ideas on what might be >> happening here, as it may or may not have some impact on Openstack >> if and when we move to MySQL drivers that are async-patchable, like >> MySQL-connector or PyMySQL. I had a user post this issue a few >> days ago which I've since distilled into test cases for PyMySQL and >> MySQL-connector separately. It uses gevent, not eventlet, so I'm >> not really sure if this applies. But there's plenty of very smart >> people here so if anyone can shed some light on what is actually >> happening here, that would help. >> >> The program essentially illustrates code that performs several >> steps upon a connection, however if the greenlet is suddenly >> killed, the state from the connection, while damaged, is still >> being allowed to continue on in some way, and what's >> super-catastrophic here is that you see a transaction actually >> being committed *without* all the statements proceeding on it. >> >> In my work with MySQL drivers, I've noted for years that they are >> all very, very bad at dealing with concurrency-related issues. The >> whole "MySQL has gone away" and "commands out of sync" errors are >> ones that we've all just drowned in, and so often these are due to >> the driver getting mixed up due to concurrent use of a connection. >> However this one seems more insidious. 
Though at the same time, >> the script has some complexity happening (like a simplistic >> connection pool) and I?m not really sure where the core of the >> issue lies. >> >> The script is at >> https://gist.github.com/zzzeek/d196fa91c40cb515365e and also below. >> If you run it for a few seconds, go over to your MySQL command line >> and run this query: >> >> SELECT * FROM table_b WHERE a_id not in (SELECT id FROM table_a) >> ORDER BY a_id DESC; >> >> and what you?ll see is tons of rows in table_b where the ?a_id? is >> zero (because cursor.lastrowid fails), but the *rows are >> committed*. If you read the segment of code that does this, it >> should be impossible: >> >> connection = pool.get() rowid = execute_sql( connection, "INSERT >> INTO table_a (data) VALUES (%s)", ("a",) ) >> >> gevent.sleep(random.random() * 0.2) try: execute_sql( connection, >> "INSERT INTO table_b (a_id, data) VALUES (%s, %s)", (rowid, "b",) >> ) connection.commit() pool.return_conn(connection) >> >> except Exception: connection.rollback() >> pool.return_conn(connection) >> >> so if the gevent.sleep() throws a timeout error, somehow we are >> getting thrown back in there, with the connection in an invalid >> state, but not invalid enough to commit. >> >> If a simple check for ?SELECT connection_id()? is added, this query >> fails and the whole issue is prevented. Additionally, if you put a >> foreign key constraint on that b_table.a_id, then the issue is >> prevented, and you see that the constraint violation is happening >> all over the place within the commit() call. The connection is >> being used such that its state just started after the >> gevent.sleep() call. >> >> Now, there?s also a very rudimental connection pool here. That is >> also part of what?s going on. If i try to run without the pool, >> the whole script just runs out of connections, fast, which suggests >> that this gevent timeout cleans itself up very, very badly. 
>> However, SQLAlchemy's pool works a lot like this one, so if folks >> here can tell me if the connection pool is doing something bad, >> then that's key, because I need to make a comparable change in >> SQLAlchemy's pool. Otherwise I worry our eventlet use could have >> big problems under high load.
>>
>> # -*- coding: utf-8 -*-
>> import gevent.monkey
>> gevent.monkey.patch_all()
>>
>> import collections
>> import threading
>> import time
>> import random
>> import sys
>>
>> import logging
>> logging.basicConfig()
>> log = logging.getLogger('foo')
>> log.setLevel(logging.DEBUG)
>>
>> #import pymysql as dbapi
>> from mysql import connector as dbapi
>>
>>
>> class SimplePool(object):
>>     def __init__(self):
>>         self.checkedin = collections.deque([
>>             self._connect() for i in range(50)
>>         ])
>>         self.checkout_lock = threading.Lock()
>>         self.checkin_lock = threading.Lock()
>>
>>     def _connect(self):
>>         return dbapi.connect(
>>             user="scott", passwd="tiger",
>>             host="localhost", db="test")
>>
>>     def get(self):
>>         with self.checkout_lock:
>>             while not self.checkedin:
>>                 time.sleep(.1)
>>             return self.checkedin.pop()
>>
>>     def return_conn(self, conn):
>>         try:
>>             conn.rollback()
>>         except:
>>             log.error("Exception during rollback", exc_info=True)
>>         try:
>>             conn.close()
>>         except:
>>             log.error("Exception during close", exc_info=True)
>>
>>         # recycle to a new connection
>>         conn = self._connect()
>>         with self.checkin_lock:
>>             self.checkedin.append(conn)
>>
>>
>> def verify_connection_id(conn):
>>     cursor = conn.cursor()
>>     try:
>>         cursor.execute("select connection_id()")
>>         row = cursor.fetchone()
>>         return row[0]
>>     except:
>>         return None
>>     finally:
>>         cursor.close()
>>
>>
>> def execute_sql(conn, sql, params=()):
>>     cursor = conn.cursor()
>>     cursor.execute(sql, params)
>>     lastrowid = cursor.lastrowid
>>     cursor.close()
>>     return lastrowid
>>
>>
>> pool = SimplePool()
>>
>> # SELECT * FROM table_b WHERE a_id not in
>> # (SELECT id FROM table_a) ORDER BY a_id DESC;
>>
>> PREPARE_SQL = [
>>     "DROP TABLE IF EXISTS table_b",
>>     "DROP TABLE IF EXISTS table_a",
>>     """CREATE TABLE table_a (
>>         id INT NOT NULL AUTO_INCREMENT,
>>         data VARCHAR (256) NOT NULL,
>>         PRIMARY KEY (id)
>>     ) engine='InnoDB'""",
>>     """CREATE TABLE table_b (
>>         id INT NOT NULL AUTO_INCREMENT,
>>         a_id INT NOT NULL,
>>         data VARCHAR (256) NOT NULL,
>>         -- uncomment this to illustrate where the driver is attempting
>>         -- to INSERT the row during ROLLBACK
>>         -- FOREIGN KEY (a_id) REFERENCES table_a(id),
>>         PRIMARY KEY (id)
>>     ) engine='InnoDB'
>>     """]
>>
>> connection = pool.get()
>> for sql in PREPARE_SQL:
>>     execute_sql(connection, sql)
>> connection.commit()
>> pool.return_conn(connection)
>> print("Table prepared...")
>>
>>
>> def transaction_kill_worker():
>>     while True:
>>         try:
>>             connection = None
>>             with gevent.Timeout(0.1):
>>                 connection = pool.get()
>>                 rowid = execute_sql(
>>                     connection,
>>                     "INSERT INTO table_a (data) VALUES (%s)", ("a",))
>>                 gevent.sleep(random.random() * 0.2)
>>
>>                 try:
>>                     execute_sql(
>>                         connection,
>>                         "INSERT INTO table_b (a_id, data) VALUES (%s, %s)",
>>                         (rowid, "b",))
>>
>>                     # this version prevents the commit from
>>                     # proceeding on a bad connection
>>                     # if verify_connection_id(connection):
>>                     #     connection.commit()
>>
>>                     # this version does not.  It will commit the
>>                     # row for table_b without the table_a being present.
>>                     connection.commit()
>>
>>                     pool.return_conn(connection)
>>                 except Exception:
>>                     connection.rollback()
>>                     pool.return_conn(connection)
>>                     sys.stdout.write("$")
>>         except gevent.Timeout:
>>             # try to return the connection anyway
>>             if connection is not None:
>>                 pool.return_conn(connection)
>>             sys.stdout.write("#")
>>         except Exception:
>>             # logger.exception(e)
>>             sys.stdout.write("@")
>>         else:
>>             sys.stdout.write(".")
>>         finally:
>>             if connection is not None:
>>                 pool.return_conn(connection)
>>
>>
>> def main():
>>     for i in range(50):
>>         gevent.spawn(transaction_kill_worker)
>>
>>     gevent.sleep(3)
>>
>>     while True:
>>         gevent.sleep(5)
>>
>>
>> if __name__ == "__main__":
>>     main()
>>
>> _______________________________________________ OpenStack-dev >> mailing list OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > > iQEcBAEBCgAGBQJUivtJAAoJEC5aWaUY1u57nn8IAJP5zK/htG8EeoOSWZVV1ksA > en+lQuIA09aNdkHSNS1b/lNOYwsF4X5SM0dU5Cs4LCsumC5jM9S/cNOn3sfpVooA > vd31O1kKtd255YtnsKSmrOPiytoI69n2/65tVqgWLHpuXRSaj4HtqOEY/vOWMX6g > BON50QUYwwxAZLNOPmEO7vUnJ3VYO6zquH2mQrA1Vg/LCm3+VaodEHOVCxieaJ/n > iQPB4Vx1dkuP10HzWyjQW0j4kbUakqgkq/VHaiCYNC85HzPz6KJUOK/neZcBrWsZ > RQcLae1dX1yGMXDd5hyJaoe3qUfjuvSZmV5jS3ok/x8rnKdmVl65PtUUlLfVOU0= > =wJ55 > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ivarlazzaro at gmail.com Fri Dec 12 18:25:05 2014 From: ivarlazzaro at gmail.com (Ivar Lazzaro) Date: Fri, 12 Dec 2014 10:25:05 -0800 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: References: <54899F92.2060900@gmail.com> Message-ID: In general, I agree with Jay about the opaqueness of the names.
I see however good reasons for having user-defined unique attributes (see Clint's point about idempotency). A middle ground here could be granting to the users the ability to specify the resource ID. A similar proposal was made some time ago by Eugene [0] [0] http://lists.openstack.org/pipermail/openstack-dev/2014-September/046150.html On Thu, Dec 11, 2014 at 6:59 AM, Mark McClain wrote: > > > On Dec 11, 2014, at 8:43 AM, Jay Pipes wrote: > > I'm generally in favor of making name attributes opaque, utf-8 strings > that are entirely user-defined and have no constraints on them. I consider > the name to be just a tag that the user places on some resource. It is the > resource's ID that is unique. > > I do realize that Nova takes a different approach to *some* resources, > including the security group name. > > End of the day, it's probably just a personal preference whether names > should be unique to a tenant/user or not. > > Maru had asked me my opinion on whether names should be unique and I > answered my personal opinion that no, they should not be, and if Neutron > needed to ensure that there was one and only one default security group for > a tenant, that a way to accomplish such a thing in a race-free way, without > use of SELECT FOR UPDATE, was to use the approach I put into the pastebin > on the review above. > > > I agree with Jay. We should not care about how a user names the > resource. There are other ways to prevent this race and Jay's suggestion is a > good one. > > mark > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
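[Editorial aside: the race-free pattern Jay refers to above can be sketched in a few lines. The idea is to let a UNIQUE (here, primary-key) constraint arbitrate the race and catch the duplicate-key error, rather than using SELECT ... FOR UPDATE. This is only an illustration of the pattern; the table, column, and function names are hypothetical, and sqlite3 merely stands in for the real backend.]

```python
# Hypothetical sketch: "ensure one default security group per tenant"
# without SELECT ... FOR UPDATE, letting the UNIQUE constraint decide
# who wins the race.  Not Neutron's actual code.
import sqlite3

def ensure_default_sg(conn, tenant_id, sg_id):
    """Return the tenant's default security group id, creating it if absent."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO default_security_groups (tenant_id, sg_id) "
                "VALUES (?, ?)", (tenant_id, sg_id))
        return sg_id
    except sqlite3.IntegrityError:
        # Another worker won the race; reuse the row it created.
        row = conn.execute(
            "SELECT sg_id FROM default_security_groups WHERE tenant_id = ?",
            (tenant_id,)).fetchone()
        return row[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE default_security_groups "
             "(tenant_id TEXT PRIMARY KEY, sg_id TEXT NOT NULL)")
print(ensure_default_sg(conn, "tenant-a", "sg-1"))  # -> sg-1 (row created)
print(ensure_default_sg(conn, "tenant-a", "sg-2"))  # -> sg-1 (race lost, reused)
```

[Under concurrency every caller converges on the row the first writer created, with no locking beyond the constraint itself.]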
URL: From os.lcheng at gmail.com Fri Dec 12 18:27:29 2014 From: os.lcheng at gmail.com (Lin Hua Cheng) Date: Fri, 12 Dec 2014 10:27:29 -0800 Subject: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard In-Reply-To: References: <547DD08A.7000402@redhat.com> Message-ID: Consolidating them would break it for users that have customization and extension on the two templates. -Lin On Fri, Dec 12, 2014 at 9:20 AM, David Lyle wrote: > > Not entirely sure why they both exist either. > > So by move, you meant override (nuance). That's different and I have no > issue with that. > > I'm also fine with attempting to consolidate _conf and _scripts. > > David > > On Thu, Dec 11, 2014 at 1:22 PM, Thai Q Tran wrote: > >> >> It would not create a circular dependency, dashboard would depend on >> horizon - not the latter. >> Scripts that are library specific will live in horizon while scripts that >> are panel specific will live in dashboard. >> Let me draw a more concrete example. >> >> In Horizon >> We know that _script and _conf are included in the base.html >> We create a _script and _conf placeholder file for project overrides >> (similar to _stylesheets and _header) >> In Dashboard >> We create a _script and _conf file with today's content >> It overrides the _script and _conf file in horizon >> Now we can include panel specific scripts without causing circular >> dependency. >> >> In fact, I would like to go further and suggest that _script and _conf be >> combine into a single file. >> Not sure why we need two places to include scripts. 
>> >> >> -----David Lyle wrote: ----- >> To: "OpenStack Development Mailing List (not for usage questions)" < >> openstack-dev at lists.openstack.org> >> From: David Lyle >> Date: 12/11/2014 09:23AM >> Subject: Re: [openstack-dev] [Horizon] Moving _conf and _scripts to >> dashboard >> >> >> I'm probably not understanding the nuance of the question but moving the >> _scripts.html file to openstack_dashboard creates some circular >> dependencies, does it not? templates/base.html in the horizon side of the >> repo includes _scripts.html and insures that the javascript needed by the >> existing horizon framework is present. >> >> _conf.html seems like a better candidate for moving as it's more closely >> tied to the application code. >> >> David >> >> >> On Wed, Dec 10, 2014 at 7:20 PM, Thai Q Tran wrote: >> >>> Sorry for duplicate mail, forgot the subject. >>> >>> -----Thai Q Tran/Silicon Valley/IBM wrote: ----- >>> To: "OpenStack Development Mailing List \(not for usage questions\)" < >>> openstack-dev at lists.openstack.org> >>> From: Thai Q Tran/Silicon Valley/IBM >>> Date: 12/10/2014 03:37PM >>> Subject: Moving _conf and _scripts to dashboard >>> >>> The way we are structuring our javascripts today is complicated. All of >>> our static javascripts reside in /horizon/static and are imported through >>> _conf.html and _scripts.html. Notice that there are already some panel >>> specific javascripts like: horizon.images.js, horizon.instances.js, >>> horizon.users.js. They do not belong in horizon. They belong in >>> openstack_dashboard because they are specific to a panel. >>> >>> Why am I raising this issue now? In Angular, we need controllers written >>> in javascript for each panel. As we angularize more and more panels, we >>> need to store them in a way that make sense. To me, it make sense for us to >>> move _conf and _scripts to openstack_dashboard. Or if this is not possible, >>> then provide a mechanism to override them in openstack_dashboard. 
>>> >>> Thoughts? >>> Thai >>> >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbryant at redhat.com Fri Dec 12 18:28:40 2014 From: rbryant at redhat.com (Russell Bryant) Date: Fri, 12 Dec 2014 13:28:40 -0500 Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward In-Reply-To: <548B128C.40504@rackspace.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> <548B00B9.3090609@redhat.com> <548B128C.40504@rackspace.com> Message-ID: <548B33D8.60109@redhat.com> On 
12/12/2014 11:06 AM, Andrew Laski wrote: > > On 12/12/2014 09:50 AM, Russell Bryant wrote: >> On 12/11/2014 12:55 PM, Andrew Laski wrote: >>> Cells can handle a single API on top of globally distributed DCs. I >>> have spoken with a group that is doing exactly that. But it requires >>> that the API is a trusted part of the OpenStack deployments in those >>> distributed DCs. >> And the way the rest of the components fit into that scenario is far >> from clear to me. Do you consider this more of a "if you can make it >> work, good for you", or something we should aim to be more generally >> supported over time? Personally, I see the globally distributed >> OpenStack under a single API case much more complex, and worth >> considering out of scope for the short to medium term, at least. > > I do consider this to be out of scope for cells, for at least the medium > term as you've said. There is additional complexity in making that a > supported configuration that is not being addressed in the cells > effort. I am just making the statement that this is something cells > could address if desired, and therefore doesn't need an additional > solution. OK, great. Thanks for the clarification. I think we're on the same page. 
:-) -- Russell Bryant From joe.gordon0 at gmail.com Fri Dec 12 18:30:07 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Fri, 12 Dec 2014 10:30:07 -0800 Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward In-Reply-To: <548B00B9.3090609@redhat.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> <548B00B9.3090609@redhat.com> Message-ID: On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant wrote: > On 12/11/2014 12:55 PM, Andrew Laski wrote: > > Cells can handle a single API on top of globally distributed DCs. I > > have spoken with a group that is doing exactly that. But it requires > > that the API is a trusted part of the OpenStack deployments in those > > distributed DCs. > > And the way the rest of the components fit into that scenario is far > from clear to me. Do you consider this more of a "if you can make it > work, good for you", or something we should aim to be more generally > supported over time? Personally, I see the globally distributed > OpenStack under a single API case much more complex, and worth > considering out of scope for the short to medium term, at least. > > For me, this discussion boils down to ... > > 1) Do we consider these use cases in scope at all? 
> > 2) If we consider it in scope, is it enough of a priority to warrant a > cross-OpenStack push in the near term to work on it? > > 3) If yes to #2, how would we do it? Cascading, or something built > around cells? > > I haven't worried about #3 much, because I consider #2 or maybe even #1 > to be a show stopper here. > Agreed > > -- > Russell Bryant > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpyzhov at mirantis.com Fri Dec 12 18:35:13 2014 From: dpyzhov at mirantis.com (Dmitry Pyzhov) Date: Fri, 12 Dec 2014 22:35:13 +0400 Subject: [openstack-dev] [Fuel] Logs format on UI (High/6.0) Message-ID: We have a high priority bug in 6.0: https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story. Our OpenStack services used to send logs in a strange format, with an extra copy of the timestamp and log level: ==> ./neutron-metadata-agent.log <== 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO neutron.common.config [-] Logging enabled! And we have a workaround for this. We hide the extra timestamp and use the second log level. In Juno, some services have updated oslo.logging and now send logs in a simple format: ==> ./nova-api.log <== 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from /etc/nova/api-paste.ini In order to keep backward compatibility and deal with both formats, we have a dirty workaround for our workaround: https://review.openstack.org/#/c/141450/ As I see it, our best choice here is to throw away all workarounds and show logs on the UI as is. If a service sends duplicated data, we should show duplicated data. The long-term fix here is to update oslo.logging in all packages. We can do it in 6.1. -------------- next part -------------- An HTML attachment was scrubbed... 
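[Editorial aside: tolerating both log formats described above is straightforward. The sketch below is illustrative only (these regexes are not Fuel's actual parser); it prefers the inner oslo.logging level when the duplicated prefix is present and falls back to the outer syslog-style level otherwise.]

```python
# Hedged sketch of a parser for the two formats in the thread above.
import re

# Outer syslog-style prefix: "<iso8601 timestamp> <level>: <rest>"
OUTER = re.compile(r'^(?P<ts>\S+) (?P<level>[a-z]+): (?P<rest>.*)$')
# Inner oslo.logging prefix: "<date> <time> <pid> <LEVEL> <message>"
INNER = re.compile(r'^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ \d+ '
                   r'(?P<level>[A-Z]+) (?P<msg>.*)$')

def parse_line(line):
    """Return (timestamp, LEVEL, message), or None for unparseable lines."""
    m = OUTER.match(line)
    if m is None:
        return None
    rest = m.group('rest')
    inner = INNER.match(rest)
    if inner:  # old duplicated format: trust the inner level and message
        return m.group('ts'), inner.group('level'), inner.group('msg')
    return m.group('ts'), m.group('level').upper(), rest

old = ('2014-12-12T11:00:30.098105+00:00 info: '
      '2014-12-12 11:00:30.003 14349 INFO neutron.common.config '
      '[-] Logging enabled!')
new = ('2014-12-12T10:57:15.437488+00:00 debug: '
      'Loading app ec2 from /etc/nova/api-paste.ini')
print(parse_line(old))
# ('2014-12-12T11:00:30.098105+00:00', 'INFO', 'neutron.common.config [-] Logging enabled!')
print(parse_line(new))
# ('2014-12-12T10:57:15.437488+00:00', 'DEBUG', 'Loading app ec2 from /etc/nova/api-paste.ini')
```

[Either way, no information is lost: duplicated lines still carry both timestamps in the raw text, and the UI decides what to display.]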
URL: From joe.gordon0 at gmail.com Fri Dec 12 18:40:03 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Fri, 12 Dec 2014 10:40:03 -0800 Subject: [openstack-dev] [NFV][Telco] pxe-boot In-Reply-To: <548AB9FD.8070005@dektech.com.au> References: <20141125223355.Horde.8UWcgskXSBVvU-RJrCgbtw7@mail.dektech.com.au> <693765041.19659732.1417044729676.JavaMail.zimbra@redhat.com> <547ED500.1070008@dektech.com.au> <548869F5.1040801@dektech.com.au> <548AB9FD.8070005@dektech.com.au> Message-ID: On Fri, Dec 12, 2014 at 1:48 AM, Pasquale Porreca < pasquale.porreca at dektech.com.au> wrote: > From my point of view it is not advisable to base some functionalities > of the instances on direct calls to Openstack API. This for 2 main > reasons, the first one: if the Openstack code changes (and we know > Openstack code does change) it will be required to change the code of > the software running in the instance too; the second one: if in the > future one wants to pass to another cloud infrastructure it will be more > difficult to achieve it. > Thoughts on your two reasons: 1) What happens if OpenStack code changes? While OpenStack is under very active development we have stable APIs, especially around something like booting an instance. So the API call to boot an instance with a specific image *should not* change as you upgrade OpenStack (unless we deprecate an API, but this will be a slow multi year process). 2) "if in the future one wants to pass to another cloud infrastructure it will be more difficult to achieve it." Why not use something like apache jcloud to make this easier? https://jclouds.apache.org/ > > On 12/12/14 01:20, Joe Gordon wrote: > > On Wed, Dec 10, 2014 at 7:42 AM, Pasquale Porreca < > > pasquale.porreca at dektech.com.au> wrote: > > > >> > Well, one of the main reason to choose an open source product is to > avoid > >> > vendor lock-in. 
I think it is not > >> > advisable to embed in the software running in an instance a call to > >> > OpenStack specific services. >> > > > I'm sorry I don't follow the logic here, can you elaborate. > > > > -- > Pasquale Porreca > > DEK Technologies > Via dei Castelli Romani, 22 > 00040 Pomezia (Roma) > > Mobile +39 3394823805 > Skype paskporr > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tqtran at us.ibm.com Fri Dec 12 18:43:23 2014 From: tqtran at us.ibm.com (Thai Q Tran) Date: Fri, 12 Dec 2014 11:43:23 -0700 Subject: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard In-Reply-To: References: , <547DD08A.7000402@redhat.com> Message-ID: An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Dec 12 19:16:29 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 12 Dec 2014 14:16:29 -0500 Subject: [openstack-dev] [all] [ha] potential issue with implicit async-compatible mysql drivers In-Reply-To: References: <783B3168-BD43-42CB-90EC-B55D10EAB5EA@redhat.com> <548AFB49.8080802@redhat.com> Message-ID: <4DA19508-CFB7-4F7F-AE9B-CCB34C34EFBD@doughellmann.com> On Dec 12, 2014, at 1:16 PM, Mike Bayer wrote: > >> On Dec 12, 2014, at 9:27 AM, Ihar Hrachyshka wrote: >> >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA512 >> >> Reading the latest comments at >> https://github.com/PyMySQL/PyMySQL/issues/275, it seems to me that the >> issue is not to be solved in drivers themselves but instead in >> libraries that arrange connections (sqlalchemy/oslo.db), correct? >> >> Will the proposed connection reopening help? > > disagree, this is absolutely a driver bug. I've re-read that last comment and now I see that the developer is suggesting that this condition not be flagged in any way, so I've responded. 
The connection should absolutely blow up and if it wants to refuse to be usable afterwards, that's fine (it's the same as MySQLdb "commands out of sync"). It just has to *not* emit any further SQL as though nothing is wrong. > > It doesn't matter much for PyMySQL anyway, I don't know that PyMySQL is up to par for openstack in any case (look at the entries in their changelog: https://github.com/PyMySQL/PyMySQL/blob/master/CHANGELOG "Several other bug fixes", "Many bug fixes" - really? is this an iphone app?) This does make me a little concerned about merging https://review.openstack.org/#/c/133962/ so I've added a -2 for the time being to let the discussion go on here. Doug > > We really should be looking to get this fixed in MySQL-connector, which seems to have a similar issue. It's just so difficult to get responses from MySQL-connector that the PyMySQL thread is at least informative. > > > > > >> >> /Ihar >> >> On 05/12/14 23:43, Mike Bayer wrote: >>> Hey list - >>> >>> I'm posting this here just to get some ideas on what might be >>> happening here, as it may or may not have some impact on Openstack >>> if and when we move to MySQL drivers that are async-patchable, like >>> MySQL-connector or PyMySQL. I had a user post this issue a few >>> days ago which I've since distilled into test cases for PyMySQL and >>> MySQL-connector separately. It uses gevent, not eventlet, so I'm >>> not really sure if this applies. But there's plenty of very smart >>> people here so if anyone can shed some light on what is actually >>> happening here, that would help. >>> >>> The program essentially illustrates code that performs several >>> steps upon a connection, however if the greenlet is suddenly >>> killed, the state from the connection, while damaged, is still >>> being allowed to continue on in some way, and what's >>> super-catastrophic here is that you see a transaction actually >>> being committed *without* all the statements proceeding on it. 
>>> >>> In my work with MySQL drivers, I've noted for years that they are >>> all very, very bad at dealing with concurrency-related issues. The >>> whole "MySQL has gone away" and "commands out of sync" errors are >>> ones that we've all just drowned in, and so often these are due to >>> the driver getting mixed up due to concurrent use of a connection. >>> However this one seems more insidious. Though at the same time, >>> the script has some complexity happening (like a simplistic >>> connection pool) and I'm not really sure where the core of the >>> issue lies. >>> >>> The script is at >>> https://gist.github.com/zzzeek/d196fa91c40cb515365e and also below. >>> If you run it for a few seconds, go over to your MySQL command line >>> and run this query: >>> >>> SELECT * FROM table_b WHERE a_id not in (SELECT id FROM table_a) >>> ORDER BY a_id DESC; >>> >>> and what you'll see is tons of rows in table_b where the 'a_id' is >>> zero (because cursor.lastrowid fails), but the *rows are >>> committed*. If you read the segment of code that does this, it >>> should be impossible:
>>>
>>> connection = pool.get()
>>> rowid = execute_sql(
>>>     connection,
>>>     "INSERT INTO table_a (data) VALUES (%s)", ("a",)
>>> )
>>>
>>> gevent.sleep(random.random() * 0.2)
>>> try:
>>>     execute_sql(
>>>         connection,
>>>         "INSERT INTO table_b (a_id, data) VALUES (%s, %s)", (rowid, "b",)
>>>     )
>>>     connection.commit()
>>>     pool.return_conn(connection)
>>>
>>> except Exception:
>>>     connection.rollback()
>>>     pool.return_conn(connection)
>>>
>>> so if the gevent.sleep() throws a timeout error, somehow we are >>> getting thrown back in there, with the connection in an invalid >>> state, but not invalid enough to commit. >>> >>> If a simple check for "SELECT connection_id()" is added, this query >>> fails and the whole issue is prevented. 
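[Editorial aside: the "simple check" described here amounts to a guard like the following sketch. It is illustrative only; connection_id() is MySQL-specific, and `conn` is assumed to be any DB-API connection.]

```python
# Only commit when the connection still answers a cheap liveness query;
# otherwise roll back, since the connection's state is suspect.
def safe_commit(conn):
    cursor = conn.cursor()
    try:
        cursor.execute("select connection_id()")  # liveness probe
        cursor.fetchone()
    except Exception:
        conn.rollback()  # do not commit on a connection in unknown state
        return False
    finally:
        cursor.close()
    conn.commit()
    return True
```

[This is exactly the difference between the commented-out verify_connection_id() branch and the unconditional commit in the sample script.]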
Additionally, if you put a >>> foreign key constraint on that b_table.a_id, then the issue is >>> prevented, and you see that the constraint violation is happening >>> all over the place within the commit() call. The connection is >>> being used such that its state just started after the >>> gevent.sleep() call. >>> >>> Now, there's also a very rudimental connection pool here. That is >>> also part of what's going on. If i try to run without the pool, >>> the whole script just runs out of connections, fast, which suggests >>> that this gevent timeout cleans itself up very, very badly. >>> However, SQLAlchemy's pool works a lot like this one, so if folks >>> here can tell me if the connection pool is doing something bad, >>> then that's key, because I need to make a comparable change in >>> SQLAlchemy's pool. Otherwise I worry our eventlet use could have >>> big problems under high load.
>>>
>>> # -*- coding: utf-8 -*-
>>> import gevent.monkey
>>> gevent.monkey.patch_all()
>>>
>>> import collections
>>> import threading
>>> import time
>>> import random
>>> import sys
>>>
>>> import logging
>>> logging.basicConfig()
>>> log = logging.getLogger('foo')
>>> log.setLevel(logging.DEBUG)
>>>
>>> #import pymysql as dbapi
>>> from mysql import connector as dbapi
>>>
>>>
>>> class SimplePool(object):
>>>     def __init__(self):
>>>         self.checkedin = collections.deque([
>>>             self._connect() for i in range(50)
>>>         ])
>>>         self.checkout_lock = threading.Lock()
>>>         self.checkin_lock = threading.Lock()
>>>
>>>     def _connect(self):
>>>         return dbapi.connect(
>>>             user="scott", passwd="tiger",
>>>             host="localhost", db="test")
>>>
>>>     def get(self):
>>>         with self.checkout_lock:
>>>             while not self.checkedin:
>>>                 time.sleep(.1)
>>>             return self.checkedin.pop()
>>>
>>>     def return_conn(self, conn):
>>>         try:
>>>             conn.rollback()
>>>         except:
>>>             log.error("Exception during rollback", exc_info=True)
>>>         try:
>>>             conn.close()
>>>         except:
>>>             log.error("Exception during close", exc_info=True)
>>>
>>>         # recycle to a new connection
>>>         conn = self._connect()
>>>         with self.checkin_lock:
>>>             self.checkedin.append(conn)
>>>
>>>
>>> def verify_connection_id(conn):
>>>     cursor = conn.cursor()
>>>     try:
>>>         cursor.execute("select connection_id()")
>>>         row = cursor.fetchone()
>>>         return row[0]
>>>     except:
>>>         return None
>>>     finally:
>>>         cursor.close()
>>>
>>>
>>> def execute_sql(conn, sql, params=()):
>>>     cursor = conn.cursor()
>>>     cursor.execute(sql, params)
>>>     lastrowid = cursor.lastrowid
>>>     cursor.close()
>>>     return lastrowid
>>>
>>>
>>> pool = SimplePool()
>>>
>>> # SELECT * FROM table_b WHERE a_id not in
>>> # (SELECT id FROM table_a) ORDER BY a_id DESC;
>>>
>>> PREPARE_SQL = [
>>>     "DROP TABLE IF EXISTS table_b",
>>>     "DROP TABLE IF EXISTS table_a",
>>>     """CREATE TABLE table_a (
>>>         id INT NOT NULL AUTO_INCREMENT,
>>>         data VARCHAR (256) NOT NULL,
>>>         PRIMARY KEY (id)
>>>     ) engine='InnoDB'""",
>>>     """CREATE TABLE table_b (
>>>         id INT NOT NULL AUTO_INCREMENT,
>>>         a_id INT NOT NULL,
>>>         data VARCHAR (256) NOT NULL,
>>>         -- uncomment this to illustrate where the driver is attempting
>>>         -- to INSERT the row during ROLLBACK
>>>         -- FOREIGN KEY (a_id) REFERENCES table_a(id),
>>>         PRIMARY KEY (id)
>>>     ) engine='InnoDB'
>>>     """]
>>>
>>> connection = pool.get()
>>> for sql in PREPARE_SQL:
>>>     execute_sql(connection, sql)
>>> connection.commit()
>>> pool.return_conn(connection)
>>> print("Table prepared...")
>>>
>>>
>>> def transaction_kill_worker():
>>>     while True:
>>>         try:
>>>             connection = None
>>>             with gevent.Timeout(0.1):
>>>                 connection = pool.get()
>>>                 rowid = execute_sql(
>>>                     connection,
>>>                     "INSERT INTO table_a (data) VALUES (%s)", ("a",))
>>>                 gevent.sleep(random.random() * 0.2)
>>>
>>>                 try:
>>>                     execute_sql(
>>>                         connection,
>>>                         "INSERT INTO table_b (a_id, data) VALUES (%s, %s)",
>>>                         (rowid, "b",))
>>>
>>>                     # this version prevents the commit from
>>>                     # proceeding on a bad connection
>>>                     # if verify_connection_id(connection):
>>>                     #     connection.commit()
>>>
>>>                     # this version does not.  It will commit the
>>>                     # row for table_b without the table_a being present.
>>>                     connection.commit()
>>>
>>>                     pool.return_conn(connection)
>>>                 except Exception:
>>>                     connection.rollback()
>>>                     pool.return_conn(connection)
>>>                     sys.stdout.write("$")
>>>         except gevent.Timeout:
>>>             # try to return the connection anyway
>>>             if connection is not None:
>>>                 pool.return_conn(connection)
>>>             sys.stdout.write("#")
>>>         except Exception:
>>>             # logger.exception(e)
>>>             sys.stdout.write("@")
>>>         else:
>>>             sys.stdout.write(".")
>>>         finally:
>>>             if connection is not None:
>>>                 pool.return_conn(connection)
>>>
>>>
>>> def main():
>>>     for i in range(50):
>>>         gevent.spawn(transaction_kill_worker)
>>>
>>>     gevent.sleep(3)
>>>
>>>     while True:
>>>         gevent.sleep(5)
>>>
>>>
>>> if __name__ == "__main__":
>>>     main()
>>>
>>> _______________________________________________ OpenStack-dev >>> mailing list OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> -----BEGIN PGP SIGNATURE----- >> Version: GnuPG/MacGPG2 v2.0.22 (Darwin) >> >> iQEcBAEBCgAGBQJUivtJAAoJEC5aWaUY1u57nn8IAJP5zK/htG8EeoOSWZVV1ksA >> en+lQuIA09aNdkHSNS1b/lNOYwsF4X5SM0dU5Cs4LCsumC5jM9S/cNOn3sfpVooA >> vd31O1kKtd255YtnsKSmrOPiytoI69n2/65tVqgWLHpuXRSaj4HtqOEY/vOWMX6g >> BON50QUYwwxAZLNOPmEO7vUnJ3VYO6zquH2mQrA1Vg/LCm3+VaodEHOVCxieaJ/n >> iQPB4Vx1dkuP10HzWyjQW0j4kbUakqgkq/VHaiCYNC85HzPz6KJUOK/neZcBrWsZ >> RQcLae1dX1yGMXDd5hyJaoe3qUfjuvSZmV5jS3ok/x8rnKdmVl65PtUUlLfVOU0= >> =wJ55 >> -----END PGP SIGNATURE----- >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dklyle0 at gmail.com Fri Dec 12 19:26:33 2014 From: dklyle0 at gmail.com (David Lyle) Date: Fri, 12 Dec 2014 12:26:33 -0700 Subject: [openstack-dev] FW: [horizon] [ux] Changing how the modals are closed 
in Horizon In-Reply-To: References: Message-ID: works for me, less complexity +1 On Fri, Dec 12, 2014 at 11:09 AM, Timur Sufiev wrote: > It seems to me that the consensus on keeping the simpler approach -- to > make Bootstrap data-backdrop="static" as the default behavior -- has been > reached. Am I right? > > On Thu, Dec 4, 2014 at 10:59 PM, Kruithof, Piet < > pieter.c.kruithof-jr at hp.com> wrote: >> >> My preference would be "change the default behavior to 'static'" for the >> following reasons: >> >> - There are plenty of ways to close the modal, so there's not really a >> need for this feature. >> - There are no visual cues, such as an 'X' or a Cancel button, that >> selecting outside of the modal closes it. >> - Downside is losing all of your data. >> >> My two cents... >> >> Begin forwarded message: >> >> From: "Rob Cresswell (rcresswe)" > rcresswe at cisco.com>> >> To: "OpenStack Development Mailing List (not for usage questions)" < >> openstack-dev at lists.openstack.org> openstack-dev at lists.openstack.org>> >> Date: December 3, 2014 at 5:21:51 AM PST >> Subject: Re: [openstack-dev] [horizon] [ux] Changing how the modals are >> closed in Horizon >> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < >> openstack-dev at lists.openstack.org> openstack-dev at lists.openstack.org>> >> >> +1 to changing the behaviour to 'static'. Modal inside a modal is >> potentially slightly more useful, but looks messy and inconsistent, which I >> think outweighs the functionality. >> >> Rob >> >> >> On 2 Dec 2014, at 12:21, Timur Sufiev > tsufiev at mirantis.com>> wrote: >> >> Hello, Horizoneers and UX-ers! >> >> The default behavior of modals in Horizon (defined in turn by Bootstrap >> defaults) regarding their closing is to simply close the modal once user >> clicks somewhere outside of it (on the backdrop element below and around >> the modal). 
This is not very convenient for the modal forms containing a >> lot of input - when it is closed without a warning all the data the user >> has already provided is lost. Keeping this in mind, I've made a patch [1] >> changing default Bootstrap 'modal_backdrop' parameter to 'static', which >> means that forms are not closed once the user clicks on a backdrop, while >> it's still possible to close them by pressing 'Esc' or clicking on the 'X' >> link at the top right border of the form. Also the patch [1] allows to >> customize this behavior (between 'true'-current one/'false' - no backdrop >> element/'static') on a per-form basis. >> >> What I didn't know at the moment I was uploading my patch is that David >> Lyle had been working on a similar solution [2] some time ago. It's a bit >> more elaborate than mine: if the user has already filled some inputs >> in the form, then a confirmation dialog is shown, otherwise the form is >> silently dismissed as it happens now. 
>> >> -- >> Timur Sufiev >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org> OpenStack-dev at lists.openstack.org> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org> OpenStack-dev at lists.openstack.org> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Timur Sufiev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From blak111 at gmail.com Fri Dec 12 19:40:07 2014 From: blak111 at gmail.com (Kevin Benton) Date: Fri, 12 Dec 2014 11:40:07 -0800 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: References: <54899F92.2060900@gmail.com> Message-ID: If we allow resource IDs to be set they will no longer be globally unique. I'm not sure if this will impact anything directly right now, but it might be something that impacts tools orchestrating multiple neutron deployments (e.g. cascading, cells, etc). On Fri, Dec 12, 2014 at 10:25 AM, Ivar Lazzaro wrote: > In general, I agree with Jay about the opaqueness of the names. I see > however good reasons for having user-defined unique attributes (see > Clint's point about idempotency). > A middle ground here could be granting to the users the ability to specify > the resource ID. 
> A similar proposal was made some time ago by Eugene [0] > > > [0] > http://lists.openstack.org/pipermail/openstack-dev/2014-September/046150.html > > On Thu, Dec 11, 2014 at 6:59 AM, Mark McClain wrote: > >> >> On Dec 11, 2014, at 8:43 AM, Jay Pipes wrote: >> >> I'm generally in favor of making name attributes opaque, utf-8 strings >> that are entirely user-defined and have no constraints on them. I consider >> the name to be just a tag that the user places on some resource. It is the >> resource's ID that is unique. >> >> I do realize that Nova takes a different approach to *some* resources, >> including the security group name. >> >> End of the day, it's probably just a personal preference whether names >> should be unique to a tenant/user or not. >> >> Maru had asked me my opinion on whether names should be unique and I >> answered my personal opinion that no, they should not be, and if Neutron >> needed to ensure that there was one and only one default security group for >> a tenant, that a way to accomplish such a thing in a race-free way, without >> use of SELECT FOR UPDATE, was to use the approach I put into the pastebin >> on the review above. >> >> >> I agree with Jay. We should not care about how a user names the >> resource. There are other ways to prevent this race, and Jay's suggestion is a >> good one. >> >> mark >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Kevin Benton -------------- next part -------------- An HTML attachment was scrubbed...
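The race-free pattern Jay and Mark are endorsing above (Jay's pastebin itself is not reproduced in this thread) boils down to: put a unique constraint on (tenant_id, name), let the database arbitrate the race, and treat a constraint violation as "someone else already created it". A minimal sketch with stdlib sqlite3 — the table and column names here are illustrative, not Neutron's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE securitygroups (
                    id INTEGER PRIMARY KEY,
                    tenant_id TEXT NOT NULL,
                    name TEXT NOT NULL,
                    UNIQUE (tenant_id, name))""")

def ensure_default_group(tenant_id):
    # Both racers attempt the INSERT; the unique constraint guarantees
    # only one row is ever created, and the loser simply reads the
    # winner's row. No SELECT FOR UPDATE is needed.
    try:
        with conn:  # commits on success
            conn.execute(
                "INSERT INTO securitygroups (tenant_id, name) VALUES (?, 'default')",
                (tenant_id,))
    except sqlite3.IntegrityError:
        pass  # lost the race: the default group already exists
    row = conn.execute(
        "SELECT id FROM securitygroups WHERE tenant_id = ? AND name = 'default'",
        (tenant_id,)).fetchone()
    return row[0]

first = ensure_default_group("tenant-a")
second = ensure_default_group("tenant-a")  # simulates the losing racer
assert first == second
```

The same shape works on MySQL/PostgreSQL by catching the backend's duplicate-key error instead of sqlite3.IntegrityError.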
URL: From anteaya at anteaya.info Fri Dec 12 20:02:42 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Fri, 12 Dec 2014 15:02:42 -0500 Subject: [openstack-dev] [nova][cinder][infra] Ceph CI status update In-Reply-To: References: <20141211163643.GA10911@helmut> <5489CE60.2040102@anteaya.info> Message-ID: <548B49E2.1040005@anteaya.info> On 12/12/2014 03:28 AM, Deepak Shetty wrote: > On Thu, Dec 11, 2014 at 10:33 PM, Anita Kuno wrote: > >> On 12/11/2014 09:36 AM, Jon Bernard wrote: >>> Heya, quick Ceph CI status update. Once the test_volume_boot_pattern >>> was marked as skipped, only the revert_resize test was failing. I have >>> submitted a patch to nova for this [1], and that yields an all green >>> ceph ci run [2]. So at the moment, and with my revert patch, we're in >>> good shape. >>> >>> I will fix up that patch today so that it can be properly reviewed and >>> hopefully merged. From there I'll submit a patch to infra to move the >>> job to the check queue as non-voting, and we can go from there. >>> >>> [1] https://review.openstack.org/#/c/139693/ >>> [2] >> http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html >>> >>> Cheers, >>> >> Please add the name of your CI account to this table: >> https://wiki.openstack.org/wiki/ThirdPartySystems >> >> As outlined in the third party CI requirements: >> http://ci.openstack.org/third_party.html#requirements >> >> Please post system status updates to your individual CI wikipage that is >> linked to this table. >> > > How is posting status there different than here : > https://wiki.openstack.org/wiki/Cinder/third-party-ci-status > > thanx, > deepak > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > There are over 100 CI accounts now and growing. 
Searching the email archives to evaluate the status of a CI is not something that infra will do; we will look on that wikipage, or we will check the third-party-announce email list (which all third-party CI systems should be subscribed to, as outlined in the third_party.html page linked above). If we do not find information where we have asked you to put it and where we expect it, we may disable your system until you have fulfilled the requirements as outlined in the third_party.html page linked above. Sprinkling status updates amongst the emails posted to -dev and expecting the infra team and other -devs to find them when needed is unsustainable and has been for some time, which is why we came up with the wikipage to aggregate them. Please direct all further questions about this matter to one of the two third-party meetings as linked above. Thank you, Anita. From tnurlygayanov at mirantis.com Fri Dec 12 20:02:56 2014 From: tnurlygayanov at mirantis.com (Timur Nurlygayanov) Date: Fri, 12 Dec 2014 23:02:56 +0300 Subject: [openstack-dev] tempest - object_storage In-Reply-To: <28C561A029C4AD4CB43E4B382193FA7922419BB1@ESESSMB207.ericsson.se> References: <28C561A029C4AD4CB43E4B382193FA7922419BB1@ESESSMB207.ericsson.se> Message-ID: Hi Anton, it depends on the storage service you use in this lab. Tempest tests can fail, for example, if the configuration of your object storage has some customizations. You can also change the tempest configuration to adapt the tempest tests to your environment.
On my test OpenStack IceHouse cloud with Swift (HA, baremetal) these tests are green, but other 'object_storage' tests failed: - tempest.api.object_storage.test_account_bulk.BulkTest.test_bulk_delete[gate] - tempest.api.object_storage.test_account_bulk.BulkTest.test_extract_archive[gate] - tempest.api.object_storage.test_account_bulk.BulkTest.test_bulk_delete_by_POST[gate] - tempest.api.object_storage.test_account_quotas_negative.AccountQuotasNegativeTest.test_user_modify_quota[gate,negative,smoke] - tempest.api.object_storage.test_container_quotas.ContainerQuotasTest.test_upload_large_object[gate,smoke] - tempest.api.object_storage.test_container_quotas.ContainerQuotasTest.test_upload_too_many_objects[gate,smoke] - tempest.api.object_storage.test_container_staticweb.StaticWebTest.test_web_index - tempest.api.object_storage.test_container_staticweb.StaticWebTest.test_web_listing_css - tempest.api.object_storage.test_container_staticweb.StaticWebTest.test_web_error - tempest.api.object_storage.test_crossdomain.CrossdomainTest.test_get_crossdomain_policy - tempest.api.object_storage.test_object_formpost.ObjectFormPostTest.test_post_object_using_form[gate] - tempest.api.object_storage.test_object_formpost_negative.ObjectFormPostNegativeTest.test_post_object_using_form_invalid_signature[gate] - tempest.api.object_storage.test_object_formpost_negative.ObjectFormPostNegativeTest.test_post_object_using_form_expired[gate,negative] - tempest.api.object_storage.test_object_services.PublicObjectTest.test_access_public_container_object_without_using_creds[gate,smoke] - tempest.api.object_storage.test_object_slo.ObjectSloTest.test_delete_large_object[gate] - tempest.api.object_storage.test_object_slo.ObjectSloTest.test_list_large_object_metadata[gate] - tempest.api.object_storage.test_object_slo.ObjectSloTest.test_retrieve_large_object[gate] - tempest.api.object_storage.test_object_slo.ObjectSloTest.test_upload_manifest[gate] - 
tempest.api.object_storage.test_object_temp_url.ObjectTempUrlTest.test_get_object_using_temp_url_with_inline_query_parameter[gate] - tempest.api.object_storage.test_object_temp_url_negative.ObjectTempUrlNegativeTest.test_get_object_after_expiration_time[gate,negative] - tempest.api.object_storage.test_object_version.ContainerTest.test_versioned_container[gate,smoke] On Fri, Dec 12, 2014 at 6:10 PM, Anton Massoud wrote: > > Hi, > > > > We are running tempest on icehouse, in our tempest runs, we found that the > below test cases are failing in random iterations but not always. > > > > > tempest.api.object_storage.test_account_services.AccountTest.test_update_account_metadata_with_delete_matadata_key[gate,smoke] > > > tempest.api.object_storage.test_container_services.ContainerTest.test_list_container_contents_with_path[gate,smoke] > > > tempest.api.object_storage.test_object_expiry.ObjectExpiryTest.test_get_object_after_expiry_time[gate] > > > tempest.api.object_storage.test_object_expiry.ObjectExpiryTest.test_get_object_at_expiry_time[gate] > > > tempest.api.object_storage.test_object_services.ObjectTest.test_object_upload_in_segments[gate] > > > tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata[gate,smoke] > > > tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata_with_create_and_remove_metadata[gate,smoke] > > > tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata_with_remove_metadata > > > tempest.api.object_storage.test_account_services.AccountTest.test_update_account_metadata_with_create_and_delete_metadata[gate,smoke] > > > tempest.api.object_storage.test_container_services.ContainerTest.test_create_container[gate,smoke] > > > tempest.api.object_storage.test_container_services.ContainerTest.test_delete_container[gate,smoke] > > > tempest.api.object_storage.test_object_services.ObjectTest.test_copy_object_2d_way[gate,smoke] > > > > > > Anyone else has 
experienced the same? Any idea what it might be caused by? > > > > /Anton > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Timur, Senior QA Engineer OpenStack Projects Mirantis Inc -------------- next part -------------- An HTML attachment was scrubbed... URL: From dannchoi at cisco.com Fri Dec 12 20:03:50 2014 From: dannchoi at cisco.com (Danny Choi (dannchoi)) Date: Fri, 12 Dec 2014 20:03:50 +0000 Subject: [openstack-dev] [devstack] localrc for multi-node setup Message-ID: Hi, I would like to use devstack to deploy OpenStack on a multi-node setup, i.e. separate Controller, Network and Compute nodes. What is the localrc for each node? I would assume, for example, we don't need to enable the neutron service at the Controller node, etc.? Does anyone have a localrc file for each node type that they can share? Thanks, Danny -------------- next part -------------- An HTML attachment was scrubbed... URL: From amit.gandhi at RACKSPACE.COM Fri Dec 12 20:17:29 2014 From: amit.gandhi at RACKSPACE.COM (Amit Gandhi) Date: Fri, 12 Dec 2014 20:17:29 +0000 Subject: [openstack-dev] [api] Usage of the PATCH verb Message-ID: <4C649DDC-99D0-4842-80B6-668CF3528E93@rackspace.com> Hi We are currently using PATCH in the Poppy API to update existing resources. However, we have recently had some discussions on how this should have been implemented. I would like to get the advice of the OpenStack community and the API working group on how PATCH semantics should work. The following RFC documents [1][2] (and a blog post [3]) advise using PATCH as follows: 2.1.
A Simple PATCH Example PATCH /file.txt HTTP/1.1 Host: www.example.com Content-Type: application/example If-Match: "e0023aa4e" Content-Length: 100 [ { "op": "test", "path": "/a/b/c", "value": "foo" }, { "op": "remove", "path": "/a/b/c" }, { "op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ] }, { "op": "replace", "path": "/a/b/c", "value": 42 }, { "op": "move", "from": "/a/b/c", "path": "/a/b/d" }, { "op": "copy", "from": "/a/b/d", "path": "/a/b/e" } ] Basically, the changes consist of an operation, the path in the json object to modify, and the new value. The way we currently have it implemented is to submit just the changes, and the server applies the change to the resource. This means passing entire lists to change etc. I would like to hear some feedback from others on how PATCH should be implemented. Thanks Amit Gandhi - Rackspace [1] https://tools.ietf.org/html/rfc5789 [2] http://tools.ietf.org/html/rfc6902 [3] http://williamdurand.fr/2014/02/14/please-do-not-patch-like-an-idiot/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano at openstack.org Fri Dec 12 20:32:17 2014 From: stefano at openstack.org (Stefano Maffulli) Date: Fri, 12 Dec 2014 12:32:17 -0800 Subject: [openstack-dev] [neutron] Spec reviews this week by the neutron-drivers team In-Reply-To: References: Message-ID: <548B50D1.3090401@openstack.org> I have adapted to Neutron the specs review dashboard prepared by Joe Gordon for Nova. Check it out below. Reminder: the deadline to approve kilo specs is this coming Monday, Dec 15. 
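Returning to Amit's PATCH question above: the RFC 6902 semantics are that operations are applied in document order, and a failed "test" op aborts the whole patch. The sketch below implements only a small illustrative subset (object paths; no arrays, "move", or "copy") — a real service would use a proper library such as python-json-patch rather than this:

```python
import copy

def resolve(doc, path):
    """Split an RFC 6901-style pointer like '/a/b/c' into (parent, last_key)."""
    parts = path.lstrip("/").split("/")
    parent = doc
    for key in parts[:-1]:
        parent = parent[key]
    return parent, parts[-1]

def apply_patch(doc, ops):
    """Apply a list of RFC 6902-style ops; all-or-nothing on failure."""
    result = copy.deepcopy(doc)  # original is untouched if an op fails
    for op in ops:
        parent, key = resolve(result, op["path"])
        if op["op"] == "test":
            if parent.get(key) != op["value"]:
                raise ValueError("test failed at %s" % op["path"])
        elif op["op"] in ("add", "replace"):
            parent[key] = op["value"]  # identical on plain objects
        elif op["op"] == "remove":
            del parent[key]
        else:
            raise ValueError("unsupported op %r" % op["op"])
    return result

doc = {"a": {"b": {"c": "foo"}}}
patched = apply_patch(doc, [
    {"op": "test", "path": "/a/b/c", "value": "foo"},
    {"op": "replace", "path": "/a/b/c", "value": 42},
])
assert patched == {"a": {"b": {"c": 42}}}
assert doc == {"a": {"b": {"c": "foo"}}}  # input document unchanged
```

This is the key contrast with "submit just the changes": the client states the operations, and the server never has to guess whether an omitted field means "leave alone" or "delete".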
https://review.openstack.org/#/dashboard/?foreach=project%3A%5Eopenstack%2Fneutron-specs+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%252cjenkins+NOT+label%3ACode-Review%3E%3D-2%252cself+branch%3Amaster&title=Neutron+Specs&&Your+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself&Needs+final+%2B2=label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Aneutron-specs-core+label%3ACode-Review%3C%3D-1%29+limit%3A100&Passed+Jenkins%2C+Positive+Neutron-Core+Feedback=NOT+label%3ACode-Review%3E%3D2+%28reviewerin%3Aneutron-core+label%3ACode-Review%3E%3D1%29+NOT%28reviewerin%3Aneutron-core+label%3ACode-Review%3C%3D-1%29+limit%3A100&Passed+Jenkins%2C+No+Positive+Neutron-Core+Feedback%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Aneutron-core+label%3ACode-Review%3E%3D1%29+limit%3A100&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+7+days%29=NOT+label%3ACode-Review%3C%3D2+a ge%3A7d&Some+negative+feedback%2C+might+still+be+worth+commenting=label%3ACode-Review%3D-1+NOT+label%3ACode-Review%3D-2+limit%3A100&Dead+Specs=label%3ACode-Review%3C%3D-2 On 12/09/2014 06:08 AM, Kyle Mestery wrote: > The neutron-drivers team has started the process of both accepting and > rejecting specs for Kilo now. If you've submitted a spec, you will soon > see the spec either approved or land in either the abandoned or -2 > category. We're doing our best to put helpful messages when we do > abandon or -2 specs, but for more detail, see the neutron-drivers wiki > page [1]. Also, you can find me on IRC with questions as well. > > Thanks! 
> Kyle > > [1] https://wiki.openstack.org/wiki/Neutron-drivers > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- OpenStack Evangelist - Community Ask and answer questions on https://ask.openstack.org From m4d.coder at gmail.com Fri Dec 12 20:54:44 2014 From: m4d.coder at gmail.com (W Chan) Date: Fri, 12 Dec 2014 12:54:44 -0800 Subject: [openstack-dev] [Mistral] Global Context and Execution Environment Message-ID: Renat, Dmitri, On supplying the global context to the workflow execution... In addition to Renat's proposal, I have a few here. 1) Pass them implicitly to start_workflow as another kwarg in **params. On second thought, though, we should probably make the global context explicitly defined in the WF spec. If we pass them implicitly, it may be hard during troubleshooting to follow where a value comes from just by looking at the WF spec. Plus there will be cases where WF authors want it explicitly defined. Still debating here... inputs = {...} globals = {...} start_workflow('my_workflow', inputs, globals=globals) 2) Instead of adding to the WF spec, what if we change the scope of the existing input params? For example, inputs defined in the top workflow are by default visible to all subflows (passed down to the workflow task on run_workflow) and tasks (passed to the action on execution). 3) Add to the WF spec: workflow: type: direct global: - global1 - global2 input: - input1 - input2 Winson -------------- next part -------------- An HTML attachment was scrubbed...
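Whichever of Winson's three options wins, the underlying behaviour in the Mistral thread above is a scoping question: a task should see its own inputs first, then the workflow's inputs, then the globals. That lookup chain can be sketched generically with a stdlib ChainMap (the function name and call shape are illustrative, not Mistral's actual API):

```python
from collections import ChainMap

def build_task_context(task_inputs, workflow_inputs, global_ctx):
    # Lookup order: task input shadows workflow input, which shadows
    # the global context -- the narrowest scope always wins.
    return ChainMap(task_inputs, workflow_inputs, global_ctx)

ctx = build_task_context(
    {"input1": "task-level"},
    {"input1": "wf-level", "input2": "wf-level"},
    {"global1": "g1", "input2": "global-level"},
)
assert ctx["input1"] == "task-level"   # narrowest scope wins
assert ctx["input2"] == "wf-level"
assert ctx["global1"] == "g1"          # globals visible everywhere
```

Option (2) is exactly this chain with no separate globals layer; options (1) and (3) differ only in where the third mapping is declared, not in the lookup order.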
URL: From john.griffith8 at gmail.com Fri Dec 12 21:22:20 2014 From: john.griffith8 at gmail.com (John Griffith) Date: Fri, 12 Dec 2014 14:22:20 -0700 Subject: [openstack-dev] [devstack] localrc for multi-node setup In-Reply-To: References: Message-ID: On Fri, Dec 12, 2014 at 1:03 PM, Danny Choi (dannchoi) wrote: > Hi, > > I would like to use devstack to deploy OpenStack on a multi-node setup, > i.e. separate Controller, Network and Compute nodes > > What is the localrc for each node? > > I would assume, for example, we don't need to enable the neutron service at the > Controller node, etc.? > > Does anyone have a localrc file for each node type that they can share? > > Thanks, > Danny > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Rather than send my sample, here's a link to a wiki from the Neutron team that has what you want (mine's different, but uses nova-net). https://wiki.openstack.org/wiki/NeutronDevstack From rochelle.grober at huawei.com Fri Dec 12 21:36:56 2014 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Fri, 12 Dec 2014 21:36:56 +0000 Subject: [openstack-dev] [Fuel] Logs format on UI (High/6.0) In-Reply-To: References: Message-ID: (apologies for my pushiness, but getting all the projects using oslo.log for logging will really help the ops experience) All projects (except maybe Swift) should be moving to use oslo.logging. Therefore, your long-term fix is the right way to go. When you find log issues like the one in ./neutron-metadata-agent.log, file it as a bug so that the code can be refactored as either a bug fix or a 'while you're in there' step. --rocky From: Dmitry Pyzhov [mailto:dpyzhov at mirantis.com] Sent: Friday, December 12, 2014 10:35 AM To: OpenStack Dev Subject: [openstack-dev] [Fuel] Logs format on UI (High/6.0) We have a high priority bug in 6.0: https://bugs.launchpad.net/fuel/+bug/1401852.
Here is the story. Our OpenStack services currently send logs in a strange format, with an extra copy of the timestamp and log level: ==> ./neutron-metadata-agent.log <== 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO neutron.common.config [-] Logging enabled! And we have a workaround for this. We hide the extra timestamp and use the second log level. In Juno, some services have updated oslo.logging and now send logs in a simple format: ==> ./nova-api.log <== 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from /etc/nova/api-paste.ini In order to keep backward compatibility and deal with both formats, we have a dirty workaround for our workaround: https://review.openstack.org/#/c/141450/ As I see it, our best choice here is to throw away all workarounds and show logs on the UI as is. If a service sends duplicated data, we should show duplicated data. The long-term fix here is to update oslo.logging in all packages. We can do it in 6.1. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Fri Dec 12 21:37:09 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 12 Dec 2014 16:37:09 -0500 Subject: [openstack-dev] [Fuel] Logs format on UI (High/6.0) In-Reply-To: References: Message-ID: <548B6005.10302@gmail.com> On 12/12/2014 01:35 PM, Dmitry Pyzhov wrote: > We have a high priority bug in 6.0: > https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story. > > Our OpenStack services currently send logs in a strange format, with an extra > copy of the timestamp and log level: > ==> ./neutron-metadata-agent.log <== > 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 > INFO neutron.common.config [-] Logging enabled! > > And we have a workaround for this. We hide the extra timestamp and use > the second log level.
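The duplication Dmitry shows above is mechanical, which is what makes both the workaround and its removal tractable: the syslog-style prefix is optionally followed by a second oslo-style timestamp/pid/level header. A sketch of the kind of filtering the UI workaround performs (the regex is illustrative, not Fuel's actual code):

```python
import re

# syslog prefix: ISO 8601 timestamp + lowercase level, then optionally
# an oslo prefix repeating the timestamp, a pid and an uppercase level.
OSLO_DUP = re.compile(
    r"^(?P<ts>\S+) (?P<level>\w+): "
    r"(?:\d{4}-\d{2}-\d{2} [\d:.]+ \d+ (?P<oslo_level>[A-Z]+) )?"
    r"(?P<msg>.*)$")

def parse(line):
    m = OSLO_DUP.match(line)
    # Prefer the inner oslo level when present; otherwise use syslog's.
    level = m.group("oslo_level") or m.group("level").upper()
    return m.group("ts"), level, m.group("msg")

old = ("2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 "
      "14349 INFO neutron.common.config [-] Logging enabled!")
new = "2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from /etc/nova/api-paste.ini"
assert parse(old) == ("2014-12-12T11:00:30.098105+00:00", "INFO",
                      "neutron.common.config [-] Logging enabled!")
assert parse(new) == ("2014-12-12T10:57:15.437488+00:00", "DEBUG",
                      "Loading app ec2 from /etc/nova/api-paste.ini")
```

Note the ambiguity: a simple-format message that happens to start with a date would be misparsed as the duplicated form, which is one more argument for fixing oslo.logging at the source rather than guessing in the UI.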
> > In Juno some of services have updated oslo.logging and now send logs in > simple format: > ==> ./nova-api.log <== > 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from > /etc/nova/api-paste.ini > > In order to keep backward compatibility and deal with both formats we > have a dirty workaround for our workaround: > https://review.openstack.org/#/c/141450/ > > As I see, our best choice here is to throw away all workarounds and show > logs on UI as is. If service sends duplicated data - we should show > duplicated data. ++ Best, -jay From sean at dague.net Fri Dec 12 21:40:32 2014 From: sean at dague.net (Sean Dague) Date: Fri, 12 Dec 2014 16:40:32 -0500 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> Message-ID: <548B60D0.7090600@dague.net> On 12/12/2014 01:05 PM, Maru Newby wrote: > > On Dec 11, 2014, at 2:27 PM, Sean Dague wrote: > >> On 12/11/2014 04:16 PM, Jay Pipes wrote: >>> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: >>>> On Dec 11, 2014, at 1:04 PM, Jay Pipes wrote: >>>>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: >>>>>> >>>>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau wrote: >>>>>> >>>>>>> On Thu, Dec 11, 2014, Mark McClain wrote: >>>>>>>> >>>>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >>>>>>>> > wrote: >>>>>>>>> >>>>>>>>> I'm generally in favor of making name attributes opaque, utf-8 >>>>>>>>> strings that >>>>>>>>> are entirely user-defined and have no constraints on them. I >>>>>>>>> consider the >>>>>>>>> name to be just a tag that the user places on some resource. It >>>>>>>>> is the >>>>>>>>> resource's ID that is unique. >>>>>>>>> >>>>>>>>> I do realize that Nova takes a different approach to *some* >>>>>>>>> resources, >>>>>>>>> including the security group name. 
>>>>>>>>> >>>>>>>>> End of the day, it's probably just a personal preference whether >>>>>>>>> names >>>>>>>>> should be unique to a tenant/user or not. >>>>>>>>> >>>>>>>>> Maru had asked me my opinion on whether names should be unique and I >>>>>>>>> answered my personal opinion that no, they should not be, and if >>>>>>>>> Neutron >>>>>>>>> needed to ensure that there was one and only one default security >>>>>>>>> group for >>>>>>>>> a tenant, that a way to accomplish such a thing in a race-free >>>>>>>>> way, without >>>>>>>>> use of SELECT FOR UPDATE, was to use the approach I put into the >>>>>>>>> pastebin on >>>>>>>>> the review above. >>>>>>>>> >>>>>>>> >>>>>>>> I agree with Jay. We should not care about how a user names the >>>>>>>> resource. >>>>>>>> There other ways to prevent this race and Jay?s suggestion is a >>>>>>>> good one. >>>>>>> >>>>>>> However we should open a bug against Horizon because the user >>>>>>> experience there >>>>>>> is terrible with duplicate security group names. >>>>>> >>>>>> The reason security group names are unique is that the ec2 api >>>>>> supports source >>>>>> rule specifications by tenant_id (user_id in amazon) and name, so >>>>>> not enforcing >>>>>> uniqueness means that invocation in the ec2 api will either fail or be >>>>>> non-deterministic in some way. >>>>> >>>>> So we should couple our API evolution to EC2 API then? >>>>> >>>>> -jay >>>> >>>> No I was just pointing out the historical reason for uniqueness, and >>>> hopefully >>>> encouraging someone to find the best behavior for the ec2 api if we >>>> are going >>>> to keep the incompatibility there. Also I personally feel the ux is >>>> better >>>> with unique names, but it is only a slight preference. >>> >>> Sorry for snapping, you made a fair point. >> >> Yeh, honestly, I agree with Vish. I do feel that the UX of that >> constraint is useful. Otherwise you get into having to show people UUIDs >> in a lot more places. 
While those are good for consistency, they are >> kind of terrible to show to people. > > While there is a good case for the UX of unique names - it also makes orchestration via tools like puppet a heck of a lot simpler - the fact is that most OpenStack resources do not require unique names. That being the case, why would we want security groups to deviate from this convention? Maybe the other ones are the broken ones? Honestly, any sanely usable system makes names unique inside a container. Like files in a directory. In this case the tenant is the container, which makes sense. It is one of many places that OpenStack is not consistent. But I'd rather make things consistent and more usable than consistent and less. -Sean -- Sean Dague http://dague.net From mgagne at iweb.com Fri Dec 12 21:59:45 2014 From: mgagne at iweb.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=) Date: Fri, 12 Dec 2014 16:59:45 -0500 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <548B60D0.7090600@dague.net> References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> <548B60D0.7090600@dague.net> Message-ID: <548B6551.2020402@iweb.com> On 2014-12-12 4:40 PM, Sean Dague wrote: >> >> While there is a good case for the UX of unique names - it also makes orchestration via tools like puppet a heck of a lot simpler - the fact is that most OpenStack resources do not require unique names. That being the case, why would we want security groups to deviate from this convention? > > Maybe the other ones are the broken ones? > > Honestly, any sanely usable system makes names unique inside a > container. Like files in a directory. In this case the tenant is the > container, which makes sense. 
+1 It makes as much sense as a filesystem accepting 2 files in the same folder with the same name but allows you to distinguish them by their inode number. -- Mathieu From morgan.fainberg at gmail.com Fri Dec 12 22:00:36 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Fri, 12 Dec 2014 16:00:36 -0600 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <548B60D0.7090600@dague.net> References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> <548B60D0.7090600@dague.net> Message-ID: On Friday, December 12, 2014, Sean Dague wrote: > On 12/12/2014 01:05 PM, Maru Newby wrote: > > > > On Dec 11, 2014, at 2:27 PM, Sean Dague > > wrote: > > > >> On 12/11/2014 04:16 PM, Jay Pipes wrote: > >>> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: > >>>> On Dec 11, 2014, at 1:04 PM, Jay Pipes > wrote: > >>>>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: > >>>>>> > >>>>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau > wrote: > >>>>>> > >>>>>>> On Thu, Dec 11, 2014, Mark McClain wrote: > >>>>>>>> > >>>>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes > >>>>>>>>> >> wrote: > >>>>>>>>> > >>>>>>>>> I'm generally in favor of making name attributes opaque, utf-8 > >>>>>>>>> strings that > >>>>>>>>> are entirely user-defined and have no constraints on them. I > >>>>>>>>> consider the > >>>>>>>>> name to be just a tag that the user places on some resource. It > >>>>>>>>> is the > >>>>>>>>> resource's ID that is unique. > >>>>>>>>> > >>>>>>>>> I do realize that Nova takes a different approach to *some* > >>>>>>>>> resources, > >>>>>>>>> including the security group name. > >>>>>>>>> > >>>>>>>>> End of the day, it's probably just a personal preference whether > >>>>>>>>> names > >>>>>>>>> should be unique to a tenant/user or not. 
> >>>>>>>>> > >>>>>>>>> Maru had asked me my opinion on whether names should be unique > and I > >>>>>>>>> answered my personal opinion that no, they should not be, and if > >>>>>>>>> Neutron > >>>>>>>>> needed to ensure that there was one and only one default security > >>>>>>>>> group for > >>>>>>>>> a tenant, that a way to accomplish such a thing in a race-free > >>>>>>>>> way, without > >>>>>>>>> use of SELECT FOR UPDATE, was to use the approach I put into the > >>>>>>>>> pastebin on > >>>>>>>>> the review above. > >>>>>>>>> > >>>>>>>> > >>>>>>>> I agree with Jay. We should not care about how a user names the > >>>>>>>> resource. > >>>>>>>> There other ways to prevent this race and Jay?s suggestion is a > >>>>>>>> good one. > >>>>>>> > >>>>>>> However we should open a bug against Horizon because the user > >>>>>>> experience there > >>>>>>> is terrible with duplicate security group names. > >>>>>> > >>>>>> The reason security group names are unique is that the ec2 api > >>>>>> supports source > >>>>>> rule specifications by tenant_id (user_id in amazon) and name, so > >>>>>> not enforcing > >>>>>> uniqueness means that invocation in the ec2 api will either fail or > be > >>>>>> non-deterministic in some way. > >>>>> > >>>>> So we should couple our API evolution to EC2 API then? > >>>>> > >>>>> -jay > >>>> > >>>> No I was just pointing out the historical reason for uniqueness, and > >>>> hopefully > >>>> encouraging someone to find the best behavior for the ec2 api if we > >>>> are going > >>>> to keep the incompatibility there. Also I personally feel the ux is > >>>> better > >>>> with unique names, but it is only a slight preference. > >>> > >>> Sorry for snapping, you made a fair point. > >> > >> Yeh, honestly, I agree with Vish. I do feel that the UX of that > >> constraint is useful. Otherwise you get into having to show people UUIDs > >> in a lot more places. While those are good for consistency, they are > >> kind of terrible to show to people. 
> > > > While there is a good case for the UX of unique names - it also makes > orchestration via tools like puppet a heck of a lot simpler - the fact is > that most OpenStack resources do not require unique names. That being the > case, why would we want security groups to deviate from this convention? > > Maybe the other ones are the broken ones? > > Honestly, any sanely usable system makes names unique inside a > container. Like files in a directory. In this case the tenant is the > container, which makes sense. > > It is one of many places that OpenStack is not consistent. But I'd > rather make things consistent and more usable than consistent and less. > > +1. More consistent and more usable is a good approach. The name uniqueness has prior art in OpenStack - keystone keeps project names unique within a domain (domain is the container), similar usernames can't be duplicated in the same domain. It would be silly to auth with the user ID, likewise unique names for the security group in the container (tenant) makes a lot of sense from a UX Perspective. --Morgan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jaypipes at gmail.com Fri Dec 12 22:11:45 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 12 Dec 2014 17:11:45 -0500 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> <548B60D0.7090600@dague.net> Message-ID: <548B6821.5090200@gmail.com> On 12/12/2014 05:00 PM, Morgan Fainberg wrote: > On Friday, December 12, 2014, Sean Dague > wrote: > > On 12/12/2014 01:05 PM, Maru Newby wrote: > > > > On Dec 11, 2014, at 2:27 PM, Sean Dague > wrote: > > > >> On 12/11/2014 04:16 PM, Jay Pipes wrote: > >>> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: > >>>> On Dec 11, 2014, at 1:04 PM, Jay Pipes > wrote: > >>>>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: > >>>>>> > >>>>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau > wrote: > >>>>>> > >>>>>>> On Thu, Dec 11, 2014, Mark McClain wrote: > >>>>>>>> > >>>>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes > > >>>>>>>>> >> wrote: > >>>>>>>>> > >>>>>>>>> I'm generally in favor of making name attributes opaque, > utf-8 > >>>>>>>>> strings that > >>>>>>>>> are entirely user-defined and have no constraints on them. I > >>>>>>>>> consider the > >>>>>>>>> name to be just a tag that the user places on some > resource. It > >>>>>>>>> is the > >>>>>>>>> resource's ID that is unique. > >>>>>>>>> > >>>>>>>>> I do realize that Nova takes a different approach to *some* > >>>>>>>>> resources, > >>>>>>>>> including the security group name. > >>>>>>>>> > >>>>>>>>> End of the day, it's probably just a personal preference > whether > >>>>>>>>> names > >>>>>>>>> should be unique to a tenant/user or not. 
> >>>>>>>>> > >>>>>>>>> Maru had asked me my opinion on whether names should be > unique and I > >>>>>>>>> answered my personal opinion that no, they should not be, > and if > >>>>>>>>> Neutron > >>>>>>>>> needed to ensure that there was one and only one default > security > >>>>>>>>> group for > >>>>>>>>> a tenant, that a way to accomplish such a thing in a > race-free > >>>>>>>>> way, without > >>>>>>>>> use of SELECT FOR UPDATE, was to use the approach I put > into the > >>>>>>>>> pastebin on > >>>>>>>>> the review above. > >>>>>>>>> > >>>>>>>> > >>>>>>>> I agree with Jay. We should not care about how a user > names the > >>>>>>>> resource. > >>>>>>>> There other ways to prevent this race and Jay?s suggestion > is a > >>>>>>>> good one. > >>>>>>> > >>>>>>> However we should open a bug against Horizon because the user > >>>>>>> experience there > >>>>>>> is terrible with duplicate security group names. > >>>>>> > >>>>>> The reason security group names are unique is that the ec2 api > >>>>>> supports source > >>>>>> rule specifications by tenant_id (user_id in amazon) and > name, so > >>>>>> not enforcing > >>>>>> uniqueness means that invocation in the ec2 api will either > fail or be > >>>>>> non-deterministic in some way. > >>>>> > >>>>> So we should couple our API evolution to EC2 API then? > >>>>> > >>>>> -jay > >>>> > >>>> No I was just pointing out the historical reason for > uniqueness, and > >>>> hopefully > >>>> encouraging someone to find the best behavior for the ec2 api > if we > >>>> are going > >>>> to keep the incompatibility there. Also I personally feel the > ux is > >>>> better > >>>> with unique names, but it is only a slight preference. > >>> > >>> Sorry for snapping, you made a fair point. > >> > >> Yeh, honestly, I agree with Vish. I do feel that the UX of that > >> constraint is useful. Otherwise you get into having to show > people UUIDs > >> in a lot more places. 
While those are good for consistency, they are > >> kind of terrible to show to people. > > > > While there is a good case for the UX of unique names - it also > makes orchestration via tools like puppet a heck of a lot simpler - > the fact is that most OpenStack resources do not require unique > names. That being the case, why would we want security groups to > deviate from this convention? > > Maybe the other ones are the broken ones? > > Honestly, any sanely usable system makes names unique inside a > container. Like files in a directory. In this case the tenant is the > container, which makes sense. > > It is one of many places that OpenStack is not consistent. But I'd > rather make things consistent and more usable than consistent and less. > > > +1. > > More consistent and more usable is a good approach. The name uniqueness > has prior art in OpenStack - keystone keeps project names unique within > a domain (domain is the container), similar usernames can't be > duplicated in the same domain. It would be silly to auth with the user > ID, likewise unique names for the security group in the container > (tenant) makes a lot of sense from a UX Perspective. Sounds like Maru and I are pretty heavily in the minority on this one, and you all make good points about UX and consistency with other pieces of the OpenStack APIs. Maru, I'm backing off here and will support putting a unique constraint on the security group name and project_id field. Best, -jay From gleebix at gmail.com Fri Dec 12 22:12:20 2014 From: gleebix at gmail.com (pcrews) Date: Fri, 12 Dec 2014 14:12:20 -0800 Subject: [openstack-dev] [qa] How to delete a VM which is in ERROR state? In-Reply-To: References: Message-ID: <548B6844.4020804@gmail.com> On 12/09/2014 03:54 PM, Ken'ichi Ohmichi wrote: > Hi, > > This case is always tested by Tempest on the gate. 
> > https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_delete_server.py#L152 > > So I guess this problem wouldn't happen on the latest version at least. > > Thanks > Ken'ichi Ohmichi > > --- > > 2014-12-10 6:32 GMT+09:00 Joe Gordon : >> >> >> On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) >> wrote: >>> >>> Hi, >>> >>> I have a VM which is in ERROR state. >>> >>> >>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ >>> >>> | ID | Name >>> | Status | Task State | Power State | Networks | >>> >>> >>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ >>> >>> | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | >>> cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR | - | NOSTATE >>> | | >>> >>> >>> I tried in both CLI ?nova delete? and Horizon ?terminate instance?. >>> Both accepted the delete command without any error. >>> However, the VM never got deleted. >>> >>> Is there a way to remove the VM? >> >> >> What version of nova are you using? This is definitely a serious bug, you >> should be able to delete an instance in error state. Can you file a bug that >> includes steps on how to reproduce the bug along with all relevant logs. 
>> >> bugs.launchpad.net/nova >> >>> >>> >>> Thanks, >>> Danny >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Hi, I've encountered this in my own testing and have found that it appears to be tied to libvirt. When I hit this, reset-state as the admin user reports success (and state is set), *but* things aren't really working as advertised and subsequent attempts to do anything with the errant vm's will send them right back into 'FLAIL' / can't delete / endless DELETING mode. restarting libvirt-bin on my machine fixes this - after restart, the deleting vm's are properly wiped without any further user input to nova/horizon and all seems right in the world. using: devstack ubuntu 14.04 libvirtd (libvirt) 1.2.2 triggered via: lots of random create/reboot/resize/delete requests of varying validity and sanity. Am in the process of cleaning up my test code so as not to hurt anyone's brain with the ugly and will file a bug once done, but thought this worth sharing. 
Thanks, Patrick From rochelle.grober at huawei.com Fri Dec 12 22:15:38 2014 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Fri, 12 Dec 2014 22:15:38 +0000 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> <548B60D0.7090600@dague.net> Message-ID: Morgan Fainberg [mailto:morgan.fainberg at gmail.com] on Friday, December 12, 2014 2:01 PM wrote: On Friday, December 12, 2014, Sean Dague > wrote: On 12/12/2014 01:05 PM, Maru Newby wrote: > > On Dec 11, 2014, at 2:27 PM, Sean Dague > wrote: > >> On 12/11/2014 04:16 PM, Jay Pipes wrote: >>> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: >>>> On Dec 11, 2014, at 1:04 PM, Jay Pipes > wrote: >>>>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: >>>>>> >>>>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau > wrote: >>>>>> >>>>>>> On Thu, Dec 11, 2014, Mark McClain wrote: >>>>>>>> >>>>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >>>>>>>>> >> wrote: >>>>>>>>> >>>>>>>>> I'm generally in favor of making name attributes opaque, utf-8 >>>>>>>>> strings that >>>>>>>>> are entirely user-defined and have no constraints on them. I >>>>>>>>> consider the >>>>>>>>> name to be just a tag that the user places on some resource. It >>>>>>>>> is the >>>>>>>>> resource's ID that is unique. >>>>>>>>> >>>>>>>>> I do realize that Nova takes a different approach to *some* >>>>>>>>> resources, >>>>>>>>> including the security group name. >>>>>>>>> >>>>>>>>> End of the day, it's probably just a personal preference whether >>>>>>>>> names >>>>>>>>> should be unique to a tenant/user or not. 
>>>>>>>>> >>>>>>>>> Maru had asked me my opinion on whether names should be unique and I >>>>>>>>> answered my personal opinion that no, they should not be, and if >>>>>>>>> Neutron >>>>>>>>> needed to ensure that there was one and only one default security >>>>>>>>> group for >>>>>>>>> a tenant, that a way to accomplish such a thing in a race-free >>>>>>>>> way, without >>>>>>>>> use of SELECT FOR UPDATE, was to use the approach I put into the >>>>>>>>> pastebin on >>>>>>>>> the review above. >>>>>>>>> >>>>>>>> >>>>>>>> I agree with Jay. We should not care about how a user names the >>>>>>>> resource. >>>>>>>> There other ways to prevent this race and Jay?s suggestion is a >>>>>>>> good one. >>>>>>> >>>>>>> However we should open a bug against Horizon because the user >>>>>>> experience there >>>>>>> is terrible with duplicate security group names. >>>>>> >>>>>> The reason security group names are unique is that the ec2 api >>>>>> supports source >>>>>> rule specifications by tenant_id (user_id in amazon) and name, so >>>>>> not enforcing >>>>>> uniqueness means that invocation in the ec2 api will either fail or be >>>>>> non-deterministic in some way. >>>>> >>>>> So we should couple our API evolution to EC2 API then? >>>>> >>>>> -jay >>>> >>>> No I was just pointing out the historical reason for uniqueness, and >>>> hopefully >>>> encouraging someone to find the best behavior for the ec2 api if we >>>> are going >>>> to keep the incompatibility there. Also I personally feel the ux is >>>> better >>>> with unique names, but it is only a slight preference. >>> >>> Sorry for snapping, you made a fair point. >> >> Yeh, honestly, I agree with Vish. I do feel that the UX of that >> constraint is useful. Otherwise you get into having to show people UUIDs >> in a lot more places. While those are good for consistency, they are >> kind of terrible to show to people. 
> > While there is a good case for the UX of unique names - it also makes orchestration via tools like puppet a heck of a lot simpler - the fact is that most OpenStack resources do not require unique names. That being the case, why would we want security groups to deviate from this convention? Maybe the other ones are the broken ones? Honestly, any sanely usable system makes names unique inside a container. Like files in a directory. In this case the tenant is the container, which makes sense. It is one of many places that OpenStack is not consistent. But I'd rather make things consistent and more usable than consistent and less. +1. More consistent and more usable is a good approach. The name uniqueness has prior art in OpenStack - keystone keeps project names unique within a domain (the domain is the container); similarly, usernames can't be duplicated in the same domain. It would be silly to have to auth with the user ID; likewise, unique names for the security group in the container (tenant) make a lot of sense from a UX perspective. [Rockyg] +1 Especially when dealing with domain data that are managed by humans, human-visible uniqueness is important for understanding *and* efficiency. Tenant security is expected to be managed by the tenant admin, not some automated "robot admin", and as such needs to be clear, memorable and separable between instances. Unique names are the most straightforward (and easiest to enforce) way to do this for humans. Humans readily differentiate alphanumerics, so that should be the standard differentiator when humans are expected to interact with and reason about containers. --Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ikalnitsky at mirantis.com Fri Dec 12 22:17:06 2014 From: ikalnitsky at mirantis.com (Igor Kalnitsky) Date: Sat, 13 Dec 2014 00:17:06 +0200 Subject: [openstack-dev] [Fuel] Logs format on UI (High/6.0) In-Reply-To: References: Message-ID: +1 to stop parsing logs on UI and show them "as is". 
I think it's more than enough for all users. On Fri, Dec 12, 2014 at 8:35 PM, Dmitry Pyzhov wrote: > We have a high priority bug in 6.0: > https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story. > > Our OpenStack services used to send logs in a strange format with an extra copy of > the timestamp and log level: > ==> ./neutron-metadata-agent.log <== > 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO > neutron.common.config [-] Logging enabled! > > And we have a workaround for this. We hide the extra timestamp and use the second > log level. > > In Juno, some of the services have updated oslo.logging and now send logs in > a simple format: > ==> ./nova-api.log <== > 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from > /etc/nova/api-paste.ini > > In order to keep backward compatibility and deal with both formats we have a > dirty workaround for our workaround: > https://review.openstack.org/#/c/141450/ > > As I see it, our best choice here is to throw away all the workarounds and show > logs on the UI as is. If a service sends duplicated data, we should show > duplicated data. > > The long-term fix here is to update oslo.logging in all packages. We can do it > in 6.1. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From brian.rosmaita at RACKSPACE.COM Fri Dec 12 22:39:22 2014 From: brian.rosmaita at RACKSPACE.COM (Brian Rosmaita) Date: Fri, 12 Dec 2014 22:39:22 +0000 Subject: [openstack-dev] [api] Usage of the PATCH verb In-Reply-To: <4C649DDC-99D0-4842-80B6-668CF3528E93@rackspace.com> References: <4C649DDC-99D0-4842-80B6-668CF3528E93@rackspace.com> Message-ID: <4D27423EB14FEE4E9E5F88A15764C51A9DBE1938@ORD1EXD02.RACKSPACE.CORP> The Images API v2 has been using PATCH to update an image record for quite a long time now. We pretty much follow the IETF docs. 
Here's how it's documented: http://docs.openstack.org/api/openstack-image-service/2.0/content/update-an-image.html And here's the info about the media type used for the request body: http://docs.openstack.org/api/openstack-image-service/2.0/content/appendix-b-http-patch-media-types.html Not surprisingly, we feel that this is the correct way to implement PATCH! cheers, brian ________________________________ From: Amit Gandhi [amit.gandhi at RACKSPACE.COM] Sent: Friday, December 12, 2014 3:17 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [api] Usage of the PATCH verb Hi We are currently using PATCH in the Poppy API to update existing resources. However, we have recently had some discussions on how this should have been implemented. I would like to get the advice of the OpenStack community and the API working group on how PATCH semantics should work. The following RFC documents [1][2] (and a blog post [3]) advise using PATCH as follows:

2.1. A Simple PATCH Example

PATCH /file.txt HTTP/1.1
Host: www.example.com
Content-Type: application/example
If-Match: "e0023aa4e"
Content-Length: 100

[
  { "op": "test", "path": "/a/b/c", "value": "foo" },
  { "op": "remove", "path": "/a/b/c" },
  { "op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ] },
  { "op": "replace", "path": "/a/b/c", "value": 42 },
  { "op": "move", "from": "/a/b/c", "path": "/a/b/d" },
  { "op": "copy", "from": "/a/b/d", "path": "/a/b/e" }
]

Basically, the changes consist of an operation, the path in the JSON object to modify, and the new value. The way we currently have it implemented is to submit just the changes, and the server applies the change to the resource. This means passing entire lists to change, etc. I would like to hear some feedback from others on how PATCH should be implemented. 
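To make the mechanics of the example above concrete, here is a minimal, illustrative applier for a few of the RFC 6902 operations. This is a sketch, not production code -- a real service should use a full implementation (for example the `jsonpatch` package on PyPI), which also handles arrays, pointer escaping, `move`/`copy`, and proper error reporting:

```python
# Minimal sketch of RFC 6902 "JSON Patch" application for nested dicts.
# Illustrative only -- it supports just test/add/replace/remove on
# simple /a/b/c paths and skips arrays, escaping, and move/copy.

def _resolve(doc, path):
    """Walk to the parent of the element named by a /a/b/c pointer."""
    parts = [p for p in path.split("/") if p]
    for key in parts[:-1]:
        doc = doc[key]
    return doc, parts[-1]

def apply_patch(doc, operations):
    for op in operations:
        parent, key = _resolve(doc, op["path"])
        if op["op"] == "test":
            if parent.get(key) != op["value"]:
                raise ValueError("test failed at %s" % op["path"])
        elif op["op"] in ("add", "replace"):
            parent[key] = op["value"]
        elif op["op"] == "remove":
            del parent[key]
        else:
            raise ValueError("unsupported op %r" % op["op"])
    return doc

doc = {"a": {"b": {"c": "foo"}}}
patch = [
    {"op": "test", "path": "/a/b/c", "value": "foo"},
    {"op": "replace", "path": "/a/b/c", "value": 42},
    {"op": "add", "path": "/a/b/d", "value": ["foo", "bar"]},
]
apply_patch(doc, patch)
# doc is now {"a": {"b": {"c": 42, "d": ["foo", "bar"]}}}
```

Note that the `test` operation is what lets a client guard an update against concurrent modification of the same field, which connects to the partial-update discussion elsewhere in this digest.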
Thanks Amit Gandhi - Rackspace [1] https://tools.ietf.org/html/rfc5789 [2] http://tools.ietf.org/html/rfc6902 [3] http://williamdurand.fr/2014/02/14/please-do-not-patch-like-an-idiot/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From travis.tripp at hp.com Fri Dec 12 23:09:36 2014 From: travis.tripp at hp.com (Tripp, Travis S) Date: Fri, 12 Dec 2014 23:09:36 +0000 Subject: [openstack-dev] [horizon] REST and Django In-Reply-To: References: <547DD08A.7000402@redhat.com> Message-ID: Tihomir, Today I added one glance call based on Richard?s decorator pattern[1] and started to play with incorporating some of your ideas. Please note, I only had limited time today. That is passing the kwargs through to the glance client. This was an interesting first choice, because it immediately highlighted a concrete example of the horizon glance wrapper post-processing still being useful (rather than be a direct pass-through with no wrapper). See below. If you have some some concrete code examples of your ideas it would be helpful. [1] https://review.openstack.org/#/c/141273/2/openstack_dashboard/api/rest/glance.py With the patch, basically, you can call the following and all of the GET parameters get passed directly through to the horizon glance client and you get results back as expected. http://localhost:8002/api/glance/images/?sort_dir=desc&sort_key=created_at&paginate=True&marker=bb2cfb1c-2234-4f54-aec5-b4916fe2d747 If you pass in an incorrect sort_key, the glance client returns the following error message which propagates back to the REST caller as an error with the message: "sort_key must be one of the following: name, status, container_format, disk_format, size, id, created_at, updated_at." This is done by passing **request.GET.dict() through. 
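The pass-through just described -- forwarding the request's GET parameters as keyword arguments and letting the client's own validation error bubble back to the REST caller -- can be sketched roughly like this (all names here are illustrative stand-ins, not the actual Horizon code):

```python
# Sketch of the **request.GET.dict() pass-through idea: the thin REST
# view forwards query parameters to the API wrapper as keyword
# arguments, so a bad sort_key surfaces as the client's own error
# message rather than view-level plumbing. Names are hypothetical.

VALID_SORT_KEYS = ("name", "status", "size", "created_at", "updated_at")

def image_list(**kwargs):
    """Stand-in for the glance wrapper; validates like the real client."""
    sort_key = kwargs.get("sort_key", "created_at")
    if sort_key not in VALID_SORT_KEYS:
        raise ValueError(
            "sort_key must be one of the following: %s."
            % ", ".join(VALID_SORT_KEYS))
    return {"sorted_by": sort_key, "params": kwargs}

def rest_get(query_params):
    """The REST view: no per-parameter plumbing, just **kwargs."""
    try:
        return 200, image_list(**query_params)
    except ValueError as exc:
        return 400, {"error": str(exc)}

status, body = rest_get({"sort_key": "created_at", "sort_dir": "desc"})
# status == 200
status, body = rest_get({"sort_key": "bogus"})
# status == 400; body["error"] lists the valid keys
```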
Please note that if you try this (with POSTMAN, for example), you need to set the header X-Requested-With = XMLHttpRequest. So, what issues did it immediately call out with directly invoking the client? The python-glanceclient internally handles pagination by returning a generator. Each iteration on the generator will handle making a request for the next page of data. If you were to just do something like return list(image_generator) to serialize it back out to the caller, it would actually end up making a call back to the server X times to fetch all pages before serializing back (thereby not really paginating). The horizon glance client wrapper today handles this by using islice intelligently along with honoring the API_RESULT_LIMIT setting in Horizon. So, this gives a direct example of where the wrapper does something that a direct passthrough to the client would not allow. That said, I can see a few ways that we could use the same REST decorator code and provide direct access to the API. We'd simply provide a class where the url_regex maps to the desired path and gives direct passthrough. Maybe that kind of passthrough could always be provided for ease of customization / extensibility and additional methods with wrappers provided when necessary. I need to leave for today, so can't actually try that out at the moment. Thanks, Travis From: Thai Q Tran > Reply-To: OpenStack List > Date: Friday, December 12, 2014 at 11:05 AM To: OpenStack List > Subject: Re: [openstack-dev] [horizon] REST and Django In your previous example, you are posting to a certain URL (i.e. /keystone/{ver:=x.0}/{method:=update}). Correct me if I'm wrong, but it looks like you have a unique URL for each /service/version/method. I fail to see how that is different from what we have today? Is there a view for each service? Each version? Let's say for argument's sake that you have a single view that takes care of all URL routing. 
All requests pass through this view and contain a JSON that contains instruction on which API to invoke and what parameters to pass. And lets also say that you wrote some code that uses reflection to map the JSON to an action. What you end up with is a client-centric application, where all of the logic resides client-side. If there are things we want to accomplish server-side, it will be extremely hard to pull off. Things like caching, websocket, aggregation, batch actions, translation, etc.... What you end up with is a client with no help from the server. Obviously the other extreme is what we have today, where we do everything server-side and only using client-side for binding events. I personally prefer a more balance approach where we can leverage both the server and client. There are things that client can do well, and there are things that server can do well. Going the RPC way restrict us to just client technologies and may hamper any additional future functionalities we want to bring server-side. In other words, using REST over RPC gives us the opportunity to use server-side technologies to help solve problems should the need for it arises. I would also argue that the REST approach is NOT what we have today. What we have today is a static webpage that is generated server-side, where API is hidden from the client. What we end up with using the REST approach is a dynamic webpage generated client-side, two very different things. We have essentially striped out the rendering logic from Django templating and replaced it with Angular. -----Tihomir Trifonov > wrote: ----- To: "OpenStack Development Mailing List (not for usage questions)" > From: Tihomir Trifonov > Date: 12/12/2014 04:53AM Subject: Re: [openstack-dev] [horizon] REST and Django Here's an example: Admin user Joe has an Domain open and stares at it for 15 minutes while he updates the description. Admin user Bob is asked to go ahead and enable it. He opens the record, edits it, and then saves it. 
Joe finished perfecting the description and saves it. Doing this action would mean that the Domain is enabled and the description gets updated. Last man in still wins if he updates the same fields, but if they update different fields then both of their changes will take effect without them stomping on each other. Whether that is good or bad may depend on the situation? That's a great example. I believe that all of the OpenStack APIs support PATCH updates of arbitrary fields. This way the frontend (AngularJS) can detect which fields are being modified, and submit only these fields for update. If we however use a form with POST, although we will load the object before updating it, the middleware cannot tell which fields are actually modified, and will update them all, which is more like what PUT should do. Thus, having full control in the frontend part, we can submit only changed fields. If however a service API doesn't support PATCH, it is actually a problem in the API and not in the client... The service API documentation almost always lags (although, helped by specs now) and the service team takes on the burden of exposing a programmatic way to access the API. This is tested and easily consumable via the python clients, which removes some guesswork from using the service. True. But what if the service team modifies a method signature from, let's say: def add_something(self, request, field1, field2): to def add_something(self, request, field1, field2, field3): and in the middleware we have the old signature: def add_something(self, request, field1, field2): we still need to modify the middleware to add the new field. If however the middleware is transparent and just passes **kwargs, it will pass through whatever the frontend sends. So we just need to update the frontend, which can be done using custom views, and not necessarily going through an upstream change. 
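Tihomir's **kwargs point can be shown in miniature. `add_something` here is a made-up stand-in for any service call, not a real OpenStack API:

```python
# Sketch: a middleware that names every field must be edited whenever
# the service API grows a field, while a transparent **kwargs wrapper
# keeps working. All function names are hypothetical.

def service_add_something(field1, field2, field3=None):
    """The 'new' service API, which grew an optional field3."""
    return {"field1": field1, "field2": field2, "field3": field3}

def middleware_fixed(field1, field2):
    # Old-style wrapper: field3 simply cannot reach the service
    # until someone patches this signature upstream.
    return service_add_something(field1, field2)

def middleware_transparent(**kwargs):
    # Transparent wrapper: whatever the frontend sends goes through.
    return service_add_something(**kwargs)

assert middleware_fixed("a", "b")["field3"] is None
assert middleware_transparent(field1="a", field2="b", field3="c")["field3"] == "c"
```

The trade-off, as the thread notes, is that the transparent version also forwards anything the frontend sends, so validation moves entirely to the service.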
My point is why do we need to hide some features of the backend service API behind a "firewall", which is what the middleware in fact is? On Fri, Dec 12, 2014 at 8:08 AM, Tripp, Travis S > wrote: I just re-read and I apologize for the hastily written email I previously sent. I'll try to salvage it with a bit of a revision below (please ignore the previous email). On 12/11/14, 7:02 PM, "Tripp, Travis S" > wrote (REVISED): >Tihomir, > >Your comments in the patch were very helpful for me to understand your >concerns about the ease of customizing without requiring upstream >changes. It also reminded me that I've also previously questioned the >python middleman. > >However, here are a couple of bullet points for Devil's Advocate >consideration. > > > * Will we take on auto-discovery of API extensions in two spots >(python for legacy and JS for new)? > * The Horizon team will have to keep an even closer eye on every >single project and be ready to react if there are changes to the API that >break things. Right now in Glance, for example, they are working on some >fixes to the v2 API (soon to become v2.3) that will allow them to >deprecate v1 somewhat transparently to users of the client library. > * The service API documentation almost always lags (although, helped >by specs now) and the service team takes on the burden of exposing a >programmatic way to access the API. This is tested and easily consumable >via the python clients, which removes some guesswork from using the >service. > * This is going to be an incremental approach with legacy support >requirements anyway. So, incorporating python side changes won't just go >away. > * Which approach would be better if we introduce a server side >caching mechanism or a new source of data such as elastic search to >improve performance? Would the client side code have to be changed >dramatically to take advantage of those improvements or could it be done >transparently on the server side if we own the exposed API? 
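As a rough illustration of that last bullet: if Horizon owns the REST layer, something like a server-side cache could be slipped in without the client changing at all. A toy time-based cache, with all names hypothetical:

```python
# Sketch of a server-side TTL cache that a Horizon-owned REST layer
# could add transparently; the JS client keeps calling the same URL.
import time

def ttl_cached(ttl_seconds):
    def decorator(func):
        cache = {}
        def wrapper(*args):
            now = time.monotonic()
            if args in cache and now - cache[args][0] < ttl_seconds:
                return cache[args][1]      # served from the cache
            result = func(*args)
            cache[args] = (now, result)
            return result
        return wrapper
    return decorator

calls = []

@ttl_cached(ttl_seconds=60)
def list_flavors(region):
    calls.append(region)                   # pretend expensive API call
    return ["m1.small", "m1.large"]

list_flavors("ord")
list_flavors("ord")   # second call is answered from the cache
# len(calls) == 1
```

With a direct client-side passthrough, by contrast, this kind of optimization would have to be re-implemented in every consumer.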
> >I'm not sure I fully understood your example about Cinder. Was it the >cinder client that held up delivery of horizon support, the cinder API or >both? If the API isn't in, then it would hold up delivery of the feature >in any case. There still would be timing pressures to react and build a >new view that supports it. For customization, with Richard's approach new >views could be supported by just dropping in a new REST API decorated >module with the APIs you want, including direct pass through support if >desired to new APIs. Downstream customizations / Upstream changes to >views seem like a bit of a related, but different issue to me as >long as there is an easy way to drop in new API support. > >Finally, regarding the client making two calls to do an update: > >">>Do we really need the lines:" > >>> project = api.keystone.tenant_get(request, id) >>> kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None) > >I agree that if you already have all the data it may be bad to have to do >another call. I do think there is room for discussing the reasoning, >though. >As far as I can tell, they do this so that if you are updating an entity, >you have to be very specific about the fields you are changing. I >actually see this as potentially a protective measure against data >loss and sometimes a very nice-to-have feature. It perhaps was intended >to *help* guard against race conditions (no locking and no transactions >with many users simultaneously accessing the data). > >Here's an example: Admin user Joe has a Domain open and stares at it for >15 minutes while he updates just the description. Admin user Bob is asked >to go ahead and enable it. He opens the record, edits it, and then saves >it. Joe finished perfecting the description and saves it. They could in >effect both edit the same domain independently. 
Last man in still wins if >he updates the same fields, but if they update different fields then both >of their changes will take effect without them stomping on each other. Or >maybe it is intended to encourage client users to compare their current >and previous to see if they should issue a warning if the data changed >between getting and updating the data. Or maybe, like you said, it is just >overhead API calls. From os.lcheng at gmail.com Fri Dec 12 23:17:16 2014 From: os.lcheng at gmail.com (Lin Hua Cheng) Date: Fri, 12 Dec 2014 15:17:16 -0800 Subject: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard In-Reply-To: References: <547DD08A.7000402@redhat.com> Message-ID: Breaking something for existing users is progress, but not forward. :) I don't mind moving the code around to the _scripts file, but simply dropping the _conf file is my concern since it might already be extended from. Perhaps document first that it will be deprecated, and remove it in a later release. On Fri, Dec 12, 2014 at 10:43 AM, Thai Q Tran wrote: > > As is the case with anything we change, but that should not stop us from > making improvements/progress. I would argue that it would make life easier > for them since all scripts are now in one place. > > -----Lin Hua Cheng wrote: ----- > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > From: Lin Hua Cheng > Date: 12/12/2014 10:28AM > > Subject: Re: [openstack-dev] [Horizon] Moving _conf and _scripts to > dashboard > > Consolidating them would break it for users that have customization and > extension on the two templates. > > -Lin > > On Fri, Dec 12, 2014 at 9:20 AM, David Lyle wrote: >> >> Not entirely sure why they both exist either. >> >> So by move, you meant override (nuance). That's different and I have no >> issue with that. >> >> I'm also fine with attempting to consolidate _conf and _scripts. 
>> >> David >> >> On Thu, Dec 11, 2014 at 1:22 PM, Thai Q Tran wrote: >> >>> >>> It would not create a circular dependency, dashboard would depend on >>> horizon - not the latter. >>> Scripts that are library specific will live in horizon while scripts >>> that are panel specific will live in dashboard. >>> Let me draw a more concrete example. >>> >>> In Horizon >>> We know that _script and _conf are included in the base.html >>> We create a _script and _conf placeholder file for project overrides >>> (similar to _stylesheets and _header) >>> In Dashboard >>> We create a _script and _conf file with today's content >>> It overrides the _script and _conf file in horizon >>> Now we can include panel specific scripts without causing circular >>> dependency. >>> >>> In fact, I would like to go further and suggest that _script and _conf >>> be combine into a single file. >>> Not sure why we need two places to include scripts. >>> >>> >>> -----David Lyle wrote: ----- >>> To: "OpenStack Development Mailing List (not for usage questions)" < >>> openstack-dev at lists.openstack.org> >>> From: David Lyle >>> Date: 12/11/2014 09:23AM >>> Subject: Re: [openstack-dev] [Horizon] Moving _conf and _scripts to >>> dashboard >>> >>> >>> I'm probably not understanding the nuance of the question but moving the >>> _scripts.html file to openstack_dashboard creates some circular >>> dependencies, does it not? templates/base.html in the horizon side of the >>> repo includes _scripts.html and insures that the javascript needed by the >>> existing horizon framework is present. >>> >>> _conf.html seems like a better candidate for moving as it's more closely >>> tied to the application code. >>> >>> David >>> >>> >>> On Wed, Dec 10, 2014 at 7:20 PM, Thai Q Tran wrote: >>> >>>> Sorry for duplicate mail, forgot the subject. 
>>>> >>>> -----Thai Q Tran/Silicon Valley/IBM wrote: ----- To: "OpenStack Development Mailing List \(not for usage questions\)" < >>>> openstack-dev at lists.openstack.org> >>>> From: Thai Q Tran/Silicon Valley/IBM >>>> Date: 12/10/2014 03:37PM >>>> Subject: Moving _conf and _scripts to dashboard >>>> >>>> The way we are structuring our javascripts today is complicated. All of >>>> our static javascripts reside in /horizon/static and are imported through >>>> _conf.html and _scripts.html. Notice that there are already some panel >>>> specific javascripts like: horizon.images.js, horizon.instances.js, >>>> horizon.users.js. They do not belong in horizon. They belong in >>>> openstack_dashboard because they are specific to a panel. >>>> >>>> Why am I raising this issue now? In Angular, we need controllers >>>> written in javascript for each panel. As we angularize more and more >>>> panels, we need to store them in a way that makes sense. To me, it makes >>>> sense for us to move _conf and _scripts to openstack_dashboard. Or if this >>>> is not possible, then provide a mechanism to override them in >>>> openstack_dashboard. >>>> >>>> Thoughts? 
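One way to read the override mechanism Thai asks for is via Django's ordinary template-resolution order: a template with the same relative name in an earlier-searched directory shadows the framework's copy. A sketch, with purely illustrative paths rather than the real Horizon layout:

```django
{# horizon/templates/horizon/_scripts.html -- the framework default: #}
<script src="{{ STATIC_URL }}horizon/js/horizon.js"></script>

{# openstack_dashboard/templates/horizon/_scripts.html -- because the #}
{# dashboard's template directory is searched first, this copy        #}
{# shadows the default one and can append panel-specific scripts,     #}
{# e.g. the Angular controllers for a panel:                          #}
<script src="{{ STATIC_URL }}horizon/js/horizon.js"></script>
<script src="{{ STATIC_URL }}dashboard/js/horizon.images.js"></script>
```

Because resolution happens by name, horizon's base.html would keep including "horizon/_scripts.html" unchanged; which copy wins is decided entirely by template directory ordering.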
>>>> Thai _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tqtran at us.ibm.com Sat Dec 13 00:06:48 2014 From: tqtran at us.ibm.com (Thai Q Tran) Date: Fri, 12 Dec 2014 17:06:48 -0700 Subject: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard In-Reply-To: References: , <547DD08A.7000402@redhat.com> Message-ID: An HTML attachment was scrubbed...
URL: From zbitter at redhat.com Sat Dec 13 00:12:48 2014 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 12 Dec 2014 19:12:48 -0500 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> Message-ID: <548B8480.9010506@redhat.com> On 12/12/14 05:29, Murugan, Visnusaran wrote: > > >> -----Original Message----- >> From: Zane Bitter [mailto:zbitter at redhat.com] >> Sent: Friday, December 12, 2014 6:37 AM >> To: openstack-dev at lists.openstack.org >> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept >> showdown >> >> On 11/12/14 08:26, Murugan, Visnusaran wrote: >>>>> [Murugan, Visnusaran] >>>>> In case of rollback where we have to cleanup earlier version of >>>>> resources, >>>> we could get the order from old template. We'd prefer not to have a >>>> graph table. >>>> >>>> In theory you could get it by keeping old templates around. But that >>>> means keeping a lot of templates, and it will be hard to keep track >>>> of when you want to delete them. It also means that when starting an >>>> update you'll need to load every existing previous version of the >>>> template in order to calculate the dependencies. It also leaves the >>>> dependencies in an ambiguous state when a resource fails, and >>>> although that can be worked around it will be a giant pain to implement. 
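The delete-ordering question being debated here (clean up earlier versions of resources in a safe order, derived from the dependency graph rather than from old templates) can be illustrated in a few lines. This is a hypothetical sketch, not Heat's actual implementation; the edge format and resource names are made up for the example:

```python
# Hypothetical sketch: derive a safe cleanup order from dependency edges.
# An edge (a, b) means "a requires b", so on delete, a must go before b.

def cleanup_order(edges):
    """Return nodes in an order safe for deletion (dependents first)."""
    nodes = {n for edge in edges for n in edge}
    requires = {n: set() for n in nodes}     # n -> things n requires
    required_by = {n: set() for n in nodes}  # n -> things that require n
    for a, b in edges:
        requires[a].add(b)
        required_by[b].add(a)

    # Start with nodes nothing depends on; peel the graph layer by layer.
    ready = [n for n in nodes if not required_by[n]]
    order = []
    while ready:
        n = ready.pop()
        order.append(n)
        for dep in requires[n]:
            required_by[dep].discard(n)
            if not required_by[dep]:
                ready.append(dep)
    if len(order) != len(nodes):
        raise ValueError('dependency cycle detected')
    return order

# A server requires a port, which requires a network.
print(cleanup_order([('server', 'port'), ('port', 'net')]))
# -> ['server', 'port', 'net']
```

The point being argued above is only about where those edges come from (old templates vs. a stored graph), not about the walk itself.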
>>>> >>> Agree that looking to all templates for a delete is not good. But >>> barring complexity, we feel we could achieve it by way of having an >>> update and a delete stream for a stack update operation. I will >>> elaborate in detail in the etherpad sometime tomorrow :) >>> >>>> I agree that I'd prefer not to have a graph table. After trying a >>>> couple of different things I decided to store the dependencies in the >>>> Resource table, where we can read or write them virtually for free >>>> because it turns out that we are always reading or updating the >>>> Resource itself at exactly the same time anyway. >>>> >>> >>> Not sure how this will work in an update scenario when a resource does >>> not change and its dependencies do. >> >> We'll always update the requirements, even when the properties don't >> change. >> > > Can you elaborate a bit on rollback? I didn't do anything special to handle rollback. It's possible that we need to - obviously the difference in the UpdateReplace + rollback case is that the replaced resource is now the one we want to keep, and yet the replaced_by/replaces dependency will force the newer (replacement) resource to be checked for deletion first, which is an inversion of the usual order. However, I tried to think of a scenario where that would cause problems and I couldn't come up with one. Provided we know the actual, real-world dependencies of each resource I don't think the ordering of those two checks matters. In fact, I currently can't think of a case where the dependency order between replacement and replaced resources matters at all. It matters in the current Heat implementation because resources are artificially segmented into the current and backup stacks, but with a holistic view of dependencies that may well not be required. I tried taking that line out of the simulator code and all the tests still passed. If anybody can think of a scenario in which it would make a difference, I would be very interested to hear it.
In any event though, it should be no problem to reverse the direction of that one edge in these particular circumstances if it does turn out to be a problem. > We had an approach with depends_on > and needed_by columns in ResourceTable. But dropped it when we figured out > we had too many DB operations for Update. Yeah, I initially ran into this problem too - you have a bunch of nodes that are waiting on the current node, and now you have to go look them all up in the database to see what else they're waiting on in order to tell if they're ready to be triggered. It turns out the answer is to distribute the writes but centralise the reads. So at the start of the update, we read all of the Resources, obtain their dependencies and build one central graph[1]. We then make that graph available to each resource (either by passing it as a notification parameter, or storing it somewhere central in the DB that they will all have to read anyway, i.e. the Stack). But when we update a dependency we don't update the central graph, we update the individual Resource so there's no global lock required. [1] https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/stack.py#L166-L168
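That read-once/write-individually pattern can be sketched in miniature. This is a hypothetical illustration only — SQLite stands in for the real database, and the table, column, and resource names are made up, not Heat's schema or the prototype code linked above:

```python
import sqlite3

# Hypothetical sketch of "distribute the writes, centralise the reads".
db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE resource (name TEXT PRIMARY KEY, unsatisfied INT)')

# Read the graph ONCE at the start of the traversal; an edge (a, b)
# means "a requires b".
edges = [('server', 'port'), ('server', 'volume'), ('port', 'net')]
nodes = {n for edge in edges for n in edge}
required_by = {}                          # b -> list of a waiting on b
for a, b in edges:
    required_by.setdefault(b, []).append(a)
for n in nodes:
    count = sum(1 for a, _ in edges if a == n)  # unmet requirements
    db.execute('INSERT INTO resource VALUES (?, ?)', (n, count))

done = []

def complete(name):
    """Mark `name` done, then trigger any dependent whose last
    requirement this was -- one small write per dependent row, with
    no lock on the shared graph."""
    done.append(name)
    for waiter in required_by.get(name, []):
        db.execute('UPDATE resource SET unsatisfied = unsatisfied - 1 '
                   'WHERE name = ?', (waiter,))
        # A real DB would fold this check into the atomic update
        # (e.g. UPDATE ... RETURNING, or SELECT ... FOR UPDATE).
        (left,) = db.execute('SELECT unsatisfied FROM resource '
                             'WHERE name = ?', (waiter,)).fetchone()
        if left == 0:
            complete(waiter)

# Kick off the leaves (resources with no requirements).
for n in sorted(nodes):
    if n in done:
        continue
    (left,) = db.execute('SELECT unsatisfied FROM resource WHERE name = ?',
                         (n,)).fetchone()
    if left == 0:
        complete(n)

print(done)  # -> ['net', 'port', 'volume', 'server']
```

Each completion touches only the rows of its direct dependents, which is the "no global lock required" property described above.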
>>>>> >>>>> This seems to be essentially equivalent to my 'SyncPoint' >>>>> proposal[1], with >>>> the key difference that the data is stored in-memory in a Heat engine >>>> rather than the database. >>>>> >>>>> I suspect it's probably a mistake to move it in-memory for similar >>>>> reasons to the argument Clint made against synchronising the marking >>>>> off >>>> of dependencies in-memory. The database can handle that and the >>>> problem of making the DB robust against failures of a single machine >>>> has already been solved by someone else. If we do it in-memory we are >>>> just creating a single point of failure for not much gain. (I guess >>>> you could argue it doesn't matter, since if any Heat engine dies >>>> during the traversal then we'll have to kick off another one anyway, >>>> but it does limit our options if that changes in the >>>> future.) [Murugan, Visnusaran] Resource completes, removes itself >>>> from resource_lock and notifies engine. Engine will acquire parent >>>> lock and initiate parent only if all its children are satisfied (no child entry in >> resource_lock). >>>> This will come in place of Aggregator. >>>> >>>> Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly what I >> did. >>>> The three differences I can see are: >>>> >>>> 1) I think you are proposing to create all of the sync points at the >>>> start of the traversal, rather than on an as-needed basis. This is >>>> probably a good idea. I didn't consider it because of the way my >>>> prototype evolved, but there's now no reason I can see not to do this. >>>> If we could move the data to the Resource table itself then we could >>>> even get it for free from an efficiency point of view. >>> >>> +1. But we will need engine_id to be stored somewhere for recovery >> purpose (easy to be queried format). >> >> Yeah, so I'm starting to think you're right, maybe the/a Lock table is the right >> thing to use there. 
We could probably do it within the resource table using >> the same select-for-update to set the engine_id, but I agree that we might >> be starting to jam too much into that one table. >> > > yeah. Unrelated values in resource table. Upon resource completion we have to > unset engine_id as well as compared to dropping a row from resource lock. > Both are good. Having engine_id in resource_table will reduce db operations > in half. We should go with just resource table along with engine_id. OK >>> Sync points are created as-needed. Single resource is enough to restart >> that entire stream. >>> I think there is a disconnect in our understanding. I will detail it as well in >> the etherpad. >> >> OK, that would be good. >> >>>> 2) You're using a single list from which items are removed, rather >>>> than two lists (one static, and one to which items are added) that get >> compared. >>>> Assuming (1) then this is probably a good idea too. >>> >>> Yeah. We have a single list per active stream which works by removing >>> Complete/satisfied resources from it. >> >> I went to change this and then remembered why I did it this way: the sync >> point is also storing data about the resources that are triggering it. Part of this >> is the RefID and attributes, and we could replace that by storing that data in >> the Resource itself and querying it rather than having it passed in via the >> notification. But the other part is the ID/key of those resources, which we >> _need_ to know in order to update the requirements in case one of them >> has been replaced and thus the graph doesn't reflect it yet. (Or, for that >> matter, we need it to know where to go looking for the RefId and/or >> attributes if they're in the >> DB.) So we have to store some data, we can't just remove items from the >> required list (although we could do that as well). >>>> 3) You're suggesting to notify the engine unconditionally and let the >>>> engine decide if the list is empty.
That's probably not a good idea - >>>> not only does it require extra reads, it introduces a race condition >>>> that you then have to solve (it can be solved, it's just more work). >>>> Since the update to remove a child from the list is atomic, it's best >>>> to just trigger the engine only if the list is now empty. >>>> >>> >>> No. Notify only if stream has something to be processed. The newer >>> Approach based on db lock will be that the last resource will initiate its >> parent. >>> This is opposite to what our Aggregator model had suggested. >> >> OK, I think we're on the same page on this one then. >> > > > Yeah. > >>>>> It's not clear to me how the 'streams' differ in practical terms >>>>> from just passing a serialisation of the Dependencies object, other >>>>> than being incomprehensible to me ;). The current Dependencies >>>>> implementation >>>>> (1) is a very generic implementation of a DAG, (2) works and has >>>>> plenty of >>>> unit tests, (3) has, with I think one exception, a pretty >>>> straightforward API, >>>> (4) has a very simple serialisation, returned by the edges() method, >>>> which can be passed back into the constructor to recreate it, and (5) >>>> has an API that is to some extent relied upon by resources, and so >>>> won't likely be removed outright in any event. >>>>> Whatever code we need to handle dependencies ought to just build on >>>> this existing implementation. >>>>> [Murugan, Visnusaran] Our thought was to reduce payload size >>>> (template/graph). Just planning for worst case scenario (million >>>> resource >>>> stack) We could always dump them in ResourceLock.data to be loaded by >>>> Worker. With the latest updates to the Etherpad, I'm even more confused by streams than I was before. One thing I never understood is why do you need to store the whole path to reach each node in the graph? 
Surely you only need to know the nodes this one is waiting on, the nodes waiting on this one and the ones those are waiting on, not the entire history up to this point. The size of each stream is theoretically up to O(n^2) and you're storing n of them - that's going to get painful in this million-resource stack. >>>> If there's a smaller representation of a graph than a list of edges >>>> then I don't know what it is. The proposed stream structure certainly >>>> isn't it, unless you mean as an alternative to storing the entire >>>> graph once for each resource. A better alternative is to store it >>>> once centrally - in my current implementation it is passed down >>>> through the trigger messages, but since only one traversal can be in >>>> progress at a time it could just as easily be stored in the Stack table of the >> database at the slight cost of an extra write. >>>> >>> >>> Agree that edge is the smallest representation of a graph. But it does >>> not give us a complete picture without doing a DB lookup. Our >>> assumption was to store streams in IN_PROGRESS resource_lock.data >>> column. This could be in resource table instead. >> >> That's true, but I think in practice at any point where we need to look at this >> we will always have already loaded the Stack from the DB for some other >> reason, so we actually can get it for free. (See detailed discussion in my reply >> to Anant.) >> > > Aren't we planning to stop loading stack with all resource objects in future to > Address scalability concerns we currently have? We plan on not loading all of the Resource objects each time we load the Stack object, but I think we will always need to have loaded the Stack object (for example, we'll need to check the current traversal ID, amongst other reasons). So if the serialised dependency graph is stored in the Stack it will be no big deal. >>>> I'm not opposed to doing that, BTW. 
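The claim that a flat edge list is both the smallest representation and a sufficient serialisation (the edges() -> constructor round-trip mentioned above) is easy to demonstrate. The class below is a hypothetical stand-in, reduced to just that round-trip — it is not Heat's Dependencies implementation:

```python
# Hypothetical stand-in for a Dependencies-style DAG that round-trips
# through its edge list.
class Dependencies(object):
    def __init__(self, edges):
        # An edge (requirer, required); keep every node in the map,
        # including leaves that require nothing.
        self._requires = {}
        for r, d in edges:
            self._requires.setdefault(r, set()).add(d)
            self._requires.setdefault(d, set())

    def edges(self):
        """The whole graph as a flat, storable list of edges."""
        return sorted((r, d) for r, deps in self._requires.items()
                      for d in deps)

    def required_by(self, node):
        """Nodes that are waiting on `node`."""
        return sorted(r for r, deps in self._requires.items()
                      if node in deps)

g = Dependencies([('server', 'port'), ('port', 'net')])
serialised = g.edges()               # small enough for one DB column
restored = Dependencies(serialised)  # ...and enough to rebuild the graph
print(restored.edges() == serialised)  # -> True
print(restored.required_by('net'))     # -> ['port']
```

The edge list costs O(edges) to store once in the Stack, versus the per-node path "streams" being debated, which can grow to O(n^2) per resource in the worst case.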
In fact, I'm really interested in >>>> your input on how that might help make recovery from failure more >>>> robust. I know Anant mentioned that not storing enough data to >>>> recover when a node dies was his big concern with my current approach. >>>> >>> >>> With streams, We feel recovery will be easier. All we need is a >>> trigger :) >>> >>>> I can see that by both creating all the sync points at the start of >>>> the traversal and storing the dependency graph in the database >>>> instead of letting it flow through the RPC messages, we would be able >>>> to resume a traversal where it left off, though I'm not sure what that buys >> us. >>>> >>>> And I guess what you're suggesting is that by having an explicit lock >>>> with the engine ID specified, we can detect when a resource is stuck >>>> in IN_PROGRESS due to an engine going down? That's actually pretty >> interesting. >>>> >>> >>> Yeah :) >>> >>>>> Based on our call on Thursday, I think you're taking the idea of the >>>>> Lock >>>> table too literally. The point of referring to locks is that we can >>>> use the same concepts as the Lock table relies on to do atomic >>>> updates on a particular row of the database, and we can use those >>>> atomic updates to prevent race conditions when implementing >>>> SyncPoints/Aggregators/whatever you want to call them. It's not that >>>> we'd actually use the Lock table itself, which implements a mutex and >>>> therefore offers only a much slower and more stateful way of doing >>>> what we want (lock mutex, change data, unlock mutex). >>>>> [Murugan, Visnusaran] Are you suggesting something like a >>>>> select-for- >>>> update in resource table itself without having a lock table? >>>> >>>> Yes, that's exactly what I was suggesting. >>> >>> DB is always good for sync. But we need to be careful not to overdo it. >> >> Yeah, I see what you mean now, it's starting to _feel_ like there'd be too >> many things mixed together in the Resource table. 
Are you aware of some >> concrete harm that might cause though? What happens if we overdo it? Is >> select-for-update on a huge row more expensive than the whole overhead >> of manipulating the Lock? >> >> Just trying to figure out if intuition is leading me astray here. >> > > You are right. There should be no difference apart from a little bump > in memory usage. But I think it should be fine. > >>> Will update etherpad by tomorrow. >> >> OK, thanks. >> >> cheers, >> Zane. >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From melwittt at gmail.com Sat Dec 13 01:54:37 2014 From: melwittt at gmail.com (melanie witt) Date: Fri, 12 Dec 2014 17:54:37 -0800 Subject: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken Message-ID: <2709D857-FAB3-4C2D-A7FD-941501A15224@gmail.com> Hi everybody, At some point, our db archiving functionality got broken because there was a change to stop ever deleting instance system metadata [1]. For those unfamiliar, the 'nova-manage db archive_deleted_rows' command is the thing that moves all soft-deleted (deleted=nonzero) rows to the shadow tables. This is a periodic cleaning that operators can do to maintain performance (as things can get sluggish when deleted=nonzero rows accumulate). The change was made because instance_type data still needed to be read even after instances had been deleted, because we allow admin to view deleted instances. I saw a bug [2] and two patches [3][4] which aimed to fix this by changing back to soft-deleting instance sysmeta when instances are deleted, and instead allowing read_deleted="yes" for the things that need to read instance_type for deleted instances present in the db.
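For readers unfamiliar with the mechanics, the archive step described above boils down to copying deleted=nonzero rows into a parallel shadow table and purging them from the live table. A hypothetical sketch with SQLite standing in for the real database — the table and column names are illustrative, not nova's actual schema:

```python
import sqlite3

# Hypothetical sketch of what 'db archive_deleted_rows' does: move
# soft-deleted (deleted != 0) rows into a parallel shadow table.
db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE instances (id INT, host TEXT, deleted INT)')
db.execute('CREATE TABLE shadow_instances (id INT, host TEXT, deleted INT)')
db.executemany('INSERT INTO instances VALUES (?, ?, ?)',
               [(1, 'a', 0), (2, 'b', 2), (3, 'c', 3)])

def archive_deleted_rows(table, max_rows=100):
    """Copy up to max_rows soft-deleted rows to shadow_<table>, then
    purge them from the live table, in one transaction."""
    # Real code would not splice table names with format(); toy only.
    with db:  # one transaction, so a crash can't lose rows mid-move
        db.execute('INSERT INTO shadow_{t} SELECT * FROM {t} '
                   'WHERE deleted != 0 LIMIT ?'.format(t=table),
                   (max_rows,))
        db.execute('DELETE FROM {t} WHERE rowid IN '
                   '(SELECT rowid FROM {t} WHERE deleted != 0 LIMIT ?)'
                   .format(t=table), (max_rows,))

archive_deleted_rows('instances')
live = db.execute('SELECT id FROM instances').fetchall()
moved = db.execute('SELECT id FROM shadow_instances').fetchall()
print(live, moved)  # -> [(1,)] [(2,), (3,)]
```

The breakage melanie describes is that instance sysmeta rows were no longer ever soft-deleted, so they never qualified for the move and accumulated in the live table.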
My question is, is this approach okay? If so, I'd like to see these patches revived so we can have our db archiving working again. :) I think there's likely something I'm missing about the approach, so I'm hoping people who know more about instance sysmeta than I do can chime in on how/if we can fix this for db archiving. Thanks. [1] https://bugs.launchpad.net/nova/+bug/1185190 [2] https://bugs.launchpad.net/nova/+bug/1226049 [3] https://review.openstack.org/#/c/110875/ [4] https://review.openstack.org/#/c/109201/ melanie (melwitt) From dzimine at stackstorm.com Sat Dec 13 03:16:36 2014 From: dzimine at stackstorm.com (Dmitri Zimine) Date: Fri, 12 Dec 2014 19:16:36 -0800 Subject: [openstack-dev] [Mistral] Global Context and Execution Environment In-Reply-To: References: Message-ID: <38A58752-A625-414F-9416-62ACD983E607@stackstorm.com> Winson, Lakshmi, Renat: It looked good and I began to write down the summary: https://etherpad.openstack.org/p/mistral-global-context But then realized that it's not safe to assume from the action that the global context will be supplied as part of the API call. Check it out in the etherpad. What problems are we trying to solve: 1) reduce passing the same parameters over and over from parent to child 2) "automatically" make a parameter accessible to most actions without typing it all over (like auth token) Can #1 be solved by passing input to subworkflows automatically? Can #2 be solved somehow else? Default passing of arbitrary parameters to action seems like breaking abstraction. Thoughts? Need to brainstorm further... DZ> On Dec 12, 2014, at 12:54 PM, W Chan wrote: > Renat, Dmitri, > > On supplying the global context into the workflow execution...
> > 1) Pass them implicitly in start_workflow as another kwarg in the **params. But on thinking, we should probably make global context explicitly defined in the WF spec. Passing them implicitly may make it hard to follow, during troubleshooting, where a value comes from by looking at the WF spec. Plus there will be cases where WF authors want it explicitly defined. Still debating here... > > inputs = {...} > globals = {...} > start_workflow('my_workflow', inputs, globals=globals) > > 2) Instead of adding to the WF spec, what if we change the scope of existing input params? For example, inputs defined in the top workflow by default are visible to all subflows (passed down to workflow tasks on run_workflow) and tasks (passed to actions on execution). > > 3) Add to the WF spec > > workflow: > type: direct > global: > - global1 > - global2 > input: > - input1 > - input2 > > Winson > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From joehuang at huawei.com Sat Dec 13 03:23:46 2014 From: joehuang at huawei.com (joehuang) Date: Sat, 13 Dec 2014 03:23:46 +0000 Subject: [openstack-dev] =?windows-1252?q?=5Ball=5D_=5Btc=5D_=5BPTL=5D_Cas?= =?windows-1252?q?cading_vs=2E_Cells_=96_summit_recap_and_move_forward?= In-Reply-To: <548B128C.40504@rackspace.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> <548B00B9.3090609@redhat.com>,<548B128C.40504@rackspace.com> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FE941@szxema505-mbs.china.huawei.com> Hello, Andrew, > I do consider this to be out of scope for cells, for at least the medium > term as you've said. There is additional complexity in making that a > supported configuration that is not being addressed in the cells > effort. I am just making the statement that this is something cells > could address if desired, and therefore doesn't need an additional solution 1. Does your solution include Cinder,Neutron, Glance, Ceilometer, or only Nova involved? Could you describe it more clear how your solution works. 2. The tenant's resources need to be distributed in different data centers, how these resources are connected through L2/L3 networing, and isolated from other tenants, including provide advanced service like LB/FW/VPN, and service chainning? 3. 
How to distribute the image to geo-distributed data-centers when the user uploads an image, or do you mean all VMs will boot from the remote image data? 4. How will the metering and monitoring functions work in geo-distributed data-centers? Or say, if we use Ceilometer, how to handle the sampling data/alarm? 5. How to support multi-vendor's OpenStack distribution in one multi-site cloud? If we only support one vendor's OpenStack distribution and use RPC for inter-dc communication, how to do the cross data center integration, troubleshooting, and upgrade with RPC if the driver/agent/backend (storage/network/server) comes from a different vendor? I have lots of doubts how the cells would address these challenges; these questions are only part of them. It would be best if cells can address all challenges, then of course there is no need for an additional solution. Best regards Chaoyi Huang ( joehuang ) ____________________________ From: Andrew Laski [andrew.laski at rackspace.com] Sent: 13 December 2014 0:06 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward On 12/12/2014 09:50 AM, Russell Bryant wrote: > On 12/11/2014 12:55 PM, Andrew Laski wrote: >> Cells can handle a single API on top of globally distributed DCs. I >> have spoken with a group that is doing exactly that. But it requires >> that the API is a trusted part of the OpenStack deployments in those >> distributed DCs. > And the way the rest of the components fit into that scenario is far > from clear to me. Do you consider this more of a "if you can make it > work, good for you", or something we should aim to be more generally > supported over time? Personally, I see the globally distributed > OpenStack under a single API case much more complex, and worth > considering out of scope for the short to medium term, at least. I do consider this to be out of scope for cells, for at least the medium term as you've said.
There is additional complexity in making that a supported configuration that is not being addressed in the cells effort. I am just making the statement that this is something cells could address if desired, and therefore doesn't need an additional solution. > For me, this discussion boils down to ... > > 1) Do we consider these use cases in scope at all? > > 2) If we consider it in scope, is it enough of a priority to warrant a > cross-OpenStack push in the near term to work on it? > > 3) If yes to #2, how would we do it? Cascading, or something built > around cells? > > I haven't worried about #3 much, because I consider #2 or maybe even #1 > to be a show stopper here. > _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From joehuang at huawei.com Sat Dec 13 03:29:35 2014 From: joehuang at huawei.com (joehuang) Date: Sat, 13 Dec 2014 03:29:35 +0000 Subject: [openstack-dev] =?windows-1252?q?=5Ball=5D_=5Btc=5D_=5BPTL=5D_Cas?= =?windows-1252?q?cading_vs=2E_Cells_=96_summit_recap_and_move_forward?= In-Reply-To: <548B00B9.3090609@redhat.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com>, <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com>,<548B00B9.3090609@redhat.com> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FE953@szxema505-mbs.china.huawei.com> Hello, Russell, > Personally, I see the 
globally distributed > OpenStack under a single API case much more complex, and worth > considering out of scope for the short to medium term, at least. Thanks for your thoughts. Do you mean it could be set in the roadmap, but not in scope for the short or medium term (for example, Kilo and L release)? Or, if we need more discussion to include it in the roadmap, then I would like to know how to do that. Best Regards Chaoyi Huang ( joehuang ) ________________________________________ From: Russell Bryant [rbryant at redhat.com] Sent: 12 December 2014 22:50 To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward On 12/11/2014 12:55 PM, Andrew Laski wrote: > Cells can handle a single API on top of globally distributed DCs. I > have spoken with a group that is doing exactly that. But it requires > that the API is a trusted part of the OpenStack deployments in those > distributed DCs. And the way the rest of the components fit into that scenario is far from clear to me. Do you consider this more of a "if you can make it work, good for you", or something we should aim to be more generally supported over time? Personally, I see the globally distributed OpenStack under a single API case much more complex, and worth considering out of scope for the short to medium term, at least. For me, this discussion boils down to ... 1) Do we consider these use cases in scope at all? 2) If we consider it in scope, is it enough of a priority to warrant a cross-OpenStack push in the near term to work on it? 3) If yes to #2, how would we do it? Cascading, or something built around cells? I haven't worried about #3 much, because I consider #2 or maybe even #1 to be a show stopper here.
-- Russell Bryant _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ryu at midokura.com Sat Dec 13 06:18:15 2014 From: ryu at midokura.com (Ryu Ishimoto) Date: Sat, 13 Dec 2014 15:18:15 +0900 Subject: [openstack-dev] [neutron] Vendor Plugin Decomposition and NeutronClient vendor extension Message-ID: Hi All, It's great to see the vendor plugin decomposition spec[1] finally getting merged! Now that the spec is completed, I have a question on how this may impact neutronclient, and in particular, its handling of vendor extensions. One of the great things about splitting out the plugins is that it will allow vendors to implement vendor extensions more rapidly. Looking at the neutronclient code, however, it seems that these vendor extension commands are embedded inside the project, and doesn't seem easily extensible. It feels natural that, now that neutron vendor code is split out, neutronclient should also do the same. Of course, you could always fork neutronclient yourself, but I'm wondering if there is any plan on improving this. Admittedly, I don't have a great solution myself but I'm thinking something along the line of allowing neutronclient to load commands from an external directory. I am not familiar enough with neutronclient to know if there are technical limitation to what I'm suggesting, but I would love to hear thoughts of others on this. Thanks in advance! Best, Ryu [1] https://review.openstack.org/#/c/134680/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From armamig at gmail.com Sat Dec 13 06:32:16 2014 From: armamig at gmail.com (Armando M.) 
Date: Fri, 12 Dec 2014 22:32:16 -0800 Subject: [openstack-dev] [neutron] Vendor Plugin Decomposition and NeutronClient vendor extension In-Reply-To: References: Message-ID: On 12 December 2014 at 22:18, Ryu Ishimoto wrote: > > > Hi All, > > It's great to see the vendor plugin decomposition spec[1] finally getting > merged! Now that the spec is completed, I have a question on how this may > impact neutronclient, and in particular, its handling of vendor extensions. > Thanks for the excitement :) > > One of the great things about splitting out the plugins is that it will > allow vendors to implement vendor extensions more rapidly. Looking at the > neutronclient code, however, it seems that these vendor extension commands > are embedded inside the project, and doesn't seem easily extensible. It > feels natural that, now that neutron vendor code is split out, > neutronclient should also do the same. > > Of course, you could always fork neutronclient yourself, but I'm wondering > if there is any plan on improving this. Admittedly, I don't have a great > solution myself but I'm thinking something along the line of allowing > neutronclient to load commands from an external directory. I am not > familiar enough with neutronclient to know if there are technical > limitation to what I'm suggesting, but I would love to hear thoughts of > others on this. > There is quite a bit of road ahead of us. We haven't thought or yet considered how to handle extensions client side. Server side, the extension mechanism is already quite flexible, but we gotta learn to walk before we can run! Having said that your points are well taken, but most likely we won't be making much progress on these until we have provided and guaranteed a smooth transition for all plugins and drivers as suggested by the spec referenced below. Stay tuned! Cheers, Armando > > Thanks in advance! 
> > Best, > Ryu > > [1] https://review.openstack.org/#/c/134680/ > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yshovkoplias at mirantis.com Sat Dec 13 07:01:31 2014 From: yshovkoplias at mirantis.com (Yuriy Shovkoplias) Date: Fri, 12 Dec 2014 23:01:31 -0800 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: <54897709.5090305@redhat.com> Message-ID: Dear neutron community, Can you please clarify a couple of points on the vendor code decomposition? - Assuming I would like to create the new driver now (Kilo development cycle) - is it already allowed (or mandatory) to follow the new process? https://review.openstack.org/#/c/134680/ - Assuming the new process is already in place, are the following guidelines still applicable for the vendor integration code (not for the vendor library)? https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers The following is a list of requirements for inclusion of code upstream: - Participation in Neutron meetings, IRC channels, and email lists. - A member of the plugin/driver team participating in code reviews of other upstream code. Regards, Yuri On Thu, Dec 11, 2014 at 3:23 AM, Gary Kotton wrote: > > > On 12/11/14, 12:50 PM, "Ihar Hrachyshka" wrote: > > >-----BEGIN PGP SIGNED MESSAGE----- > >Hash: SHA512 > > > >+100. I vote -1 there and would like to point out that we *must* keep > >history during the split, and split from u/s code base, not random > >repositories. If you don't know how to achieve this, ask oslo people, > >they did it plenty of times when graduating libraries from oslo-incubator. > >/Ihar > > > >On 10/12/14 19:18, Cedric OLLIVIER wrote: > >> > >> > >> 2014-12-09 18:32 GMT+01:00 Armando M.
>> >: > >> > >> By the way, if Kyle can do it in his teeny tiny time that he has > >> left after his PTL duties, then anyone can do it! :) > >> > >> https://review.openstack.org/#/c/140191/ > > This patch loses the recent hacking changes that we have made. This is a > small example to try and highlight the problem that we may incur as a > community. > > >> > >> Fully cloning Dave Tucker's repository [1] and the outdated fork of > >> the ODL ML2 MechanismDriver included raises some questions (e.g. > >> [2]). I wish the next patch set removes some files. At least it > >> should take the mainstream work into account (e.g. [3]) . > >> > >> > >> _______________________________________________ OpenStack-dev > >> mailing list OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >-----BEGIN PGP SIGNATURE----- > >Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > > > >iQEcBAEBCgAGBQJUiXcIAAoJEC5aWaUY1u57dBMH/17unffokpb0uxqewPYrPNMI > >ukDzG4dW8mIP3yfbVNsHQXe6gWj/kj/SkBWJrO13BusTu8hrr+DmOmmfF/42s3vY > >E+6EppQDoUjR+QINBwE46nU+E1w9hIHyAZYbSBtaZQ32c8aQbmHmF+rgoeEQq349 > >PfpPLRI6MamFWRQMXSgF11VBTg8vbz21PXnN3KbHbUgzI/RS2SELv4SWmPgKZCEl > >l1K5J1/Vnz2roJn4pr/cfc7vnUIeAB5a9AuBHC6o+6Je2RDy79n+oBodC27kmmIx > >lVGdypoxZ9tF3yfRM9nngjkOtozNzZzaceH0Sc/5JR4uvNReVN4exzkX5fDH+SM= > >=dfe/ > >-----END PGP SIGNATURE----- > > > >_______________________________________________ > >OpenStack-dev mailing list > >OpenStack-dev at lists.openstack.org > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From morgan.fainberg at gmail.com Sat Dec 13 07:12:55 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Fri, 12 Dec 2014 23:12:55 -0800 Subject: [openstack-dev] =?utf-8?b?W2FsbF0gW3RjXSBbUFRMXSBDYXNjYWRpbmcg?= =?utf-8?q?vs=2E_Cells_=E2=80=93_summit_recap_and_move_forward?= In-Reply-To: References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> <548B00B9.3090609@redhat.com> Message-ID: > On Dec 12, 2014, at 10:30, Joe Gordon wrote: > > > >> On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant wrote: >> On 12/11/2014 12:55 PM, Andrew Laski wrote: >> > Cells can handle a single API on top of globally distributed DCs. I >> > have spoken with a group that is doing exactly that. But it requires >> > that the API is a trusted part of the OpenStack deployments in those >> > distributed DCs. >> >> And the way the rest of the components fit into that scenario is far >> from clear to me. Do you consider this more of a "if you can make it >> work, good for you", or something we should aim to be more generally >> supported over time? Personally, I see the globally distributed >> OpenStack under a single API case much more complex, and worth >> considering out of scope for the short to medium term, at least. >> >> For me, this discussion boils down to ... >> >> 1) Do we consider these use cases in scope at all? 
>> >> 2) If we consider it in scope, is it enough of a priority to warrant a >> cross-OpenStack push in the near term to work on it? >> >> 3) If yes to #2, how would we do it? Cascading, or something built >> around cells? >> >> I haven't worried about #3 much, because I consider #2 or maybe even #1 >> to be a show stopper here. > > Agreed I agree with Russell as well. I also am curious about how identity will work in these cases. As it stands, identity provides authoritative information only for the deployment it runs in. There is a lot of concern I have from a security standpoint when I start needing to address what the central API can do on the other providers. We have had this discussion a number of times in Keystone, specifically when designing the keystone-to-keystone identity federation, and we came to the conclusion that we needed to ensure that the keystone local to a given cloud is the only source of authoritative authz information. While it may, in some cases, accept authn from a source that is trusted, it still controls the local set of roles and grants. Second, we only guarantee that a tenant_id / project_id is unique within a single deployment of keystone (e.g. shared/replicated backends such as a percona cluster, which cannot be shared when crossing between differing IaaS deployers/providers). If there is ever a tenant_id conflict (in theory possible with LDAP assignment or an unlucky random uuid generation) between installations, you end up with potentially granting access that should not exist to a given user. With that in mind, how does Keystone fit into this conversation? What is expected of identity? What would keystone need to actually support to make this a reality? I ask because I've only seen information on nova, glance, cinder, and ceilometer in the documentation. Based upon the above information I outlined, I would be concerned with an assumption that identity would "just work" without also being part of this conversation.
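As a concrete aside on the collision concern above (purely illustrative; global_project_id is a hypothetical helper, not anything Keystone implements): if a project ID is only guaranteed unique within one deployment, a deterministic way to avoid cross-deployment collisions is to namespace the local ID by the deployment that issued it, e.g. with UUIDv5:

```python
import uuid

def global_project_id(deployment: str, local_project_id: str) -> str:
    """Hypothetical helper: derive a deployment-namespaced global ID.

    uuid5 is deterministic, so the same (deployment, local id) pair
    always maps to the same global ID, while identical local IDs from
    two different deployments can no longer collide.
    """
    name = "openstack://%s/projects/%s" % (deployment, local_project_id)
    return str(uuid.uuid5(uuid.NAMESPACE_URL, name))

local_id = "62fe9a8a2d58407d8aee860095f11550"
a = global_project_id("cloud-xxx", local_id)
b = global_project_id("cloud-yyy", local_id)
assert a != b                                          # no cross-cloud collision
assert a == global_project_id("cloud-xxx", local_id)   # stable mapping
```

This only addresses the naming layer, of course; it says nothing about the harder question raised here of which deployment's Keystone vouches for the ID.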
Thanks, Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From armamig at gmail.com Sat Dec 13 07:22:08 2014 From: armamig at gmail.com (Armando M.) Date: Fri, 12 Dec 2014 23:22:08 -0800 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: <54897709.5090305@redhat.com> Message-ID: On 12 December 2014 at 23:01, Yuriy Shovkoplias wrote: > > Dear neutron community, > > Can you please clarify a couple of points on the vendor code decomposition? > - Assuming I would like to create the new driver now (Kilo development > cycle) - is it already allowed (or mandatory) to follow the new process? > > https://review.openstack.org/#/c/134680/ > > Yes. See [1] for more details. > - Assuming the new process is already in place, are the following > guidelines still applicable for the vendor integration code (not for the vendor > library)? > > https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers > The following is a list of requirements for inclusion of code upstream: > > - Participation in Neutron meetings, IRC channels, and email lists. > - A member of the plugin/driver team participating in code reviews of > other upstream code. > > I see no reason why you wouldn't follow those guidelines, as a general rule of thumb. Having said that, some of the wording would need to be tweaked to take into account the new contribution model. Bear in mind that I started adding some developer documentation in [2], to give a practical guide to the proposal. More to follow.
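As an aside on what out-of-tree extensibility could look like on the client side (a sketch under assumed names — the registry and commands below are stand-ins, not neutronclient's actual mechanism): commands can be resolved from "module:attribute" strings, the same shape setuptools entry points use, so vendor packages could register commands without patching the client itself:

```python
import importlib

# Illustrative registry: a real client would discover this mapping from
# setuptools entry points or a drop-in config directory, not hard-code it.
VENDOR_COMMANDS = {
    "json-dump": "json:dumps",      # stdlib stand-ins for vendor commands
    "path-join": "posixpath:join",
}

def load_command(name: str):
    """Resolve a 'module:attribute' spec to the callable it names."""
    module_name, attr = VENDOR_COMMANDS[name].split(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

dump = load_command("json-dump")
assert dump({"a": 1}) == '{"a": 1}'
```

Tools like stevedore wrap exactly this pattern, which keeps vendor command packages installable independently of the client.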
Cheers, Armando [1] http://docs-draft.openstack.org/80/134680/17/check/gate-neutron-specs-docs/2a7afdd/doc/build/html/specs/kilo/core-vendor-decomposition.html#adoption-and-deprecation-policy [2] https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/core-vendor-decomposition,n,z > Regards, > Yuri > > On Thu, Dec 11, 2014 at 3:23 AM, Gary Kotton wrote: >> >> >> On 12/11/14, 12:50 PM, "Ihar Hrachyshka" wrote: >> >> >-----BEGIN PGP SIGNED MESSAGE----- >> >Hash: SHA512 >> > >> >+100. I vote -1 there and would like to point out that we *must* keep >> >history during the split, and split from u/s code base, not random >> >repositories. If you don't know how to achieve this, ask oslo people, >> >they did it plenty of times when graduating libraries from >> oslo-incubator. >> >/Ihar >> > >> >On 10/12/14 19:18, Cedric OLLIVIER wrote: >> >> >> >> >> >> 2014-12-09 18:32 GMT+01:00 Armando M. > >> >: >> >> >> >> >> >> By the way, if Kyle can do it in his teeny tiny time that he has >> >> left after his PTL duties, then anyone can do it! :) >> >> >> >> https://review.openstack.org/#/c/140191/ >> >> This patch loses the recent hacking changes that we have made. This is a >> small example to try and highlight the problem that we may incur as a >> community. >> >> >> >> >> Fully cloning Dave Tucker's repository [1] and the outdated fork of >> >> the ODL ML2 MechanismDriver included raises some questions (e.g. >> >> [2]). I wish the next patch set removes some files. At least it >> >> should take the mainstream work into account (e.g. [3]) .
>> >> >> >> [1] https://github.com/dave-tucker/odl-neutron-drivers [2] >> >> https://review.openstack.org/#/c/113330/ [3] >> >> https://review.openstack.org/#/c/96459/ >> >> >> >> >> >> _______________________________________________ OpenStack-dev >> >> mailing list OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >-----BEGIN PGP SIGNATURE----- >> >Version: GnuPG/MacGPG2 v2.0.22 (Darwin) >> > >> >iQEcBAEBCgAGBQJUiXcIAAoJEC5aWaUY1u57dBMH/17unffokpb0uxqewPYrPNMI >> >ukDzG4dW8mIP3yfbVNsHQXe6gWj/kj/SkBWJrO13BusTu8hrr+DmOmmfF/42s3vY >> >E+6EppQDoUjR+QINBwE46nU+E1w9hIHyAZYbSBtaZQ32c8aQbmHmF+rgoeEQq349 >> >PfpPLRI6MamFWRQMXSgF11VBTg8vbz21PXnN3KbHbUgzI/RS2SELv4SWmPgKZCEl >> >l1K5J1/Vnz2roJn4pr/cfc7vnUIeAB5a9AuBHC6o+6Je2RDy79n+oBodC27kmmIx >> >lVGdypoxZ9tF3yfRM9nngjkOtozNzZzaceH0Sc/5JR4uvNReVN4exzkX5fDH+SM= >> >=dfe/ >> >-----END PGP SIGNATURE----- >> > >> >_______________________________________________ >> >OpenStack-dev mailing list >> >OpenStack-dev at lists.openstack.org >> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From keyvan.m.sadeghi at gmail.com Sat Dec 13 08:04:43 2014 From: keyvan.m.sadeghi at gmail.com (Keyvan Mir Mohammad Sadeghi) Date: Sat, 13 Dec 2014 08:04:43 +0000 Subject: [openstack-dev] [Solum] Contributing to Solum References: Message-ID: Hi Adrian, Roshan, etc, We tried to reach you via the IRC channel Tuesdays night 2100 UTC, but guess that meeting got cancelled? 
We'd like to set a start for our contribution ASAP, so going through the list of bugs, the below one looked interesting: https://bugs.launchpad.net/solum/+bug/1308690 Though I'm not sure this is the right one. Would you approve? If positive, are there any sources on the topic other than the bug page itself? And about the conventions you mentioned in the Skype call; what is the starting point, do we just submit a pull request referencing the bug number? Kind regards, Keyvan -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp at jamezpolley.com Sat Dec 13 09:29:43 2014 From: jp at jamezpolley.com (James Polley) Date: Sat, 13 Dec 2014 10:29:43 +0100 Subject: [openstack-dev] [TripleO] CI report: 2014-12-5 - 2014-12-11 Message-ID: Resending with correct subject tag. Never send email before coffee. On Fri, Dec 12, 2014 at 9:33 AM, James Polley wrote: > In the week since the last email we've had no major CI failures. This > makes it very easy for me to write my first CI report. > > There was a brief period where all the Ubuntu tests failed while an update > was rolling out to various mirrors. DerekH worked around this quickly by > dropping in a DNS hack, which remains in place. A long term fix for this > problem probably involves setting up our own apt mirrors. > > check-tripleo-ironic-overcloud-precise-ha remains flaky, and hence > non-voting. > > As always more details can be found here (although this week there's > nothing to see) > https://etherpad.openstack.org/p/tripleo-ci-breakages > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dpkshetty at gmail.com Sat Dec 13 10:46:23 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Sat, 13 Dec 2014 16:16:23 +0530 Subject: [openstack-dev] [nova][cinder][infra] Ceph CI status update In-Reply-To: <548B49E2.1040005@anteaya.info> References: <20141211163643.GA10911@helmut> <5489CE60.2040102@anteaya.info> <548B49E2.1040005@anteaya.info> Message-ID: I think you completely misunderstood my question. I am completely in agreement with _not_ putting CI status in the mailing list. Let me rephrase: As of now, I see 2 places where CI status is being tracked: https://wiki.openstack.org/wiki/ThirdPartySystems (clicking on the link tells you the status) and https://wiki.openstack.org/wiki/Cinder/third-party-ci-status (one of the columns is a status column) How are the 2 different? Do we need to update both? thanx, deepak On Sat, Dec 13, 2014 at 1:32 AM, Anita Kuno wrote: > > On 12/12/2014 03:28 AM, Deepak Shetty wrote: > > On Thu, Dec 11, 2014 at 10:33 PM, Anita Kuno > wrote: > > > >> On 12/11/2014 09:36 AM, Jon Bernard wrote: > >>> Heya, quick Ceph CI status update. Once the test_volume_boot_pattern > >>> was marked as skipped, only the revert_resize test was failing. I have > >>> submitted a patch to nova for this [1], and that yields an all green > >>> ceph ci run [2]. So at the moment, and with my revert patch, we're in > >>> good shape. > >>> > >>> I will fix up that patch today so that it can be properly reviewed and > >>> hopefully merged. From there I'll submit a patch to infra to move the > >>> job to the check queue as non-voting, and we can go from there.
> >>> [1] https://review.openstack.org/#/c/139693/ > >>> [2] > >> http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html > >>> > >>> Cheers, > >>> > >> Please add the name of your CI account to this table: > >> https://wiki.openstack.org/wiki/ThirdPartySystems > >> > >> As outlined in the third party CI requirements: > >> http://ci.openstack.org/third_party.html#requirements > >> > >> Please post system status updates to your individual CI wikipage that is > >> linked to this table. > >> > > > > How is posting status there different than here : > > https://wiki.openstack.org/wiki/Cinder/third-party-ci-status > > > > thanx, > > deepak > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > There are over 100 CI accounts now and growing. > > Searching the email archives to evaluate the status of a CI is not > something that infra will do, we will look on that wikipage or we will > check the third-party-announce email list (which all third party CI > systems should be subscribed to, as outlined in the third_party.html > page linked above). > > If we do not find information where we have asked you to put it and where > we expect it, we may disable your system until you have fulfilled the > requirements as outlined in the third_party.html page linked above. > > Sprinkling status updates amongst the emails posted to -dev and > expecting the infra team and other -devs to find them when needed is > unsustainable and has been for some time, which is why we came up with > the wikipage to aggregate them. > > Please direct all further questions about this matter to one of the two > third-party meetings as linked above. > > Thank you, > Anita.
> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From henry4hly at gmail.com Sat Dec 13 11:23:26 2014 From: henry4hly at gmail.com (Henry) Date: Sat, 13 Dec 2014 19:23:26 +0800 Subject: [openstack-dev] =?utf-8?b?W2FsbF0gW3RjXSBbUFRMXSBDYXNjYWRpbmcg?= =?utf-8?q?vs=2E_Cells_=E2=80=93_summit_recap_and_move_forward?= In-Reply-To: References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> <548B00B9.3090609@redhat.com> Message-ID: <1994EE26-22EF-4CBD-A7F1-58E8F83C27EF@gmail.com> Hi Morgan, A good question about keystone. In fact, keystone is naturally suitable for multi-region deployment. It has only a REST service interface, and PKI-based tokens greatly reduce the central service workload. So, unlike other OpenStack services, it would not be set to cascade mode. Best regards Henry Sent from my iPad On 2014-12-13, at 3:12 PM, Morgan Fainberg wrote: > > > On Dec 12, 2014, at 10:30, Joe Gordon wrote: > >> >> >> On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant wrote: >> On 12/11/2014 12:55 PM, Andrew Laski wrote: >> > Cells can handle a single API on top of globally distributed DCs. I >> > have spoken with a group that is doing exactly that.
But it requires >> > that the API is a trusted part of the OpenStack deployments in those >> > distributed DCs. >> >> And the way the rest of the components fit into that scenario is far >> from clear to me. Do you consider this more of a "if you can make it >> work, good for you", or something we should aim to be more generally >> supported over time? Personally, I see the globally distributed >> OpenStack under a single API case much more complex, and worth >> considering out of scope for the short to medium term, at least. >> >> For me, this discussion boils down to ... >> >> 1) Do we consider these use cases in scope at all? >> >> 2) If we consider it in scope, is it enough of a priority to warrant a >> cross-OpenStack push in the near term to work on it? >> >> 3) If yes to #2, how would we do it? Cascading, or something built >> around cells? >> >> I haven't worried about #3 much, because I consider #2 or maybe even #1 >> to be a show stopper here. >> >> Agreed > > I agree with Russell as well. I also am curious about how identity will work in these cases. As it stands, identity provides authoritative information only for the deployment it runs in. There is a lot of concern I have from a security standpoint when I start needing to address what the central API can do on the other providers. We have had this discussion a number of times in Keystone, specifically when designing the keystone-to-keystone identity federation, and we came to the conclusion that we needed to ensure that the keystone local to a given cloud is the only source of authoritative authz information. While it may, in some cases, accept authn from a source that is trusted, it still controls the local set of roles and grants. > > Second, we only guarantee that a tenant_id / project_id is unique within a single deployment of keystone (e.g. shared/replicated backends such as a percona cluster, which cannot be shared when crossing between differing IaaS deployers/providers).
If there is ever a tenant_id conflict (in theory possible with ldap assignment or an unlucky random uuid generation) between installations, you end up with potentially granting access that should not exist to a given user. > > With that in mind, how does Keystone fit into this conversation? What is expected of identity? What would keystone need to actually support to make this a reality? > > I ask because I've only seen information on nova, glance, cinder, and ceilometer in the documentation. Based upon the above information I outlined, I would be concerned with an assumption that identity would "just work" without also being part of this conversation. > > Thanks, > Morgan > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Sat Dec 13 11:42:53 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Sat, 13 Dec 2014 03:42:53 -0800 Subject: [openstack-dev] [all] [tc] [PTL] Cascading vs. 
Cells – summit recap and move forward In-Reply-To: <1994EE26-22EF-4CBD-A7F1-58E8F83C27EF@gmail.com> References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> <548B00B9.3090609@redhat.com> <1994EE26-22EF-4CBD-A7F1-58E8F83C27EF@gmail.com> Message-ID: On December 13, 2014 at 3:26:34 AM, Henry (henry4hly at gmail.com) wrote: Hi Morgan, A good question about keystone. In fact, keystone is naturally suitable for multi-region deployment. It has only a REST service interface, and PKI-based tokens greatly reduce the central service workload. So, unlike other OpenStack services, it would not be set to cascade mode. I agree that Keystone is suitable for multi-region in some cases, but I am still concerned from a security standpoint. The cascade examples all assert a *global* tenant_id / project_id in a lot of comments/documentation. The answer you gave me doesn't quite address this issue nor the issue of a disparate deployment having a wildly different role-set or security profile. A PKI token is not (as of today) possible to use with a Keystone (or OpenStack deployment) that it didn't come from. This is because Keystone needs to control the AuthZ for its local deployment (same design as the keystone-to-keystone federation).
So I have two direct questions: * Is there something specific you expect to happen with the cascading that makes resolving a project_id to something globally unique, or am I mis-reading this as part of the design? * Does the cascade centralization just ask for Keystone tokens for each of the deployments, or is there something else being done? Essentially, how does one work with a Nova from cloud XXX and cloud YYY from an authorization standpoint? You don't need to answer these right away, but they are clarification points that need to be thought about as this design moves forward. There are a number of security / authorization questions I can expand on, but the above two are the really big ones to start with. As you scale up (or utilize deployments owned by different providers) it isn't always possible to replicate the Keystone data around. Cheers, Morgan Best regards Henry Sent from my iPad On 2014-12-13, at 3:12 PM, Morgan Fainberg wrote: On Dec 12, 2014, at 10:30, Joe Gordon wrote: On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant wrote: On 12/11/2014 12:55 PM, Andrew Laski wrote: > Cells can handle a single API on top of globally distributed DCs. I > have spoken with a group that is doing exactly that. But it requires > that the API is a trusted part of the OpenStack deployments in those > distributed DCs. And the way the rest of the components fit into that scenario is far from clear to me. Do you consider this more of a "if you can make it work, good for you", or something we should aim to be more generally supported over time? Personally, I see the globally distributed OpenStack under a single API case much more complex, and worth considering out of scope for the short to medium term, at least. For me, this discussion boils down to ... 1) Do we consider these use cases in scope at all? 2) If we consider it in scope, is it enough of a priority to warrant a cross-OpenStack push in the near term to work on it? 3) If yes to #2, how would we do it?
Cascading, or something built around cells? I haven't worried about #3 much, because I consider #2 or maybe even #1 to be a show stopper here. Agreed I agree with Russell as well. I also am curious about how identity will work in these cases. As it stands, identity provides authoritative information only for the deployment it runs in. There is a lot of concern I have from a security standpoint when I start needing to address what the central API can do on the other providers. We have had this discussion a number of times in Keystone, specifically when designing the keystone-to-keystone identity federation, and we came to the conclusion that we needed to ensure that the keystone local to a given cloud is the only source of authoritative authz information. While it may, in some cases, accept authn from a source that is trusted, it still controls the local set of roles and grants. Second, we only guarantee that a tenant_id / project_id is unique within a single deployment of keystone (e.g. shared/replicated backends such as a percona cluster, which cannot be shared when crossing between differing IaaS deployers/providers). If there is ever a tenant_id conflict (in theory possible with LDAP assignment or an unlucky random uuid generation) between installations, you end up with potentially granting access that should not exist to a given user.
_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From dannchoi at cisco.com Sat Dec 13 16:43:10 2014 From: dannchoi at cisco.com (Danny Choi (dannchoi)) Date: Sat, 13 Dec 2014 16:43:10 +0000 Subject: [openstack-dev] [qa] Question about "nova boot --min-count " Message-ID: Hi, According to the help text, ??min-count ? boot at least servers (limited by quota): --min-count Boot at least servers (limited by quota). I used devstack to deploy OpenStack (version Kilo) in a multi-node setup: 1 Controller/Network + 2 Compute nodes I update the tenant demo default quota ?instances" and ?cores" from ?10? and ?20? to ?100? and ?200?: localadmin at qa4:~/devstack$ nova quota-show --tenant 62fe9a8a2d58407d8aee860095f11550 --user eacb7822ccf545eab9398b332829b476 +-----------------------------+-------+ | Quota | Limit | +-----------------------------+-------+ | instances | 100 | <<<<< | cores | 200 | <<<<< | ram | 51200 | | floating_ips | 10 | | fixed_ips | -1 | | metadata_items | 128 | | injected_files | 5 | | injected_file_content_bytes | 10240 | | injected_file_path_bytes | 255 | | key_pairs | 100 | | security_groups | 10 | | security_group_rules | 20 | | server_groups | 10 | | server_group_members | 10 | +-----------------------------+-------+ When I boot 50 VMs using ??min-count 50?, only 48 VMs come up. localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5b464333-bad0-4fc1-a2f0-310c47b77a17 --min-count 50 vm- There is no error in logs; and it happens consistently. I also tried ??min-count 60? and only 48 VMs com up. In Horizon, left pane ?Admin? 
-> "System" -> "Hypervisors", it shows both Compute hosts, each with 32 total VCPUs for a grand total of 64, but only 48 used. Is this normal behavior or is there any other setting to change in order to use all 64 VCPUs? Thanks, Danny -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian.otto at rackspace.com Sat Dec 13 17:03:56 2014 From: adrian.otto at rackspace.com (Adrian Otto) Date: Sat, 13 Dec 2014 17:03:56 +0000 Subject: [openstack-dev] [Solum] Contributing to Solum In-Reply-To: References: Message-ID: <564B26C5-06FC-4C51-B9EC-E8B834D418C0@rackspace.com> Keyvan, The meeting did happen at 2100 UTC. Are you sure you went to #openstack-meeting-alt on Freenode? http://lists.openstack.org/pipermail/openstack-dev/2014-December/052009.html I think bug 1308690 would be a great contribution. Please take ownership of that bug ticket, and begin! Thanks, Adrian On Dec 13, 2014, at 12:04 AM, Keyvan Mir Mohammad Sadeghi > wrote: Hi Adrian, Roshan, etc, We tried to reach you via the IRC channel Tuesdays night 2100 UTC, but guess that meeting got cancelled? We'd like to set a start for our contribution ASAP, so going through the list of bugs, the below one looked interesting: https://bugs.launchpad.net/solum/+bug/1308690 Though I'm not sure this is the right one. Would you approve? If positive, are there any sources on the topic other than the bug page itself? And about the conventions you mentioned in the Skype call; what is the starting point, do we just submit a pull request referencing the bug number? Kind regards, Keyvan -------------- next part -------------- An HTML attachment was scrubbed... URL: From armamig at gmail.com Sat Dec 13 18:23:32 2014 From: armamig at gmail.com (Armando M.) Date: Sat, 13 Dec 2014 10:23:32 -0800 Subject: [openstack-dev] [neutron] Services are now split out and neutron is open for commits! In-Reply-To: References: Message-ID: This was more of a brute force fix!
I didn't have time to go with finesse, and instead I went in with the hammer :) That said, we want to make sure that the upgrade path to Kilo is as painless as possible, so we'll need to review the Release Notes [1] to reflect the fact that we'll be providing a seamless migration to the new adv services structure. [1] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Upgrade_Notes_6 Cheers, Armando On 12 December 2014 at 09:33, Kyle Mestery wrote: > > This has merged now, FYI. > > On Fri, Dec 12, 2014 at 10:28 AM, Doug Wiegley > wrote: > >> Hi all, >> >> Neutron grenade jobs have been failing since late afternoon Thursday, >> due to split fallout. Armando has a fix, and it's working its way through >> the gate: >> >> https://review.openstack.org/#/c/141256/ >> >> Get your rechecks ready! >> >> Thanks, >> Doug >> >> >> From: Douglas Wiegley >> Date: Wednesday, December 10, 2014 at 10:29 PM >> To: "OpenStack Development Mailing List (not for usage questions)" < >> openstack-dev at lists.openstack.org> >> Subject: Re: [openstack-dev] [neutron] Services are now split out and >> neutron is open for commits! >> >> Hi all, >> >> I'd like to echo the thanks to all involved, and thanks for the >> patience during this period of transition. >> >> And a logistical note: if you have any outstanding reviews against the >> now missing files/directories (db/{loadbalancer,firewall,vpn}, services/, >> or tests/unit/services), you must re-submit your review against the new >> repos. Existing neutron reviews for service code will be summarily >> abandoned in the near future. >> >> Lbaas folks, hold off on re-submitting feature/lbaasv2 reviews. I'll >> have that branch merged in the morning, and ping in channel when it's ready >> for submissions.
>> >> Finally, if any tempest lovers want to take a crack at splitting the >> tempest runs into four, perhaps using Salv's reviews of splitting them in >> two as a guide, and then creating jenkins jobs, we need some help getting >> those going. Please ping me directly (IRC: dougwig). >> >> Thanks, >> doug >> >> >> From: Kyle Mestery >> Reply-To: "OpenStack Development Mailing List (not for usage questions)" >> >> Date: Wednesday, December 10, 2014 at 4:10 PM >> To: "OpenStack Development Mailing List (not for usage questions)" < >> openstack-dev at lists.openstack.org> >> Subject: [openstack-dev] [neutron] Services are now split out and >> neutron is open for commits! >> >> Folks, just a heads up that we have completed splitting out the >> services (FWaaS, LBaaS, and VPNaaS) into separate repositories. [1] [2] [3]. >> This was all done in accordance with the spec approved here [4]. Thanks to >> all involved, but a special thanks to Doug and Anita, as well as infra. >> Without all of their work and help, this wouldn't have been possible! >> >> Neutron and the services repositories are now open for merges again. >> We're going to be landing some major L3 agent refactoring across the 4 >> repositories in the next four days, look for Carl to be leading that work >> with the L3 team. >> >> In the meantime, please report any issues you have in launchpad [5] as >> bugs, and find people in #openstack-neutron or send an email. We've >> verified things come up and all the tempest and API tests for basic neutron >> work fine. >> >> In the coming week, we'll be getting all the tests working for the >> services repositories. Medium term, we need to also move all the advanced >> services tempest tests out of tempest and into the respective repositories. >> We also need to beef these tests up considerably, so if you want to help >> out on a critical project for Neutron, please let me know. >> >> Thanks!
>> Kyle >> >> [1] http://git.openstack.org/cgit/openstack/neutron-fwaas >> [2] http://git.openstack.org/cgit/openstack/neutron-lbaas >> [3] http://git.openstack.org/cgit/openstack/neutron-vpnaas >> [4] >> http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst >> [5] https://bugs.launchpad.net/neutron >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbayer at redhat.com Sat Dec 13 18:43:11 2014 From: mbayer at redhat.com (Mike Bayer) Date: Sat, 13 Dec 2014 13:43:11 -0500 Subject: [openstack-dev] [all] [oslo] [ha] potential issue with implicit async-compatible mysql drivers In-Reply-To: References: <783B3168-BD43-42CB-90EC-B55D10EAB5EA@redhat.com> <548AFB49.8080802@redhat.com> Message-ID: <6D7CEB8D-34B8-4359-983E-3B9C4516D12B@redhat.com> > On Dec 12, 2014, at 1:16 PM, Mike Bayer wrote: > > >> On Dec 12, 2014, at 9:27 AM, Ihar Hrachyshka wrote: >> >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA512 >> >> Reading the latest comments at >> https://github.com/PyMySQL/PyMySQL/issues/275, it seems to me that the >> issue is not to be solved in drivers themselves but instead in >> libraries that arrange connections (sqlalchemy/oslo.db), correct? >> >> Will the proposed connection reopening help? > > disagree, this is absolutely a driver bug. I've re-read that last comment and now I see that the developer is suggesting that this condition not be flagged in any way, so I've responded.
The connection should absolutely blow up and if it wants to refuse to be usable afterwards, that's fine (it's the same as MySQLdb "commands out of sync"). It just has to *not* emit any further SQL as though nothing is wrong. > > It doesn't matter much for PyMySQL anyway, I don't know that PyMySQL is up to par for openstack in any case (look at the entries in their changelog: https://github.com/PyMySQL/PyMySQL/blob/master/CHANGELOG "Several other bug fixes", "Many bug fixes" - really? is this an iPhone app?) > > We really should be looking to get this fixed in MySQL-connector, which seems to have a similar issue. It's just so difficult to get responses from MySQL-connector that the PyMySQL thread is at least informative. so I spent the rest of yesterday continuing to stare at that example case and also continued the thread on that list. Where I think it's at is that, while I think this is a huge issue in any one or all of: 1. a gevent-style "timeout" puts a monkeypatched socket in an entirely unknown state, 2. MySQL's protocol doesn't have any provision for matching an OK response to the request that it corresponds to, 3. the MySQL drivers we're dealing with don't have actual "async" APIs, which could then be easily tailored to work with eventlet/gevent safely (see https://github.com/zacharyvoase/gevent-psycopg2 https://bitbucket.org/dvarrazzo/psycogreen for the PG examples of these, problem solved), at the moment I'm not fully confident the drivers are going to feasibly be able to provide a complete fix here. MySQL sends a status message that is essentially, "OK", and there's not really any way to tell that this "OK" is actually from a different statement. What we need at the very basic level is that, if we call connection.rollback(), it either fails with an exception, or it succeeds.
Right now, the core of the test case is that we see connection.rollback() silently failing, which then causes the next statement (the INSERT) to also fail - then the connection rights itself and continues to be usable to complete the transaction. There might be some other variants of this. So in the interim I have added for SQLA 0.9.9, which I can also make available as part of oslo.db.sqlalchemy.compat if we'd like, a session.invalidate() method that will just call connection.invalidate() on the current bound connection(s); this is then caught within the block where we know that eventlet/gevent is in a "timeout" status. Within the oslo.db.sqlalchemy.enginefacade system, we can potentially add direct awareness of eventlet.Timeout (http://eventlet.net/doc/modules/timeout.html) as a distinct error condition within a transactional block, and invalidate the known connection(s) when this is caught. This would insulate us from this particular issue regardless of driver, with the key assumption that it is in fact only a "timeout" condition under which this issue actually occurs. From pcm at cisco.com Sat Dec 13 18:58:40 2014 From: pcm at cisco.com (Paul Michali (pcm)) Date: Sat, 13 Dec 2014 18:58:40 +0000 Subject: [openstack-dev] [neutron] OT: Utah Mid-cycle sprint photos Message-ID: <7C3D1552-8CC7-49A7-AB5D-9A42FAA464CD@cisco.com> I put a link in the Etherpad, to some photos I took from the mid-cycle sprint in Utah. Here's the direct link: http://media.michali.net/21/198/ That gallery doesn't correctly scale portrait oriented photos initially, but you can select a size and it'll resize it. I've got to get to that some day. PCM (Paul Michali) MAIL ..... pcm at cisco.com IRC ..... pc_m (irc.freenode.com) TW ..... @pmichali GPG Key ..... 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pcm at cisco.com Sat Dec 13 19:06:38 2014 From: pcm at cisco.com (Paul Michali (pcm)) Date: Sat, 13 Dec 2014 19:06:38 +0000 Subject: [openstack-dev] [neutron][vpnaas] Working on unit tests... Message-ID: For the new VPNaaS repo, I have created https://review.openstack.org/#/c/141532/ to move the tests from tests.skip and modify the imports. This has Brandon's change to setup policy.json, and Ihar's one-liner for moving get_admin_context() in one test (should we upstream his, and I rebase mine?). Please look it over, as I'm not sure if I put the override_nvalues() calls in the right places or not. It passes unit tests in Jenkins. The Tempest tests all fail. Is that expected? What's the plan for functional and tempest tests with these other repos? Thanks! PCM (Paul Michali) MAIL ..... pcm at cisco.com IRC ..... pc_m (irc.freenode.com) TW ..... @pmichali GPG Key ..... 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From brandon.logan at RACKSPACE.COM Sun Dec 14 05:31:57 2014 From: brandon.logan at RACKSPACE.COM (Brandon Logan) Date: Sun, 14 Dec 2014 05:31:57 +0000 Subject: [openstack-dev] [neutron][vpnaas] Working on unit tests... In-Reply-To: References: Message-ID: <1418535117.10675.1.camel@localhost> Paul, It looks like you put that method call in all the right places. You would know if you didn't, because the unit tests would fail because of the policy.json. Not sure on the tempest tests. I'm sure Doug and Kyle know more about that, so hopefully they can chime in.
Thanks, Brandon On Sat, 2014-12-13 at 19:06 +0000, Paul Michali (pcm) wrote: > For the new VPNaaS repo, I have > created https://review.openstack.org/#/c/141532/ to move the tests > from tests.skip and modify the imports. This has Brandon's change to > setup policy.json, and Ihar's one-liner for moving get_admin_context() > in one test (should we upstream his, and I rebase mine?). > > > Please look it over, as I'm not sure if I put the override_nvalues() > calls in the right places or not. > > > It passes unit tests in Jenkins. The Tempest tests all fail. Is that > expected? What's the plan for functional and tempest tests with these > other repos? > > > Thanks! > > > > > PCM (Paul Michali) > > > MAIL ..... pcm at cisco.com > IRC ..... pc_m (irc.freenode.com) > TW ..... @pmichali > GPG Key ..... 4525ECC253E31A83 > Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 > > > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lin.tan at intel.com Sun Dec 14 06:07:21 2014 From: lin.tan at intel.com (Tan, Lin) Date: Sun, 14 Dec 2014 06:07:21 +0000 Subject: [openstack-dev] [Ironic] deadline for specs Message-ID: Hi, A quick question, do we have a SpecProposalDeadline for Ironic, 18th Dec or ? Thanks Best Regards, Tan From cbkyeoh at gmail.com Sun Dec 14 07:00:52 2014 From: cbkyeoh at gmail.com (Christopher Yeoh) Date: Sun, 14 Dec 2014 07:00:52 +0000 Subject: [openstack-dev] [qa] How to delete a VM which is in ERROR state? References: <548B6844.4020804@gmail.com> Message-ID: If force delete doesn't work please do submit the bug report along with as much of the relevant nova logs as you can. Even better if it's easily repeatable with devstack. Chris On Sat, 13 Dec 2014 at 8:43 am, pcrews wrote: > On 12/09/2014 03:54 PM, Ken'ichi Ohmichi wrote: > > Hi, > > > > This case is always tested by Tempest on the gate.
> > > > https://github.com/openstack/tempest/blob/master/tempest/ > api/compute/servers/test_delete_server.py#L152 > > > > So I guess this problem wouldn't happen on the latest version at least. > > > > Thanks > > Ken'ichi Ohmichi > > > > --- > > > > 2014-12-10 6:32 GMT+09:00 Joe Gordon : > >> > >> > >> On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) < > dannchoi at cisco.com> > >> wrote: > >>> > >>> Hi, > >>> > >>> I have a VM which is in ERROR state. > >>> > >>> > >>> +--------------------------------------+-------------------- > --------------------------+--------+------------+----------- > --+--------------------+ > >>> > >>> | ID | Name > >>> | Status | Task State | Power State | Networks | > >>> > >>> > >>> +--------------------------------------+-------------------- > --------------------------+--------+------------+----------- > --+--------------------+ > >>> > >>> | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | > >>> cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR | - | > NOSTATE > >>> | | > >>> > >>> > >>> I tried in both CLI ?nova delete? and Horizon ?terminate instance?. > >>> Both accepted the delete command without any error. > >>> However, the VM never got deleted. > >>> > >>> Is there a way to remove the VM? > >> > >> > >> What version of nova are you using? This is definitely a serious bug, > you > >> should be able to delete an instance in error state. Can you file a bug > that > >> includes steps on how to reproduce the bug along with all relevant logs. 
> >> > >> bugs.launchpad.net/nova > >> > >>> > >>> > >>> Thanks, > >>> Danny > >>> > >>> _______________________________________________ > >>> OpenStack-dev mailing list > >>> OpenStack-dev at lists.openstack.org > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > Hi, > > I've encountered this in my own testing and have found that it appears > to be tied to libvirt. > > When I hit this, reset-state as the admin user reports success (and > state is set), *but* things aren't really working as advertised and > subsequent attempts to do anything with the errant vm's will send them > right back into 'FLAIL' / can't delete / endless DELETING mode. > > restarting libvirt-bin on my machine fixes this - after restart, the > deleting vm's are properly wiped without any further user input to > nova/horizon and all seems right in the world. > > using: > devstack > ubuntu 14.04 > libvirtd (libvirt) 1.2.2 > > triggered via: > lots of random create/reboot/resize/delete requests of varying validity > and sanity. > > Am in the process of cleaning up my test code so as not to hurt anyone's > brain with the ugly and will file a bug once done, but thought this > worth sharing. > > Thanks, > Patrick > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From devananda.vdv at gmail.com Sun Dec 14 07:13:03 2014 From: devananda.vdv at gmail.com (Devananda van der Veen) Date: Sun, 14 Dec 2014 07:13:03 +0000 Subject: [openstack-dev] [Ironic] deadline for specs References: Message-ID: Hi, Tan, No, ironic is not having an early feature proposal freeze for this cycle. Dec 18 is the kilo-1 milestone, and that is all. Please see the release schedule here: https://wiki.openstack.org/wiki/Kilo_Release_Schedule That being said, the earlier you can propose a spec, the better your chances for it landing in any given cycle. Regards, Devananda On Sat, Dec 13, 2014, 10:10 PM Tan, Lin wrote: Hi, A quick question, do we have a SpecProposalDeadline for Ironic, 18th Dec or ? Thanks Best Regards, Tan _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sun Dec 14 08:45:41 2014 From: zigo at debian.org (Thomas Goirand) Date: Sun, 14 Dec 2014 16:45:41 +0800 Subject: [openstack-dev] Do all OpenStack daemons support sd_notify? Message-ID: <548D4E35.6070101@debian.org> Hi, As I am slowly fixing all systemd issues for the daemons of OpenStack in Debian (and hopefully, have this ready before the freeze of Jessie), I was wondering what kind of Type= directive to put in the systemd .service files. I have noticed that in Fedora, there's Type=notify. So my question is: Do all OpenStack daemons, as a rule, support the sd_notify mechanism? Should I always use Type=notify for systemd .service files? Can this be called a general rule with no exception?
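[Editorial aside on what Type=notify actually entails: sd_notify is not a D-Bus call - systemd passes the daemon the path of a unix datagram socket in the NOTIFY_SOCKET environment variable, and the daemon sends "READY=1" over it once initialization is complete. A minimal, hypothetical sketch of that handshake, not taken from any OpenStack daemon:

```python
import os
import socket

def sd_notify(state="READY=1"):
    """Minimal sd_notify(3)-style readiness notification.

    Services started with Type=notify are expected to send READY=1 to
    the unix datagram socket named by $NOTIFY_SOCKET once they are up.
    Returns True if a message was sent, False when not supervised.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not running under systemd's notify supervision
    if addr.startswith("@"):
        # A leading "@" denotes an abstract-namespace socket.
        addr = "\0" + addr[1:]
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.sendto(state.encode("utf-8"), addr)
    finally:
        sock.close()
    return True
```

A daemon launched with Type=notify that never sends READY=1 will eventually be treated as failed by systemd, so the Type= choice has to match what the daemon actually does; Type=simple is the safer fallback when sd_notify support is uncertain.]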
Cheers, Thomas Goirand (zigo) From lin.tan at intel.com Sun Dec 14 09:02:26 2014 From: lin.tan at intel.com (Tan, Lin) Date: Sun, 14 Dec 2014 09:02:26 +0000 Subject: [openstack-dev] [Ironic] deadline for specs In-Reply-To: References: Message-ID: Hi Devananda, It's good news for me. Many thanks for your quick response. B.R Tan From: Devananda van der Veen [mailto:devananda.vdv at gmail.com] Sent: Sunday, December 14, 2014 3:13 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Ironic] deadline for specs Hi, Tan, No, ironic is not having an early feature proposal freeze for this cycle. Dec 18 is the kilo-1 milestone, and that is all. Please see the release schedule here: https://wiki.openstack.org/wiki/Kilo_Release_Schedule That being said, the earlier you can propose a spec, the better your chances for it landing in any given cycle. Regards, Devananda On Sat, Dec 13, 2014, 10:10 PM Tan, Lin > wrote: Hi, A quick question, do we have a SpecProposalDeadline for Ironic, 18th Dec or ? Thanks Best Regards, Tan _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at vmware.com Sun Dec 14 13:40:56 2014 From: gkotton at vmware.com (Gary Kotton) Date: Sun, 14 Dec 2014 13:40:56 +0000 Subject: [openstack-dev] [nova] 'SetuptoolsVersion' object does not support indexing 2014-12-14 12:38:38.563 | Message-ID: Hi, Anyone familiar with the cause of this - seems like all unit tests are failing.
An example is below: Traceback (most recent call last): 2014-12-14 12:38:38.595 | File "nova/tests/unit/virt/xenapi/test_xenapi.py", line 1945, in test_resize_ensure_vm_is_shutdown_fails 2014-12-14 12:38:38.595 | conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) 2014-12-14 12:38:38.595 | File "nova/virt/xenapi/driver.py", line 119, in __init__ 2014-12-14 12:38:38.596 | self._session = session.XenAPISession(url, username, password) 2014-12-14 12:38:38.596 | File "nova/virt/xenapi/client/session.py", line 96, in __init__ 2014-12-14 12:38:38.596 | self._verify_plugin_version() 2014-12-14 12:38:38.596 | File "nova/virt/xenapi/client/session.py", line 105, in _verify_plugin_version 2014-12-14 12:38:38.596 | if not versionutils.is_compatible(requested_version, current_version): 2014-12-14 12:38:38.596 | File "nova/openstack/common/versionutils.py", line 200, in is_compatible 2014-12-14 12:38:38.596 | if same_major and (requested_parts[0] != current_parts[0]): 2014-12-14 12:38:38.596 | TypeError: 'SetuptoolsVersion' object does not support indexing Thanks Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From keyvan.m.sadeghi at gmail.com Sun Dec 14 14:01:55 2014 From: keyvan.m.sadeghi at gmail.com (Keyvan Mir Mohammad Sadeghi) Date: Sun, 14 Dec 2014 14:01:55 +0000 Subject: [openstack-dev] [Solum] Contributing to Solum References: <564B26C5-06FC-4C51-B9EC-E8B834D418C0@rackspace.com> Message-ID: Adrian, Ah we thought meetings are held at #solum. Thanks, so we'll commence with 1308690. Cheers, Keyvan On Sat, Dec 13, 2014, 8:33 PM Adrian Otto wrote: > Keyvan, > > The meeting did happen at 2100 UTC. Are you sure you went to > #openstack-meeting-alt on Freenode? > > > http://lists.openstack.org/pipermail/openstack-dev/2014-December/052009.html > > I think bug 1308690 would be a great contribution. Please take ownership > of that bug ticket, and begin! 
> > Thanks, > > Adrian > > On Dec 13, 2014, at 12:04 AM, Keyvan Mir Mohammad Sadeghi < > keyvan.m.sadeghi at gmail.com> wrote: > > Hi Adrian, Roshan, etc, > > We tried to reach you via the IRC channel Tuesday night 2100 UTC, but > guess that meeting got cancelled? > > We'd like to set a start for our contribution ASAP, so going through the > list of bugs, the below one looked interesting: > > https://bugs.launchpad.net/solum/+bug/1308690 > > Though I'm not sure this is the right one. Would you approve? If positive, > are there any sources on the topic other than the bug page itself? And > about the conventions you mentioned in the Skype call; what is the starting > point, do we just submit a pull request referencing the bug number? > > Kind regards, > > Keyvan > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sun Dec 14 15:09:51 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 14 Dec 2014 15:09:51 +0000 Subject: [openstack-dev] [nova] 'SetuptoolsVersion' object does not support indexing 2014-12-14 12:38:38.563 | In-Reply-To: References: Message-ID: <20141214150950.GI2497@yuggoth.org> On 2014-12-14 13:40:56 +0000 (+0000), Gary Kotton wrote: > Anyone familiar with the cause of this - seems like all unit tests > are failing. An example is below: [...] > 2014-12-14 12:38:38.596 | File "nova/openstack/common/versionutils.py", line 200, in is_compatible > 2014-12-14 12:38:38.596 | if same_major and (requested_parts[0] != current_parts[0]): > 2014-12-14 12:38:38.596 | TypeError: 'SetuptoolsVersion' object does not support indexing Setuptools 8 was released to PyPI yesterday, so versionutils.py is probably tickling some behavior change in it.
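[To make the failure mode concrete: setuptools 8 adopted PEP 440, and the version objects returned by pkg_resources.parse_version() stopped behaving like indexable tuples, so code doing requested_parts[0] now raises TypeError. A hedged sketch of the kind of workaround needed - comparing integer parts split from the raw version strings instead of indexing parsed objects; this is only an illustration, not the actual oslo patch:

```python
# Illustration of a setuptools-agnostic version comparison. The function
# name and semantics loosely mirror oslo's versionutils.is_compatible(),
# but this is a hypothetical sketch, not the real implementation.

def _version_parts(version_str):
    """Split 'X.Y.Z' into a list of integer components."""
    return [int(part) for part in version_str.split('.')]

def is_compatible(requested, current, same_major=True):
    """Return True if `current` satisfies `requested`.

    With same_major=True the major versions must match exactly;
    beyond that, `current` simply has to be >= `requested`.
    """
    requested_parts = _version_parts(requested)
    current_parts = _version_parts(current)
    if same_major and requested_parts[0] != current_parts[0]:
        return False
    # Lists of ints compare element-wise, so this orders versions.
    return current_parts >= requested_parts

print(is_compatible('1.1', '1.3'))  # True: same major, newer minor
print(is_compatible('2.0', '1.9'))  # False: major versions differ
print(is_compatible('1.4', '1.2'))  # False: current is older
```

The same idea applies anywhere code relied on parse_version() returning a tuple: either treat the parsed object as opaque and only use comparison operators on it, or parse the string yourself.]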
-- Jeremy Stanley From davanum at gmail.com Sun Dec 14 16:08:53 2014 From: davanum at gmail.com (Davanum Srinivas) Date: Sun, 14 Dec 2014 11:08:53 -0500 Subject: [openstack-dev] [nova] 'SetuptoolsVersion' object does not support indexing 2014-12-14 12:38:38.563 | In-Reply-To: <20141214150950.GI2497@yuggoth.org> References: <20141214150950.GI2497@yuggoth.org> Message-ID: Jeremy, Gary, I have a patch in progress: Nova - https://review.openstack.org/141654/ Oslo-incubator - https://review.openstack.org/#/c/141653/ thanks, dims On Sun, Dec 14, 2014 at 10:09 AM, Jeremy Stanley wrote: > On 2014-12-14 13:40:56 +0000 (+0000), Gary Kotton wrote: >> Anyone familiar with the cause of this - seems like all unit tests >> are failing. > [...] >> 2014-12-14 12:38:38.596 | File "nova/openstack/common/versionutils.py", line 200, in is_compatible >> 2014-12-14 12:38:38.596 | if same_major and (requested_parts[0] != current_parts[0]): >> 2014-12-14 12:38:38.596 | TypeError: 'SetuptoolsVersion' object does not support indexing > > Setuptools 8 was released to PyPI yesterday, so versionutils.py is > probably tickling some behavior change in it. > -- > Jeremy Stanley > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From pcm at cisco.com Sun Dec 14 16:25:05 2014 From: pcm at cisco.com (Paul Michali (pcm)) Date: Sun, 14 Dec 2014 16:25:05 +0000 Subject: [openstack-dev] [neutron][vpnaas] Working on unit tests... In-Reply-To: <1418535117.10675.1.camel@localhost> References: <1418535117.10675.1.camel@localhost> Message-ID: Thanks for checking the changes and confirming! BTW, my change set had a fix from Ihar (141405).
Rather than approve this as-is, which would void Ihar's, I approved Ihar's change, and then will rebase mine on his, once it is upstream (wanted to give him credit for his changes). Regards, PCM (Paul Michali) MAIL ..... pcm at cisco.com IRC ..... pc_m (irc.freenode.com) TW ..... @pmichali GPG Key ..... 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 On Dec 14, 2014, at 12:31 AM, Brandon Logan wrote: > Paul, > It looks like you put that method call in all the right places. You > would know if you didn't, because the unit tests would fail because of the > policy.json. > > Not sure on the tempest tests. I'm sure Doug and Kyle know more about > that, so hopefully they can chime in. > > Thanks, > Brandon > > On Sat, 2014-12-13 at 19:06 +0000, Paul Michali (pcm) wrote: >> For the new VPNaaS repo, I have >> created https://review.openstack.org/#/c/141532/ to move the tests >> from tests.skip and modify the imports. This has Brandon's change to >> setup policy.json, and Ihar's one-liner for moving get_admin_context() >> in one test (should we upstream his, and I rebase mine?). >> >> >> Please look it over, as I'm not sure if I put the override_nvalues() >> calls in the right places or not. >> >> >> It passes unit tests in Jenkins. The Tempest tests all fail. Is that >> expected? What's the plan for functional and tempest tests with these >> other repos? >> >> >> Thanks! >> >> >> >> >> PCM (Paul Michali) >> >> >> MAIL ..... pcm at cisco.com >> IRC ..... pc_m (irc.freenode.com) >> TW ..... @pmichali >> GPG Key ..... 4525ECC253E31A83 >> Fingerprint ..
307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 >> >> >> >> >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From gkotton at vmware.com Sun Dec 14 16:43:11 2014 From: gkotton at vmware.com (Gary Kotton) Date: Sun, 14 Dec 2014 16:43:11 +0000 Subject: [openstack-dev] [nova] 'SetuptoolsVersion' object does not support indexing 2014-12-14 12:38:38.563 | In-Reply-To: References: <20141214150950.GI2497@yuggoth.org> Message-ID: Gracias! On 12/14/14, 6:08 PM, "Davanum Srinivas" wrote: >Jeremy, Gary, > >I have a patch in progress: >Nova - https://review.openstack.org/141654/ >Oslo-incubator - https://review.openstack.org/#/c/141653/ > >thanks, >dims > >On Sun, Dec 14, 2014 at 10:09 AM, Jeremy Stanley >wrote: >> On 2014-12-14 13:40:56 +0000 (+0000), Gary Kotton wrote: >>> Anyone familiar with the cause of this - seems like all unit tests >>> are failing. An example is below: >> [...] >>> 2014-12-14 12:38:38.596 | File >>>"nova/openstack/common/versionutils.py", line 200, in is_compatible >>> 2014-12-14 12:38:38.596 | if same_major and >>>(requested_parts[0] != current_parts[0]): >>> 2014-12-14 12:38:38.596 | TypeError: 'SetuptoolsVersion' object >>>does not support indexing >> >> Setuptools 8 was released to PyPI yesterday, so versionutils.py is >> probably tickling some behavior change in it.
>> -- >> Jeremy Stanley >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > >-- >Davanum Srinivas :: https://twitter.com/dims > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dougw at a10networks.com Sun Dec 14 17:45:42 2014 From: dougw at a10networks.com (Doug Wiegley) Date: Sun, 14 Dec 2014 17:45:42 +0000 Subject: [openstack-dev] [neutron][lbaas] Canceling lbaas meeting 12/16 Message-ID: Unless someone has an urgent agenda item, and due to the mid-cycle for Octavia, which has a bunch of overlap with the lbaas team, let's cancel this week. If you have post-split lbaas v2 questions, please find me in #openstack-lbaas. The only announcement was going to be: If you are waiting to re-submit/submit lbaasv2 changes for the new repo, please monitor this review, or make your change dependent on it: https://review.openstack.org/#/c/141247/ Thanks, Doug From gilmeir at mellanox.com Sun Dec 14 18:14:21 2014 From: gilmeir at mellanox.com (Gil Meir) Date: Sun, 14 Dec 2014 18:14:21 +0000 Subject: [openstack-dev] [Fuel] [Mellanox] [6.0] Mellanox feature dysfunction due to kernel upgrade on CentOS Message-ID: The Mellanox feature is broken again due to another kernel replacement done by the MOS team on CentOS. This was found on Fuel 6.0 RC2. Since this is a recurring issue I reopened the existing bug: https://bugs.launchpad.net/fuel/+bug/1393414 The previous solution was upgrading the kickstart stage kernel + replacing the OFED rpm package.
I'm checking if there is any way to solve this without the kernel upgrade in order to minimize the impact so close to GA, I'll update ASAP. Regards, Gil Meir SW Cloud Solutions Mellanox Technologies -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsbryant at electronicjungle.net Sun Dec 14 22:20:16 2014 From: jsbryant at electronicjungle.net (Jay Bryant) Date: Sun, 14 Dec 2014 16:20:16 -0600 Subject: [openstack-dev] [nova][cinder][infra] Ceph CI status update In-Reply-To: References: <20141211163643.GA10911@helmut> <5489CE60.2040102@anteaya.info> <548B49E2.1040005@anteaya.info> Message-ID: On Saturday, December 13, 2014, Deepak Shetty wrote: > I think you completely mis-understood my Q > I am completely in agreement for _not_ putting CI status in mailing list. > > Let me rephrase: > > As of now, I see 2 places where CI status is being tracked: > > https://wiki.openstack.org/wiki/ThirdPartySystems (clicking on the Link > tells u the status) > and > https://wiki.openstack.org/wiki/Cinder/third-party-ci-status (one of > column is status column) > > How are the 2 different ? Do we need to update both ? > > thanx, > deepak > > On Sat, Dec 13, 2014 at 1:32 AM, Anita Kuno > wrote: >> >> On 12/12/2014 03:28 AM, Deepak Shetty wrote: >> > On Thu, Dec 11, 2014 at 10:33 PM, Anita Kuno > > wrote: >> > >> >> On 12/11/2014 09:36 AM, Jon Bernard wrote: >> >>> Heya, quick Ceph CI status update. Once the test_volume_boot_pattern >> >>> was marked as skipped, only the revert_resize test was failing. I >> have >> >>> submitted a patch to nova for this [1], and that yields an all green >> >>> ceph ci run [2]. So at the moment, and with my revert patch, we're in >> >>> good shape. >> >>> >> >>> I will fix up that patch today so that it can be properly reviewed and >> >>> hopefully merged. From there I'll submit a patch to infra to move the >> >>> job to the check queue as non-voting, and we can go from there. 
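[Since the missing-busybox failure only surfaces partway through a ramdisk-image-create run, a tiny up-front check for required host binaries can save a wasted build. A hypothetical helper, not part of diskimage-builder, shown in Python for brevity:

```python
import shutil

def missing_binaries(required=("busybox",)):
    """Return the subset of `required` not found on $PATH.

    The ramdisk element copies busybox into the ramdisk it builds, so
    a build host without it (e.g. stock CentOS/RHEL) fails mid-build;
    checking up front gives a clearer error than a late failure.
    """
    return [name for name in required if shutil.which(name) is None]

problems = missing_binaries()
if problems:
    print("install these on the build host first: " + ", ".join(problems))
```

On CentOS/RHEL hosts the practical fix is installing a busybox build from a third-party repository or building it locally, since the distribution does not ship one.]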
>> >>> >> >>> [1] https://review.openstack.org/#/c/139693/ >> >>> [2] >> >> >> http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html >> >>> >> >>> Cheers, >> >>> >> >> Please add the name of your CI account to this table: >> >> https://wiki.openstack.org/wiki/ThirdPartySystems >> >> >> >> As outlined in the third party CI requirements: >> >> http://ci.openstack.org/third_party.html#requirements >> >> >> >> Please post system status updates to your individual CI wikipage that >> is >> >> linked to this table. >> >> >> > >> > How is posting status there different than here : >> > https://wiki.openstack.org/wiki/Cinder/third-party-ci-status >> > >> > thanx, >> > deepak >> > >> >> > >> > >> > Deepak, We have been using the Cinder wiki page to track development of each CI and to communicate issues within the Cinder community. The general wiki is for tracking the state of all the different CI systems. So, I would say that both should be updated right now, eventually the Cinder one should be deprecated. Jay > > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> There are over 100 CI accounts now and growing. >> >> Searching the email archives to evaluate the status of a CI is not >> something that infra will do, we will look on that wikipage or we will >> check the third-party-announce email list (which all third party CI >> systems should be subscribed to, as outlined in the third_party.html >> page lined above). >> >> If we do not find information where we have asked you to put it and were >> we expect it, we may disable your system until you have fulfilled the >> requirements as outlined in the third_party.html page linked above. 
>> >> Sprinkling status updates amongst the emails posted to -dev and >> expecting the infra team and other -devs to find them when needed is >> unsustainable and has been for some time, which is why we came up with >> the wikipage to aggregate them. >> >> Please direct all further questions about this matter to one of the two >> third-party meetings as linked above. >> >> Thank you, >> Anita. >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- jsbryant at electronicjungle.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Sun Dec 14 23:05:58 2014 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 15 Dec 2014 10:05:58 +1100 Subject: [openstack-dev] [devstack] localrc for mutli-node setup In-Reply-To: References: Message-ID: <548E17D6.2020909@redhat.com> On 12/13/2014 07:03 AM, Danny Choi (dannchoi) wrote: > I would like to use devstack to deploy OpenStack on a multi-node setup, > i.e. separate Controller, Network and Compute nodes Did you see [1]? Contributions to make that better of course welcome. -i [1] http://docs.openstack.org/developer/devstack/guides/multinode-lab.html From steven at wedontsleep.org Mon Dec 15 01:13:34 2014 From: steven at wedontsleep.org (Steve Kowalik) Date: Mon, 15 Dec 2014 12:13:34 +1100 Subject: [openstack-dev] [diskimage-builder] ramdisk-image-create fails for creating Centos/rhel images. In-Reply-To: References: Message-ID: <548E35BE.7060508@wedontsleep.org> On 11/12/14 17:51, Harshada Kakad wrote: > I am trying to build Centos/rhel image for baremetal deployment using > ramdisk-image-create. I am using my build host as CentOS release 6.5 > (Final). > It fails saying no busybox available. The 'ramdisk' element in diskimage-builder requires busybox, which CentOS/RHEL do not ship. 
You need to add '--ramdisk-element dracut-ramdisk' to your command line. -- Steve In the beginning was the word, and the word was content-type: text/plain From hzwangpan at corp.netease.com Mon Dec 15 02:14:04 2014 From: hzwangpan at corp.netease.com (hzwangpan) Date: Mon, 15 Dec 2014 10:14:04 +0800 Subject: [openstack-dev] [nova] pylint failure in havana stable 2013.2.4 Message-ID: <1614252448.23708342.1418609647315.JavaMail.hzwangpan@corp.netease.com> Hi all, While I'm cherry-picking havana stable 2013.2.4 (nova) commits to our private branch, I found that this backport commit 97028309ff7a13ea5e9b8cc353d56ec9678caaff causes a pylint failure: nova/api/openstack/compute/contrib/floating_ips.py:203: [E1101, FloatingIPController.delete] Module 'nova.exception' has no 'Forbidden' member. Then I downloaded the tar package from Launchpad (https://launchpad.net/nova/havana/2013.2.4/+download/nova-2013.2.4.tar.gz) and checked nova/exception.py; there is no 'Forbidden' class or other such member in it, so I guess this backport is missing a dependency: the commit it depends on, c75a15a48981628e77d4178476c121693a656814, should be backported as well. 2014-12-15 09:51 (UTC+8) Wangpan -------------- next part -------------- An HTML attachment was scrubbed... URL: From raghavendra.lad at accenture.com Mon Dec 15 04:10:25 2014 From: raghavendra.lad at accenture.com (raghavendra.lad at accenture.com) Date: Mon, 15 Dec 2014 04:10:25 +0000 Subject: [openstack-dev] [Murano] Murano Agent Message-ID: <6e0a979ee08e436f8b2b374855c4b6c3@BY2PR42MB101.048d.mgd.msft.net> Hi Team, I am installing Murano on the Ubuntu 14.04 Juno setup and would like to know what components need to be installed in a separate VM for the Murano agent. Please let me know why Murano-agent is required and the components that need to be installed in it.
Warm Regards, Raghavendra Lad ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. ______________________________________________________________________________________ www.accenture.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sean_Collins2 at cable.comcast.com Mon Dec 15 04:17:17 2014 From: Sean_Collins2 at cable.comcast.com (Collins, Sean) Date: Mon, 15 Dec 2014 04:17:17 +0000 Subject: [openstack-dev] [neutron] OT: Utah Mid-cycle sprint photos In-Reply-To: <7C3D1552-8CC7-49A7-AB5D-9A42FAA464CD@cisco.com> References: <7C3D1552-8CC7-49A7-AB5D-9A42FAA464CD@cisco.com> Message-ID: <7EB180D009B1A6428D376906754127CB2E45ABBF@PACDCEXMB22.cable.comcast.com> Thanks Paul! Great pictures! Sean M. Collins ________________________________ From: Paul Michali (pcm) [pcm at cisco.com] Sent: Saturday, December 13, 2014 1:58 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [neutron] OT: Utah Mid-cycle sprint photos I put a link in the Etherpad to some photos I took from the mid-cycle sprint in Utah. Here's the direct link: http://media.michali.net/21/198/ That gallery doesn't correctly scale portrait-oriented photos initially, but you can select a size and it'll resize it. I've got to get to that some day. PCM (Paul Michali) MAIL ..... pcm at cisco.com IRC ....... pc_m (irc.freenode.com) TW ...... @pmichali GPG Key .. 4525ECC253E31A83 Fingerprint ..
307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkamaldinov at mirantis.com Mon Dec 15 04:54:39 2014 From: rkamaldinov at mirantis.com (Ruslan Kamaldinov) Date: Mon, 15 Dec 2014 08:54:39 +0400 Subject: [openstack-dev] [Murano] Murano Agent In-Reply-To: <6e0a979ee08e436f8b2b374855c4b6c3@BY2PR42MB101.048d.mgd.msft.net> References: <6e0a979ee08e436f8b2b374855c4b6c3@BY2PR42MB101.048d.mgd.msft.net> Message-ID: On Mon, Dec 15, 2014 at 7:10 AM, wrote: > Please let me kow why Murano-agent is required and the components that needs > to be installed in it. You can find more details about murano agent at: https://wiki.openstack.org/wiki/Murano/UnifiedAgent It can be installed with diskimage-builder: http://git.openstack.org/cgit/stackforge/murano-agent/tree/README.rst#n34 - Ruslan From swati.shukla1 at tcs.com Mon Dec 15 05:29:19 2014 From: swati.shukla1 at tcs.com (Swati Shukla1) Date: Mon, 15 Dec 2014 10:59:19 +0530 Subject: [openstack-dev] #PERSONAL# : Facing problem in installing new python dependencies for Horizon- Pls help Message-ID: Hi, I want to install 2 new modules in Horizon but have no clue how it installs in its virtualenv. I mentioned pisa >= 3.0.33 and reportlab >= 2.5 in requirements.txt file, ran ./unstack.sh and ./stack.sh, but still do not get these installed in its virtualenv. As a result, when I do ./run_tests.sh, I get " ImportError: No module named ho.pisa" Please suggest me if I am going wrong somewhere or how to proceed with this. Thanks in advance. Regards, Swati Shukla Tata Consultancy Services Mailto: swati.shukla1 at tcs.com Website: http://www.tcs.com ____________________________________________ Experience certainty. IT Services Business Solutions Consulting ____________________________________________ =====-----=====-----===== Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. 
If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From djhon9813 at gmail.com Mon Dec 15 07:20:33 2014 From: djhon9813 at gmail.com (david jhon) Date: Mon, 15 Dec 2014 12:20:33 +0500 Subject: [openstack-dev] SRIOV failures error- In-Reply-To: <547D67CD.20605@redhat.com> References: <547D67CD.20605@redhat.com> Message-ID: Hi, I am doing the same i.e., configuring SR-IOV with Openstack juno, I have manually done this but now I am doing it for openstack. Can you please list down steps you used for configuring SR-IOV. I am currently following this link but getting many errors: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking Please share some tutorial you found for this configuration. I'll be real grateful for your help. Thanking you in anticipation Looking forward seeing your response.... Thanks and Regards, On Tue, Dec 2, 2014 at 12:18 PM, Itzik Brown wrote: > > Hi, > Seems like you don't have available devices for allocation. > > What's the output of: > #echo 'use nova;select hypervisor_hostname,pci_stats from compute_nodes;' > | mysql -u root > > Itzik > > > On 12/02/2014 08:21 AM, Murali B wrote: > > Hi > > we are trying to bring-up the SRIOV on set-up. > > facing the below error when we tried create the vm. > > > * Still during creating instance ERROR appears . 
PciDeviceRequestFailed: > PCI device request ({'requests': > [InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=58584ee1-8a41-4979-9905-4d18a3df3425,spec=[{physical_network='physnet1'}])], > 'code': 500}equests)s failed* > > followed the steps from the > https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking > > Please help us to get rid this error. let us know if any configuration > is required at hardware in order to work properly. > > Thanks > -Murali > > > _______________________________________________ > OpenStack-dev mailing listOpenStack-dev at lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From irenab at mellanox.com Mon Dec 15 07:47:20 2014 From: irenab at mellanox.com (Irena Berezovsky) Date: Mon, 15 Dec 2014 07:47:20 +0000 Subject: [openstack-dev] SRIOV failures error- In-Reply-To: References: <547D67CD.20605@redhat.com> Message-ID: Hi David, One configuration option is missing that you should be aware of: In /etc/neutron/plugins/ml2/ml2_conf_sriov.ini: In [ml2_sriov] section set PCI Device vendor and product IDs you use, in format vendor_id:product_id supported_pci_vendor_devs = Example: supported_pci_vendor_devs = 8086:10ca Then restart neutron-server. May ask you to post this question at ask.openstack.org? BR, Irena From: david jhon [mailto:djhon9813 at gmail.com] Sent: Monday, December 15, 2014 9:21 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] SRIOV failures error- Hi, I am doing the same i.e., configuring SR-IOV with Openstack juno, I have manually done this but now I am doing it for openstack. Can you please list down steps you used for configuring SR-IOV. 
I am currently following this link but getting many errors: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking Please share some tutorial you found for this configuration. I'll be real grateful for your help. Thanking you in anticipation Looking forward seeing your response.... Thanks and Regards, On Tue, Dec 2, 2014 at 12:18 PM, Itzik Brown > wrote: Hi, Seems like you don't have available devices for allocation. What's the output of: #echo 'use nova;select hypervisor_hostname,pci_stats from compute_nodes;' | mysql -u root Itzik On 12/02/2014 08:21 AM, Murali B wrote: Hi we are trying to bring-up the SRIOV on set-up. facing the below error when we tried create the vm. Still during creating instance ERROR appears . PciDeviceRequestFailed: PCI device request ({'requests': [InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=58584ee1-8a41-4979-9905-4d18a3df3425,spec=[{physical_network='physnet1'}])], 'code': 500}equests)s failed followed the steps from the https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking Please help us to get rid this error. let us know if any configuration is required at hardware in order to work properly. Thanks -Murali _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
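[Editor's note] For completeness, Irena's settings above amount to a fragment like the following in /etc/neutron/plugins/ml2/ml2_conf_sriov.ini. The vendor:product pair shown is just the example ID from her mail — substitute the IDs reported by `lspci -nn` for your own NICs:

```ini
[ml2_sriov]
# PCI vendor_id:product_id pairs of the virtual functions Neutron
# should handle; 8086:10ca is the example value from the mail above.
supported_pci_vendor_devs = 8086:10ca
```

After editing the file, restart neutron-server so the ML2 SR-IOV driver picks up the change, as Irena notes.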
URL: From r-mibu at cq.jp.nec.com Mon Dec 15 09:01:55 2014 From: r-mibu at cq.jp.nec.com (Ryota Mibu) Date: Mon, 15 Dec 2014 09:01:55 +0000 Subject: [openstack-dev] [nova][neutron] bridge name generator for vif plugging Message-ID: <1FAA045BFF455543B55A30E48415AA0808C128FF@BPXM06GP.gisp.nec.co.jp> Hi all, We are proposing a change to move the bridge name generator (creating a bridge name from the net-id, or reading the integration bridge name from nova.conf) from Nova to Neutron. The following are the BPs in Nova and Neutron. https://blueprints.launchpad.net/nova/+spec/neutron-vif-bridge-details https://blueprints.launchpad.net/neutron/+spec/vif-plugging-metadata I'd like to get your comments on whether this change is the right direction. I found a related comment in the Nova code [3] and guess these discussions took place in the context of vif-plugging and port-binding, but I'm not sure there was consensus about the bridge name. https://github.com/openstack/nova/blob/2014.2/nova/network/neutronv2/api.py#L1298-1299 Thanks, Ryota From jichenjc at cn.ibm.com Mon Dec 15 09:04:59 2014 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Mon, 15 Dec 2014 17:04:59 +0800 Subject: [openstack-dev] anyway to pep8 check on a specified file Message-ID: tox -e pep8 usually takes several minutes on my test server. Actually, I only changed one file and I know something might be wrong there. Is there any way to check only that file? Thanks a lot Best Regards! Kevin (Chen) Ji ? ? Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82454158 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sbauza at redhat.com Mon Dec 15 09:27:05 2014 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 15 Dec 2014 10:27:05 +0100 Subject: [openstack-dev] anyway to pep8 check on a specified file In-Reply-To: References: Message-ID: <548EA969.7030707@redhat.com> Le 15/12/2014 10:04, Chen CH Ji a écrit : > > tox -e pep8 usually takes several minutes on my test server, actually > I only changes one file and I knew something might wrong there > anyway to only check that file? Thanks a lot > It's really not necessary to check all the files if you only modified a single one. You can just take the files you modified and run a check by doing this: git diff HEAD^ --name-only | xargs tools/with_venv.sh flake8 Hope it helps, -Sylvain > Best Regards! > > Kevin (Chen) Ji ? ? > > Engineer, zVM Development, CSTL > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > Phone: +86-10-82454158 > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > District, Beijing 100193, PRC > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From berrange at redhat.com Mon Dec 15 09:37:23 2014 From: berrange at redhat.com (Daniel P. Berrange) Date: Mon, 15 Dec 2014 09:37:23 +0000 Subject: [openstack-dev] anyway to pep8 check on a specified file In-Reply-To: References: Message-ID: <20141215093723.GB11566@redhat.com> On Mon, Dec 15, 2014 at 05:04:59PM +0800, Chen CH Ji wrote: > > tox -e pep8 usually takes several minutes on my test server, actually I > only changes one file and I knew something might wrong there > anyway to only check that file? Thanks a lot Use ./run_tests.sh -8 That will only check pep8 against the files listed in the current commit. 
If you want to check an entire branch patch series then git rebase -i master -x './run_tests.sh -8' Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From sbauza at redhat.com Mon Dec 15 09:37:58 2014 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 15 Dec 2014 10:37:58 +0100 Subject: [openstack-dev] anyway to pep8 check on a specified file In-Reply-To: <548EA969.7030707@redhat.com> References: <548EA969.7030707@redhat.com> Message-ID: <548EABF6.40705@redhat.com> Le 15/12/2014 10:27, Sylvain Bauza a ?crit : > > Le 15/12/2014 10:04, Chen CH Ji a ?crit : >> >> tox -e pep8 usually takes several minutes on my test server, actually >> I only changes one file and I knew something might wrong there >> anyway to only check that file? Thanks a lot >> > That's really non necessary to check all the files if you only > modified a single one. > You can just take the files you modified and run a check by doing this : > > git diff HEAD^ --name-only | xargs tools/with_venv.sh flake8 > > Eh, just replying to myself, I just saw there is a recent commit which added a -8 flag to run_tests for checking PEP8 only against HEAD. https://review.openstack.org/#/c/110746/ That's worth it :-) > Hope it helps, > -Sylvain > > >> Best Regards! >> >> Kevin (Chen) Ji ? ? 
>> >> Engineer, zVM Development, CSTL >> Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com >> Phone: +86-10-82454158 >> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian >> District, Beijing 100193, PRC >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From nmakhotkin at mirantis.com Mon Dec 15 10:00:30 2014 From: nmakhotkin at mirantis.com (Nikolay Makhotkin) Date: Mon, 15 Dec 2014 14:00:30 +0400 Subject: [openstack-dev] [Mistral] For-each Message-ID: Hi, Here is the doc with suggestions on specification for for-each feature. You are free to comment and ask questions. https://docs.google.com/document/d/1iw0OgQcU0LV_i3Lnbax9NqAJ397zSYA3PMvl6F_uqm0/edit?usp=sharing -- Best Regards, Nikolay -------------- next part -------------- An HTML attachment was scrubbed... URL: From mathieu.rohon at gmail.com Mon Dec 15 10:15:50 2014 From: mathieu.rohon at gmail.com (Mathieu Rohon) Date: Mon, 15 Dec 2014 11:15:50 +0100 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration In-Reply-To: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> References: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> Message-ID: Hi Ryan, We have been working on similar Use cases to announce /32 with the Bagpipe BGPSpeaker that supports EVPN. Please have a look at use case B in [1][2]. 
Note also that the L2population Mechanism driver for ML2, that is compatible with OVS, Linuxbridge and ryu ofagent, is inspired by EVPN, and I'm sure it could help in your use case [1]http://fr.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe [2]https://www.youtube.com/watch?v=q5z0aPrUZYc&sns [3]https://blueprints.launchpad.net/neutron/+spec/l2-population Mathieu On Thu, Dec 4, 2014 at 12:02 AM, Ryan Clevenger wrote: > Hi, > > At Rackspace, we have a need to create a higher level networking service > primarily for the purpose of creating a Floating IP solution in our > environment. The current solutions for Floating IPs, being tied to plugin > implementations, does not meet our needs at scale for the following reasons: > > 1. Limited endpoint H/A mainly targeting failover only and not multi-active > endpoints, > 2. Lack of noisy neighbor and DDOS mitigation, > 3. IP fragmentation (with cells, public connectivity is terminated inside > each cell leading to fragmentation and IP stranding when cell CPU/Memory use > doesn't line up with allocated IP blocks. Abstracting public connectivity > away from nova installations allows for much more efficient use of those > precious IPv4 blocks). > 4. Diversity in transit (multiple encapsulation and transit types on a per > floating ip basis). > > We realize that network infrastructures are often unique and such a solution > would likely diverge from provider to provider. However, we would love to > collaborate with the community to see if such a project could be built that > would meet the needs of providers at scale. We believe that, at its core, > this solution would boil down to terminating north<->south traffic > temporarily at a massively horizontally scalable centralized core and then > encapsulating traffic east<->west to a specific host based on the > association setup via the current L3 router's extension's 'floatingips' > resource. 
> > Our current idea, involves using Open vSwitch for header rewriting and > tunnel encapsulation combined with a set of Ryu applications for management: > > https://i.imgur.com/bivSdcC.png > > The Ryu application uses Ryu's BGP support to announce up to the Public > Routing layer individual floating ips (/32's or /128's) which are then > summarized and announced to the rest of the datacenter. If a particular > floating ip is experiencing unusually large traffic (DDOS, slashdot effect, > etc.), the Ryu application could change the announcements up to the Public > layer to shift that traffic to dedicated hosts setup for that purpose. It > also announces a single /32 "Tunnel Endpoint" ip downstream to the TunnelNet > Routing system which provides transit to and from the cells and their > hypervisors. Since traffic from either direction can then end up on any of > the FLIP hosts, a simple flow table to modify the MAC and IP in either the > SRC or DST fields (depending on traffic direction) allows the system to be > completely stateless. We have proven this out (with static routing and > flows) to work reliably in a small lab setup. > > On the hypervisor side, we currently plumb networks into separate OVS > bridges. Another Ryu application would control the bridge that handles > overlay networking to selectively divert traffic destined for the default > gateway up to the FLIP NAT systems, taking into account any configured > logical routing and local L2 traffic to pass out into the existing overlay > fabric undisturbed. > > Adding in support for L2VPN EVPN > (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN > Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to the > Ryu BGP speaker will allow the hypervisor side Ryu application to advertise > up to the FLIP system reachability information to take into account VM > failover, live-migrate, and supported encapsulation types. 
We believe that > decoupling the tunnel endpoint discovery from the control plane > (Nova/Neutron) will provide for a more robust solution as well as allow for > use outside of openstack if desired. > > ________________________________________ > > Ryan Clevenger > Manager, Cloud Engineering - US > m: 678.548.7261 > e: ryan.clevenger at rackspace.com > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ijw.ubuntu at cack.org.uk Mon Dec 15 10:15:56 2014 From: ijw.ubuntu at cack.org.uk (Ian Wells) Date: Mon, 15 Dec 2014 11:15:56 +0100 Subject: [openstack-dev] [nova][neutron] bridge name generator for vif plugging In-Reply-To: <1FAA045BFF455543B55A30E48415AA0808C128FF@BPXM06GP.gisp.nec.co.jp> References: <1FAA045BFF455543B55A30E48415AA0808C128FF@BPXM06GP.gisp.nec.co.jp> Message-ID: Hey Ryota, A better way of describing it would be that the bridge name is, at present, generated in *both* Nova *and* Neutron, and the VIF type semantics define how it's calculated. I think you're right that in both cases it would make more sense for Neutron to tell Nova what the connection endpoint was going to be rather than have Nova calculate it independently. I'm not sure that that necessarily requires two blueprints, and you don't have a spec there at the moment, which is a problem because the Neutron spec deadline is upon us, but the idea's a good one. (You might get away without a Neutron spec, since the change to Neutron to add the information should be small and backward compatible, but that's not something I can make judgement on.) If we changed this, then your options are to make new plugging types where the name is exchanged rather than calculated or use the old plugging types and provide (from Neutron) and use if provided (in Nova) the name. 
You'd need to think carefully about upgrade scenarios to make sure that changing version on either side is going to work. VIF_TYPE_TAP, while somewhat different in its focus, is also moving in the same direction of having a more logical interface between Nova and Neutron. That plus this points that we should have VIF_TYPE_TAP handing over the TAP device name to use, and similarly create a VIF_TYPE_BRIDGE (passing bridge name) and slightly modify VIF_TYPE_VHOSTUSER before it gets established (to add the socket name). Did you have any thoughts on how the metadata should be stored on the port? -- Ian. On 15 December 2014 at 10:01, Ryota Mibu wrote: > > Hi all, > > > We are proposing a change to move bridge name generator (creating bridge > name from net-id or reading integration bridge name from nova.conf) from > Nova to Neutron. The followings are BPs in Nova and Neutron. > > https://blueprints.launchpad.net/nova/+spec/neutron-vif-bridge-details > https://blueprints.launchpad.net/neutron/+spec/vif-plugging-metadata > > I'd like to get your comments on this change whether this is relevant > direction. I found related comment in Nova code [3] and guess these > discussion had in context of vif-plugging and port-binding, but I'm not > sure there was consensus about bridge name. > > > https://github.com/openstack/nova/blob/2014.2/nova/network/neutronv2/api.py#L1298-1299 > > > Thanks, > Ryota > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berrange at redhat.com Mon Dec 15 10:28:33 2014 From: berrange at redhat.com (Daniel P. 
Berrange) Date: Mon, 15 Dec 2014 10:28:33 +0000 Subject: [openstack-dev] [nova][neutron] bridge name generator for vif plugging In-Reply-To: References: <1FAA045BFF455543B55A30E48415AA0808C128FF@BPXM06GP.gisp.nec.co.jp> Message-ID: <20141215102833.GC11566@redhat.com> On Mon, Dec 15, 2014 at 11:15:56AM +0100, Ian Wells wrote: > Hey Ryota, > > A better way of describing it would be that the bridge name is, at present, > generated in *both* Nova *and* Neutron, and the VIF type semantics define > how it's calculated. I think you're right that in both cases it would make > more sense for Neutron to tell Nova what the connection endpoint was going > to be rather than have Nova calculate it independently. I'm not sure that > that necessarily requires two blueprints, and you don't have a spec there > at the moment, which is a problem because the Neutron spec deadline is upon > us, but the idea's a good one. (You might get away without a Neutron spec, > since the change to Neutron to add the information should be small and > backward compatible, but that's not something I can make judgement on.) Yep, the fact that both Nova & Neutron calculate the bridge name is a historical accident. Originally Nova did it, because nova-network was the only solution. Then Neutron did it too, so it matched what Nova was doing. Clearly if we had Neutron right from the start, then it would have been Neutron's responsibility to do this. Nothing in Nova cares what the names are from a functional POV - it just needs to be told what to use.
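[Editor's note] One concrete shape for "Neutron tells Nova the name": Neutron derives the bridge name once and ships it back in the port's binding metadata, so Nova only reads it. The sketch below is purely illustrative — the key names and the "brq" + truncated-UUID convention are assumptions for the example, not an agreed schema from the blueprints:

```python
# Illustrative sketch: Neutron, not Nova, derives the bridge name and
# returns it with the port binding.  All key names here are assumptions.

def bridge_name_for_network(network_id):
    # Roughly the linuxbridge-style convention: "brq" plus a truncated
    # network UUID, keeping under the kernel's 15-char ifname limit.
    return "brq" + network_id[:11]

def port_binding(network_id):
    # What Neutron could return on the port; Nova would use it verbatim.
    return {"binding:vif_type": "bridge",
            "binding:vif_details": {
                "bridge_name": bridge_name_for_network(network_id)}}

binding = port_binding("5387fe9c-6f6d-4182-b094-61fe232501db")
print(binding["binding:vif_details"]["bridge_name"])  # brq5387fe9c-6f
```

With something like this in place, the upgrade question above reduces to: Nova uses the supplied name when present and falls back to its own calculation when talking to an older Neutron.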
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From azemlyanov at mirantis.com Mon Dec 15 10:35:52 2014 From: azemlyanov at mirantis.com (Anton Zemlyanov) Date: Mon, 15 Dec 2014 14:35:52 +0400 Subject: [openstack-dev] [Fuel] Building Fuel plugins with UI part Message-ID: My experience with building Fuel plugins with a UI part is the following. Building a ui-less plugin takes 3 seconds and these commands: git clone https://github.com/AlgoTrader/test-plugin.git cd ./test-plugin fpb --build ./ When a UI is added, the build starts to look like this and takes many minutes: git clone https://github.com/AlgoTrader/test-plugin.git git clone https://github.com/stackforge/fuel-web.git cd ./fuel-web git fetch https://review.openstack.org/stackforge/fuel-web refs/changes/00/112600/24 && git checkout FETCH_HEAD cd .. mkdir -p ./fuel-web/nailgun/static/plugins/test-plugin cp -R ./test-plugin/ui/* ./fuel-web/nailgun/static/plugins/test-plugin cd ./fuel-web/nailgun npm install && npm update grunt build --static-dir=static_compressed cd ../.. rm -rf ./test-plugin/ui mkdir ./test-plugin/ui cp -R ./fuel-web/nailgun/static_compressed/plugins/test-plugin/* ./test-plugin/ui cd ./test-plugin fpb --build ./ I think we need something not so complex and fragile. Anton -------------- next part -------------- An HTML attachment was scrubbed... URL: From Neil.Jerram at metaswitch.com Mon Dec 15 10:36:59 2014 From: Neil.Jerram at metaswitch.com (Neil Jerram) Date: Mon, 15 Dec 2014 10:36:59 +0000 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: <20141211104137.GD23831@redhat.com> (Daniel P. 
Berrange's message of "Thu, 11 Dec 2014 10:41:37 +0000") References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> <20141211104137.GD23831@redhat.com> Message-ID: <878ui9rygk.fsf@metaswitch.com> "Daniel P. Berrange" writes: > Failing that though, I could see a way to accomplish a similar thing > without a Neutron launched agent. If one of the VIF type binding > parameters were the name of a script, we could run that script on > plug & unplug. So we'd have a finite number of VIF types, and each > new Neutron mechanism would merely have to provide a script to invoke > > eg consider the existing midonet & iovisor VIF types as an example. > Both of them use the libvirt "ethernet" config, but have different > things running in their plug methods. If we had a mechanism for > associating a "plug script" with a vif type, we could use a single > VIF type for both. > > eg iovisor port binding info would contain > > vif_type=ethernet > vif_plug_script=/usr/bin/neutron-iovisor-vif-plug > > while midonet would contain > > vif_type=ethernet > vif_plug_script=/usr/bin/neutron-midonet-vif-plug > > > And so you see implementing a new Neutron mechanism in this way would > not require *any* changes in Nova whatsoever. The work would be entirely > self-contained within the scope of Neutron. It is simply a packaging > task to get the vif script installed on the compute hosts, so that Nova > can execute it. > > This is essentially providing a flexible VIF plugin system for Nova, > without having to have it plug directly into the Nova codebase with > the API & RPC stability constraints that implies. I agree that this is a very promising idea. But... what about the problem that it is libvirt-specific? Does that matter? 
Regards, Neil From sahid.ferdjaoui at redhat.com Mon Dec 15 10:44:49 2014 From: sahid.ferdjaoui at redhat.com (Sahid Orentino Ferdjaoui) Date: Mon, 15 Dec 2014 11:44:49 +0100 Subject: [openstack-dev] anyway to pep8 check on a specified file In-Reply-To: <20141215093723.GB11566@redhat.com> References: <20141215093723.GB11566@redhat.com> Message-ID: <20141215104449.GA7535@redhat.redhat.com> On Mon, Dec 15, 2014 at 09:37:23AM +0000, Daniel P. Berrange wrote: > On Mon, Dec 15, 2014 at 05:04:59PM +0800, Chen CH Ji wrote: > > > > tox -e pep8 usually takes several minutes on my test server, actually I > > only changes one file and I knew something might wrong there > > anyway to only check that file? Thanks a lot > > Use > > ./run_tests.sh -8 > > > That will only check pep8 against the files listed in the current > commit. If you want to check an entire branch patch series then > > git rebase -i master -x './run_tests.sh -8' Really useful point! s. > Regards, > Daniel > -- > |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| > |: http://libvirt.org -o- http://virt-manager.org :| > |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| > |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From keshava.a at hp.com Mon Dec 15 10:52:38 2014 From: keshava.a at hp.com (A, Keshava) Date: Mon, 15 Dec 2014 10:52:38 +0000 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration In-Reply-To: References: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> Message-ID: <891761EAFA335D44AD1FFDB9B4A8C063DA025B@G9W0733.americas.hpqcorp.net> Mathieua, I have been thinking of "Starting MPLS right from CN" for L2VPN/EVPN scenario also. Below are my queries w.r.t supporting MPLS from OVS : 1. 
MPLS will be used even for VM-VM traffic across CNs generated by OVS ? 2. MPLS will be originated right from OVS and will be mapped at Gateway (it may be NN/Hardware router ) to SP network ? So MPLS will carry 2 Labels ? (one for hop-by-hop, and other one for end to identify network ?) 3. MPLS will go over even the "network physical infrastructure" also ? 4. How the Labels will be mapped a/c virtual and physical world ? 5. Who manages the label space ? Virtual world or physical world or both ? (OpenStack + ODL ?) 6. The labels are nested (i.e. Like L3 VPN end to end MPLS connectivity ) will be established ? 7. Or it will be label stitching between Virtual-Physical network ? How the end-to-end path will be setup ? Let me know your opinion for the same. regards, keshava -----Original Message----- From: Mathieu Rohon [mailto:mathieu.rohon at gmail.com] Sent: Monday, December 15, 2014 3:46 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration Hi Ryan, We have been working on similar Use cases to announce /32 with the Bagpipe BGPSpeaker that supports EVPN. Please have a look at use case B in [1][2]. Note also that the L2population Mechanism driver for ML2, that is compatible with OVS, Linuxbridge and ryu ofagent, is inspired by EVPN, and I'm sure it could help in your use case [1]http://fr.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe [2]https://www.youtube.com/watch?v=q5z0aPrUZYc&sns [3]https://blueprints.launchpad.net/neutron/+spec/l2-population Mathieu On Thu, Dec 4, 2014 at 12:02 AM, Ryan Clevenger wrote: > Hi, > > At Rackspace, we have a need to create a higher level networking > service primarily for the purpose of creating a Floating IP solution > in our environment. The current solutions for Floating IPs, being tied > to plugin implementations, does not meet our needs at scale for the following reasons: > > 1. 
Limited endpoint H/A mainly targeting failover only and not > multi-active endpoints, 2. Lack of noisy neighbor and DDOS mitigation, > 3. IP fragmentation (with cells, public connectivity is terminated > inside each cell leading to fragmentation and IP stranding when cell > CPU/Memory use doesn't line up with allocated IP blocks. Abstracting > public connectivity away from nova installations allows for much more > efficient use of those precious IPv4 blocks). > 4. Diversity in transit (multiple encapsulation and transit types on a > per floating ip basis). > > We realize that network infrastructures are often unique and such a > solution would likely diverge from provider to provider. However, we > would love to collaborate with the community to see if such a project > could be built that would meet the needs of providers at scale. We > believe that, at its core, this solution would boil down to > terminating north<->south traffic temporarily at a massively > horizontally scalable centralized core and then encapsulating traffic > east<->west to a specific host based on the association setup via the current L3 router's extension's 'floatingips' > resource. > > Our current idea, involves using Open vSwitch for header rewriting and > tunnel encapsulation combined with a set of Ryu applications for management: > > https://i.imgur.com/bivSdcC.png > > The Ryu application uses Ryu's BGP support to announce up to the > Public Routing layer individual floating ips (/32's or /128's) which > are then summarized and announced to the rest of the datacenter. If a > particular floating ip is experiencing unusually large traffic (DDOS, > slashdot effect, etc.), the Ryu application could change the > announcements up to the Public layer to shift that traffic to > dedicated hosts setup for that purpose. It also announces a single /32 > "Tunnel Endpoint" ip downstream to the TunnelNet Routing system which > provides transit to and from the cells and their hypervisors. 
Since > traffic from either direction can then end up on any of the FLIP > hosts, a simple flow table to modify the MAC and IP in either the SRC > or DST fields (depending on traffic direction) allows the system to be > completely stateless. We have proven this out (with static routing and > flows) to work reliably in a small lab setup. > > On the hypervisor side, we currently plumb networks into separate OVS > bridges. Another Ryu application would control the bridge that handles > overlay networking to selectively divert traffic destined for the > default gateway up to the FLIP NAT systems, taking into account any > configured logical routing and local L2 traffic to pass out into the > existing overlay fabric undisturbed. > > Adding in support for L2VPN EVPN > (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN > Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) > to the Ryu BGP speaker will allow the hypervisor side Ryu application > to advertise up to the FLIP system reachability information to take > into account VM failover, live-migrate, and supported encapsulation > types. We believe that decoupling the tunnel endpoint discovery from > the control plane > (Nova/Neutron) will provide for a more robust solution as well as > allow for use outside of openstack if desired. > > ________________________________________ > > Ryan Clevenger > Manager, Cloud Engineering - US > m: 678.548.7261 > e: ryan.clevenger at rackspace.com > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From berrange at redhat.com Mon Dec 15 10:54:14 2014 From: berrange at redhat.com (Daniel P. 
Berrange) Date: Mon, 15 Dec 2014 10:54:14 +0000 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: <878ui9rygk.fsf@metaswitch.com> References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> <20141211104137.GD23831@redhat.com> <878ui9rygk.fsf@metaswitch.com> Message-ID: <20141215105414.GD11566@redhat.com> On Mon, Dec 15, 2014 at 10:36:59AM +0000, Neil Jerram wrote: > "Daniel P. Berrange" writes: > > > Failing that though, I could see a way to accomplish a similar thing > > without a Neutron launched agent. If one of the VIF type binding > > parameters were the name of a script, we could run that script on > > plug & unplug. So we'd have a finite number of VIF types, and each > > new Neutron mechanism would merely have to provide a script to invoke > > > > eg consider the existing midonet & iovisor VIF types as an example. > > Both of them use the libvirt "ethernet" config, but have different > > things running in their plug methods. If we had a mechanism for > > associating a "plug script" with a vif type, we could use a single > > VIF type for both. > > > > eg iovisor port binding info would contain > > > > vif_type=ethernet > > vif_plug_script=/usr/bin/neutron-iovisor-vif-plug > > > > while midonet would contain > > > > vif_type=ethernet > > vif_plug_script=/usr/bin/neutron-midonet-vif-plug > > > > > > And so you see implementing a new Neutron mechanism in this way would > > not require *any* changes in Nova whatsoever. The work would be entirely > > self-contained within the scope of Neutron. It is simply a packaging > > task to get the vif script installed on the compute hosts, so that Nova > > can execute it. > > > > This is essentially providing a flexible VIF plugin system for Nova, > > without having to have it plug directly into the Nova codebase with > > the API & RPC stability constraints that implies. > > I agree that this is a very promising idea. But... 
what about the > problem that it is libvirt-specific? Does that matter? Libvirt defines terminology that is generally applicable to different hypervisors. Of course not all hypevisors will be capable of supporting all VIF types, but that's true no matter what terminology you choose, so I don't see any problem here. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From Neil.Jerram at metaswitch.com Mon Dec 15 10:54:31 2014 From: Neil.Jerram at metaswitch.com (Neil Jerram) Date: Mon, 15 Dec 2014 10:54:31 +0000 Subject: [openstack-dev] [nova] Kilo specs review day In-Reply-To: (Joe Gordon's message of "Thu, 11 Dec 2014 11:26:58 -0800") References: <5489B879.7030006@rackspace.com> Message-ID: <874msxrxnc.fsf@metaswitch.com> Hi Joe, Joe Gordon writes: > In preparation, I put together a nova-specs dashboard: > > https://review.openstack.org/141137 > > 
https://review.openstack.org/#/dashboard/?foreach=project%3A%5Eopenstack%2Fnova-specs+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%252cjenkins+NOT+label%3ACode-Review%3E%3D-2%252cself+branch%3Amaster&title=Nova+Specs&&Your+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself&Needs+final+%2B2=label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Anova-specs-core+label%3ACode-Review%3C%3D-1%29+limit%3A100&Passed+Jenkins%2C+Positive+Nova-Core+Feedback=NOT+label%3ACode-Review%3E%3D2+%28reviewerin%3Anova-core+label%3ACode-Review%3E%3D1%29+NOT%28reviewerin%3Anova-core+label%3ACode-Review%3C%3D-1%29+limit%3A100&Passed+Jenkins%2C+No+Positive+Nova-Core+Feedback%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Anova-core+label%3ACode-Review%3E%3D1%29+limit%3A100&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+7+days%29=NOT+label%3ACode-Review%3C%3D2+age%3A7d&Some+negative+feedback%2C+might+still+be+worth+commenting=label%3ACode-Review%3D-1+NOT+label%3ACode-Review%3D-2+limit%3A100&Dead+Specs=label%3ACode-Review%3C%3D-2 My Nova spec (https://review.openstack.org/#/c/130732/) does not appear on this dashboard, even though I believe it's in good standing and - I hope - close to approval. Do you know why - does it mean that I've set some metadata field somewhere wrongly? Many thanks, Neil From akamyshnikova at mirantis.com Mon Dec 15 11:04:02 2014 From: akamyshnikova at mirantis.com (Anna Kamyshnikova) Date: Mon, 15 Dec 2014 15:04:02 +0400 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> <548B60D0.7090600@dague.net> Message-ID: Looking at all comments it seems that existing change is reasonable. 
I will update it with link to this thread. Thanks! Regards, Ann Kamyshnikova On Sat, Dec 13, 2014 at 1:15 AM, Rochelle Grober wrote: > > > > > Morgan Fainberg [mailto:morgan.fainberg at gmail.com] *on* Friday, December > 12, 2014 2:01 PM wrote: > On Friday, December 12, 2014, Sean Dague wrote: > > On 12/12/2014 01:05 PM, Maru Newby wrote: > > > > On Dec 11, 2014, at 2:27 PM, Sean Dague wrote: > > > >> On 12/11/2014 04:16 PM, Jay Pipes wrote: > >>> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: > >>>> On Dec 11, 2014, at 1:04 PM, Jay Pipes wrote: > >>>>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: > >>>>>> > >>>>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau wrote: > >>>>>> > >>>>>>> On Thu, Dec 11, 2014, Mark McClain wrote: > >>>>>>>> > >>>>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >>>>>>>>> > wrote: > >>>>>>>>> > >>>>>>>>> I'm generally in favor of making name attributes opaque, utf-8 > >>>>>>>>> strings that > >>>>>>>>> are entirely user-defined and have no constraints on them. I > >>>>>>>>> consider the > >>>>>>>>> name to be just a tag that the user places on some resource. It > >>>>>>>>> is the > >>>>>>>>> resource's ID that is unique. > >>>>>>>>> > >>>>>>>>> I do realize that Nova takes a different approach to *some* > >>>>>>>>> resources, > >>>>>>>>> including the security group name. > >>>>>>>>> > >>>>>>>>> End of the day, it's probably just a personal preference whether > >>>>>>>>> names > >>>>>>>>> should be unique to a tenant/user or not. 
> >>>>>>>>> > >>>>>>>>> Maru had asked me my opinion on whether names should be unique > and I > >>>>>>>>> answered my personal opinion that no, they should not be, and if > >>>>>>>>> Neutron > >>>>>>>>> needed to ensure that there was one and only one default security > >>>>>>>>> group for > >>>>>>>>> a tenant, that a way to accomplish such a thing in a race-free > >>>>>>>>> way, without > >>>>>>>>> use of SELECT FOR UPDATE, was to use the approach I put into the > >>>>>>>>> pastebin on > >>>>>>>>> the review above. > >>>>>>>>> > >>>>>>>> > >>>>>>>> I agree with Jay. We should not care about how a user names the > >>>>>>>> resource. > >>>>>>>> There other ways to prevent this race and Jay?s suggestion is a > >>>>>>>> good one. > >>>>>>> > >>>>>>> However we should open a bug against Horizon because the user > >>>>>>> experience there > >>>>>>> is terrible with duplicate security group names. > >>>>>> > >>>>>> The reason security group names are unique is that the ec2 api > >>>>>> supports source > >>>>>> rule specifications by tenant_id (user_id in amazon) and name, so > >>>>>> not enforcing > >>>>>> uniqueness means that invocation in the ec2 api will either fail or > be > >>>>>> non-deterministic in some way. > >>>>> > >>>>> So we should couple our API evolution to EC2 API then? > >>>>> > >>>>> -jay > >>>> > >>>> No I was just pointing out the historical reason for uniqueness, and > >>>> hopefully > >>>> encouraging someone to find the best behavior for the ec2 api if we > >>>> are going > >>>> to keep the incompatibility there. Also I personally feel the ux is > >>>> better > >>>> with unique names, but it is only a slight preference. > >>> > >>> Sorry for snapping, you made a fair point. > >> > >> Yeh, honestly, I agree with Vish. I do feel that the UX of that > >> constraint is useful. Otherwise you get into having to show people UUIDs > >> in a lot more places. While those are good for consistency, they are > >> kind of terrible to show to people. 
> > > > While there is a good case for the UX of unique names - it also makes > orchestration via tools like puppet a heck of a lot simpler - the fact is > that most OpenStack resources do not require unique names. That being the > case, why would we want security groups to deviate from this convention? > > Maybe the other ones are the broken ones? > > Honestly, any sanely usable system makes names unique inside a > container. Like files in a directory. In this case the tenant is the > container, which makes sense. > > It is one of many places that OpenStack is not consistent. But I'd > rather make things consistent and more usable than consistent and less. > > > > +1. > > > > More consistent and more usable is a good approach. The name uniqueness > has prior art in OpenStack - keystone keeps project names unique within a > domain (domain is the container), similar usernames can't be duplicated in > the same domain. It would be silly to auth with the user ID, likewise > unique names for the security group in the container (tenant) makes a lot > of sense from a UX Perspective. > > > > *[Rockyg] +1* > > *Especially when dealing with domain data that are managed by Humans, > human visible unique is important for understanding *and* efficiency. > Tenant security is expected to be managed by the tenant admin, not some > automated ?robot admin? and as such needs to be clear , memorable and > seperable between instances. Unique names is the most straightforward (and > easiest to enforce) way do this for humans. Humans read differentiate > alphanumerics, so that should be the standard differentiator when humans > are expected to interact and reason about containers.* > > > > --Morgan > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ijw.ubuntu at cack.org.uk Mon Dec 15 11:08:17 2014 From: ijw.ubuntu at cack.org.uk (Ian Wells) Date: Mon, 15 Dec 2014 12:08:17 +0100 Subject: [openstack-dev] [nova][neutron] bridge name generator for vif plugging In-Reply-To: <20141215102833.GC11566@redhat.com> References: <1FAA045BFF455543B55A30E48415AA0808C128FF@BPXM06GP.gisp.nec.co.jp> <20141215102833.GC11566@redhat.com> Message-ID: Let me write a spec and see what you both think. I have a couple of things we could address here and while it's a bit late it wouldn't be a dramatic thing to fix and it might be acceptable. On 15 December 2014 at 11:28, Daniel P. Berrange wrote: > > On Mon, Dec 15, 2014 at 11:15:56AM +0100, Ian Wells wrote: > > Hey Ryota, > > > > A better way of describing it would be that the bridge name is, at > present, > > generated in *both* Nova *and* Neutron, and the VIF type semantics define > > how it's calculated. I think you're right that in both cases it would > make > > more sense for Neutron to tell Nova what the connection endpoint was > going > > to be rather than have Nova calculate it independently. I'm not sure > that > > that necessarily requires two blueprints, and you don't have a spec there > > at the moment, which is a problem because the Neutron spec deadline is > upon > > us, but the idea's a good one. (You might get away without a Neutron > spec, > > since the change to Neutron to add the information should be small and > > backward compatible, but that's not something I can make judgement on.) > > Yep, the fact that both Nova & Neutron calculat the bridge name is a > historical accident. Originally Nova did it, because nova-network was > the only solution. Then Neutron did it too, so it matched what Nova > was doing. Clearly if we had Neutron right from the start, then it > would have been Neutrons responsibility todo this. Nothing in Nova > cares what the names are from a functional POV - it just needs to > be told what to use. 
> > Regards, > Daniel > -- > |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ > :| > |: http://libvirt.org -o- http://virt-manager.org > :| > |: http://autobuild.org -o- http://search.cpan.org/~danberr/ > :| > |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc > :| > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.trifonov at gmail.com Mon Dec 15 11:09:44 2014 From: t.trifonov at gmail.com (Tihomir Trifonov) Date: Mon, 15 Dec 2014 13:09:44 +0200 Subject: [openstack-dev] [horizon] REST and Django In-Reply-To: References: <547DD08A.7000402@redhat.com> Message-ID: Travis, That said, I can see a few ways that we could use the same REST decorator > code and provide direct access to the API. We?d simply provide a class > where the url_regex maps to the desired path and gives direct passthrough. > Maybe that kind of passthrough could always be provided for ease of > customization / extensibility and additional methods with wrappers provided > when necessary. I completely agree on this. We can use the REST decorator to handle either some really specific cases, or to handle some features, like pagination, in a general way for all entities, if possible. What I argued against was that it is unnecessary to have duplicate code JS->REST->APIClient, if we can call directly JS->(auth wrapper)->APIClient. Also, there are some examples - where the middleware wrapper hides some functionality, or makes some unneeded processing, that we may completely skip. In the given example - we don't need the (images, has_prev, has_more) return value. What we really need is the list of images, sliced based on the request.GET parameters and the config.API_RESULT_LIMIT. Then - the whole check for has_prev, has_more should be done in the client. 
Currently this processing was done at the server, as it was needed by the Django rendering engine. Now it is not. So I am basically talking on simplification of the middleware layer as much as possible, and moving the presentation logic into the JS Client. Also, to answer the comment of Thai - there is a lot of work that the server will still do - like the translation - I guess we should load the angular templates from the server with applied translation rather than putting them into plain js files. I'm not sure what are the best options here. But still - there is a lot of unneeded code currently in the openstack_dashboard/api/*.py files. So I guess the current approach with Django-REST might fit our needs. We just have to look over each /api/ file in greater detail and to remove the code that will better work on the client. Let's move the discussion in Gerrit, and discuss the api wrapper proposed by Richard. I believe we are on the same page now, I just needed to clarify for myself that we are not going to just replace the Django with REST, but we want to make Horizon a really flexible and powerful application. On Sat, Dec 13, 2014 at 1:09 AM, Tripp, Travis S wrote: > > Tihomir, > > Today I added one glance call based on Richard?s decorator pattern[1] and > started to play with incorporating some of your ideas. Please note, I only > had limited time today. That is passing the kwargs through to the glance > client. This was an interesting first choice, because it immediately > highlighted a concrete example of the horizon glance wrapper > post-processing still being useful (rather than be a direct pass-through > with no wrapper). See below. If you have some some concrete code examples > of your ideas it would be helpful. 
> > [1] > https://review.openstack.org/#/c/141273/2/openstack_dashboard/api/rest/glance.py > > With the patch, basically, you can call the following and all of the GET > parameters get passed directly through to the horizon glance client and you > get results back as expected. > > > http://localhost:8002/api/glance/images/?sort_dir=desc&sort_key=created_at&paginate=True&marker=bb2cfb1c-2234-4f54-aec5-b4916fe2d747 > > If you pass in an incorrect sort_key, the glance client returns the > following error message which propagates back to the REST caller as an > error with the message: > > "sort_key must be one of the following: name, status, container_format, > disk_format, size, id, created_at, updated_at." > > This is done by passing **request.GET.dict() through. > > Please note, that if you try this (with POSTMAN, for example), you need to > set the header of X-Requested-With = XMLHttpRequest > > So, what issues did it immediately call out with directly invoking the > client? > > The python-glanceclient internally handles pagination by returning a > generator. Each iteration on the generator will handle making a request > for the next page of data. If you were to just do something like return > list(image_generator) to serialize it back out to the caller, it would > actually end up making a call back to the server X times to fetch all pages > before serializing back (thereby not really paginating). The horizon glance > client wrapper today handles this by using islice intelligently along with > honoring the API_RESULT_LIMIT setting in Horizon. So, this gives a direct > example of where the wrapper does something that a direct passthrough to > the client would not allow. > > That said, I can see a few ways that we could use the same REST decorator > code and provide direct access to the API. We?d simply provide a class > where the url_regex maps to the desired path and gives direct passthrough. 
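As a minimal illustration of the pagination point above (helper names are made up; the real wrapper lives in Horizon's glance API module), itertools.islice lets a wrapper pull at most one page-limit's worth of items -- plus one extra to detect whether more data exists -- without draining the client's lazy, page-fetching generator:

```python
import itertools

API_RESULT_LIMIT = 20  # stand-in for Horizon's setting of the same name

def fake_image_generator(total):
    # Stand-in for the glanceclient generator: in the real client, each
    # further page of iteration triggers another HTTP request to Glance.
    for i in range(total):
        yield {"id": i}

def image_list_detailed(limit=API_RESULT_LIMIT):
    images_iter = fake_image_generator(total=100)
    # Pull at most limit + 1 items; the extra one only signals "has_more"
    # and stops us from iterating (and re-fetching) every remaining page.
    images = list(itertools.islice(images_iter, limit + 1))
    has_more = len(images) > limit
    return images[:limit], has_more

images, has_more = image_list_detailed()
```

A naive `list(image_generator)` serialization would instead walk every page before returning, which is the "not really paginating" failure mode described above.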
> Maybe that kind of passthrough could always be provided for ease of > customization / extensibility and additional methods with wrappers provided > when necessary. I need to leave for today, so can?t actually try that out > at the moment. > > Thanks, > Travis > > From: Thai Q Tran > > Reply-To: OpenStack List openstack-dev at lists.openstack.org>> > Date: Friday, December 12, 2014 at 11:05 AM > To: OpenStack List openstack-dev at lists.openstack.org>> > Subject: Re: [openstack-dev] [horizon] REST and Django > > > In your previous example, you are posting to a certain URL (i.e. > /keystone/{ver:=x.0}/{method:=update}). > => forward to clients[ver].getattr("method")(**kwargs)> => > > Correct me if I'm wrong, but it looks like you have a unique URL for each > /service/version/method. > I fail to see how that is different than what we have today? Is there a > view for each service? each version? > > Let's say for argument sake that you have a single view that takes care of > all URL routing. All requests pass through this view and contain a JSON > that contains instruction on which API to invoke and what parameters to > pass. > And lets also say that you wrote some code that uses reflection to map the > JSON to an action. What you end up with is a client-centric application, > where all of the logic resides client-side. If there are things we want to > accomplish server-side, it will be extremely hard to pull off. Things like > caching, websocket, aggregation, batch actions, translation, etc.... What > you end up with is a client with no help from the server. > > Obviously the other extreme is what we have today, where we do everything > server-side and only using client-side for binding events. I personally > prefer a more balance approach where we can leverage both the server and > client. There are things that client can do well, and there are things that > server can do well. 
Going the RPC way restrict us to just client > technologies and may hamper any additional future functionalities we want > to bring server-side. In other words, using REST over RPC gives us the > opportunity to use server-side technologies to help solve problems should > the need for it arises. > > I would also argue that the REST approach is NOT what we have today. What > we have today is a static webpage that is generated server-side, where API > is hidden from the client. What we end up with using the REST approach is a > dynamic webpage generated client-side, two very different things. We have > essentially striped out the rendering logic from Django templating and > replaced it with Angular. > > > -----Tihomir Trifonov > > wrote: ----- > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org >> > From: Tihomir Trifonov > > Date: 12/12/2014 04:53AM > Subject: Re: [openstack-dev] [horizon] REST and Django > > Here's an example: Admin user Joe has an Domain open and stares at it for > 15 minutes while he updates the description. Admin user Bob is asked to go > ahead and enable it. He opens the record, edits it, and then saves it. Joe > finished perfecting the description and saves it. Doing this action would > mean that the Domain is enabled and the description gets updated. Last man > in still wins if he updates the same fields, but if they update different > fields then both of their changes will take affect without them stomping on > each other. Whether that is good or bad may depend on the situation? > > > That's a great example. I believe that all of the Openstack APIs support > PATCH updates of arbitrary fields. This way - the frontend(AngularJS) can > detect which fields are being modified, and to submit only these fields for > update. 
If we however use a form with POST, although we will load the > object before updating it, the middleware cannot find which fields are > actually modified, and will update them all, which is more likely what PUT > should do. Thus having full control in the frontend part, we can submit > only changed fields. If however a service API doesn't support PATCH, it is > actually a problem in the API and not in the client... > > > > The service API documentation almost always lags (although, helped by > specs now) and the service team takes on the burden of exposing a > programmatic way to access the API. This is tested and easily consumable > via the python clients, which removes some guesswork from using the service. > > True. But what if the service team modifies a method signature from let's > say: > > def add_something(self, request, > ? field1, field2): > > > to > > def add_something(self, request, > ? field1, field2, field3): > > > and in the middleware we have the old signature: > > ?def add_something(self, request, > ? field1, field2): > > we still need to modify the middleware to add the new field. If however > the middleware is transparent and just passes **kwargs, it will pass > through whatever the frontend sends. So we just need to update the > frontend, which can be done using custom views, and not necessary going > through an upstream change. My point is why do we need to hide some > features of the backend service API behind a "firewall" what the middleware > in fact is? > > > > > > > > On Fri, Dec 12, 2014 at 8:08 AM, Tripp, Travis S > wrote: > I just re-read and I apologize for the hastily written email I previously > sent. I?ll try to salvage it with a bit of a revision below (please ignore > the previous email). 
> > On 12/11/14, 7:02 PM, "Tripp, Travis S" travis.tripp at hp.com>> wrote > (REVISED): > > >Tihomir, > > > >Your comments in the patch were very helpful for me to understand your > >concerns about the ease of customizing without requiring upstream > >changes. It also reminded me that I've also previously questioned the > >python middleman. > > > >However, here are a couple of bullet points for Devil's Advocate > >consideration. > > > > > > * Will we take on auto-discovery of API extensions in two spots > >(python for legacy and JS for new)? > > * The Horizon team will have to keep an even closer eye on every > >single project and be ready to react if there are changes to the API that > >break things. Right now in Glance, for example, they are working on some > >fixes to the v2 API (soon to become v2.3) that will allow them to > >deprecate v1 somewhat transparently to users of the client library. > > * The service API documentation almost always lags (although, helped > >by specs now) and the service team takes on the burden of exposing a > >programmatic way to access the API. This is tested and easily consumable > >via the python clients, which removes some guesswork from using the > >service. > > * This is going to be an incremental approach with legacy support > >requirements anyway. So, incorporating python side changes won't just go > >away. > > * Which approach would be better if we introduce a server side > >caching mechanism or a new source of data such as elastic search to > >improve performance? Would the client side code have to be changed > >dramatically to take advantage of those improvements or could it be done > >transparently on the server side if we own the exposed API? > > > >I'm not sure I fully understood your example about Cinder. Was it the > >cinder client that held up delivery of horizon support, the cinder API or > >both? If the API isn't in, then it would hold up delivery of the feature > >in any case.
There still would be timing pressures to react and build a > >new view that supports it. For customization, with Richard's approach new > >views could be supported by just dropping in a new REST API decorated > >module with the APIs you want, including direct pass through support if > >desired to new APIs. Downstream customizations / Upstream changes to > >views seem like a bit of a related, but different issue to me as > >long as there is an easy way to drop in new API support. > > > >Finally, regarding the client making two calls to do an update: > > > >>>Do we really need the lines: > > > >>> project = api.keystone.tenant_get(request, id) > >>> kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None) > > > >I agree that if you already have all the data it may be bad to have to do > >another call. I do think there is room for discussing the reasoning, > >though. > >As far as I can tell, they do this so that if you are updating an entity, > >you have to be very specific about the fields you are changing. I > >actually see this as potentially a protective measure against data > >loss and sometimes a very nice-to-have feature. It perhaps was intended > >to *help* guard against race conditions (no locking and no transactions > >with many users simultaneously accessing the data). > > > >Here's an example: Admin user Joe has a Domain open and stares at it for > >15 minutes while he updates just the description. Admin user Bob is asked > >to go ahead and enable it. He opens the record, edits it, and then saves > >it. Joe finished perfecting the description and saves it. They could in > >effect both edit the same domain independently. Last man in still wins if > >he updates the same fields, but if they update different fields then both > >of their changes will take effect without them stomping on each other.
Or > >maybe it is intended to encourage client users to compare their current > >and previous to see if they should issue a warning if the data changed > >between getting and updating the data. Or maybe like you said, it is just > >overhead API calls. > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Tihomir Trifonov -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkaminski at mirantis.com Mon Dec 15 11:30:38 2014 From: pkaminski at mirantis.com (Przemyslaw Kaminski) Date: Mon, 15 Dec 2014 12:30:38 +0100 Subject: [openstack-dev] [Fuel] Building Fuel plugins with UI part In-Reply-To: References: Message-ID: <548EC65E.6010608@mirantis.com> First of all, compiling of statics shouldn't be a required step. No one does this during development. For production-ready plugins, the compiled files should already be included in the GitHub repos and installation of plugin should just be a matter of downloading it. The API should then take care of informing the UI what plugins are installed. The npm install step is mostly one-time. The grunt build step for the plugin should basically just compile the staticfiles of the plugin and not the whole project. Besides with one file this is not extendable -- for N plugins we would build 2^N files with all possible combinations of including the plugins? :) P. On 12/15/2014 11:35 AM, Anton Zemlyanov wrote: > My experience with building Fuel plugins with UI part is following. 
To > build a ui-less plugin, it takes 3 seconds with these commands: > > git clone https://github.com/AlgoTrader/test-plugin.git > cd ./test-plugin > fpb --build ./ > > When UI is added, the build starts to look like this and takes many minutes: > > git clone https://github.com/AlgoTrader/test-plugin.git > git clone https://github.com/stackforge/fuel-web.git > cd ./fuel-web > git fetch https://review.openstack.org/stackforge/fuel-web > refs/changes/00/112600/24 && git checkout FETCH_HEAD > cd .. > mkdir -p ./fuel-web/nailgun/static/plugins/test-plugin > cp -R ./test-plugin/ui/* ./fuel-web/nailgun/static/plugins/test-plugin > cd ./fuel-web/nailgun > npm install && npm update > grunt build --static-dir=static_compressed > cd ../.. > rm -rf ./test-plugin/ui > mkdir ./test-plugin/ui > cp -R ./fuel-web/nailgun/static_compressed/plugins/test-plugin/* > ./test-plugin/ui > cd ./test-plugin > fpb --build ./ > > I think we need something not so complex and fragile > > Anton > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From tnapierala at mirantis.com Mon Dec 15 11:40:34 2014 From: tnapierala at mirantis.com (Tomasz Napierala) Date: Mon, 15 Dec 2014 12:40:34 +0100 Subject: [openstack-dev] [Fuel] Logs format on UI (High/6.0) In-Reply-To: References: Message-ID: Also +1 here. In huge envs we already have problems with parsing performance. In the long term we need to think about another log management solution. > On 12 Dec 2014, at 23:17, Igor Kalnitsky wrote: > > +1 to stop parsing logs on UI and show them "as is". I think it's more > than enough for all users. > > On Fri, Dec 12, 2014 at 8:35 PM, Dmitry Pyzhov wrote: >> We have a high priority bug in 6.0: >> https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.
>> >> Our openstack services send logs in a strange format with an extra copy of the >> timestamp and loglevel: >> ==> ./neutron-metadata-agent.log <== >> 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO >> neutron.common.config [-] Logging enabled! >> >> And we have a workaround for this. We hide the extra timestamp and use the second >> loglevel. >> >> In Juno some of the services have updated oslo.logging and now send logs in a >> simple format: >> ==> ./nova-api.log <== >> 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from >> /etc/nova/api-paste.ini >> >> In order to keep backward compatibility and deal with both formats we have a >> dirty workaround for our workaround: >> https://review.openstack.org/#/c/141450/ >> >> As I see, our best choice here is to throw away all workarounds and show >> logs on the UI as is. If a service sends duplicated data - we should show >> duplicated data. >> >> The long term fix here is to update oslo.logging in all packages. We can do it >> in 6.1. >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Tomasz 'Zen' Napierala Sr.
OpenStack Engineer tnapierala at mirantis.com From rakhmerov at mirantis.com Mon Dec 15 12:06:03 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Mon, 15 Dec 2014 18:06:03 +0600 Subject: [openstack-dev] [Mistral] Global Context and Execution Environment In-Reply-To: <38A58752-A625-414F-9416-62ACD983E607@stackstorm.com> References: <38A58752-A625-414F-9416-62ACD983E607@stackstorm.com> Message-ID: <9FA0291E-68A7-4408-A777-712D33E76851@mirantis.com> Hi, > It looked good and I began to write down the summary: > https://etherpad.openstack.org/p/mistral-global-context Thanks, I left my comments in there. > What problems are we trying to solve: > 1) reduce passing the same parameters over and over from parent to child > 2) "automatically" make a parameter accessible to most actions without typing it all over (like auth token) I agree that it's one of the angles from which we're looking at the problem. However, IMO, it's wider than just these two points. My perspective is that we are, first of all, discussing workflow variables' scoping (see my previous email in this thread). So I would rather focus on that. Let's list all the scopes that would make sense, their semantics and use cases where each of them could solve particular usability problems (I'm saying "usability problems" because it's really all about usability only). The reason I'm trying to discuss all this from this point of view is because I think we should try to be more formal on things like that. > Can #1 be solved by passing input to subworkflows automatically No, it can't. "input" is something that gets validated upon workflow execution (which happens now) and can't be arbitrarily passed all the way through because of that. If we introduce something like a "global" scope then we can always pass variables of this scope down to nested workflows using a separate mechanism (i.e. a different parameter of the start_workflow() method). > Can #2 be solved somehow else?
Default passing of arbitrary parameters to an action seems like breaking abstraction Yes, unless explicitly specified I would not give actions more than they need. Encapsulation has been proven to be a good thing. > Thoughts? Need to brainstorm further. Just once again, I appeal to talk about scopes, their semantics and use cases purely from the workflow language (DSL) and API standpoint because I'm afraid otherwise we could bury ourselves under a pile of minor technical details. Specification first, implementation second. Thanks Renat Akhmerov @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbryant at redhat.com Mon Dec 15 12:11:20 2014 From: rbryant at redhat.com (Russell Bryant) Date: Mon, 15 Dec 2014 07:11:20 -0500 Subject: [openstack-dev] [nova][cinder][infra] Ceph CI status update In-Reply-To: <5489CE60.1050701@anteaya.info> References: <20141211163643.GA10911@helmut> <5489CE60.1050701@anteaya.info> Message-ID: <548ECFE8.20106@redhat.com> On 12/11/2014 12:03 PM, Anita Kuno wrote: > On 12/11/2014 09:36 AM, Jon Bernard wrote: >> Heya, quick Ceph CI status update. Once the test_volume_boot_pattern >> was marked as skipped, only the revert_resize test was failing. I have >> submitted a patch to nova for this [1], and that yields an all-green >> ceph ci run [2]. So at the moment, and with my revert patch, we're in >> good shape. >> >> I will fix up that patch today so that it can be properly reviewed and >> hopefully merged. From there I'll submit a patch to infra to move the >> job to the check queue as non-voting, and we can go from there.
>> [1] https://review.openstack.org/#/c/139693/ >> [2] http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html >> >> Cheers, >> > Please add the name of your CI account to this table: > https://wiki.openstack.org/wiki/ThirdPartySystems > > As outlined in the third party CI requirements: > http://ci.openstack.org/third_party.html#requirements > > Please post system status updates to your individual CI wikipage that is > linked to this table. > > The mailing list is not the place to post status updates for third party > CI systems. > > If you have questions about any of the above, please attend one of the > two third party meetings and ask any and all questions until you are > satisfied. https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting This is not a third party CI system. This is a job running in OpenStack infra. It was in the experimental pipeline while bugs were being fixed. This report is about those bugs being fixed and Jon giving a heads up that he thinks it will be ready to move to the check queue very soon. -- Russell Bryant From mbirru at gmail.com Mon Dec 15 12:18:25 2014 From: mbirru at gmail.com (Murali B) Date: Mon, 15 Dec 2014 17:48:25 +0530 Subject: [openstack-dev] SRIOV-error Message-ID: Hi David, Please add as per Irena's suggestion FYI: refer to the below configuration http://pastebin.com/DGmW7ZEg Thanks -Murali -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Mon Dec 15 12:21:22 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Mon, 15 Dec 2014 18:21:22 +0600 Subject: [openstack-dev] [mistral] Team meeting - 12/15/2014 Message-ID: <47FA3F0C-C96B-4AB6-932B-1CAEDFB9A4EB@mirantis.com> Hi, I'm just reminding about another team meeting we have today at 16.00 UTC in the #openstack-meeting channel.
Agenda: Review action items Current status (progress, issues, roadblocks, further plans) Release "Kilo-1" progress "for-each" spec Discuss scoping (global, local etc.) Open discussion (see https://wiki.openstack.org/wiki/Meetings/MistralAgenda to the same agenda and the meeting archive). Renat Akhmerov @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From r-mibu at cq.jp.nec.com Mon Dec 15 12:34:21 2014 From: r-mibu at cq.jp.nec.com (Ryota Mibu) Date: Mon, 15 Dec 2014 12:34:21 +0000 Subject: [openstack-dev] [nova][neutron] bridge name generator for vif plugging In-Reply-To: References: <1FAA045BFF455543B55A30E48415AA0808C128FF@BPXM06GP.gisp.nec.co.jp> <20141215102833.GC11566@redhat.com> Message-ID: <1FAA045BFF455543B55A30E48415AA0808C12F3A@BPXM06GP.gisp.nec.co.jp> Ian and Daniel, Thanks for the comments. I have neutron spec here and planned to start from Neutron side to expose bridge name via port-binding API. https://review.openstack.org/#/c/131342/ Thanks, Ryota > -----Original Message----- > From: Ian Wells [mailto:ijw.ubuntu at cack.org.uk] > Sent: Monday, December 15, 2014 8:08 PM > To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage > questions) > Subject: Re: [openstack-dev] [nova][neutron] bridge name generator for vif > plugging > > Let me write a spec and see what you both think. I have a couple of things > we could address here and while it's a bit late it wouldn't be a dramatic > thing to fix and it might be acceptable. > > > On 15 December 2014 at 11:28, Daniel P. Berrange > wrote: > > On Mon, Dec 15, 2014 at 11:15:56AM +0100, Ian Wells wrote: > > Hey Ryota, > > > > A better way of describing it would be that the bridge name is, > at present, > > generated in *both* Nova *and* Neutron, and the VIF type semantics > define > > how it's calculated. 
I think you're right that in both cases > it would make > more sense for Neutron to tell Nova what the connection endpoint > was going > to be rather than have Nova calculate it independently. I'm not > sure that > that necessarily requires two blueprints, and you don't have a > spec there > at the moment, which is a problem because the Neutron spec deadline > is upon > us, but the idea's a good one. (You might get away without a > Neutron spec, > since the change to Neutron to add the information should be small > and > backward compatible, but that's not something I can make a judgement > on.) > > Yep, the fact that both Nova & Neutron calculate the bridge name > is a > historical accident. Originally Nova did it, because nova-network > was > the only solution. Then Neutron did it too, so it matched what Nova > was doing. Clearly if we had Neutron right from the start, then > it > would have been Neutron's responsibility to do this. Nothing in Nova > cares what the names are from a functional POV - it just needs to > be told what to use.
> > Regards, > Daniel > -- > |: http://berrange.com -o- > http://www.flickr.com/photos/dberrange/ :| > |: http://libvirt.org -o- > http://virt-manager.org :| > |: http://autobuild.org -o- > http://search.cpan.org/~danberr/ :| > |: http://entangle-photo.org -o- > http://live.gnome.org/gtk-vnc :| > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack- > dev > From visnusaran.murugan at hp.com Mon Dec 15 12:47:47 2014 From: visnusaran.murugan at hp.com (Murugan, Visnusaran) Date: Mon, 15 Dec 2014 12:47:47 +0000 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <548B8480.9010506@redhat.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> <548B8480.9010506@redhat.com> Message-ID: <4641310AFBEE10419D0A020273367C140CA3A305@G1W3645.americas.hpqcorp.net> Hi Zane, We have been going through this chain for quite some time now and we still feel a disconnect in our understanding. Can you put up an etherpad where we can understand your approach. For example, for storing resource dependencies, are you storing the name, version tuple or just the ID? If I am correct, you are updating all resources on update regardless of whether they changed, which will be inefficient if the stack contains a million resources.
We have similar questions regarding other areas in your implementation, which we believe will become clearer once we understand the outline of your implementation. It is difficult to get a hold on your approach just by looking at the code. Doc strings / an etherpad will help. About streams: yes, in a million-resource stack the data will be huge, but less than the template. Also this stream is stored only in IN_PROGRESS resources. The reason to have the entire dependency list is to reduce DB queries during a stack update. When you have a singular dependency on each resource, similar to your implementation, then we will end up loading dependencies one at a time and altering almost all resources' dependencies regardless of their change. Regarding a 2-template approach for delete, it is not actually 2 different templates. It's just that we have a delete stream to be taken up post-update. (Any post operation will be handled as an update.) This approach is true when Rollback==True. We can always fall back to the regular stream (non-delete stream) if Rollback==False. In our view we would like to have only one basic operation and that is UPDATE. 1. CREATE will be an update where the realized graph == Empty 2. UPDATE will be an update where the realized graph == Full/Partial realized (possibly with a delete stream as a post operation if Rollback==True) 3. DELETE will be just another update with an empty to_be_realized_graph. It would be great if we can freeze a stable approach by mid-week as the Christmas vacations are round the corner.
:) :) > -----Original Message----- > From: Zane Bitter [mailto:zbitter at redhat.com] > Sent: Saturday, December 13, 2014 5:43 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > showdown > > On 12/12/14 05:29, Murugan, Visnusaran wrote: > > > > > >> -----Original Message----- > >> From: Zane Bitter [mailto:zbitter at redhat.com] > >> Sent: Friday, December 12, 2014 6:37 AM > >> To: openstack-dev at lists.openstack.org > >> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > >> showdown > >> > >> On 11/12/14 08:26, Murugan, Visnusaran wrote: > >>>>> [Murugan, Visnusaran] > >>>>> In case of rollback where we have to cleanup earlier version of > >>>>> resources, > >>>> we could get the order from old template. We'd prefer not to have a > >>>> graph table. > >>>> > >>>> In theory you could get it by keeping old templates around. But > >>>> that means keeping a lot of templates, and it will be hard to keep > >>>> track of when you want to delete them. It also means that when > >>>> starting an update you'll need to load every existing previous > >>>> version of the template in order to calculate the dependencies. It > >>>> also leaves the dependencies in an ambiguous state when a resource > >>>> fails, and although that can be worked around it will be a giant pain to > implement. > >>>> > >>> > >>> Agree that looking to all templates for a delete is not good. But > >>> baring Complexity, we feel we could achieve it by way of having an > >>> update and a delete stream for a stack update operation. I will > >>> elaborate in detail in the etherpad sometime tomorrow :) > >>> > >>>> I agree that I'd prefer not to have a graph table. 
After trying a > >>>> couple of different things I decided to store the dependencies in > >>>> the Resource table, where we can read or write them virtually for > >>>> free because it turns out that we are always reading or updating > >>>> the Resource itself at exactly the same time anyway. > >>>> > >>> > >>> Not sure how this will work in an update scenario when a resource > >>> does not change and its dependencies do. > >> > >> We'll always update the requirements, even when the properties don't > >> change. > >> > > > > Can you elaborate a bit on rollback. > > I didn't do anything special to handle rollback. It's possible that we need to - > obviously the difference in the UpdateReplace + rollback case is that the > replaced resource is now the one we want to keep, and yet the > replaced_by/replaces dependency will force the newer (replacement) > resource to be checked for deletion first, which is an inversion of the usual > order. > > However, I tried to think of a scenario where that would cause problems and > I couldn't come up with one. Provided we know the actual, real-world > dependencies of each resource I don't think the ordering of those two > checks matters. > > In fact, I currently can't think of a case where the dependency order > between replacement and replaced resources matters at all. It matters in the > current Heat implementation because resources are artificially segmented > into the current and backup stacks, but with a holistic view of dependencies > that may well not be required. I tried taking that line out of the simulator > code and all the tests still passed. If anybody can think of a scenario in which > it would make a difference, I would be very interested to hear it. > > In any event though, it should be no problem to reverse the direction of that > one edge in these particular circumstances if it does turn out to be a > problem. > > > We had an approach with depends_on > > and needed_by columns in ResourceTable. 
But dropped it when we > figured > > out we had too many DB operations for Update. > > Yeah, I initially ran into this problem too - you have a bunch of nodes that are > waiting on the current node, and now you have to go look them all up in the > database to see what else they're waiting on in order to tell if they're ready > to be triggered. > > It turns out the answer is to distribute the writes but centralise the reads. So > at the start of the update, we read all of the Resources, obtain their > dependencies and build one central graph[1]. We than make that graph > available to each resource (either by passing it as a notification parameter, or > storing it somewhere central in the DB that they will all have to read anyway, > i.e. the Stack). But when we update a dependency we don't update the > central graph, we update the individual Resource so there's no global lock > required. > > [1] > https://github.com/zaneb/heat-convergence-prototype/blob/distributed- > graph/converge/stack.py#L166-L168 > > >>> Also taking care of deleting resources in order will be an issue. > >> > >> It works fine. > >> > >>> This implies that there will be different versions of a resource > >>> which will even complicate further. > >> > >> No it doesn't, other than the different versions we already have due > >> to UpdateReplace. > >> > >>>>>> This approach reduces DB queries by waiting for completion > >>>>>> notification > >>>> on a topic. The drawback I see is that delete stack stream will be > >>>> huge as it will have the entire graph. We can always dump such data > >>>> in ResourceLock.data Json and pass a simple flag > >>>> "load_stream_from_db" to converge RPC call as a workaround for > >>>> delete > >> operation. > >>>>> > >>>>> This seems to be essentially equivalent to my 'SyncPoint' > >>>>> proposal[1], with > >>>> the key difference that the data is stored in-memory in a Heat > >>>> engine rather than the database. 
> >>>>> > >>>>> I suspect it's probably a mistake to move it in-memory for similar > >>>>> reasons to the argument Clint made against synchronising the > >>>>> marking off > >>>> of dependencies in-memory. The database can handle that and the > >>>> problem of making the DB robust against failures of a single > >>>> machine has already been solved by someone else. If we do it > >>>> in-memory we are just creating a single point of failure for not > >>>> much gain. (I guess you could argue it doesn't matter, since if any > >>>> Heat engine dies during the traversal then we'll have to kick off > >>>> another one anyway, but it does limit our options if that changes > >>>> in the > >>>> future.) [Murugan, Visnusaran] Resource completes, removes itself > >>>> from resource_lock and notifies engine. Engine will acquire parent > >>>> lock and initiate parent only if all its children are satisfied (no > >>>> child entry in > >> resource_lock). > >>>> This will come in place of Aggregator. > >>>> > >>>> Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly > >>>> what I > >> did. > >>>> The three differences I can see are: > >>>> > >>>> 1) I think you are proposing to create all of the sync points at > >>>> the start of the traversal, rather than on an as-needed basis. This > >>>> is probably a good idea. I didn't consider it because of the way my > >>>> prototype evolved, but there's now no reason I can see not to do this. > >>>> If we could move the data to the Resource table itself then we > >>>> could even get it for free from an efficiency point of view. > >>> > >>> +1. But we will need engine_id to be stored somewhere for recovery > >> purpose (easy to be queried format). > >> > >> Yeah, so I'm starting to think you're right, maybe the/a Lock table > >> is the right thing to use there. 
We could probably do it within the > >> resource table using the same select-for-update to set the engine_id, > >> but I agree that we might be starting to jam too much into that one table. > >> > > > > yeah. Unrelated values in resource table. Upon resource completion we > > have to unset engine_id as well as compared to dropping a row from > resource lock. > > Both are good. Having engine_id in resource_table will reduce db > > operaions in half. We should go with just resource table along with > engine_id. > > OK > > >>> Sync points are created as-needed. Single resource is enough to > >>> restart > >> that entire stream. > >>> I think there is a disconnect in our understanding. I will detail it > >>> as well in > >> the etherpad. > >> > >> OK, that would be good. > >> > >>>> 2) You're using a single list from which items are removed, rather > >>>> than two lists (one static, and one to which items are added) that > >>>> get > >> compared. > >>>> Assuming (1) then this is probably a good idea too. > >>> > >>> Yeah. We have a single list per active stream which work by removing > >>> Complete/satisfied resources from it. > >> > >> I went to change this and then remembered why I did it this way: the > >> sync point is also storing data about the resources that are > >> triggering it. Part of this is the RefID and attributes, and we could > >> replace that by storing that data in the Resource itself and querying > >> it rather than having it passed in via the notification. But the > >> other part is the ID/key of those resources, which we _need_ to know > >> in order to update the requirements in case one of them has been > >> replaced and thus the graph doesn't reflect it yet. (Or, for that > >> matter, we need it to know where to go looking for the RefId and/or > >> attributes if they're in the > >> DB.) So we have to store some data, we can't just remove items from > >> the required list (although we could do that as well). 
> >> > >>>> 3) You're suggesting to notify the engine unconditionally and let > >>>> the engine decide if the list is empty. That's probably not a good > >>>> idea - not only does it require extra reads, it introduces a race > >>>> condition that you then have to solve (it can be solved, it's just more > work). > >>>> Since the update to remove a child from the list is atomic, it's > >>>> best to just trigger the engine only if the list is now empty. > >>>> > >>> > >>> No. Notify only if stream has something to be processed. The newer > >>> Approach based on db lock will be that the last resource will > >>> initiate its > >> parent. > >>> This is opposite to what our Aggregator model had suggested. > >> > >> OK, I think we're on the same page on this one then. > >> > > > > > > Yeah. > > > >>>>> It's not clear to me how the 'streams' differ in practical terms > >>>>> from just passing a serialisation of the Dependencies object, > >>>>> other than being incomprehensible to me ;). The current > >>>>> Dependencies implementation > >>>>> (1) is a very generic implementation of a DAG, (2) works and has > >>>>> plenty of > >>>> unit tests, (3) has, with I think one exception, a pretty > >>>> straightforward API, > >>>> (4) has a very simple serialisation, returned by the edges() > >>>> method, which can be passed back into the constructor to recreate > >>>> it, and (5) has an API that is to some extent relied upon by > >>>> resources, and so won't likely be removed outright in any event. > >>>>> Whatever code we need to handle dependencies ought to just build > >>>>> on > >>>> this existing implementation. > >>>>> [Murugan, Visnusaran] Our thought was to reduce payload size > >>>> (template/graph). Just planning for worst case scenario (million > >>>> resource > >>>> stack) We could always dump them in ResourceLock.data to be loaded > >>>> by Worker. > > With the latest updates to the Etherpad, I'm even more confused by streams > than I was before. 
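As a rough illustration of the edges() point above — a DAG's entire serialisation can be just its edge list, from which the object is rebuilt. This toy class only mimics the shape of Heat's Dependencies; it is not the real heat.engine.dependencies API:

```python
# Toy stand-in for Heat's Dependencies class (not the real API), showing
# that an edge list is both the serialisation and the constructor input.

class Dependencies(object):
    def __init__(self, edges):
        # each edge is (requirer, required); required is None for roots
        self._edges = list(edges)

    def edges(self):
        """The whole serialisation: just the list of edges."""
        return list(self._edges)

    def required_by(self, node):
        """Nodes that directly depend on the given node."""
        return [r for r, d in self._edges if d == node]

deps = Dependencies([("server", "port"), ("port", "net"), ("net", None)])
copy = Dependencies(deps.edges())   # round-trip through the serialisation
assert copy.edges() == deps.edges()
assert copy.required_by("net") == ["port"]
```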
> > One thing I never understood is why do you need to store the whole path to > reach each node in the graph? Surely you only need to know the nodes this > one is waiting on, the nodes waiting on this one and the ones those are > waiting on, not the entire history up to this point. The size of each stream is > theoretically up to O(n^2) and you're storing n of them - that's going to get > painful in this million-resource stack. > > >>>> If there's a smaller representation of a graph than a list of edges > >>>> then I don't know what it is. The proposed stream structure > >>>> certainly isn't it, unless you mean as an alternative to storing > >>>> the entire graph once for each resource. A better alternative is to > >>>> store it once centrally - in my current implementation it is passed > >>>> down through the trigger messages, but since only one traversal can > >>>> be in progress at a time it could just as easily be stored in the > >>>> Stack table of the > >> database at the slight cost of an extra write. > >>>> > >>> > >>> Agree that edge is the smallest representation of a graph. But it > >>> does not give us a complete picture without doing a DB lookup. Our > >>> assumption was to store streams in IN_PROGRESS resource_lock.data > >>> column. This could be in resource table instead. > >> > >> That's true, but I think in practice at any point where we need to > >> look at this we will always have already loaded the Stack from the DB > >> for some other reason, so we actually can get it for free. (See > >> detailed discussion in my reply to Anant.) > >> > > > > Aren't we planning to stop loading stack with all resource objects in > > future to Address scalability concerns we currently have? > > We plan on not loading all of the Resource objects each time we load the > Stack object, but I think we will always need to have loaded the Stack object > (for example, we'll need to check the current traversal ID, amongst other > reasons). 
So if the serialised dependency graph is stored in the Stack it will be > no big deal. > > >>>> I'm not opposed to doing that, BTW. In fact, I'm really interested > >>>> in your input on how that might help make recovery from failure > >>>> more robust. I know Anant mentioned that not storing enough data to > >>>> recover when a node dies was his big concern with my current > approach. > >>>> > >>> > >>> With streams, We feel recovery will be easier. All we need is a > >>> trigger :) > >>> > >>>> I can see that by both creating all the sync points at the start of > >>>> the traversal and storing the dependency graph in the database > >>>> instead of letting it flow through the RPC messages, we would be > >>>> able to resume a traversal where it left off, though I'm not sure > >>>> what that buys > >> us. > >>>> > >>>> And I guess what you're suggesting is that by having an explicit > >>>> lock with the engine ID specified, we can detect when a resource is > >>>> stuck in IN_PROGRESS due to an engine going down? That's actually > >>>> pretty > >> interesting. > >>>> > >>> > >>> Yeah :) > >>> > >>>>> Based on our call on Thursday, I think you're taking the idea of > >>>>> the Lock > >>>> table too literally. The point of referring to locks is that we can > >>>> use the same concepts as the Lock table relies on to do atomic > >>>> updates on a particular row of the database, and we can use those > >>>> atomic updates to prevent race conditions when implementing > >>>> SyncPoints/Aggregators/whatever you want to call them. It's not > >>>> that we'd actually use the Lock table itself, which implements a > >>>> mutex and therefore offers only a much slower and more stateful way > >>>> of doing what we want (lock mutex, change data, unlock mutex). > >>>>> [Murugan, Visnusaran] Are you suggesting something like a > >>>>> select-for- > >>>> update in resource table itself without having a lock table? > >>>> > >>>> Yes, that's exactly what I was suggesting. 
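A minimal sketch of the select-for-update idea being agreed on here, with an illustrative resource table (sqlite3 and these column names are assumptions, not Heat's real schema): a single conditional UPDATE acts as an atomic compare-and-swap, so no separate Lock table or mutex is needed, and recording the engine ID makes a stuck IN_PROGRESS row attributable to a dead engine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resource (id INTEGER PRIMARY KEY,"
             " status TEXT, engine_id TEXT)")
conn.execute("INSERT INTO resource VALUES (1, 'PENDING', NULL)")
conn.commit()

def try_acquire(conn, resource_id, engine_id):
    # Atomic compare-and-swap: succeeds only if the row is still PENDING,
    # so two engines can never both move the resource to IN_PROGRESS.
    cur = conn.execute(
        "UPDATE resource SET status = 'IN_PROGRESS', engine_id = ?"
        " WHERE id = ? AND status = 'PENDING'",
        (engine_id, resource_id))
    conn.commit()
    return cur.rowcount == 1

print(try_acquire(conn, 1, "engine-a"))  # True
print(try_acquire(conn, 1, "engine-b"))  # False: engine-a already holds it
```

The same conditional-UPDATE pattern works with SELECT ... FOR UPDATE on databases that support it; the point is that acquire/release is one round-trip instead of a stateful lock-mutate-unlock cycle.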
> >>> > >>> DB is always good for sync. But we need to be careful not to overdo it. > >> > >> Yeah, I see what you mean now, it's starting to _feel_ like there'd > >> be too many things mixed together in the Resource table. Are you > >> aware of some concrete harm that might cause though? What happens if > >> we overdo it? Is select-for-update on a huge row more expensive than > >> the whole overhead of manipulating the Lock? > >> > >> Just trying to figure out if intuition is leading me astray here. > >> > > > > You are right. There should be no difference apart from little bump In > > memory usage. But I think it should be fine. > > > >>> Will update etherpad by tomorrow. > >> > >> OK, thanks. > >> > >> cheers, > >> Zane. > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From akasatkin at mirantis.com Mon Dec 15 12:58:06 2014 From: akasatkin at mirantis.com (Aleksey Kasatkin) Date: Mon, 15 Dec 2014 14:58:06 +0200 Subject: [openstack-dev] [Fuel] Logs format on UI (High/6.0) In-Reply-To: References: Message-ID: +1 to show "as is". We don't get benefits from parsing now (like filtering by value of particular parameter, date/time intervals). It only adds complexity. Aleksey Kasatkin On Mon, Dec 15, 2014 at 1:40 PM, Tomasz Napierala wrote: > > Also +1 here. > In huge envs we already have problems with parsing performance. 
In long > long term we need to think about other log management solution > > > > On 12 Dec 2014, at 23:17, Igor Kalnitsky > wrote: > > > > +1 to stop parsing logs on UI and show them "as is". I think it's more > > than enough for all users. > > > > On Fri, Dec 12, 2014 at 8:35 PM, Dmitry Pyzhov > wrote: > >> We have a high priority bug in 6.0: > >> https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story. > >> > >> Our openstack services use to send logs in strange format with extra > copy of > >> timestamp and loglevel: > >> ==> ./neutron-metadata-agent.log <== > >> 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 > INFO > >> neutron.common.config [-] Logging enabled! > >> > >> And we have a workaround for this. We hide extra timestamp and use > second > >> loglevel. > >> > >> In Juno some of services have updated oslo.logging and now send logs in > >> simple format: > >> ==> ./nova-api.log <== > >> 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from > >> /etc/nova/api-paste.ini > >> > >> In order to keep backward compatibility and deal with both formats we > have a > >> dirty workaround for our workaround: > >> https://review.openstack.org/#/c/141450/ > >> > >> As I see, our best choice here is to throw away all workarounds and show > >> logs on UI as is. If service sends duplicated data - we should show > >> duplicated data. > >> > >> Long term fix here is to update oslo.logging in all packages. We can do > it > >> in 6.1. > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Tomasz 'Zen' Napierala > Sr. 
OpenStack Engineer > tnapierala at mirantis.com > > > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joehuang at huawei.com Mon Dec 15 13:03:04 2014 From: joehuang at huawei.com (joehuang) Date: Mon, 15 Dec 2014 13:03:04 +0000 Subject: [openstack-dev] RE: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward In-Reply-To: References: <185155243D60FE48B384BC144F7C15A58E13354A@SZXEML506-MBS.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD80CA@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541D07BF@szxema505-mbx.china.huawei.com> <891761EAFA335D44AD1FFDB9B4A8C063CD84D2@G9W0762.americas.hpqcorp.net> <5E7A3D1BF5FD014E86E5F971CF446EFF541EB993@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC452@szxema505-mbs.china.huawei.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541EC5D5@szxema505-mbs.china.huawei.com> <5488B96B.2070207@redhat.com> <5E7A3D1BF5FD014E86E5F971CF446EFF541FD171@szxema505-mbs.china.huawei.com> <5489F6CF.7000602@rackspace.com> <548B00B9.3090609@redhat.com> <1994EE26-22EF-4CBD-A7F1-58E8F83C27EF@gmail.com>, Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FEDC6@szxema505-mbs.china.huawei.com> Hello, Morgan, Keystone is a global service for the cascading OpenStack and the cascaded OpenStacks, just like it works for multi-region. PKI token/UUID token should be workable for multi-region first; if there are security issues, we need to fix them, no matter whether cascading is introduced or not. Using a global Keystone gives the project ID/user/role/domain/group a consistent view in the cloud.
The token used in the request to the cascading Nova/Cinder/Neutron will be transferred in the request to the cascaded Nova/Cinder/Neutron too. Best regards Chaoyi Huang ( joehuang ) ________________________________ From: Morgan Fainberg [morgan.fainberg at gmail.com] Sent: 13 December 2014 19:42 To: Henry; OpenStack Development Mailing List (not for usage questions) Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward On December 13, 2014 at 3:26:34 AM, Henry (henry4hly at gmail.com) wrote: Hi Morgan, A good question about keystone. In fact, keystone is naturally suitable for multi-region deployment. It has only a REST service interface, and PKI based tokens greatly reduce the central service workload. So, unlike other openstack services, it would not be set to cascade mode. I agree that Keystone is suitable for multi-region in some cases, but I am still concerned from a security standpoint. The cascade examples all assert a *global* tenant_id / project_id in a lot of comments/documentation. The answer you gave me doesn't quite address this issue, nor the issue of a disparate deployment having a wildly different role-set or security profile. A PKI token is not (as of today) possible to use with a Keystone (or OpenStack deployment) that it didn't come from. This is because Keystone needs to control the AuthZ for its local deployment (same design as the keystone-to-keystone federation). So I have two direct questions: * Is there something specific you expect to happen with the cascading that makes a project_id resolve to something globally unique, or am I mis-reading this as part of the design? * Does the cascade centralization just ask for Keystone tokens for each of the deployments, or is there something else being done? Essentially how does one work with a Nova from cloud XXX and cloud YYY from an authorization standpoint?
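As a toy illustration of the uniqueness concern behind these questions (plain dictionaries, not Keystone code): a project_id is only guaranteed unique per deployment, so any globally merged view has to namespace it by deployment or risk conflating two unrelated tenants.

```python
# Two independent deployments that happen to issue the same project_id.
cloud_a = {"p-123": {"roles": {"admin"}}}
cloud_b = {"p-123": {"roles": {"member"}}}   # same id, different tenant

# A naive global merge silently conflates the two tenants:
merged = {**cloud_a, **cloud_b}
assert len(merged) == 1          # one entry where there should be two

# Namespacing by deployment keeps them distinct:
namespaced = {("cloud-a", pid): v for pid, v in cloud_a.items()}
namespaced.update({("cloud-b", pid): v for pid, v in cloud_b.items()})
assert len(namespaced) == 2
print(namespaced[("cloud-a", "p-123")]["roles"])  # {'admin'}
```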
You don't need to answer these right away, but they are clarification points that need to be thought about as this design moves forward. There are a number of security / authorization questions I can expand on, but the above two are the really big ones to start with. As you scale up (or utilize deployments owned by different providers) it isn't always possible to replicate the Keystone data around. Cheers, Morgan Best regards Henry Sent from my iPad On 2014-12-13, at 3:12 PM, Morgan Fainberg > wrote: On Dec 12, 2014, at 10:30, Joe Gordon > wrote: On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant > wrote: On 12/11/2014 12:55 PM, Andrew Laski wrote: > Cells can handle a single API on top of globally distributed DCs. I > have spoken with a group that is doing exactly that. But it requires > that the API is a trusted part of the OpenStack deployments in those > distributed DCs. And the way the rest of the components fit into that scenario is far from clear to me. Do you consider this more of a "if you can make it work, good for you", or something we should aim to be more generally supported over time? Personally, I see the globally distributed OpenStack under a single API case much more complex, and worth considering out of scope for the short to medium term, at least. For me, this discussion boils down to ... 1) Do we consider these use cases in scope at all? 2) If we consider it in scope, is it enough of a priority to warrant a cross-OpenStack push in the near term to work on it? 3) If yes to #2, how would we do it? Cascading, or something built around cells? I haven't worried about #3 much, because I consider #2 or maybe even #1 to be a show stopper here. Agreed I agree with Russell as well. I also am curious on how identity will work in these cases. As it stands identity provides authoritative information only for the deployment it runs.
There is a lot of concern I have from a security standpoint when I start needing to address what the central API can do on the other providers. We have had this discussion a number of times in Keystone, specifically when designing the keystone-to-keystone identity federation, and we came to the conclusion that we needed to ensure that the keystone local to a given cloud is the only source of authoritative authz information. While it may, in some cases, accept authn from a source that is trusted, it still controls the local set of roles and grants. Second, we only guarantee that a tenant_id / project_id is unique within a single deployment of keystone (e.g. shared/replicated backends such as a percona cluster, which cannot be shared when crossing between differing IAAS deployers/providers). If there is ever a tenant_id conflict (in theory possible with ldap assignment or an unlucky random uuid generation) between installations, you end up with potentially granting access that should not exist to a given user. With that in mind, how does Keystone fit into this conversation? What is expected of identity? What would keystone need to actually support to make this a reality? I ask because I've only seen information on nova, glance, cinder, and ceilometer in the documentation. Based upon the above information I outlined, I would be concerned with an assumption that identity would "just work" without also being part of this conversation. Thanks, Morgan _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From liuxinguo at huawei.com Mon Dec 15 13:23:09 2014 From: liuxinguo at huawei.com (liuxinguo) Date: Mon, 15 Dec 2014 13:23:09 +0000 Subject: [openstack-dev] Have added mapping for huawei-storage-drivers Message-ID: Hi Mike, Sorry to delay so long. Now we have added a mapping in cinder/volume/manager.py and add a file named "test_huawei_drivers_compatibility.py" to test the compatibility, please check it, thanks. Best regards, liu -------------- next part -------------- An HTML attachment was scrubbed... URL: From azemlyanov at mirantis.com Mon Dec 15 13:26:54 2014 From: azemlyanov at mirantis.com (Anton Zemlyanov) Date: Mon, 15 Dec 2014 17:26:54 +0400 Subject: [openstack-dev] [Fuel] Building Fuel plugins with UI part In-Reply-To: <548EC65E.6010608@mirantis.com> References: <548EC65E.6010608@mirantis.com> Message-ID: The building of the UI plugin has several things I do not like 1) I need to extract the UI part of the plugin and copy/symlink it to fuel-web 2) I have to run grunt build on the whole fuel-web 3) I have to copy files back to original location to pack them 4) I cannot easily switch between development/production versions (no way to easily change entry point) The only way to install plugin is `fuel plugins --install`, no matter development or production, so even development plugins should be packed to tar.gz Anton On Mon, Dec 15, 2014 at 3:30 PM, Przemyslaw Kaminski wrote: > > First of all, compiling of statics shouldn't be a required step. No one > does this during development. > For production-ready plugins, the compiled files should already be > included in the GitHub repos and installation of plugin should just be a > matter of downloading it. The API should then take care of informing the UI > what plugins are installed. > The npm install step is mostly one-time. > The grunt build step for the plugin should basically just compile the > staticfiles of the plugin and not the whole project. 
Besides with one file > this is not extendable -- for N plugins we would build 2^N files with all > possible combinations of including the plugins? :) > > P. > > > On 12/15/2014 11:35 AM, Anton Zemlyanov wrote: > > My experience with building Fuel plugins with UI part is following. To > build a ui-less plugin, it takes 3 seconds and those commands: > > git clone https://github.com/AlgoTrader/test-plugin.git > cd ./test-plugin > fpb --build ./ > > When UI added, build start to look like this and takes many minutes: > > git clone https://github.com/AlgoTrader/test-plugin.git > git clone https://github.com/stackforge/fuel-web.git > cd ./fuel-web > git fetch https://review.openstack.org/stackforge/fuel-web > refs/changes/00/112600/24 && git checkout FETCH_HEAD > cd .. > mkdir -p ./fuel-web/nailgun/static/plugins/test-plugin > cp -R ./test-plugin/ui/* ./fuel-web/nailgun/static/plugins/test-plugin > cd ./fuel-web/nailgun > npm install && npm update > grunt build --static-dir=static_compressed > cd ../.. > rm -rf ./test-plugin/ui > mkdir ./test-plugin/ui > cp -R ./fuel-web/nailgun/static_compressed/plugins/test-plugin/* > ./test-plugin/ui > cd ./test-plugin > fpb --build ./ > > I think we need something not so complex and fragile > > Anton > > > > > _______________________________________________ > OpenStack-dev mailing listOpenStack-dev at lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pkaminski at mirantis.com Mon Dec 15 13:48:05 2014 From: pkaminski at mirantis.com (Przemyslaw Kaminski) Date: Mon, 15 Dec 2014 14:48:05 +0100 Subject: [openstack-dev] [Fuel] Building Fuel plugins with UI part In-Reply-To: References: <548EC65E.6010608@mirantis.com> Message-ID: <548EE695.8090609@mirantis.com> On 12/15/2014 02:26 PM, Anton Zemlyanov wrote: > The building of the UI plugin has several things I do not like > > 1) I need to extract the UI part of the plugin and copy/symlink it to > fuel-web This is required, the UI part should live somewhere in statics/js. This directory is served by nginx and symlinking/copying is I think the best way, far better than adding new directories to nginx configuration. > 2) I have to run grunt build on the whole fuel-web This shouldn't at all be necessary. > 3) I have to copy files back to original location to pack them Shouldn't be necessary. > 4) I cannot easily switch between development/production versions (no > way to easily change entry point) Development/production versions should only differ by serving raw/compressed files. The compressed files should be published by the plugin author. > > The only way to install plugin is `fuel plugins --install`, no matter > development or production, so even development plugins should be > packed to tar.gz The UI part should be working immediately after symlinking somewhere in the statics/js directory imho (and after API is aware of the new pugin but). P. > > Anton > > On Mon, Dec 15, 2014 at 3:30 PM, Przemyslaw Kaminski > > wrote: > > First of all, compiling of statics shouldn't be a required step. > No one does this during development. > For production-ready plugins, the compiled files should already be > included in the GitHub repos and installation of plugin should > just be a matter of downloading it. The API should then take care > of informing the UI what plugins are installed. > The npm install step is mostly one-time. 
> The grunt build step for the plugin should basically just compile > the staticfiles of the plugin and not the whole project. Besides > with one file this is not extendable -- for N plugins we would > build 2^N files with all possible combinations of including the > plugins? :) > > P. > > > On 12/15/2014 11:35 AM, Anton Zemlyanov wrote: >> My experience with building Fuel plugins with UI part is >> following. To build a ui-less plugin, it takes 3 seconds and >> those commands: >> >> git clone https://github.com/AlgoTrader/test-plugin.git >> cd ./test-plugin >> fpb --build ./ >> >> When UI added, build start to look like this and takes many minutes: >> >> git clone https://github.com/AlgoTrader/test-plugin.git >> git clone https://github.com/stackforge/fuel-web.git >> cd ./fuel-web >> git fetch https://review.openstack.org/stackforge/fuel-web >> refs/changes/00/112600/24 && git checkout FETCH_HEAD >> cd .. >> mkdir -p ./fuel-web/nailgun/static/plugins/test-plugin >> cp -R ./test-plugin/ui/* >> ./fuel-web/nailgun/static/plugins/test-plugin >> cd ./fuel-web/nailgun >> npm install && npm update >> grunt build --static-dir=static_compressed >> cd ../.. 
>> rm -rf ./test-plugin/ui >> mkdir ./test-plugin/ui >> cp -R ./fuel-web/nailgun/static_compressed/plugins/test-plugin/* >> ./test-plugin/ui >> cd ./test-plugin >> fpb --build ./ >> >> I think we need something not so complex and fragile >> >> Anton >> >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From drfish at us.ibm.com Mon Dec 15 12:37:02 2014 From: drfish at us.ibm.com (Douglas Fish) Date: Mon, 15 Dec 2014 06:37:02 -0600 Subject: [openstack-dev] #PERSONAL# : Facing problem in installing new python dependencies for Horizon- Pls help In-Reply-To: References: Message-ID: Swati Shukla1 wrote on 12/14/2014 11:29:19 PM: > From: Swati Shukla1 > To: openstack-dev at lists.openstack.org > Date: 12/14/2014 11:34 PM > Subject: [openstack-dev] #PERSONAL# : Facing problem in installing > new python dependencies for Horizon- Pls help > > Hi, > > I want to install 2 new modules in Horizon but have no clue how it > installs in its virtualenv. > > I mentioned pisa >= 3.0.33 and reportlab >= 2.5 in requirements.txt > file, ran ./unstack.sh and ./stack.sh, but still do not get these > installed in its virtualenv. > > As a result, when I do ./run_tests.sh, I get " ImportError: No > module named ho.pisa" > > Please suggest me if I am going wrong somewhere or how to proceed with this. > > Thanks in advance. 
> > Regards, > Swati Shukla > Tata Consultancy Services > Mailto: swati.shukla1 at tcs.com > Website: http://www.tcs.com > ____________________________________________ > Experience certainty. IT Services > Business Solutions > Consulting > ____________________________________________ > =====-----=====-----===== > Notice: The information contained in this e-mail > message and/or attachments to it may contain > confidential or privileged information. If you are > not the intended recipient, any dissemination, use, > review, distribution, printing or copying of the > information contained in this e-mail message > and/or attachments to it are strictly prohibited. If > you have received this communication in error, > please notify us by reply e-mail or telephone and > immediately and permanently delete the message > and any attachments. Thank you_______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev run_tests.sh --force will reinstall the virtual environment and will pick up the added modules From ihrachys at redhat.com Mon Dec 15 14:15:41 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 15 Dec 2014 15:15:41 +0100 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split Message-ID: <548EED0D.6020600@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 Hi all, the question arose recently in one of the reviews for the neutron-*aas repos to remove all oslo-incubator code from those repos since it's duplicated in the neutron main repo. (You can find the link to the review at the end of the email.) Brief history: the neutron repo was recently split into 4 pieces (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split resulted in each repository keeping its own copy of the neutron/openstack/common/...
tree (currently unused in all neutron-*aas repos that are still bound to modules from main repo). As a oslo liaison for the project, I wonder what's the best way to manage oslo-incubator files. We have several options: 1. just kill all the neutron/openstack/common/ trees from neutron-*aas repositories and continue using modules from main repo. 2. kill all duplicate modules from neutron-*aas repos and leave only those that are used in those repos but not in main repo. 3. fully duplicate all those modules in each of four repos that use them. I think option 1. is a straw man, since we should be able to introduce new oslo-incubator modules into neutron-*aas repos even if they are not used in main repo. Option 2. is good when it comes to synching non-breaking bug fixes (or security fixes) from oslo-incubator, in that it will require only one sync patch instead of e.g. four. At the same time there may be potential issues when synchronizing updates from oslo-incubator that would break API and hence require changes to each of the modules that use it. Since we don't support atomic merges for multiple projects in gate, we will need to be cautious about those updates, and we will still need to leave neutron-*aas repos broken for some time (though the time may be mitigated with care). Option 3. is vice versa - in theory, you get total decoupling, meaning no oslo-incubator updates in main repo are expected to break neutron-*aas repos, but bug fixing becomes a huge PITA. I would vote for option 2., for two reasons: - - most oslo-incubator syncs are non-breaking, and we may effectively apply care to updates that may result in potential breakage (f.e. being able to trigger an integrated run for each of neutron-*aas repos with the main sync patch, if there are any concerns). - - it will make oslo liaison life a lot easier. OK, I'm probably too selfish on that. ;) - - it will make stable maintainers life a lot easier. 
The main reason why stable maintainers and distributions like recent oslo graduation movement is that we don't need to track each bug fix we need in every project, and waste lots of cycles on it. Being able to fix a bug in one place only is *highly* anticipated. [OK, I'm quite selfish on that one too.] - - it's a delusion that there will be no neutron-main syncs that will break neutron-*aas repos ever. There can still be problems due to incompatibility between neutron main and neutron-*aas code resulted EXACTLY because multiple parts of the same process use different versions of the same module. That said, Doug Wiegley (lbaas core) seems to be in favour of option 3. due to lower coupling that is achieved in that way. I know that lbaas team had a bad experience due to tight coupling to neutron project in the past, so I appreciate their concerns. All in all, we should come up with some standard solution for both advanced services that are already split out, *and* upcoming vendor plugin shrinking initiative. 
The initial discussion is captured at: https://review.openstack.org/#/c/141427/ Thanks, /Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUju0NAAoJEC5aWaUY1u57n5YH/jA4l5DsLgRpw9gYsoSWVGvh apmJ4UlnAKhxzc787XImz1VA+ztSyIwAUdEdcfq3gkinP58q7o48oIXOGjFXaBNq 6qBePC1hflEqZ85Hm4/i5z51qutjW0dyi4y4C6FHgM5NsEkhbh0QIa/u8Hr4F1q6 tkr0kDbCbDkiZ8IX1l74VGWQ3QvCNeJkANUg79BqGq+qIVP3BeOHyWqRmpLZFQ6E QiUwhiYv5l4HekCEQN8PWisJoqnhbTNjvLBnLD82IitLd5vXnsXfSkxKhv9XeOg/ czLUCyr/nJg4aw8Qm0DTjnZxS+BBe5De0Ke4zm2AGePgFYcai8YQPtuOfSJDbXk= =D6Gn -----END PGP SIGNATURE----- From ihrachys at redhat.com Mon Dec 15 14:23:40 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 15 Dec 2014 15:23:40 +0100 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: <548EED0D.6020600@redhat.com> References: <548EED0D.6020600@redhat.com> Message-ID: <548EEEEC.9050107@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 15/12/14 15:15, Ihar Hrachyshka wrote: > - it's a delusion that there will be no neutron-main syncs that > will break neutron-*aas repos ever. OK, I've just decided to check whether my (non-native speaker) understanding of the meaning of the word 'delusion' is correct, and I need to admit that what I've found out in dictionaries is not what I really meant. :| I only meant that it's a wrong assumption. So in case anyone reads it as any kind of derogation, I'm very sorry for the bad wording. Please blame my bad English. And Richard Dawkins. 
/Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUju7sAAoJEC5aWaUY1u57XfIH/ituoB51dLLaLvryFumADvT/ dhkJyzylD/x0j++BS88KdNE9i7aiAFn2MQyvINxYV7THSsgl60YruV6xXj5X72aK EUd967OI77XuIheOP6iIC2ZoGa3ie8RGyMTxbTW5hEeDR8+mtYhQTmDZUWKtT15o jviGV9/kPftVU2U1UirwjpZY3DPee4D9CwIoJdTKvk93+NNNlMh1cAsWIR0ISJBC mm/X020SSl2wOy9d3lUge4QEi698NPYpkAAbAqL6YTkblXOFgfb7EBexGQoV388P TFb2StHyCD7hVpdx6ljLWR2mEQVavIJE9VUkvAzvoBMmlZnYFFx4TnUEu6Vu7VY= =Q+GY -----END PGP SIGNATURE----- From vkramskikh at mirantis.com Mon Dec 15 14:25:33 2014 From: vkramskikh at mirantis.com (Vitaly Kramskikh) Date: Mon, 15 Dec 2014 15:25:33 +0100 Subject: [openstack-dev] [Fuel] Building Fuel plugins with UI part In-Reply-To: <548EE695.8090609@mirantis.com> References: <548EC65E.6010608@mirantis.com> <548EE695.8090609@mirantis.com> Message-ID: Hi, The only thing I don't really like is that we need fuel-web code to build a plugin. But we can do nothing about it, as a typical UI plugin is by design tightly coupled with the core. If a plugin wants to reuse core libraries, utils, or controls, then it has to declare them as dependencies, and there would be a build error if these files weren't found by r.js. I created the first version of the spec where I described my vision of the build process. You can comment on it there. Some responses inline: 2014-12-15 14:48 GMT+01:00 Przemyslaw Kaminski : > > > On 12/15/2014 02:26 PM, Anton Zemlyanov wrote: > > The building of the UI plugin has several things I do not like > > 1) I need to extract the UI part of the plugin and copy/symlink it to > fuel-web > > > This is required, the UI part should live somewhere in statics/js. This > directory is served by nginx and symlinking/copying is I think the best > way, far better than adding new directories to nginx configuration. > > I think Anton is talking not about serving, but building the plugin.
Yes, to build the UI part of a plugin you need to extract its UI part and move/symlink it to static/plugins/ before you can run the build. > 2) I have to run grunt build on the whole fuel-web > > > This shouldn't at all be necessary. > > Yes, it is not necessary. Actually you don't have to if you add another task or option for grunt build to not build the main project. It can be achieved by removing these lines . > 3) I have to copy files back to original location to pack them > > > Shouldn't be necessary. > > 4) I cannot easily switch between development/production versions (no > way to easily change entry point) > > > Development/production versions should only differ by serving > raw/compressed files. The compressed files should be published by the > plugin author. > > On my development machine I use different ports of nginx to serve the original and compressed versions of the UI. Its configuration is pretty straightforward. > > The only way to install plugin is `fuel plugins --install`, no matter > development or production, so even development plugins should be packed to > tar.gz > > > The UI part should be working immediately after symlinking somewhere in > the statics/js directory imho (and after API is aware of the new pugin but). > > P. > > > > Anton > > On Mon, Dec 15, 2014 at 3:30 PM, Przemyslaw Kaminski < > pkaminski at mirantis.com> wrote: >> >> First of all, compiling of statics shouldn't be a required step. No one >> does this during development. >> For production-ready plugins, the compiled files should already be >> included in the GitHub repos and installation of plugin should just be a >> matter of downloading it. The API should then take care of informing the UI >> what plugins are installed. >> The npm install step is mostly one-time. >> The grunt build step for the plugin should basically just compile the >> staticfiles of the plugin and not the whole project.
>> Besides, with one file this is not extensible -- for N plugins we would
>> build 2^N files with all possible combinations of including the plugins? :)
>>
>> P.
>>
>> On 12/15/2014 11:35 AM, Anton Zemlyanov wrote:
>>
>> My experience with building Fuel plugins with a UI part is the following.
>> To build a UI-less plugin, it takes 3 seconds and these commands:
>>
>> git clone https://github.com/AlgoTrader/test-plugin.git
>> cd ./test-plugin
>> fpb --build ./
>>
>> When a UI is added, the build starts to look like this and takes many minutes:
>>
>> git clone https://github.com/AlgoTrader/test-plugin.git
>> git clone https://github.com/stackforge/fuel-web.git
>> cd ./fuel-web
>> git fetch https://review.openstack.org/stackforge/fuel-web refs/changes/00/112600/24 && git checkout FETCH_HEAD
>> cd ..
>> mkdir -p ./fuel-web/nailgun/static/plugins/test-plugin
>> cp -R ./test-plugin/ui/* ./fuel-web/nailgun/static/plugins/test-plugin
>> cd ./fuel-web/nailgun
>> npm install && npm update
>> grunt build --static-dir=static_compressed
>> cd ../..
>> rm -rf ./test-plugin/ui
>> mkdir ./test-plugin/ui
>> cp -R ./fuel-web/nailgun/static_compressed/plugins/test-plugin/* ./test-plugin/ui
>> cd ./test-plugin
>> fpb --build ./
>>
>> I think we need something not so complex and fragile.
>>
>> Anton
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Vitaly Kramskikh,
Software Engineer,
Mirantis, Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From anant.patil at hp.com Mon Dec 15 14:32:49 2014 From: anant.patil at hp.com (Anant Patil) Date: Mon, 15 Dec 2014 20:02:49 +0530 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <548A3E09.2040408@redhat.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <5489363B.2060008@hp.com> <548A3E09.2040408@redhat.com> Message-ID: <548EF111.6000801@hp.com> On 12-Dec-14 06:29, Zane Bitter wrote: > On 11/12/14 01:14, Anant Patil wrote: >> On 04-Dec-14 10:49, Zane Bitter wrote: >>> On 01/12/14 02:02, Anant Patil wrote: >>>> On GitHub:https://github.com/anantpatil/heat-convergence-poc >>> >>> I'm trying to review this code at the moment, and finding some stuff I >>> don't understand: >>> >>> https://github.com/anantpatil/heat-convergence-poc/blob/master/heat/engine/stack.py#L911-L916 >>> >>> This appears to loop through all of the resources *prior* to kicking off >>> any actual updates to check if the resource will change. This is >>> impossible to do in general, since a resource may obtain a property >>> value from an attribute of another resource and there is no way to know >>> whether an update to said other resource would cause a change in the >>> attribute value. >>> >>> In addition, no attempt to catch UpdateReplace is made. Although that >>> looks like a simple fix, I'm now worried about the level to which this >>> code has been tested. >>> >> We were working on new branch and as we discussed on Skype, we have >> handled all these cases. Please have a look at our current branch: >> https://github.com/anantpatil/heat-convergence-poc/tree/graph-version >> >> When a new resource is taken for convergence, its children are loaded >> and the resource definition is re-parsed. The frozen resource definition >> will have all the "get_attr" resolved. 
>>
>>> I'm also trying to wrap my head around how resources are cleaned up in
>>> dependency order. If I understand correctly, you store in the
>>> ResourceGraph table the dependencies between various resource names in
>>> the current template (presumably there could also be some left around
>>> from previous templates too?). For each resource name there may be a
>>> number of rows in the Resource table, each with an incrementing version.
>>> As far as I can tell though, there's nowhere that the dependency graph
>>> for _previous_ templates is persisted? So if the dependency order
>>> changes in the template we have no way of knowing the correct order to
>>> clean up in any more? (There's not even a mechanism to associate a
>>> resource version with a particular template, which might be one avenue
>>> by which to recover the dependencies.)
>>>
>>> I think this is an important case we need to be able to handle, so I
>>> added a scenario to my test framework to exercise it and discovered that
>>> my implementation was also buggy. Here's the fix:
>>> https://github.com/zaneb/heat-convergence-prototype/commit/786f367210ca0acf9eb22bea78fd9d51941b0e40
>>
>> Thanks for pointing this out, Zane. We too had a buggy implementation for
>> handling inverted dependencies. I had a hard look at our algorithm, where
>> we were continuously merging the edges from the new template into the edges
>> from previous updates. It was an optimized way of traversing the graph
>> in both forward and reverse order without missing any resources. But
>> when the dependencies are inverted, this wouldn't work.
>>
>> We have changed our algorithm. The changes in edges are noted down in the
>> DB; only the delta of edges from the previous template is calculated and
>> kept. At any given point of time, the graph table has all the edges from
>> the current template and the delta from previous templates. Each edge has
>> a template ID associated with it.
>
> The thing is, the cleanup dependencies aren't really about the template.
> The real resources really depend on other real resources. You can't
> delete a Volume before its VolumeAttachment, not because it says so in
> the template but because it will fail if you try. The template can give
> us a rough guide in advance to what those dependencies will be, but if
> that's all we keep then we are discarding information.
>
> There may be multiple versions of a resource corresponding to one
> template version. Even worse, the actual dependencies of a resource
> change on a smaller time scale than an entire stack update (this is the
> reason the current implementation updates the template one resource at a
> time as we go).
>
Absolutely! The edges from the template are kept only for reference
purposes. When we have a resource in the new template, its template ID will
also be updated to the current template. At any point of time, a realized
resource will be from the current template, even if it was found in previous
templates. The template ID "moves" for a resource if it is found.

> Given that our Resource entries in the DB are in 1:1 correspondence with
> actual resources (we create a new one whenever we need to replace the
> underlying resource), I found it makes the most conceptual and practical
> sense to store the requirements in the resource itself, and update them
> at the time they actually change in the real world (bonus: introduces no
> new locking issues and no extra DB writes). I settled on this after a
> legitimate attempt at trying other options, but they didn't work out:
> https://github.com/zaneb/heat-convergence-prototype/commit/a62958342e8583f74e2aca90f6239ad457ba984d
>
I am okay with the notion of the graph stored in the resource table.
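[Editor's note] The per-resource dependency storage being discussed can be sketched in a few lines (names are invented for illustration, this is not Heat code): each resource row keeps the set of resources it really requires, and a cleanup order falls out of those stored edges by checking a resource only after everything that requires it, including any replacement via an extra edge, has been checked.

```python
from collections import defaultdict

def cleanup_order(resources):
    """Return an order for checking resources for deletion.

    'resources' maps resource id -> set of ids it requires (the real,
    current dependencies stored on each row; an edge from each replaced
    resource to its replacement makes replacements get checked first).
    A resource is checked only once nothing that requires it remains.
    """
    requirers = defaultdict(set)
    for rid, reqs in resources.items():
        for dep in reqs:
            requirers[dep].add(rid)
    ready = [rid for rid in resources if not requirers[rid]]
    order = []
    while ready:
        rid = ready.pop()
        order.append(rid)
        for dep in resources[rid]:
            requirers[dep].discard(rid)
            if not requirers[dep]:
                ready.append(dep)
    return order
```

With the Volume/VolumeAttachment example above, the attachment (which requires the volume) is checked for deletion first, matching the real-world constraint.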
>> For resource clean up, we start from the
>> first template (the template which was completed and updates were made on
>> top of it; the empty template otherwise), and move towards the current
>> template in the order in which the updates were issued, and for each
>> template the graph (edges, if found for the template) is traversed in
>> reverse order and resources are cleaned up.
>
> I'm pretty sure this is backwards - you'll need to clean up newer
> resources first because they may reference resources from older
> templates. Also if you have a stubborn old resource that won't delete
> you don't want that to block cleanups of anything newer.
>
> You're also serialising some of the work unnecessarily because you've
> discarded the information about dependencies that cross template
> versions, forcing you to clean up only one template version at a time.
>
Since only the changed set is stored, it is going to be smaller in most of
the cases. Within each previous template, if there are any edges, they are
tried for clean-up in reverse order. If a resource is not updated but is
available in the new template, its template ID will be the current template
ID, as I mentioned above. Within a template the process is concurrent, but
across templates it will be serial. I am okay with this since, in the real
world, the change in dependencies may not be as much as we think.

>> The process ends up with the
>> current template being traversed in reverse order and resources being
>> cleaned up. All the update-replaced resources from the older templates
>> (older updates in concurrent updates) are cleaned up in the order in
>> which they are supposed to be.
>>
>> Resources are now tied to a template; they have a template_id instead of
>> a version. As we traverse the graph, we know which template we are working
>> on, and can take the relevant action on the resource.
>> >> For rollback, another update is issued with the last completed template >> (it is designed to have an empty template as last completed template by >> default). The current template being worked on becomes predecessor for >> the new incoming template. In case of rollback, the last completed >> template becomes incoming new template, the current becomes the new >> template's predecessor and the successor of last completed template will >> have no predecessor. All these changes are available in the >> 'graph-version' branch. (The branch name is a misnomer though!) >> >> I think it is simpler to think about stack and concurrent updates when >> we associate resources and edges with template, and stack with current >> template and its predecessors (if any). > > It doesn't seem simple to me because it's trying to reconstruct reality > from a lossy version of history. The simplest way to think about it, in > my opinion is this: > - When updating resources, respect their dependencies as given in the > template > - When checking resources to clean up, respect their actual, current > real-world dependencies, and check replacement resources before the > resources that they replaced. > - Don't check a resource for clean up until it has been updated to the > latest template. > >> I also think that we should decouple Resource from Stack. This is really >> a hindrance when workers work on individual resources. The resource >> should be abstracted enough from stack for the worker to work on the >> resource alone. The worker should load the required resource plug-in and >> start converging. > > I think that's a worthy goal, and it would be really nice if we could > load a Resource completely independently of its Stack, and I know this > has always been a design goal of yours (hence you're caching the > resource definition in the Resource rather than getting it from the > template). 
>
> That said, I am convinced it's an unachievable goal, and I believe we
> should give up on it.
>
> - We'll always need to load _some_ central thing (e.g. to find out if
> the current traversal is still the valid one), so it might as well be
> the Stack.
> - Existing plugin abominations like HARestarter expect a working Stack
> object to be provided so it can go hunting for other resources.
>
> I think the best we can do is try to make heat.engine.stack.Stack as
> lazy as possible so that it only does extra work when strictly required,
> and just accept that the stack will always be loaded from the database.
>
> I am also strongly in favour of treating the idea of caching the
> unresolved resource definition in the Resource table as a straight
> performance optimisation that is completely separate to the convergence
> work. It's going to be inevitably ugly because there is no
> template-format-independent way to serialise a resource definition
> (while resource definition objects themselves are designed to be
> inherently template-format-independent). Once phase 1 is complete we can
> decide whether it's worth it based on measuring the actual performance
> improvement.
>
The idea of keeping the resource definition in the DB was to decouple it
from the stack as much as possible. It is not a performance optimization.
When a worker job is received, the worker won't have to load the entire
template from the stack, get the realized definitions, etc. The worker
simply loads the resource, with its definition, and a minimal stack (a
lazily loaded stack), and starts working on it. This work is towards making
the stack lazily loaded. Like you said, the stack has to be minimal when it
is loaded. There is only a little information needed by the worker (like the
resource definition and requirers/requirements), which can be obtained
without loading the template and recalculating the graph again and again.
> (Note that we _already_ store the _resolved_ properties of the resource,
> which is what the observer will be comparing against for phase 2, so
> there should be no reason for the observer to need to load the stack.)
>
>> The README.rst is really helpful for bringing up the minimal devstack
>> and testing the PoC. It also has some notes on design.
>
[snip]

>> Zane, I have a few questions:
>> 1. Our current implementation is based on notifications from the worker so
>> that the engine can take up the next set of tasks. I don't see this in your
>> case. I think we should be doing this. It gels well with the observer
>> notification mechanism. When the observer comes, it would send a
>> converge notification. Both the provisioning of the stack and the
>> continuous observation happen with notifications (async message
>> passing). I see that the workers in your case pick up the parent when/if
>> it is done and schedule it or update the sync point.
>
> I'm not quite sure what you're asking here, so forgive me if I'm
> misunderstanding. What I think you're saying is that where my prototype
> propagates notifications thus:
>
>     worker -> worker
>            -> worker
>
> (where -> is an async message) you would prefer it to do:
>
>     worker -> engine -> worker
>                      -> worker
>
> Is that right?
>
> To me the distinction seems somewhat academic, given that we've decided
> that the engine and the worker will be the same process. I don't see a
> disadvantage to doing right away stuff that we know needs to be done
> right away. Obviously we should factor the code out tidily into a
> separate method where we can _also_ expose it as a notification that can
> be triggered by the continuous observer.
>
> You mentioned above that you thought the workers should not ever load
> the Stack, and I think that's probably the reason you favour this
> approach: the 'worker' would always load just the Resource and the
> 'engine' (even though they're really the same) would always load just
> the Stack, right?
>
> However, as I mentioned above, I think we'll want/have to load the Stack
> in the worker anyway, so eliminating the extra asynchronous call
> eliminates the performance penalty for having to do so.
>
When we started, we had the following idea:

         (provisions with converge API)           (Observe)
  Engine -----------------------------> Worker --------------> Observer
    ^                                     |  ^                    |
    |              (done)                 |  |    (converge)      |
    --------------------------------------    --------------------

The boundary has to be clearly demarcated, as done above. While provisioning
a stack, the Engine uses the converge facility, as does the observer when it
sees a need to converge a resource. Even though we have these logical things
in the same process, by not clearly demarcating the responsibilities of each
logical entity we will end up with the issues which we face when we mix the
responsibilities. As you mentioned, the worker should have a notification
API exposed which can be used not only by the Observer, but also by the
engine, to provision (CRUD) a stack resource. The engine decides which
resource is to be converged next when a resource is done. The observer is
only responsible for one resource and issues a converge request if it has to.

>> 2. The dependency graph travels everywhere. IMHO, we can keep the graph
>> in the DB and let the workers work on a resource, and the engine decide which
>> one is to be scheduled next by looking at the graph. There wouldn't be a
>> need for a lock here; in the engine, the DB transactions should take
>> care of concurrent DB updates. Our new PoC follows this model.
>
> I'm fine with keeping the graph in the DB instead of having it flow with
> the notifications.
>
>> 3. The request ID is passed down to check_*_complete. Would the check
>> method be interrupted if a new request arrives? IMHO, the check method
>> should not get interrupted. It should return back when the resource has
>> reached a concrete state, either failed or completed.
>
> I agree, it should not be interrupted.
>
> I've started to think of phase 1 and phase 2 like this:
>
> 1) Make locks more granular: stack-level lock becomes resource-level
> 2) Get rid of locks altogether
>
I thought that the DB transactions were enough for concurrency issues. The
resource's status is enough to know whether to trigger the update or not.
For resources currently in progress, the engine can wait for notifications
from them and trigger the update. With a completely distributed graph and
decision making, this is hard to achieve. If the graph is stored in the DB,
the decision to trigger an update is in one place (by looking at the graph
only), and hence locks are not required.

> So in phase 1 we'll lock the resources and, like you said, it will return
> back when it has reached a concrete state. In phase 2 we'll be able to
> just update the goal state for the resource and the observe/converge
> process will be able to automagically find the best way to that state
> regardless of whether it was in the middle of transitioning to another
> state or not. Or something. But that's for the future :)
>
I am thinking about not having any locks at all. The concurrency is handled
by DB transactions, and notifications give us a hint on deciding when and
whether to update the next set of resources. If a resource is currently in
progress and an update is issued on that resource, the graph API doesn't
return it as a ready-to-be-converged resource, as a previous version is in
progress. When it is done and the notification is received, the graph DB API
returns it as next to converge.

>> 4. A lot of synchronization issues which we faced in our PoC cannot be
>> encountered with the framework. How do we evaluate what happens when
>> synchronization issues are encountered (like the stack lock kind of issues
>> which we are replacing with DB transactions)?
>
> Right, yeah, this is obviously the big known limitation of the
> simulator. I don't have a better answer other than to Think Very Hard
> about it.
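[Editor's note] The "graph API returns the next resources to converge" idea above can be sketched as a single centralised read over the stored graph (the function and parameter names are assumptions for illustration, not Heat code):

```python
def ready_to_converge(edges, done, in_progress):
    """Compute which resources can be triggered next.

    'edges' maps resource -> set of resources it requires (the graph
    kept in the DB); 'done' and 'in_progress' reflect current status.
    A node is ready when all of its requirements are done and it is
    neither finished nor already running, so an in-progress resource
    is never handed out twice.
    """
    return {
        node for node, reqs in edges.items()
        if reqs <= done and node not in done and node not in in_progress
    }
```

For example, with a network -> subnet -> port chain, only the network is ready at first; the subnet becomes ready once the network's "done" notification lands.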
> > Designing software means solving for hundreds of constraints - too many > for a human to hold in their brain at the same time. The purpose of > prototyping is to fix enough of the responses to those constraints in a > concrete form to allow reasoning about the remaining ones to become > tractable. If you fix solutions for *all* of the constraints, then what > you've built is by definition not a prototype but the final product. > > One technique available to us is to encapsulate the parts of the > algorithm that are subject to synchronisation issues behind abstractions > that offer stronger guarantees. Then in order to have confidence in the > design we need only satisfy ourselves that we have analysed the > guarantees correctly and that a concrete implementation offering those > same guarantees is possible. For example, the SyncPoints are shown to > work under the assumption that they are not subject to race conditions, > and the SyncPoint code is small enough that we can easily see that it > can be implemented in an atomic fashion using the same DB primitives > already proven to work by StackLock. Therefore we can have a very high > confidence (but not proof) that the overall algorithm will work when > implemented in the final product. > > Having Thought Very Hard about it, I'm as confident as I can be that I'm > not relying on any synchronisation properties that can't be implemented > using select-for-update on a single database row. There will of course > be surprises at implementation time, but I hope that won't be one of > them and anticipate that any changes required to the plan will be > localised and not wide-ranging. > > (This is in contrast BTW to my centralised-graph branch, linked above, > where it became very obvious that it would require some sort of external > locking - so there is reason to think that this process can reveal > architectural problems related to synchronisation where they are present.) > > cheers, > Zane. 
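[Editor's note] The SyncPoint guarantee described above, an atomic decrement-and-check built on the same DB primitives already proven by StackLock, can be sketched with sqlite3, whose connection context manager wraps the two statements in one transaction. A real implementation would use SELECT ... FOR UPDATE on MySQL/PostgreSQL; the table and function names here are assumptions.

```python
import sqlite3

def make_sync_point(conn, key, pending):
    """Create a sync point row; 'pending' counts outstanding signals."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sync_point (key TEXT PRIMARY KEY, pending INTEGER)")
    with conn:
        conn.execute("INSERT INTO sync_point VALUES (?, ?)", (key, pending))

def signal(conn, key):
    """Mark one dependency of 'key' complete; True only for the last one.

    The decrement and the read happen in a single transaction, so
    exactly one caller observes the count reaching zero and therefore
    exactly one worker triggers the dependent resource.
    """
    with conn:
        conn.execute(
            "UPDATE sync_point SET pending = pending - 1 WHERE key = ?", (key,))
        (left,) = conn.execute(
            "SELECT pending FROM sync_point WHERE key = ?", (key,)).fetchone()
    return left == 0
```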
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From me at romcheg.me Mon Dec 15 15:11:42 2014
From: me at romcheg.me (Roman Prykhodchenko)
Date: Mon, 15 Dec 2014 16:11:42 +0100
Subject: [openstack-dev] [Fuel] Logs format on UI (High/6.0)
In-Reply-To:
References:
Message-ID:

Hi folks!

In most production environments I've seen, bare logs as they are shown now in
the Fuel web UI were pretty useless. If someone has an infrastructure that
consists of more than 5 servers and 5 services running on them, they are most
likely to use logstash, loggly or any other log management system. There are
options for forwarding these logs to a remote log server, and that's what is
likely to be used IRL. Therefore, for production environments, formatting
logs in the Fuel web UI or even showing them is a cool but pretty useless
feature. In addition to being useless in production environments, it also
creates additional load on the user interface.

However, I can see that developers actually use it for debugging or
troubleshooting, so my proposal is to introduce an option for disabling this
feature completely.

- romcheg

> On 15 Dec 2014, at 12:40, Tomasz Napierala wrote:
>
> Also +1 here.
> In huge envs we already have problems with parsing performance. In the long
> term we need to think about another log management solution.
>
>> On 12 Dec 2014, at 23:17, Igor Kalnitsky wrote:
>>
>> +1 to stop parsing logs on UI and show them "as is". I think it's more
>> than enough for all users.
>>
>> On Fri, Dec 12, 2014 at 8:35 PM, Dmitry Pyzhov wrote:
>>> We have a high priority bug in 6.0:
>>> https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.
>>>
>>> Our OpenStack services used to send logs in a strange format with an extra
>>> copy of the timestamp and log level:
>>> ==> ./neutron-metadata-agent.log <==
>>> 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO
>>> neutron.common.config [-] Logging enabled!
>>>
>>> And we have a workaround for this. We hide the extra timestamp and use the
>>> second log level.
>>>
>>> In Juno some of the services have updated oslo.logging and now send logs in
>>> a simple format:
>>> ==> ./nova-api.log <==
>>> 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from
>>> /etc/nova/api-paste.ini
>>>
>>> In order to keep backward compatibility and deal with both formats we have a
>>> dirty workaround for our workaround:
>>> https://review.openstack.org/#/c/141450/
>>>
>>> As I see it, our best choice here is to throw away all workarounds and show
>>> logs on the UI as is. If a service sends duplicated data, we should show
>>> duplicated data.
>>>
>>> The long-term fix here is to update oslo.logging in all packages. We can do
>>> it in 6.1.
>>>
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> --
> Tomasz 'Zen' Napierala
> Sr. OpenStack Engineer
> tnapierala at mirantis.com

-------------- next part --------------
A non-text attachment was scrubbed...
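[Editor's note] The difference between the two formats quoted above is mechanical enough that a parser can detect the old duplicated prefix and strip it, while passing new-style Juno lines through untouched. A rough sketch of the idea (an illustration, not the actual Fuel parsing code):

```python
import re

# Old-style line: syslog timestamp + level, then oslo's own copy of the
# timestamp, PID and level. New-style lines simply do not match.
DUPLICATED = re.compile(
    r"^(?P<ts>\S+) (?P<level>\w+): "
    r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ \d+ (?P<inner>[A-Z]+) "
)

def normalize(line):
    """Drop the redundant inner timestamp/level from old-format lines."""
    m = DUPLICATED.match(line)
    if not m:
        return line  # new (Juno) format: show as is
    return "%s %s: %s" % (m.group("ts"), m.group("inner").lower(), line[m.end():])
```

Applied to the neutron example above it keeps one timestamp and one log level; the nova example passes through unchanged.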
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From doug at doughellmann.com Mon Dec 15 15:13:59 2014
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 15 Dec 2014 10:13:59 -0500
Subject: [openstack-dev] [oslo] need help identifying missing fixtures and test APIs
Message-ID:

We talked about this last week at the Oslo meeting [1], but I also promised
to send an email for a broader audience.

We have recently had a couple of issues when we released a library where we
broke unit tests running in other projects. We test the source tree of the
libraries against the applications using the integration test suite, but we
do not run the unit tests. This isn't a new situation; we had similar
problems in icehouse and juno. We discussed setting up gate jobs to run the
consuming project's unit tests during Juno, but eventually dropped that idea
because of the server requirements needed to actually run all of the
required jobs. That means we still have a small risk of breaking things with
a release if we don't have an API test in place for something we change, or
if a test suite mocks out an implementation detail of a library instead of
mocking the public API.

As part of releasing each library, we have tried to create test APIs and
test fixtures that can be used to control the library's behavior within a
unit test suite in a well-known, testable, and supportable way. We need the
liaisons to help identify missing fixtures from existing and
not-yet-graduated libraries.

There are two main ways application test suites interact with Oslo libraries
that we want to address: using configuration options directly to control
library behavior, and mocking. Learning more about both will help us
understand how the library interacts with the test, and we can then either
design a fixture or test API for the library or modify the test (in cases
where it is mocking implementation details).
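[Editor's note] The "fixture instead of mocking internals" idea can be illustrated with a small stdlib-only sketch; the names here are invented for illustration, and a real Oslo fixture would subclass fixtures.Fixture and register its restore step with addCleanup():

```python
import contextlib

class FakeLibrary:
    """Stand-in for a library whose behaviour is driven by config options."""
    def __init__(self):
        self.conf = {"connection": "sqlite://", "retries": 3}

@contextlib.contextmanager
def config_fixture(lib, **overrides):
    """Override the library's options for a test, always restoring them.

    Tests drive behaviour through this published interface instead of
    patching private attributes that may change between releases.
    """
    saved = dict(lib.conf)
    lib.conf.update(overrides)
    try:
        yield lib
    finally:
        lib.conf.clear()
        lib.conf.update(saved)
```

The point is the contract: a test that goes through the fixture keeps working even if the library renames its internals, which is exactly the class of breakage described above.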
A change of this scale is a long-term project, but we need to start
gathering data now if we want to start writing new fixtures in the next
cycle. Please review your project's test suite and make notes about how it
uses mocks and configuration options, then add the information to the
etherpad [2]. We will talk about it again at the Oslo meeting in a few
weeks.

Thanks,
Doug

[1] http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-12-08-16.00.html
[2] https://etherpad.openstack.org/p/oslo-mocks-in-project-unit-tests

From anant.patil at hp.com Mon Dec 15 15:15:30 2014
From: anant.patil at hp.com (Anant Patil)
Date: Mon, 15 Dec 2014 20:45:30 +0530
Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown
In-Reply-To: <548B8480.9010506@redhat.com>
References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> <548B8480.9010506@redhat.com>
Message-ID: <548EFB12.2090303@hp.com>

On 13-Dec-14 05:42, Zane Bitter wrote:
> On 12/12/14 05:29, Murugan, Visnusaran wrote:
>>
>>> -----Original Message-----
>>> From: Zane Bitter [mailto:zbitter at redhat.com]
>>> Sent: Friday, December 12, 2014 6:37 AM
>>> To: openstack-dev at lists.openstack.org
>>> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
>>> showdown
>>>
>>> On 11/12/14 08:26, Murugan, Visnusaran wrote:
>>>>>> [Murugan, Visnusaran]
>>>>>> In case of rollback where we have to cleanup earlier version of
>>>>> resources,
>>>>> we could get the order from old template. We'd prefer not to have a
>>>>> graph table.
>>>>> >>>>> In theory you could get it by keeping old templates around. But that >>>>> means keeping a lot of templates, and it will be hard to keep track >>>>> of when you want to delete them. It also means that when starting an >>>>> update you'll need to load every existing previous version of the >>>>> template in order to calculate the dependencies. It also leaves the >>>>> dependencies in an ambiguous state when a resource fails, and >>>>> although that can be worked around it will be a giant pain to implement. >>>>> >>>> >>>> Agree that looking to all templates for a delete is not good. But >>>> baring Complexity, we feel we could achieve it by way of having an >>>> update and a delete stream for a stack update operation. I will >>>> elaborate in detail in the etherpad sometime tomorrow :) >>>> >>>>> I agree that I'd prefer not to have a graph table. After trying a >>>>> couple of different things I decided to store the dependencies in the >>>>> Resource table, where we can read or write them virtually for free >>>>> because it turns out that we are always reading or updating the >>>>> Resource itself at exactly the same time anyway. >>>>> >>>> >>>> Not sure how this will work in an update scenario when a resource does >>>> not change and its dependencies do. >>> >>> We'll always update the requirements, even when the properties don't >>> change. >>> >> >> Can you elaborate a bit on rollback. > > I didn't do anything special to handle rollback. It's possible that we > need to - obviously the difference in the UpdateReplace + rollback case > is that the replaced resource is now the one we want to keep, and yet > the replaced_by/replaces dependency will force the newer (replacement) > resource to be checked for deletion first, which is an inversion of the > usual order. > This is where the version is so handy! For UpdateReplaced ones, there is an older version to go back to. This version could just be template ID, as I mentioned in another e-mail. 
All resources are at the current template ID if they are found in the
current template, even if there is no need to update them. Otherwise, they
need to be cleaned up in the order given in the previous templates. I think
the template ID is used as a version, as far as I can see in Zane's PoC. If
the resource template key doesn't match the current template key, the
resource is deleted. The version is a misnomer here, but that field
(template id) is used as though we had versions of resources.

> However, I tried to think of a scenario where that would cause problems
> and I couldn't come up with one. Provided we know the actual, real-world
> dependencies of each resource I don't think the ordering of those two
> checks matters.
>
> In fact, I currently can't think of a case where the dependency order
> between replacement and replaced resources matters at all. It matters in
> the current Heat implementation because resources are artificially
> segmented into the current and backup stacks, but with a holistic view
> of dependencies that may well not be required. I tried taking that line
> out of the simulator code and all the tests still passed. If anybody can
> think of a scenario in which it would make a difference, I would be very
> interested to hear it.
>
> In any event though, it should be no problem to reverse the direction of
> that one edge in these particular circumstances if it does turn out to
> be a problem.
>
>> We had an approach with depends_on
>> and needed_by columns in ResourceTable. But dropped it when we figured out
>> we had too many DB operations for Update.
>
> Yeah, I initially ran into this problem too - you have a bunch of nodes
> that are waiting on the current node, and now you have to go look them
> all up in the database to see what else they're waiting on in order to
> tell if they're ready to be triggered.
>
> It turns out the answer is to distribute the writes but centralise the
> reads.
So at the start of the update, we read all of the Resources, > obtain their dependencies and build one central graph[1]. We then make > that graph available to each resource (either by passing it as a > notification parameter, or storing it somewhere central in the DB that > they will all have to read anyway, i.e. the Stack). But when we update a > dependency we don't update the central graph, we update the individual > Resource so there's no global lock required. > > [1] > https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/stack.py#L166-L168 > A centralized graph and centralized decision making will make the implementation far simpler than a distributed one. This looks academic, but the simplicity beats everything! When each worker has to decide, there needs to be a lock; DB transactions alone are not enough. In contrast, when the decision making is centralized, that particular critical section can be attempted within a transaction and re-attempted if needed. With the distributed approach, I see the following drawbacks:

1. Every time a resource is done, the peer resources (siblings) are checked to see if they are done and the parent is propagated. This happens for each resource.

2. The worker has to run through all the resources to see if the stack is done, to mark it as completed.

3. The decision to converge is made by each worker, resulting in a lot of contention. The centralized graph restricts the contention point to one place where we can use DB transactions. It is easier to maintain code where particular decisions are made in one place rather than at many places.

4. The complex part we are trying to solve is deciding what to do next when a resource is done. With a centralized graph, this is abstracted out to the DB API. The API will return the next set of nodes. A smart SQL query can reduce a lot of logic currently being coded in the worker/engine.

5. What would be the starting point for resource clean-up?
The clean-up has to start when all the resources are updated. With no centralized graph, the DB has to be searched for all the resources with no dependencies and with older versions (or having older template keys) and start removing them, and the search space for where to start with clean-up will be huge. With a centralized graph, this would be simpler, with SQL queries returning what needs to be done.

6. When an engine restarts, the search space for where to start will be huge. With a centralized graph, the abstracted API to get the next set of nodes makes the decision logic simpler.

I am convinced enough that it is simpler to assign to the engine the responsibility for what needs to be done next. No locks will be required, not even resource locks! It is simpler from an implementation, understanding and maintenance perspective. >>>> Also taking care of deleting resources in order will be an issue. >>> It works fine. >>> >>>> This implies that there will be different versions of a resource which >>>> will even complicate further. >>> >>> No it doesn't, other than the different versions we already have due to >>> UpdateReplace. >>> >>>>>>> This approach reduces DB queries by waiting for completion >>>>>>> notification >>>>> on a topic. The drawback I see is that delete stack stream will be >>>>> huge as it will have the entire graph. We can always dump such data >>>>> in ResourceLock.data Json and pass a simple flag >>>>> "load_stream_from_db" to converge RPC call as a workaround for delete >>> operation. >>>>>> >>>>>> This seems to be essentially equivalent to my 'SyncPoint' >>>>>> proposal[1], with >>>>> the key difference that the data is stored in-memory in a Heat engine >>>>> rather than the database. >>>>>> >>>>>> I suspect it's probably a mistake to move it in-memory for similar >>>>>> reasons to the argument Clint made against synchronising the marking >>>>>> off of dependencies in-memory.
The database can handle that and the >>>>> problem of making the DB robust against failures of a single machine >>>>> has already been solved by someone else. If we do it in-memory we are >>>>> just creating a single point of failure for not much gain. (I guess >>>>> you could argue it doesn't matter, since if any Heat engine dies >>>>> during the traversal then we'll have to kick off another one anyway, >>>>> but it does limit our options if that changes in the >>>>> future.) [Murugan, Visnusaran] Resource completes, removes itself >>>>> from resource_lock and notifies engine. Engine will acquire parent >>>>> lock and initiate parent only if all its children are satisfied (no child entry in >>> resource_lock). >>>>> This will come in place of Aggregator. >>>>> >>>>> Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly what I >>> did. >>>>> The three differences I can see are: >>>>> >>>>> 1) I think you are proposing to create all of the sync points at the >>>>> start of the traversal, rather than on an as-needed basis. This is >>>>> probably a good idea. I didn't consider it because of the way my >>>>> prototype evolved, but there's now no reason I can see not to do this. >>>>> If we could move the data to the Resource table itself then we could >>>>> even get it for free from an efficiency point of view. >>>> >>>> +1. But we will need engine_id to be stored somewhere for recovery >>> purpose (easy to be queried format). >>> >>> Yeah, so I'm starting to think you're right, maybe the/a Lock table is the right >>> thing to use there. We could probably do it within the resource table using >>> the same select-for-update to set the engine_id, but I agree that we might >>> be starting to jam too much into that one table. >>> >> >> yeah. Unrelated values in resource table. Upon resource completion we have to >> unset engine_id as well as compared to dropping a row from resource lock. >> Both are good. 
Having engine_id in resource_table will reduce db operations >> in half. We should go with just resource table along with engine_id. > > OK > >>>> Sync points are created as-needed. Single resource is enough to restart >>> that entire stream. >>>> I think there is a disconnect in our understanding. I will detail it as well in >>> the etherpad. >>> >>> OK, that would be good. >>> >>>>> 2) You're using a single list from which items are removed, rather >>>>> than two lists (one static, and one to which items are added) that get >>> compared. >>>>> Assuming (1) then this is probably a good idea too. >>>> >>>> Yeah. We have a single list per active stream which works by removing >>>> Complete/satisfied resources from it. >>> >>> I went to change this and then remembered why I did it this way: the sync >>> point is also storing data about the resources that are triggering it. Part of this >>> is the RefID and attributes, and we could replace that by storing that data in >>> the Resource itself and querying it rather than having it passed in via the >>> notification. But the other part is the ID/key of those resources, which we >>> _need_ to know in order to update the requirements in case one of them >>> has been replaced and thus the graph doesn't reflect it yet. (Or, for that >>> matter, we need it to know where to go looking for the RefId and/or >>> attributes if they're in the >>> DB.) So we have to store some data, we can't just remove items from the >>> required list (although we could do that as well). >>> >>>>> 3) You're suggesting to notify the engine unconditionally and let the >>>>> engine decide if the list is empty. That's probably not a good idea - >>>>> not only does it require extra reads, it introduces a race condition >>>>> that you then have to solve (it can be solved, it's just more work). >>>>> Since the update to remove a child from the list is atomic, it's best >>>>> to just trigger the engine only if the list is now empty. >>>>> >>>> >>>> No.
Notify only if stream has something to be processed. The newer >>>> Approach based on db lock will be that the last resource will initiate its >>> parent. >>>> This is opposite to what our Aggregator model had suggested. >>> >>> OK, I think we're on the same page on this one then. >>> >> >> >> Yeah. >> >>>>>> It's not clear to me how the 'streams' differ in practical terms >>>>>> from just passing a serialisation of the Dependencies object, other >>>>>> than being incomprehensible to me ;). The current Dependencies >>>>>> implementation >>>>>> (1) is a very generic implementation of a DAG, (2) works and has >>>>>> plenty of >>>>> unit tests, (3) has, with I think one exception, a pretty >>>>> straightforward API, >>>>> (4) has a very simple serialisation, returned by the edges() method, >>>>> which can be passed back into the constructor to recreate it, and (5) >>>>> has an API that is to some extent relied upon by resources, and so >>>>> won't likely be removed outright in any event. >>>>>> Whatever code we need to handle dependencies ought to just build on >>>>> this existing implementation. >>>>>> [Murugan, Visnusaran] Our thought was to reduce payload size >>>>> (template/graph). Just planning for worst case scenario (million >>>>> resource >>>>> stack) We could always dump them in ResourceLock.data to be loaded by >>>>> Worker. > > With the latest updates to the Etherpad, I'm even more confused by > streams than I was before. > > One thing I never understood is why do you need to store the whole path > to reach each node in the graph? Surely you only need to know the nodes > this one is waiting on, the nodes waiting on this one and the ones those > are waiting on, not the entire history up to this point. The size of > each stream is theoretically up to O(n^2) and you're storing n of them - > that's going to get painful in this million-resource stack. > >>>>> If there's a smaller representation of a graph than a list of edges >>>>> then I don't know what it is. 
The proposed stream structure certainly >>>>> isn't it, unless you mean as an alternative to storing the entire >>>>> graph once for each resource. A better alternative is to store it >>>>> once centrally - in my current implementation it is passed down >>>>> through the trigger messages, but since only one traversal can be in >>>>> progress at a time it could just as easily be stored in the Stack table of the >>> database at the slight cost of an extra write. >>>>> >>>> >>>> Agree that edge is the smallest representation of a graph. But it does >>>> not give us a complete picture without doing a DB lookup. Our >>>> assumption was to store streams in IN_PROGRESS resource_lock.data >>>> column. This could be in resource table instead. >>> >>> That's true, but I think in practice at any point where we need to look at this >>> we will always have already loaded the Stack from the DB for some other >>> reason, so we actually can get it for free. (See detailed discussion in my reply >>> to Anant.) >>> >> >> Aren't we planning to stop loading stack with all resource objects in future to >> Address scalability concerns we currently have? > > We plan on not loading all of the Resource objects each time we load the > Stack object, but I think we will always need to have loaded the Stack > object (for example, we'll need to check the current traversal ID, > amongst other reasons). So if the serialised dependency graph is stored > in the Stack it will be no big deal. > >>>>> I'm not opposed to doing that, BTW. In fact, I'm really interested in >>>>> your input on how that might help make recovery from failure more >>>>> robust. I know Anant mentioned that not storing enough data to >>>>> recover when a node dies was his big concern with my current approach. >>>>> >>>> >>>> With streams, We feel recovery will be easier. 
All we need is a >>>> trigger :) >>>> >>>>> I can see that by both creating all the sync points at the start of >>>>> the traversal and storing the dependency graph in the database >>>>> instead of letting it flow through the RPC messages, we would be able >>>>> to resume a traversal where it left off, though I'm not sure what that buys >>> us. >>>>> >>>>> And I guess what you're suggesting is that by having an explicit lock >>>>> with the engine ID specified, we can detect when a resource is stuck >>>>> in IN_PROGRESS due to an engine going down? That's actually pretty >>> interesting. >>>>> >>>> >>>> Yeah :) >>>> >>>>>> Based on our call on Thursday, I think you're taking the idea of the >>>>>> Lock >>>>> table too literally. The point of referring to locks is that we can >>>>> use the same concepts as the Lock table relies on to do atomic >>>>> updates on a particular row of the database, and we can use those >>>>> atomic updates to prevent race conditions when implementing >>>>> SyncPoints/Aggregators/whatever you want to call them. It's not that >>>>> we'd actually use the Lock table itself, which implements a mutex and >>>>> therefore offers only a much slower and more stateful way of doing >>>>> what we want (lock mutex, change data, unlock mutex). >>>>>> [Murugan, Visnusaran] Are you suggesting something like a >>>>>> select-for- >>>>> update in resource table itself without having a lock table? >>>>> >>>>> Yes, that's exactly what I was suggesting. >>>> >>>> DB is always good for sync. But we need to be careful not to overdo it. >>> >>> Yeah, I see what you mean now, it's starting to _feel_ like there'd be too >>> many things mixed together in the Resource table. Are you aware of some >>> concrete harm that might cause though? What happens if we overdo it? Is >>> select-for-update on a huge row more expensive than the whole overhead >>> of manipulating the Lock? >>> >>> Just trying to figure out if intuition is leading me astray here. 
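The select-for-update idea under discussion can be sketched without a separate Lock table as an atomic conditional update on the resource row itself. Table and column names below are invented for illustration; on MySQL/PostgreSQL the same effect is typically achieved with SELECT ... FOR UPDATE inside a transaction.

```python
import sqlite3

# Minimal sketch: an engine claims a resource row by setting engine_id
# only if the row is currently unclaimed. The conditional UPDATE is
# atomic, so exactly one engine wins a race without any mutex table.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resource (id INTEGER PRIMARY KEY, engine_id TEXT)")
conn.execute("INSERT INTO resource (id, engine_id) VALUES (1, NULL)")
conn.commit()

def try_claim(conn, resource_id, engine_id):
    cur = conn.execute(
        "UPDATE resource SET engine_id = ? "
        "WHERE id = ? AND engine_id IS NULL",
        (engine_id, resource_id))
    conn.commit()
    return cur.rowcount == 1  # True only for the engine that won the race

print(try_claim(conn, 1, "engine-A"))  # True  -- first engine claims the row
print(try_claim(conn, 1, "engine-B"))  # False -- row already claimed
```

Releasing the claim on completion is just the reverse unconditional UPDATE setting engine_id back to NULL, which also answers the stuck-IN_PROGRESS question: a row holding the engine_id of a dead engine is detectable.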
>>> >> >> You are right. There should be no difference apart from little bump >> In memory usage. But I think it should be fine. >> >>>> Will update etherpad by tomorrow. >>> >>> OK, thanks. >>> >>> cheers, >>> Zane. >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From ihrachys at redhat.com Mon Dec 15 15:21:04 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 15 Dec 2014 16:21:04 +0100 Subject: [openstack-dev] Do all OpenStack daemons support sd_notify? In-Reply-To: <548D4E35.6070101@debian.org> References: <548D4E35.6070101@debian.org> Message-ID: <548EFC60.2020505@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 14/12/14 09:45, Thomas Goirand wrote: > Hi, > > As I am slowing fixing all systemd issues for the daemons of > OpenStack in Debian (and hopefully, have this ready before the > freeze of Jessie), I was wondering what kind of Type= directive to > put on the systemd .service files. I have noticed that in Fedora, > there's Type=notify. So my question is: > > Do all OpenStack daemons, as a rule, support the DBus sd_notify > thing? Should I always use Type=notify for systemd .service files? > Can this be called a general rule with no exception? (I will talk about neutron only.) I guess Type=notify is supposed to be used with daemons that use Service class from oslo-incubator that provides systemd notification mechanism, or call to systemd.notify_once() otherwise. 
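For reference, the notification mechanism mentioned above boils down to writing "READY=1" to the datagram socket systemd advertises via NOTIFY_SOCKET. A rough sketch of what the oslo-incubator helper does (not its exact code):

```python
import os
import socket

# Sketch of an sd_notify-style readiness report. Only daemons that send
# this message can safely be declared Type=notify in their unit file.

def notify_ready():
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return  # not started by systemd with Type=notify; nothing to do
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract-namespace socket
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.sendto(b"READY=1", addr)
    finally:
        sock.close()

notify_ready()  # harmless no-op when NOTIFY_SOCKET is unset
```

A matching unit file would then declare `Type=notify`; for a daemon that never sends the message, `Type=simple` avoids systemd waiting forever on a readiness notification that never arrives.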
In terms of Neutron, neutron-server process is doing it, metadata agent also seems to do it, while OVS agent seems to not. So it really should depend on each service and the way it's implemented. You cannot just assume that every Neutron service reports back to systemd. In terms of Fedora, we have Type=notify for neutron-server service only. BTW now that more distributions are interested in shipping unit files for services, should we upstream them and ship the same thing in all interested distributions? > > Cheers, > > Thomas Goirand (zigo) > > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUjvxgAAoJEC5aWaUY1u57N1gH/RsYPqmGpyoZ8fe8CwXcnz+R Rvfo7FHpcEZ9+Idvr9qitoPhKtGjzwgJC27EIQ6NCvgZZT462f+/jYHlxW0dX5Cz Fm9Zg/Hv50ukDOC1nT3nfDKH8uMwuPMrQsfRuXTGKhwqsfgnFfExozydgVeC2XFw WB9B3tBblp+7PRzaGyN9Bpe3gQnHUm3lyXaziK+wLbf7NTROzATlVCZ4xpPWu/5C ArfzwXICp9Dk5Juy75mwYwh37gw26w0VWfvPzn2WjkSVHKymNVn9GRdflVOrV3fq wnhu08e/wup8XF1/eKkWUJyF+hEsN5E0kO2x5CvavvMS3HSTm3Viuhz5tKC6ZAs= =WiBi -----END PGP SIGNATURE----- From anteaya at anteaya.info Mon Dec 15 15:23:44 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Mon, 15 Dec 2014 10:23:44 -0500 Subject: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time In-Reply-To: <5487296C.5010406@anteaya.info> References: <5487296C.5010406@anteaya.info> Message-ID: <548EFD00.6030300@anteaya.info> On 12/09/2014 11:55 AM, Anita Kuno wrote: > On 12/09/2014 08:32 AM, Kurt Taylor wrote: >> All of the feedback so far has supported moving the existing IRC >> Third-party CI meeting to better fit a worldwide audience. >> >> The consensus is that we will have only 1 meeting per week at alternating >> times. 
You can see examples of other teams with alternating meeting times >> at: https://wiki.openstack.org/wiki/Meetings >> >> This way, one week we are good for one part of the world, the next week for >> the other. You will not need to attend both meetings, just the meeting time >> every other week that fits your schedule. >> >> Proposed times in UTC are being voted on here: >> https://www.google.com/moderator/#16/e=21b93c >> >> Please vote on the time that is best for you. I would like to finalize the >> new times this week. >> >> Thanks! >> Kurt Taylor (krtaylor) >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > Note that Kurt is welcome to do as he pleases with his own time. > > I will be having meetings in the irc channel for the times that I have > booked. > > Thanks, > Anita. > Just in case anyone remains confused, I am chairing third party meetings Mondays at 1500 UTC and Tuesdays at 0800 UTC in #openstack-meeting. There is a meeting currently in progress. This is a great time for people who don't understand requirements to show up and ask questions. Thank you, Anita. From anteaya at anteaya.info Mon Dec 15 15:46:44 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Mon, 15 Dec 2014 10:46:44 -0500 Subject: [openstack-dev] [nova][cinder][infra] Ceph CI status update In-Reply-To: <548ECFE8.20106@redhat.com> References: <20141211163643.GA10911@helmut> <5489CE60.1050701@anteaya.info> <548ECFE8.20106@redhat.com> Message-ID: <548F0264.50308@anteaya.info> On 12/15/2014 07:11 AM, Russell Bryant wrote: > On 12/11/2014 12:03 PM, Anita Kuno wrote: >> On 12/11/2014 09:36 AM, Jon Bernard wrote: >>> Heya, quick Ceph CI status update. Once the test_volume_boot_pattern >>> was marked as skipped, only the revert_resize test was failing. 
I have >>> submitted a patch to nova for this [1], and that yields an all green >>> ceph ci run [2]. So at the moment, and with my revert patch, we're in >>> good shape. >>> >>> I will fix up that patch today so that it can be properly reviewed and >>> hopefully merged. From there I'll submit a patch to infra to move the >>> job to the check queue as non-voting, and we can go from there. >>> >>> [1] https://review.openstack.org/#/c/139693/ >>> [2] http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html >>> >>> Cheers, >>> >> Please add the name of your CI account to this table: >> https://wiki.openstack.org/wiki/ThirdPartySystems >> >> As outlined in the third party CI requirements: >> http://ci.openstack.org/third_party.html#requirements >> >> Please post system status updates to your individual CI wikipage that is >> linked to this table. >> >> The mailing list is not the place to post status updates for third party >> CI systems. >> >> If you have questions about any of the above, please attend one of the >> two third party meetings and ask any and all questions until you are >> satisfied. https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting > > This is not a third party CI system. This is a job running in OpenStack > infra. It was in the experimental pipeline while bugs were being fixed. > This report is about those bugs being fixed and Jon giving a heads up > that he thinks it will be ready to move to the check queue very soon. > My mistake then, thank you for the explanation. Going forward can we avoid the use of CI in the subject line for emails about jobs running in infra and their status? Thank you, Anita. 
From fungi at yuggoth.org Mon Dec 15 15:54:15 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 15 Dec 2014 15:54:15 +0000 Subject: [openstack-dev] [nova] pylint failure in havana stable 2013.2.4 In-Reply-To: <1614252448.23708342.1418609647315.JavaMail.hzwangpan@corp.netease.com> References: <1614252448.23708342.1418609647315.JavaMail.hzwangpan@corp.netease.com> Message-ID: <20141215155415.GM2497@yuggoth.org> On 2014-12-15 10:14:04 +0800 (+0800), hzwangpan wrote: [...] > I downloaded the tar package from > Launchpad (https://launchpad.net/nova/ > havana/2013.2.4/+download/nova-2013.2.4.tar.gz), I checked the > nova/ exception.py, there is no 'Forbidden' class or other member > in it, so I guess this is a backport commit with a missing dependency; the > depended-on commit c75a15a48981628e77d4178476c121693a656814 should > be backported. OpenStack Havana reached end of support three months ago, and so there will be no more official fixes backported upstream: -- Jeremy Stanley From kurt.r.taylor at gmail.com Mon Dec 15 15:55:49 2014 From: kurt.r.taylor at gmail.com (Kurt Taylor) Date: Mon, 15 Dec 2014 09:55:49 -0600 Subject: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time In-Reply-To: <548EFD00.6030300@anteaya.info> References: <5487296C.5010406@anteaya.info> <548EFD00.6030300@anteaya.info> Message-ID: Anita, please, creating yet another meeting time without input from anyone just confuses the issue. The work group has agreed unanimously on alternating weekly meeting times, and is currently voting on the best time for everyone. ( https://www.google.com/moderator/#16/e=21b93c 14 voters so far, thanks everyone!) Once we finalize the voting, I was going to start up the new meeting times in the new year. Until then, we would stay at our normal time, Monday at 1800 UTC. I am still confused why you would not want to go with the consensus on this. And, thanks again for everything that you do for us!
Kurt Taylor (krtaylor) On Mon, Dec 15, 2014 at 9:23 AM, Anita Kuno wrote: > > On 12/09/2014 11:55 AM, Anita Kuno wrote: > > On 12/09/2014 08:32 AM, Kurt Taylor wrote: > >> All of the feedback so far has supported moving the existing IRC > >> Third-party CI meeting to better fit a worldwide audience. > >> > >> The consensus is that we will have only 1 meeting per week at > alternating > >> times. You can see examples of other teams with alternating meeting > times > >> at: https://wiki.openstack.org/wiki/Meetings > >> > >> This way, one week we are good for one part of the world, the next week > for > >> the other. You will not need to attend both meetings, just the meeting > time > >> every other week that fits your schedule. > >> > >> Proposed times in UTC are being voted on here: > >> https://www.google.com/moderator/#16/e=21b93c > >> > >> Please vote on the time that is best for you. I would like to finalize > the > >> new times this week. > >> > >> Thanks! > >> Kurt Taylor (krtaylor) > >> > >> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > Note that Kurt is welcome to do as he pleases with his own time. > > > > I will be having meetings in the irc channel for the times that I have > > booked. > > > > Thanks, > > Anita. > > > Just in case anyone remains confused, I am chairing third party meetings > Mondays at 1500 UTC and Tuesdays at 0800 UTC in #openstack-meeting. > There is a meeting currently in progress. > > This is a great time for people who don't understand requirements to > show up and ask questions. > > Thank you, > Anita. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From clint at fewbar.com Mon Dec 15 16:00:01 2014 From: clint at fewbar.com (Clint Byrum) Date: Mon, 15 Dec 2014 08:00:01 -0800 Subject: [openstack-dev] Do all OpenStack daemons support sd_notify? In-Reply-To: <548EFC60.2020505@redhat.com> References: <548D4E35.6070101@debian.org> <548EFC60.2020505@redhat.com> Message-ID: <1418658934-sup-597@fewbar.com> Excerpts from Ihar Hrachyshka's message of 2014-12-15 07:21:04 -0800: > Hash: SHA512 > > On 14/12/14 09:45, Thomas Goirand wrote: > > Hi, > > > > As I am slowing fixing all systemd issues for the daemons of > > OpenStack in Debian (and hopefully, have this ready before the > > freeze of Jessie), I was wondering what kind of Type= directive to > > put on the systemd .service files. I have noticed that in Fedora, > > there's Type=notify. So my question is: > > > > Do all OpenStack daemons, as a rule, support the DBus sd_notify > > thing? Should I always use Type=notify for systemd .service files? > > Can this be called a general rule with no exception? > > (I will talk about neutron only.) > > I guess Type=notify is supposed to be used with daemons that use > Service class from oslo-incubator that provides systemd notification > mechanism, or call to systemd.notify_once() otherwise. > > In terms of Neutron, neutron-server process is doing it, metadata > agent also seems to do it, while OVS agent seems to not. So it really > should depend on each service and the way it's implemented. You cannot > just assume that every Neutron service reports back to systemd. > > In terms of Fedora, we have Type=notify for neutron-server service only. > > BTW now that more distributions are interested in shipping unit files > for services, should we upstream them and ship the same thing in all > interested distributions? 
> Since we can expect the five currently implemented OS's in TripleO to all have systemd by default soon (Debian, Fedora, openSUSE, RHEL, Ubuntu), it would make a lot of sense for us to make the systemd unit files that TripleO generates set Type=notify wherever possible. So hopefully we can actually make such a guarantee upstream sometime in the not-so-distant future, especially since our CI will run two of the more distinct forks, Ubuntu and Fedora. From fungi at yuggoth.org Mon Dec 15 16:16:14 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 15 Dec 2014 16:16:14 +0000 Subject: [openstack-dev] [infra] Job failures related to Setuptools 8.0 release Message-ID: <20141215161614.GN2497@yuggoth.org> On Saturday, December 13, Setuptools 8.0 released implementing the new PEP 440[1] version specification and dropping support for a variety of previously somewhat-supported versioning syntaxes. This is impacting us in several ways... "Multiple range" expressions in requirements are no longer interpreted the way we were relying on. The fix is fairly straightforward, since the SQLAlchemy requirement in oslo.db (and corresponding line in global requirements) is the only current example. Fixes for this in multiple branches are in flight. [2][3][4] (This last one is merged to oslo.db's master branch but not released yet.) Arbitrary alphanumeric version subcomponents such as PBR's Git SHA suffixes now sort earlier than all major version numbers. The fix is still in progress[5][6], and resulted in a couple of brown-bag releases over the weekend. 0.10.1 generated PEP 440 compliant versions which ended up unparseable when included in requirements files, so 0.10.2 is a roll-back identical to 0.10. The "1.2.3.rc1" we were using for release candidates is now automatically normalized to "1.2.3c1" during sdist tarball and wheel generation, causing tag text to no longer match the resulting file names. 
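The version-comparison behaviours Jeremy describes can be illustrated with the modern `packaging` library, which implements PEP 440 (setuptools 8.0 vendored an early version of this logic; at the time it sorted non-compliant versions before all valid ones rather than rejecting them outright):

```python
from packaging.version import Version, InvalidVersion

# Release-candidate spellings are normalized, so a tag like 1.2.3.rc1
# no longer matches the generated file name verbatim:
print(str(Version("1.2.3.rc1")))                 # 1.2.3rc1
print(Version("1.2.3.rc1") < Version("1.2.3"))   # True: rc sorts before final

# Arbitrary alphanumeric suffixes such as a bare Git SHA are not valid
# PEP 440 versions at all:
try:
    Version("1.2.3.g1234abc")
except InvalidVersion:
    print("invalid")                             # invalid

# PBR-style development versions must instead use dev/local segments,
# and dev releases sort before the final release:
print(Version("1.2.3.dev4+g1234abc") < Version("1.2.3"))  # True
```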
This may simply require us to change our naming convention[7] for these sorts of tags in the future. In the interim we've pinned setuptools<8 in our infrastructure[8] to help keep things moving while these various solutions crystalize, but if you run into this problem locally check your version of setuptools and try an older one. [1] http://legacy.python.org/dev/peps/pep-0440/ [2] https://review.openstack.org/141584 [3] https://review.openstack.org/137583 [4] https://review.openstack.org/140948 [5] https://review.openstack.org/141666 [6] https://review.openstack.org/141667 [7] https://review.openstack.org/141831 [8] https://review.openstack.org/141577 -- Jeremy Stanley From ksamoray at vmware.com Mon Dec 15 16:20:29 2014 From: ksamoray at vmware.com (Kobi Samoray) Date: Mon, 15 Dec 2014 16:20:29 +0000 Subject: [openstack-dev] [neutron][lbaas][fwaas][oslo] Common code between VMWare neutron plugin and services plugins Message-ID: Hi, Some files in neutron are common infrastructure to the VMWare neutron L2/L3 plugin, and the services plugins. These files wrap VMWare NSX and provide a Python API to some NSX services. This code is common to: - the VMWare L2/L3 plugin, which after the split should be held outside of the openstack repo (e.g. stackforge) - the neutron-lbaas, neutron-fwaas repos, which will hold the VMWare services plugins With neutron split into multiple repos, in and out of openstack, we have the following options: 1. Duplicate the relevant code between the various repos - IMO a pretty bad choice for obvious reasons. 2. Keep the code in the VMWare L3/L4 plugin repo - which will add an import from the neutron-*aas repos to a repo which is outside of openstack. 3. Add these components to the oslo.vmware project: oslo.vmware contains, as of now, a wrapper to the vCenter API. The components in discussion wrap the NSX API, which is out of vCenter scope. Therefore it's not really a part of the oslo.vmware scope as it is defined today, but is still a wrapper layer to a VMWare product.
We could extend the oslo.vmware scope to include wrappers to VMWare products in general, and add the relevant components under oslo.vmware.network.nsx or similar. Thanks, Kobi From dougw at a10networks.com Mon Dec 15 16:22:59 2014 From: dougw at a10networks.com (Doug Wiegley) Date: Mon, 15 Dec 2014 16:22:59 +0000 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: <548EED0D.6020600@redhat.com> References: <548EED0D.6020600@redhat.com> Message-ID: Hi Ihar, I'm actually in favor of option 2, but it implies a few things about your time, and I wanted to chat with you before presuming. Maintenance cannot involve breaking changes. At this point, the co-gate will block it. Also, oslo graduation changes will have to be made in the services repos first, and then Neutron. Thanks, doug On 12/15/14, 6:15 AM, "Ihar Hrachyshka" wrote: >-----BEGIN PGP SIGNED MESSAGE----- >Hash: SHA512 > >Hi all, > >the question arose recently in one of reviews for neutron-*aas repos >to remove all oslo-incubator code from those repos since it's >duplicated in neutron main repo. (You can find the link to the review >at the end of the email.) > >Brief history: neutron repo was recently split into 4 pieces (main, >neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split resulted >in each repository keeping their own copy of >neutron/openstack/common/... tree (currently unused in all >neutron-*aas repos that are still bound to modules from main repo). > >As an oslo liaison for the project, I wonder what's the best way to >manage oslo-incubator files. We have several options: > >1. just kill all the neutron/openstack/common/ trees from neutron-*aas >repositories and continue using modules from main repo. > >2. kill all duplicate modules from neutron-*aas repos and leave only >those that are used in those repos but not in main repo. > >3. fully duplicate all those modules in each of four repos that use them. > >I think option 1.
is a straw man, since we should be able to introduce >new oslo-incubator modules into neutron-*aas repos even if they are >not used in main repo. > >Option 2. is good when it comes to synching non-breaking bug fixes (or >security fixes) from oslo-incubator, in that it will require only one >sync patch instead of e.g. four. At the same time there may be >potential issues when synchronizing updates from oslo-incubator that >would break API and hence require changes to each of the modules that >use it. Since we don't support atomic merges for multiple projects in >gate, we will need to be cautious about those updates, and we will >still need to leave neutron-*aas repos broken for some time (though >the time may be mitigated with care). > >Option 3. is vice versa - in theory, you get total decoupling, meaning >no oslo-incubator updates in main repo are expected to break >neutron-*aas repos, but bug fixing becomes a huge PITA. > >I would vote for option 2., for two reasons: >- - most oslo-incubator syncs are non-breaking, and we may effectively >apply care to updates that may result in potential breakage (f.e. >being able to trigger an integrated run for each of neutron-*aas repos >with the main sync patch, if there are any concerns). >- - it will make oslo liaison life a lot easier. OK, I'm probably too >selfish on that. ;) >- - it will make stable maintainers life a lot easier. The main reason >why stable maintainers and distributions like recent oslo graduation >movement is that we don't need to track each bug fix we need in every >project, and waste lots of cycles on it. Being able to fix a bug in >one place only is *highly* anticipated. [OK, I'm quite selfish on that >one too.] >- - it's a delusion that there will be no neutron-main syncs that will >break neutron-*aas repos ever. 
There can still be problems due to >incompatibility between neutron main and neutron-*aas code resulted >EXACTLY because multiple parts of the same process use different >versions of the same module. > >That said, Doug Wiegley (lbaas core) seems to be in favour of option >3. due to lower coupling that is achieved in that way. I know that >lbaas team had a bad experience due to tight coupling to neutron >project in the past, so I appreciate their concerns. > >All in all, we should come up with some standard solution for both >advanced services that are already split out, *and* upcoming vendor >plugin shrinking initiative. > >The initial discussion is captured at: >https://review.openstack.org/#/c/141427/ > >Thanks, >/Ihar >-----BEGIN PGP SIGNATURE----- >Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > >iQEcBAEBCgAGBQJUju0NAAoJEC5aWaUY1u57n5YH/jA4l5DsLgRpw9gYsoSWVGvh >apmJ4UlnAKhxzc787XImz1VA+ztSyIwAUdEdcfq3gkinP58q7o48oIXOGjFXaBNq >6qBePC1hflEqZ85Hm4/i5z51qutjW0dyi4y4C6FHgM5NsEkhbh0QIa/u8Hr4F1q6 >tkr0kDbCbDkiZ8IX1l74VGWQ3QvCNeJkANUg79BqGq+qIVP3BeOHyWqRmpLZFQ6E >QiUwhiYv5l4HekCEQN8PWisJoqnhbTNjvLBnLD82IitLd5vXnsXfSkxKhv9XeOg/ >czLUCyr/nJg4aw8Qm0DTjnZxS+BBe5De0Ke4zm2AGePgFYcai8YQPtuOfSJDbXk= >=D6Gn >-----END PGP SIGNATURE----- From rbryant at redhat.com Mon Dec 15 16:23:02 2014 From: rbryant at redhat.com (Russell Bryant) Date: Mon, 15 Dec 2014 11:23:02 -0500 Subject: [openstack-dev] [neutron][lbaas][fwaas][oslo] Common code between VMWare neutron plugin and services plugins In-Reply-To: References: Message-ID: <548F0AE6.7010806@redhat.com> On 12/15/2014 11:20 AM, Kobi Samoray wrote: > 3. Add these components to oslo.vmware project: oslo.vmware contains, as of now, a wrapper to vCenter API. The components in discussion wrap NSX API, which is out of vCenter scope. Therefore it?s not really a part of oslo.vmware scope as it is defined today, but is still a wrapper layer to a VMWare product. 
> We could extend the oslo.vmware scope to include wrappers to VMWare products, in general, and add the relevant components under oslo.vmware.network.nsx or similar. This option sounds best to me, unless the NSX support brings in some additional dependencies to oslo.vmware that warrant keeping it separate from the existing oslo.vmware. -- Russell Bryant From anteaya at anteaya.info Mon Dec 15 16:24:20 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Mon, 15 Dec 2014 11:24:20 -0500 Subject: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time In-Reply-To: References: <5487296C.5010406@anteaya.info> <548EFD00.6030300@anteaya.info> Message-ID: <548F0B34.1010404@anteaya.info> On 12/15/2014 10:55 AM, Kurt Taylor wrote: > Anita, please, creating yet another meeting time without input from anyone > just confuses the issue. When I ask people to attend meetings to reduce noise on the mailing list, there had better be some meetings. I am grateful for the time you have spent chairing, thank you. It gave me a huge break and allowed me to focus on other things (like reviews) that I had to neglect due to the amount of time third party was taking from my life. I need there to be meetings to answer questions for people and will be chairing meetings on the dates and times I have specified, like I said that I would do. Thank you, Anita. > > The work group has agreed unanimously on alternating weekly meeting times, > and are currently voting on the best for everyone. ( > https://www.google.com/moderator/#16/e=21b93c 14 voters so far, thanks > everyone!) Once we finalize the voting, I was going to start up the new > meeting times in the new year. Until then, we would stay at our normal > time, Monday at 1800 UTC. > > I am still confused why you would not want to go with the consensus on this. > > And, thanks again for everything that you do for us! 
> Kurt Taylor (krtaylor) > > > On Mon, Dec 15, 2014 at 9:23 AM, Anita Kuno wrote: >> >> On 12/09/2014 11:55 AM, Anita Kuno wrote: >>> On 12/09/2014 08:32 AM, Kurt Taylor wrote: >>>> All of the feedback so far has supported moving the existing IRC >>>> Third-party CI meeting to better fit a worldwide audience. >>>> >>>> The consensus is that we will have only 1 meeting per week at >> alternating >>>> times. You can see examples of other teams with alternating meeting >> times >>>> at: https://wiki.openstack.org/wiki/Meetings >>>> >>>> This way, one week we are good for one part of the world, the next week >> for >>>> the other. You will not need to attend both meetings, just the meeting >> time >>>> every other week that fits your schedule. >>>> >>>> Proposed times in UTC are being voted on here: >>>> https://www.google.com/moderator/#16/e=21b93c >>>> >>>> Please vote on the time that is best for you. I would like to finalize >> the >>>> new times this week. >>>> >>>> Thanks! >>>> Kurt Taylor (krtaylor) >>>> >>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> Note that Kurt is welcome to do as he pleases with his own time. >>> >>> I will be having meetings in the irc channel for the times that I have >>> booked. >>> >>> Thanks, >>> Anita. >>> >> Just in case anyone remains confused, I am chairing third party meetings >> Mondays at 1500 UTC and Tuesdays at 0800 UTC in #openstack-meeting. >> There is a meeting currently in progress. >> >> This is a great time for people who don't understand requirements to >> show up and ask questions. >> >> Thank you, >> Anita. 
>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dougw at a10networks.com Mon Dec 15 16:25:36 2014 From: dougw at a10networks.com (Doug Wiegley) Date: Mon, 15 Dec 2014 16:25:36 +0000 Subject: [openstack-dev] [neutron][lbaas][fwaas][oslo] Common code between VMWare neutron plugin and services plugins Message-ID: On 12/15/14, 8:20 AM, "Kobi Samoray" wrote: >Hi, >Some files in neutron are common infrastructure to the VMWare neutron >L2/L3 plugin, and the services plugins. >These files wrap VMWare NSX and provide a python API to some NSX services. > >This code is common to: >- VMWare L2/L3 plugin, which after the split should be held outside of >openstack repo (e.g stackforge) >- neutron-lbaas, neutron-fwaas repos, which will hold the VMWare services >plugins > >With neutron split into multiple repos, in and out of openstack, we have >the following options: >1. Duplicate the relevant code between the various repos - IMO a pretty >bad choice for obvious reasons. Yeah, yuck. > >2. Keep the code in the VMWare L3/L4 plugin repo - which will add an >import from the neutron-*aas repos to a repo which is outside of >openstack. Importing code from elsewhere, which is not in the requirements file, is done in a few places for vendor libraries. As long as the mainline code doesn't require it, and unit tests similarly can run without that import being present, I don't see a big problem with it. Doug > >3. Add these components to oslo.vmware project: oslo.vmware contains, as >of now, a wrapper to vCenter API. The components in discussion wrap NSX >API, which is out of vCenter scope. 
Therefore it's not really a part of >oslo.vmware scope as it is defined today, but is still a wrapper layer to >a VMWare product. >We could extend the oslo.vmware scope to include wrappers to VMWare >products, in general, and add the relevant components under >oslo.vmware.network.nsx or similar. > >Thanks, >Kobi >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ksamoray at vmware.com Mon Dec 15 16:46:20 2014 From: ksamoray at vmware.com (Kobi Samoray) Date: Mon, 15 Dec 2014 16:46:20 +0000 Subject: [openstack-dev] [neutron][lbaas][fwaas][oslo] Common code between VMWare neutron plugin and services plugins In-Reply-To: <548F0AE6.7010806@redhat.com> References: <548F0AE6.7010806@redhat.com> Message-ID: It shouldn't drag in any additional dependencies - it's a REST API wrapper, so nothing beyond XML/JSON/HTTP/threads should be used in these components. > On Dec 15, 2014, at 18:23, Russell Bryant wrote: > > On 12/15/2014 11:20 AM, Kobi Samoray wrote: >> 3. Add these components to oslo.vmware project: oslo.vmware contains, as of now, a wrapper to vCenter API. The components in discussion wrap NSX API, which is out of vCenter scope. Therefore it's not really a part of oslo.vmware scope as it is defined today, but is still a wrapper layer to a VMWare product. >> We could extend the oslo.vmware scope to include wrappers to VMWare products, in general, and add the relevant components under oslo.vmware.network.nsx or similar. > > This option sounds best to me, unless the NSX support brings in some > additional dependencies to oslo.vmware that warrant keeping it separate > from the existing oslo.vmware. 
> > -- > Russell Bryant > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From rgerganov at vmware.com Mon Dec 15 16:46:45 2014 From: rgerganov at vmware.com (Radoslav Gerganov) Date: Mon, 15 Dec 2014 18:46:45 +0200 Subject: [openstack-dev] [nova] Kilo specs review day In-Reply-To: <874msxrxnc.fsf@metaswitch.com> References: <5489B879.7030006@rackspace.com> <874msxrxnc.fsf@metaswitch.com> Message-ID: <548F1075.60507@vmware.com> On 12/15/2014 12:54 PM, Neil Jerram wrote: > My Nova spec (https://review.openstack.org/#/c/130732/) does not appear > on this dashboard, even though I believe it's in good standing and - I > hope - close to approval. Do you know why - does it mean that I've set > some metadata field somewhere wrongly? > The dashboard doesn't show your own specs. You have to remove "+NOT+owner%3Aself" from the URL to see them. From doug at doughellmann.com Mon Dec 15 16:53:07 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 15 Dec 2014 11:53:07 -0500 Subject: [openstack-dev] oslo.db 1.3.0 released Message-ID: The Oslo team is pleased to announce the release of oslo.db 1.3.0: oslo.db library This release is primarily meant to update the SQLAlchemy dependency to resolve the issue with the new version of setuptools changing how it evaluates version range specifications. 
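To make the setuptools issue concrete (an illustration, not part of the release notes): the old oslo.db requirement expressed "either the 0.8 series or the 0.9 series" as one comma-separated specifier, which relied on pre-PEP-440 setuptools behavior. Under the strict PEP 440 semantics that setuptools 8 adopted, the commas form a conjunction that no version can satisfy, which is why the requirement was collapsed to a single range (see the requirements diff further down). This can be reproduced with the `packaging` library, which implements PEP 440:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# The double-range specifier oslo.db used before this release, and the
# collapsed single range it moved to.
old = SpecifierSet(">=0.8.4,<=0.8.99,>=0.9.7,<=0.9.99")
new = SpecifierSet(">=0.9.7,<=0.9.99")

# Under PEP 440, commas mean AND: the old specifier demands <=0.8.99 and
# >=0.9.7 at the same time, so nothing matches.
candidates = ["0.8.4", "0.8.99", "0.9.7", "0.9.8", "0.9.99"]
print([v for v in candidates if Version(v) in old])  # []
print([v for v in candidates if Version(v) in new])  # ['0.9.7', '0.9.8', '0.9.99']
```

The same evaluation is what a setuptools-8 install performs, so the "pinned setuptools<8" workaround mentioned at the top of this digest and this requirement collapse are two fixes for the same underlying change.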
For more details, please see the git log history below and http://launchpad.net/oslo.db/+milestone/1.3.0 Please report issues through launchpad: http://bugs.launchpad.net/oslo.db ---------------------------------------- Changes in openstack/oslo.db 1.2.0..1.3.0 0265aa4 Repair string-based disconnect filters for MySQL, DB2 b1af0f5 Fix python3.x scoping issues with removed 'uee' variable c6b352e Updated from global requirements 9658b28 Fix test_migrate_cli for py3 4c939b3 Fix TestConnectionUtils to py3x compatibility 9c3477d Updated from global requirements 32e5c60 Upgrade exc_filters for 'engine' argument and connect behavior 161bbb2 Workflow documentation is now in infra-manual 86c136a Fix nested() for py3 diffstat (except docs and test files): CONTRIBUTING.rst | 7 ++--- oslo/db/sqlalchemy/compat/__init__.py | 6 ++-- oslo/db/sqlalchemy/compat/handle_error.py | 50 +++++++++++++++++++++++++++---- oslo/db/sqlalchemy/compat/utils.py | 1 + oslo/db/sqlalchemy/exc_filters.py | 34 ++++++++------------- requirements.txt | 4 +-- tests/sqlalchemy/test_exc_filters.py | 39 +++++++++++++++++++----- tests/sqlalchemy/test_migrate_cli.py | 6 ++-- tests/sqlalchemy/test_utils.py | 12 ++++---- tests/utils.py | 4 +-- 10 files changed, 111 insertions(+), 52 deletions(-) Requirements updates: diff --git a/requirements.txt b/requirements.txt index f8a0d8c..8ab53a0 100644 --- a/requirements.txt +++ b/requirements.txt @@ -11,2 +11,2 @@ oslo.config>=1.4.0 # Apache-2.0 -oslo.utils>=1.0.0 # Apache-2.0 -SQLAlchemy>=0.8.4,<=0.8.99,>=0.9.7,<=0.9.99 +oslo.utils>=1.1.0 # Apache-2.0 +SQLAlchemy>=0.9.7,<=0.9.99 From ihrachys at redhat.com Mon Dec 15 16:54:13 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 15 Dec 2014 17:54:13 +0100 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <548B60D0.7090600@dague.net> References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> 
<6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> <548B60D0.7090600@dague.net> Message-ID: <548F1235.2020502@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 I was (rightfully) asked to share my comments on the matter that I left in gerrit here. See below. On 12/12/14 22:40, Sean Dague wrote: > On 12/12/2014 01:05 PM, Maru Newby wrote: >> >> On Dec 11, 2014, at 2:27 PM, Sean Dague wrote: >> >>> On 12/11/2014 04:16 PM, Jay Pipes wrote: >>>> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: >>>>> On Dec 11, 2014, at 1:04 PM, Jay Pipes >>>>> wrote: >>>>>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: >>>>>>> >>>>>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau >>>>>>> wrote: >>>>>>> >>>>>>>> On Thu, Dec 11, 2014, Mark McClain >>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >>>>>>>>>> > >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> I'm generally in favor of making name attributes >>>>>>>>>> opaque, utf-8 strings that are entirely >>>>>>>>>> user-defined and have no constraints on them. I >>>>>>>>>> consider the name to be just a tag that the user >>>>>>>>>> places on some resource. It is the resource's ID >>>>>>>>>> that is unique. >>>>>>>>>> >>>>>>>>>> I do realize that Nova takes a different approach >>>>>>>>>> to *some* resources, including the security group >>>>>>>>>> name. >>>>>>>>>> >>>>>>>>>> End of the day, it's probably just a personal >>>>>>>>>> preference whether names should be unique to a >>>>>>>>>> tenant/user or not. 
>>>>>>>>>> >>>>>>>>>> Maru had asked me my opinion on whether names >>>>>>>>>> should be unique and I answered my personal >>>>>>>>>> opinion that no, they should not be, and if >>>>>>>>>> Neutron needed to ensure that there was one and >>>>>>>>>> only one default security group for a tenant, >>>>>>>>>> that a way to accomplish such a thing in a >>>>>>>>>> race-free way, without use of SELECT FOR UPDATE, >>>>>>>>>> was to use the approach I put into the pastebin >>>>>>>>>> on the review above. >>>>>>>>>> >>>>>>>>> >>>>>>>>> I agree with Jay. We should not care about how a >>>>>>>>> user names the resource. There other ways to >>>>>>>>> prevent this race and Jay?s suggestion is a good >>>>>>>>> one. >>>>>>>> >>>>>>>> However we should open a bug against Horizon because >>>>>>>> the user experience there is terrible with duplicate >>>>>>>> security group names. >>>>>>> >>>>>>> The reason security group names are unique is that the >>>>>>> ec2 api supports source rule specifications by >>>>>>> tenant_id (user_id in amazon) and name, so not >>>>>>> enforcing uniqueness means that invocation in the ec2 >>>>>>> api will either fail or be non-deterministic in some >>>>>>> way. >>>>>> >>>>>> So we should couple our API evolution to EC2 API then? >>>>>> >>>>>> -jay >>>>> >>>>> No I was just pointing out the historical reason for >>>>> uniqueness, and hopefully encouraging someone to find the >>>>> best behavior for the ec2 api if we are going to keep the >>>>> incompatibility there. Also I personally feel the ux is >>>>> better with unique names, but it is only a slight >>>>> preference. >>>> >>>> Sorry for snapping, you made a fair point. >>> >>> Yeh, honestly, I agree with Vish. I do feel that the UX of >>> that constraint is useful. Otherwise you get into having to >>> show people UUIDs in a lot more places. While those are good >>> for consistency, they are kind of terrible to show to people. 
>> >> While there is a good case for the UX of unique names - it also >> makes orchestration via tools like puppet a heck of a lot simpler >> - the fact is that most OpenStack resources do not require unique >> names. That being the case, why would we want security groups to >> deviate from this convention? > > Maybe the other ones are the broken ones? > > Honestly, any sanely usable system makes names unique inside a > container. Like files in a directory. Correct. Or take git: it does not use hashes to identify objects, right? > In this case the tenant is the container, which makes sense. > > It is one of many places that OpenStack is not consistent. But I'd > rather make things consistent and more usable than consistent and > less. Are we only proposing to make security group name unique? I assume that, since that's what we currently have in review. The change would make API *more* inconsistent, not less, since other objects still use uuid for identification. You may say that we should move *all* neutron objects to the new identification system by name. But what's the real benefit? If there are problems in UX (client, horizon, ...), we should fix the view and not data models used. If we decide we want users to avoid using objects with the same names, fine, let's add warnings in UI (probably with an option to disable it so that we don't push the validation into their throats). Finally, I have concern about us changing user visible object attributes like names during db migrations, as it's proposed in the patch discussed here. I think such behaviour can be quite unexpected for some users, if not breaking their workflow and/or scripts. My belief is that responsible upstream does not apply ad-hoc changes to API to fix a race condition that is easily solvable in other ways (see Assaf's proposal to introduce a new DefaultSecurityGroups table in patchset 12 comments). 
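An aside on the race condition discussed earlier in the thread: one common shape of a race-free "create the default group exactly once" approach without SELECT FOR UPDATE is to insert optimistically and let a unique constraint arbitrate. This may or may not match the pastebin referenced above, and the table and column names below are invented for illustration rather than taken from Neutron's actual schema; the pattern itself needs nothing beyond the standard library:

```python
import sqlite3

# Schema carrying the unique constraint under discussion: at most one row
# per (tenant_id, name) pair. Names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE securitygroups (
        id        INTEGER PRIMARY KEY,
        tenant_id TEXT NOT NULL,
        name      TEXT NOT NULL,
        UNIQUE (tenant_id, name)
    )
""")


def ensure_default_group(conn, tenant_id):
    """Race-free get-or-create: INSERT first and catch the duplicate-key
    error. The loser of a concurrent insert simply reuses the row the
    winner created; no row locks are taken."""
    try:
        conn.execute(
            "INSERT INTO securitygroups (tenant_id, name) VALUES (?, 'default')",
            (tenant_id,))
    except sqlite3.IntegrityError:
        pass  # another request created the row first; fall through and read it
    return conn.execute(
        "SELECT id FROM securitygroups WHERE tenant_id = ? AND name = 'default'",
        (tenant_id,)).fetchone()[0]


first = ensure_default_group(conn, "tenant-a")
second = ensure_default_group(conn, "tenant-a")  # duplicate insert is absorbed
print(first == second)  # True: both calls resolve to the same row
```

The design point is that the database's constraint, not application-level locking, serializes the two racing inserts, which is why the bug is fixable without making names unique at the API level.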
As for the whole object identification scheme change, for this to work, it probably needs a spec and a long discussion on any possible complications (and benefits) when applying a change like that. For reference and convenience of readers, leaving the link to the patch below: https://review.openstack.org/#/c/135006/ > > -Sean > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUjxI1AAoJEC5aWaUY1u579M8H/RC+M7/9YYDClWRjhQLBNqEq 0pMxURZi8lTyDmi+cXA7wq1QzgOwUqWnJnOMYzq8nt9wLh8cgasjU5YXZokrqDLw zEu/a1Cd9Alp4iGYQ6upw94BptGrMvk+XwTedMX9zMLf0od1k8Gzp5xYH/GXInN3 E+wje40Huia0MmLu4i2GMr/13gD2aYhMeGxZtDMcxQsF0DBh0gy8OM9pfKgIiXVM T65nFbXUY1/PuAdzYwMto5leuWZH03YIddXlzkQbcZoH4PGgNEE3eKl1ctQSMGoo bz3l522VimQvVnP/XiM6xBjFqsnPM5Tc7Ylu942l+NfpfcAM5QB6Ihvw5kQI0uw= =gIsu -----END PGP SIGNATURE----- From fungi at yuggoth.org Mon Dec 15 17:01:05 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 15 Dec 2014 17:01:05 +0000 Subject: [openstack-dev] oslo.db 1.3.0 released In-Reply-To: References: Message-ID: <20141215170104.GO2497@yuggoth.org> On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote: [...] > This release is primarily meant to update the SQLAlchemy dependency > to resolve the issue with the new version of setuptools changing > how it evaluates version range specifications. [...] However note that I'm in the middle of forcing a refresh on a couple of our PyPI mirror servers, so it may be a couple hours before we see the effects of this throughout all of our infrastructure. -- Jeremy Stanley From andre.f.aranha at gmail.com Mon Dec 15 17:02:17 2014 From: andre.f.aranha at gmail.com (=?UTF-8?Q?Andr=C3=A9_Aranha?=) Date: Mon, 15 Dec 2014 14:02:17 -0300 Subject: [openstack-dev] [Ironic] Unrecognized Services and Install Guide Message-ID: When I try to follow the instalation guide I'm having some issues ( http://docs.openstack.org/developer/ironic/deploy/install-guide.html ) I installed the devstack with ironic and did worked. 
Now, having a single machine running devstack, I want to deploy Ironic on it. So, I'll have a machine as the controller node and another machine as the one Ironic will use as a VM. I'm using Ubuntu 14.04, and when I download the ironic services # Available in Ubuntu 14.04 (trusty) > apt-get install ironic-api ironic-conductor python-ironicclient I get the ironic-api version, and the version downloaded was 2014.1.rc1 ironic-api --version > 2014.1.rc1 This version doesn't have the capability 'create_schema', as it's required in the guide ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema So, following some tips in the #openstack-ironic channel, I downloaded the code from the repository, and installed it after removing the downloaded ironic-api: git clone https://github.com/openstack/ironic.git > python setup.py install Now the ironic-dbsync is working with the create_schema, and I have the following ironic-api version: ironic-api --version > 2015.1.dev206.g2db2659 But when I continue the guide 1) I get an error on the ironic-api service sudo service ironic-api restart > ironic-api: unrecognized service 2) the nova-scheduler service doesn't exist sudo service nova-scheduler restart > nova-scheduler: unrecognized service 3) Neither does nova-compute sudo service nova-compute restart > nova-compute: unrecognized service Has anyone had this problem, and how can it be solved? I don't know if this issue should be addressed to Openstack-dev so I'm also addressing it to Openstack. Thank you, Andre Aranha -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Mon Dec 15 17:05:27 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Mon, 15 Dec 2014 23:05:27 +0600 Subject: [openstack-dev] [mistral] Team meeting minutes - 15/12/2014 Message-ID: <82A366F1-83AD-41A6-9125-8CD00C04690E@mirantis.com> Thanks for joining our team meeting today! 
Meeting minutes: http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-12-15-16.01.html Full log: http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-12-15-16.01.log.html The next meeting is on Dec 22 at the same time. Renat Akhmerov @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From amuller at redhat.com Mon Dec 15 17:13:28 2014 From: amuller at redhat.com (Assaf Muller) Date: Mon, 15 Dec 2014 12:13:28 -0500 (EST) Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <548F1235.2020502@redhat.com> References: <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> <548B60D0.7090600@dague.net> <548F1235.2020502@redhat.com> Message-ID: <936303430.2524172.1418663608280.JavaMail.zimbra@redhat.com> ----- Original Message ----- > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > I was (rightfully) asked to share my comments on the matter that I > left in gerrit here. See below. > > On 12/12/14 22:40, Sean Dague wrote: > > On 12/12/2014 01:05 PM, Maru Newby wrote: > >> > >> On Dec 11, 2014, at 2:27 PM, Sean Dague wrote: > >> > >>> On 12/11/2014 04:16 PM, Jay Pipes wrote: > >>>> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: > >>>>> On Dec 11, 2014, at 1:04 PM, Jay Pipes > >>>>> wrote: > >>>>>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: > >>>>>>> > >>>>>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau > >>>>>>> wrote: > >>>>>>> > >>>>>>>> On Thu, Dec 11, 2014, Mark McClain > >>>>>>>> wrote: > >>>>>>>>> > >>>>>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes > >>>>>>>>>> > > >>>>>>>>>> wrote: > >>>>>>>>>> > >>>>>>>>>> I'm generally in favor of making name attributes > >>>>>>>>>> opaque, utf-8 strings that are entirely > >>>>>>>>>> user-defined and have no constraints on them. 
I > >>>>>>>>>> consider the name to be just a tag that the user > >>>>>>>>>> places on some resource. It is the resource's ID > >>>>>>>>>> that is unique. > >>>>>>>>>> > >>>>>>>>>> I do realize that Nova takes a different approach > >>>>>>>>>> to *some* resources, including the security group > >>>>>>>>>> name. > >>>>>>>>>> > >>>>>>>>>> End of the day, it's probably just a personal > >>>>>>>>>> preference whether names should be unique to a > >>>>>>>>>> tenant/user or not. > >>>>>>>>>> > >>>>>>>>>> Maru had asked me my opinion on whether names > >>>>>>>>>> should be unique and I answered my personal > >>>>>>>>>> opinion that no, they should not be, and if > >>>>>>>>>> Neutron needed to ensure that there was one and > >>>>>>>>>> only one default security group for a tenant, > >>>>>>>>>> that a way to accomplish such a thing in a > >>>>>>>>>> race-free way, without use of SELECT FOR UPDATE, > >>>>>>>>>> was to use the approach I put into the pastebin > >>>>>>>>>> on the review above. > >>>>>>>>>> > >>>>>>>>> > >>>>>>>>> I agree with Jay. We should not care about how a > >>>>>>>>> user names the resource. There other ways to > >>>>>>>>> prevent this race and Jay?s suggestion is a good > >>>>>>>>> one. > >>>>>>>> > >>>>>>>> However we should open a bug against Horizon because > >>>>>>>> the user experience there is terrible with duplicate > >>>>>>>> security group names. > >>>>>>> > >>>>>>> The reason security group names are unique is that the > >>>>>>> ec2 api supports source rule specifications by > >>>>>>> tenant_id (user_id in amazon) and name, so not > >>>>>>> enforcing uniqueness means that invocation in the ec2 > >>>>>>> api will either fail or be non-deterministic in some > >>>>>>> way. > >>>>>> > >>>>>> So we should couple our API evolution to EC2 API then? 
> >>>>>> > >>>>>> -jay > >>>>> > >>>>> No I was just pointing out the historical reason for > >>>>> uniqueness, and hopefully encouraging someone to find the > >>>>> best behavior for the ec2 api if we are going to keep the > >>>>> incompatibility there. Also I personally feel the ux is > >>>>> better with unique names, but it is only a slight > >>>>> preference. > >>>> > >>>> Sorry for snapping, you made a fair point. > >>> > >>> Yeh, honestly, I agree with Vish. I do feel that the UX of > >>> that constraint is useful. Otherwise you get into having to > >>> show people UUIDs in a lot more places. While those are good > >>> for consistency, they are kind of terrible to show to people. > >> > >> While there is a good case for the UX of unique names - it also > >> makes orchestration via tools like puppet a heck of a lot simpler > >> - the fact is that most OpenStack resources do not require unique > >> names. That being the case, why would we want security groups to > >> deviate from this convention? > > > > Maybe the other ones are the broken ones? > > > > Honestly, any sanely usable system makes names unique inside a > > container. Like files in a directory. > > Correct. Or take git: it does not use hashes to identify objects, right? > > > In this case the tenant is the container, which makes sense. > > > > It is one of many places that OpenStack is not consistent. But I'd > > rather make things consistent and more usable than consistent and > > less. > > Are we only proposing to make security group name unique? I assume > that, since that's what we currently have in review. The change would > make API *more* inconsistent, not less, since other objects still use > uuid for identification. > > You may say that we should move *all* neutron objects to the new > identification system by name. But what's the real benefit? > > If there are problems in UX (client, horizon, ...), we should fix the > view and not data models used. 
If we decide we want users to avoid > using objects with the same names, fine, let's add warnings in UI > (probably with an option to disable it so that we don't push the > validation into their throats). > > Finally, I have concern about us changing user visible object > attributes like names during db migrations, as it's proposed in the > patch discussed here. I think such behaviour can be quite unexpected > for some users, if not breaking their workflow and/or scripts. > > My belief is that responsible upstream does not apply ad-hoc changes > to API to fix a race condition that is easily solvable in other ways > (see Assaf's proposal to introduce a new DefaultSecurityGroups table > in patchset 12 comments). > As usual you explain yourself better than I can... I think my main original objection to the patch is that it feels like an accidental API change to fix a bug. If you want unique naming: 1) We need to be consistent across different resources 2) It needs to be in a dedicate change, and perhaps blueprint Since there's conceivable alternative solutions to the bug that aren't substantially more costly or complicated, I don't see why we would pursue the proposed approach. > As for the whole object identification scheme change, for this to > work, it probably needs a spec and a long discussion on any possible > complications (and benefits) when applying a change like that. 
> > For reference and convenience of readers, leaving the link to the > patch below: https://review.openstack.org/#/c/135006/ > > > > > > > -Sean > > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > > iQEcBAEBCgAGBQJUjxI1AAoJEC5aWaUY1u579M8H/RC+M7/9YYDClWRjhQLBNqEq > 0pMxURZi8lTyDmi+cXA7wq1QzgOwUqWnJnOMYzq8nt9wLh8cgasjU5YXZokrqDLw > zEu/a1Cd9Alp4iGYQ6upw94BptGrMvk+XwTedMX9zMLf0od1k8Gzp5xYH/GXInN3 > E+wje40Huia0MmLu4i2GMr/13gD2aYhMeGxZtDMcxQsF0DBh0gy8OM9pfKgIiXVM > T65nFbXUY1/PuAdzYwMto5leuWZH03YIddXlzkQbcZoH4PGgNEE3eKl1ctQSMGoo > bz3l522VimQvVnP/XiM6xBjFqsnPM5Tc7Ylu942l+NfpfcAM5QB6Ihvw5kQI0uw= > =gIsu > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From joe.gordon0 at gmail.com Mon Dec 15 17:14:04 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Mon, 15 Dec 2014 09:14:04 -0800 Subject: [openstack-dev] [nova] Kilo specs review day In-Reply-To: <548F1075.60507@vmware.com> References: <5489B879.7030006@rackspace.com> <874msxrxnc.fsf@metaswitch.com> <548F1075.60507@vmware.com> Message-ID: On Mon, Dec 15, 2014 at 8:46 AM, Radoslav Gerganov wrote: > > On 12/15/2014 12:54 PM, Neil Jerram wrote: > >> My Nova spec (https://review.openstack.org/#/c/130732/) does not appear >> on this dashboard, even though I believe it's in good standing and - I >> hope - close to approval. Do you know why - does it mean that I've set >> some metadata field somewhere wrongly? >> >> > The dashboard doesn't show your own specs. You have to remove > "+NOT+owner%3Aself" from the URL to see them. 
The latest iteration of the dashboard ( https://review.openstack.org/#/c/130732/) shows your own specs: https://review.openstack.org/#/dashboard/?foreach=project%3Aopenstack%2Fnova%252Dspecs+status%3Aopen+NOT+label%3AWorkflow%3C%3D%252D1+branch%3Amaster+NOT+owner%3Aself&title=Nova+Specs&Your+are+a+reviewer%252c+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkins&Not+blocked+by+%252D2s=NOT+label%3ACode%252DReview%3C%3D%252D2+NOT+label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkins&No+votes+and+spec+is+%3E+1+week+old=NOT+label%3ACode%252DReview%3E%3D%252D2+age%3A7d+label%3AVerified%3E%3D1%252cjenkins&Needs+final+%2B2=label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkins&Broken+Specs+%28doesn%27t+pass+Jenkins%29=label%3AVerified%3C%3D%252D1%252cjenkins&Dead+Specs+%28blocked+by+a+%252D2%29=label%3ACode%252DReview%3C%3D%252D2 > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougw at a10networks.com Mon Dec 15 17:16:33 2014 From: dougw at a10networks.com (Doug Wiegley) Date: Mon, 15 Dec 2014 17:16:33 +0000 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: References: <548EED0D.6020600@redhat.com> Message-ID: Hi all, Ihar and I discussed this on IRC, and are going forward with option 2 unless someone has a big problem with it. Thanks, Doug On 12/15/14, 8:22 AM, "Doug Wiegley" wrote: >Hi Ihar, > >I?m actually in favor of option 2, but it implies a few things about your >time, and I wanted to chat with you before presuming. > >Maintenance can not involve breaking changes. 
At this point, the co-gate >will block it. Also, oslo graduation changes will have to be made in the >services repos first, and then Neutron. >Thanks, >doug > > >On 12/15/14, 6:15 AM, "Ihar Hrachyshka" wrote: > >>-----BEGIN PGP SIGNED MESSAGE----- >>Hash: SHA512 >> >>Hi all, >> >>the question arose recently in one of reviews for neutron-*aas repos >>to remove all oslo-incubator code from those repos since it's >>duplicated in neutron main repo. (You can find the link to the review >>at the end of the email.) >> >>Brief history: neutron repo was recently split into 4 pieces (main, >>neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split resulted >>in each repository keeping their own copy of >>neutron/openstack/common/... tree (currently unused in all >>neutron-*aas repos that are still bound to modules from main repo). >> >>As an oslo liaison for the project, I wonder what's the best way to >>manage oslo-incubator files. We have several options: >> >>1. just kill all the neutron/openstack/common/ trees from neutron-*aas >>repositories and continue using modules from main repo. >> >>2. kill all duplicate modules from neutron-*aas repos and leave only >>those that are used in those repos but not in main repo. >> >>3. fully duplicate all those modules in each of four repos that use them. >> >>I think option 1. is a straw man, since we should be able to introduce >>new oslo-incubator modules into neutron-*aas repos even if they are >>not used in main repo. >> >>Option 2. is good when it comes to syncing non-breaking bug fixes (or >>security fixes) from oslo-incubator, in that it will require only one >>sync patch instead of e.g. four. At the same time there may be >>potential issues when synchronizing updates from oslo-incubator that >>would break API and hence require changes to each of the modules that >>use it. 
Since we don't support atomic merges for multiple projects in >>gate, we will need to be cautious about those updates, and we will >>still need to leave neutron-*aas repos broken for some time (though >>the time may be mitigated with care). >> >>Option 3. is vice versa - in theory, you get total decoupling, meaning >>no oslo-incubator updates in main repo are expected to break >>neutron-*aas repos, but bug fixing becomes a huge PITA. >> >>I would vote for option 2., for two reasons: >>- - most oslo-incubator syncs are non-breaking, and we may effectively >>apply care to updates that may result in potential breakage (f.e. >>being able to trigger an integrated run for each of neutron-*aas repos >>with the main sync patch, if there are any concerns). >>- - it will make oslo liaison life a lot easier. OK, I'm probably too >>selfish on that. ;) >>- - it will make stable maintainers life a lot easier. The main reason >>why stable maintainers and distributions like recent oslo graduation >>movement is that we don't need to track each bug fix we need in every >>project, and waste lots of cycles on it. Being able to fix a bug in >>one place only is *highly* anticipated. [OK, I'm quite selfish on that >>one too.] >>- - it's a delusion that there will be no neutron-main syncs that will >>break neutron-*aas repos ever. There can still be problems due to >>incompatibility between neutron main and neutron-*aas code resulted >>EXACTLY because multiple parts of the same process use different >>versions of the same module. >> >>That said, Doug Wiegley (lbaas core) seems to be in favour of option >>3. due to lower coupling that is achieved in that way. I know that >>lbaas team had a bad experience due to tight coupling to neutron >>project in the past, so I appreciate their concerns. >> >>All in all, we should come up with some standard solution for both >>advanced services that are already split out, *and* upcoming vendor >>plugin shrinking initiative. 
>> >>The initial discussion is captured at: >>https://review.openstack.org/#/c/141427/ >> >>Thanks, >>/Ihar >>-----BEGIN PGP SIGNATURE----- >>Version: GnuPG/MacGPG2 v2.0.22 (Darwin) >> >>iQEcBAEBCgAGBQJUju0NAAoJEC5aWaUY1u57n5YH/jA4l5DsLgRpw9gYsoSWVGvh >>apmJ4UlnAKhxzc787XImz1VA+ztSyIwAUdEdcfq3gkinP58q7o48oIXOGjFXaBNq >>6qBePC1hflEqZ85Hm4/i5z51qutjW0dyi4y4C6FHgM5NsEkhbh0QIa/u8Hr4F1q6 >>tkr0kDbCbDkiZ8IX1l74VGWQ3QvCNeJkANUg79BqGq+qIVP3BeOHyWqRmpLZFQ6E >>QiUwhiYv5l4HekCEQN8PWisJoqnhbTNjvLBnLD82IitLd5vXnsXfSkxKhv9XeOg/ >>czLUCyr/nJg4aw8Qm0DTjnZxS+BBe5De0Ke4zm2AGePgFYcai8YQPtuOfSJDbXk= >>=D6Gn >>-----END PGP SIGNATURE----- > From joe.gordon0 at gmail.com Mon Dec 15 17:24:06 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Mon, 15 Dec 2014 09:24:06 -0800 Subject: [openstack-dev] [nova] Kilo specs review day In-Reply-To: References: <5489B879.7030006@rackspace.com> <874msxrxnc.fsf@metaswitch.com> <548F1075.60507@vmware.com> Message-ID: On Mon, Dec 15, 2014 at 9:14 AM, Joe Gordon wrote: > > > > On Mon, Dec 15, 2014 at 8:46 AM, Radoslav Gerganov > wrote: >> >> On 12/15/2014 12:54 PM, Neil Jerram wrote: >> >>> My Nova spec (https://review.openstack.org/#/c/130732/) does not appear >>> on this dashboard, even though I believe it's in good standing and - I >>> hope - close to approval. Do you know why - does it mean that I've set >>> some metadata field somewhere wrongly? >>> >>> >> The dashboard doesn't show your own specs. You have to remove >> "+NOT+owner%3Aself" from the URL to see them. 
> > > The latest iteration of the dashboard ( > https://review.openstack.org/#/c/130732/) shows your own specs: > Looks like that section was removed in https://review.openstack.org/#/c/141411/ > > > https://review.openstack.org/#/dashboard/?foreach=project%3Aopenstack%2Fnova%252Dspecs+status%3Aopen+NOT+label%3AWorkflow%3C%3D%252D1+branch%3Amaster+NOT+owner%3Aself&title=Nova+Specs&Your+are+a+reviewer%252c+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkins&Not+blocked+by+%252D2s=NOT+label%3ACode%252DReview%3C%3D%252D2+NOT+label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkins&No+votes+and+spec+is+%3E+1+week+old=NOT+label%3ACode%252DReview%3E%3D%252D2+age%3A7d+label%3AVerified%3E%3D1%252cjenkins&Needs+final+%2B2=label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkins&Broken+Specs+%28doesn%27t+pass+Jenkins%29=label%3AVerified%3C%3D%252D1%252cjenkins&Dead+Specs+%28blocked+by+a+%252D2%29=label%3ACode%252DReview%3C%3D%252D2 > > >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Neil.Jerram at metaswitch.com Mon Dec 15 17:32:43 2014 From: Neil.Jerram at metaswitch.com (Neil Jerram) Date: Mon, 15 Dec 2014 17:32:43 +0000 Subject: [openstack-dev] [nova] Kilo specs review day In-Reply-To: (Joe Gordon's message of "Mon, 15 Dec 2014 09:14:04 -0800") References: <5489B879.7030006@rackspace.com> <874msxrxnc.fsf@metaswitch.com> <548F1075.60507@vmware.com> Message-ID: <87mw6orf7o.fsf@metaswitch.com> Joe Gordon writes: > On Mon, Dec 15, 2014 at 8:46 AM, Radoslav Gerganov > wrote: > > On 12/15/2014 12:54 PM, Neil Jerram wrote: > > My Nova spec (https://review.openstack.org/#/c/130732/) does not > appear > on this dashboard, even though I believe it's in good standing > and - I > hope - close to approval. Do you know why - does it mean that > I've set > some metadata field somewhere wrongly? > > > > The dashboard doesn't show your own specs. You have to remove > "+NOT+owner%3Aself" from the URL to see them. Ah, as simple an explanation and solution as that. Thanks! 
> The latest iteration of the dashboard > (https://review.openstack.org/#/c/130732/) shows your own specs: > > https://review.openstack.org/#/dashboard/?foreach=project%3Aopenstack%2Fnova%252Dspecs+status%3Aopen+NOT+label%3AWorkflow%3C%3D%252D1+branch%3Amaster+NOT+owner%3Aself&title=Nova+Specs&Your+are+a+reviewer%252c+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkins&Not+blocked+by+%252D2s=NOT+label%3ACode%252DReview%3C%3D%252D2+NOT+label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkins&No+votes+and+spec+is+%3E+1+week+old=NOT+label%3ACode%252DReview%3E%3D%252D2+age%3A7d+label%3AVerified%3E%3D1%252cjenkins&Needs+final+%2B2=label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkins&Broken+Specs+%28doesn%27t+pass+Jenkins%29=label%3AVerified%3C%3D%252D1%252cjenkins&Dead+Specs+%28blocked+by+a+%252D2%29=label%3ACode%252DReview%3C%3D%252D2 Actually that URL still doesn't. 
But this one does: https://review.openstack.org/#/dashboard/?foreach=project%3Aopenstack%2Fnova%252Dspecs+status%3Aopen+NOT+label%3AWorkflow%3C%3D%252D1+branch%3Amaster&title=Nova+Specs&Your+are+a+reviewer%252c+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkins&Not+blocked+by+%252D2s=NOT+label%3ACode%252DReview%3C%3D%252D2+NOT+label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkins&No+votes+and+spec+is+%3E+1+week+old=NOT+label%3ACode%252DReview%3E%3D%252D2+age%3A7d+label%3AVerified%3E%3D1%252cjenkins&Needs+final+%2B2=label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkins&Broken+Specs+%28doesn%27t+pass+Jenkins%29=label%3AVerified%3C%3D%252D1%252cjenkins&Dead+Specs+%28blocked+by+a+%252D2%29=label%3ACode%252DReview%3C%3D%252D2 Thanks for your reply and for generating this dashboard! Neil From mestery at mestery.com Mon Dec 15 17:46:46 2014 From: mestery at mestery.com (Kyle Mestery) Date: Mon, 15 Dec 2014 11:46:46 -0600 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: References: <548EED0D.6020600@redhat.com> Message-ID: Option 2 works for me, thanks for figuring this out Ihar and Doug! On Mon, Dec 15, 2014 at 11:16 AM, Doug Wiegley wrote: > > Hi all, > > Ihar and I discussed this on IRC, and are going forward with option 2 > unless someone has a big problem with it. > > Thanks, > Doug > > > On 12/15/14, 8:22 AM, "Doug Wiegley" wrote: > > >Hi Ihar, > > > >I?m actually in favor of option 2, but it implies a few things about your > >time, and I wanted to chat with you before presuming. > > > >Maintenance can not involve breaking changes. At this point, the co-gate > >will block it. Also, oslo graduation changes will have to be made in the > >services repos first, and then Neutron. 
> > > >Thanks, > >doug > > > > > >On 12/15/14, 6:15 AM, "Ihar Hrachyshka" wrote: > > > >>-----BEGIN PGP SIGNED MESSAGE----- > >>Hash: SHA512 > >> > >>Hi all, > >> > >>the question arose recently in one of reviews for neutron-*aas repos > >>to remove all oslo-incubator code from those repos since it's > >>duplicated in neutron main repo. (You can find the link to the review > >>at the end of the email.) > >> > >>Brief hostory: neutron repo was recently split into 4 pieces (main, > >>neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split resulted > >>in each repository keeping their own copy of > >>neutron/openstack/common/... tree (currently unused in all > >>neutron-*aas repos that are still bound to modules from main repo). > >> > >>As a oslo liaison for the project, I wonder what's the best way to > >>manage oslo-incubator files. We have several options: > >> > >>1. just kill all the neutron/openstack/common/ trees from neutron-*aas > >>repositories and continue using modules from main repo. > >> > >>2. kill all duplicate modules from neutron-*aas repos and leave only > >>those that are used in those repos but not in main repo. > >> > >>3. fully duplicate all those modules in each of four repos that use them. > >> > >>I think option 1. is a straw man, since we should be able to introduce > >>new oslo-incubator modules into neutron-*aas repos even if they are > >>not used in main repo. > >> > >>Option 2. is good when it comes to synching non-breaking bug fixes (or > >>security fixes) from oslo-incubator, in that it will require only one > >>sync patch instead of e.g. four. At the same time there may be > >>potential issues when synchronizing updates from oslo-incubator that > >>would break API and hence require changes to each of the modules that > >>use it. 
Since we don't support atomic merges for multiple projects in > >>gate, we will need to be cautious about those updates, and we will > >>still need to leave neutron-*aas repos broken for some time (though > >>the time may be mitigated with care). > >> > >>Option 3. is vice versa - in theory, you get total decoupling, meaning > >>no oslo-incubator updates in main repo are expected to break > >>neutron-*aas repos, but bug fixing becomes a huge PITA. > >> > >>I would vote for option 2., for two reasons: > >>- - most oslo-incubator syncs are non-breaking, and we may effectively > >>apply care to updates that may result in potential breakage (f.e. > >>being able to trigger an integrated run for each of neutron-*aas repos > >>with the main sync patch, if there are any concerns). > >>- - it will make oslo liaison life a lot easier. OK, I'm probably too > >>selfish on that. ;) > >>- - it will make stable maintainers life a lot easier. The main reason > >>why stable maintainers and distributions like recent oslo graduation > >>movement is that we don't need to track each bug fix we need in every > >>project, and waste lots of cycles on it. Being able to fix a bug in > >>one place only is *highly* anticipated. [OK, I'm quite selfish on that > >>one too.] > >>- - it's a delusion that there will be no neutron-main syncs that will > >>break neutron-*aas repos ever. There can still be problems due to > >>incompatibility between neutron main and neutron-*aas code resulted > >>EXACTLY because multiple parts of the same process use different > >>versions of the same module. > >> > >>That said, Doug Wiegley (lbaas core) seems to be in favour of option > >>3. due to lower coupling that is achieved in that way. I know that > >>lbaas team had a bad experience due to tight coupling to neutron > >>project in the past, so I appreciate their concerns. 
> >> > >>All in all, we should come up with some standard solution for both > >>advanced services that are already split out, *and* upcoming vendor > >>plugin shrinking initiative. > >> > >>The initial discussion is captured at: > >>https://review.openstack.org/#/c/141427/ > >> > >>Thanks, > >>/Ihar > >>-----BEGIN PGP SIGNATURE----- > >>Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > >> > >>iQEcBAEBCgAGBQJUju0NAAoJEC5aWaUY1u57n5YH/jA4l5DsLgRpw9gYsoSWVGvh > >>apmJ4UlnAKhxzc787XImz1VA+ztSyIwAUdEdcfq3gkinP58q7o48oIXOGjFXaBNq > >>6qBePC1hflEqZ85Hm4/i5z51qutjW0dyi4y4C6FHgM5NsEkhbh0QIa/u8Hr4F1q6 > >>tkr0kDbCbDkiZ8IX1l74VGWQ3QvCNeJkANUg79BqGq+qIVP3BeOHyWqRmpLZFQ6E > >>QiUwhiYv5l4HekCEQN8PWisJoqnhbTNjvLBnLD82IitLd5vXnsXfSkxKhv9XeOg/ > >>czLUCyr/nJg4aw8Qm0DTjnZxS+BBe5De0Ke4zm2AGePgFYcai8YQPtuOfSJDbXk= > >>=D6Gn > >>-----END PGP SIGNATURE----- > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kurt.r.taylor at gmail.com Mon Dec 15 17:47:11 2014 From: kurt.r.taylor at gmail.com (Kurt Taylor) Date: Mon, 15 Dec 2014 11:47:11 -0600 Subject: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time In-Reply-To: <548F0B34.1010404@anteaya.info> References: <5487296C.5010406@anteaya.info> <548EFD00.6030300@anteaya.info> <548F0B34.1010404@anteaya.info> Message-ID: On Mon, Dec 15, 2014 at 10:24 AM, Anita Kuno wrote: > > On 12/15/2014 10:55 AM, Kurt Taylor wrote: > > Anita, please, creating yet another meeting time without input from > anyone > > just confuses the issue. > When I ask people to attend meetings to reduce noise on the mailing > list, there had better be some meetings. > I don't think we have a problem with the volume of third-party email. In fact, I wish there was even more questions and discussion. 
I encourage everyone to use whatever method (meetings or email) to get involved. > > I am grateful for the time you have spent chairing, thank you. It gave > me a huge break and allowed me to focus on other things (like reviews) > that I had to neglect due to the amount of time third party was taking > from my life. > No problem at all. I'm just a CI operator running a meeting for CI operators, I get just as much out of it as everyone else. > > I need there to be meetings to answer questions for people and will be > chairing meetings on the dates and times I have specified, like I said > that I would do. > I don't know how to move forward with this, except to follow what the group has agreed on. I will be happy to kick off the meetings we are voting on, but I hope to bring other CI operators in the mix to help with chairing, leading development work groups, and sharing their best practices. I think we are on the right track to do some great things in Kilo! Kurt Taylor (krtaylor) > > Thank you, > Anita. > > > > > The work group has agreed unanimously on alternating weekly meeting > times, > > and are currently voting on the best for everyone. ( > > https://www.google.com/moderator/#16/e=21b93c 14 voters so far, thanks > > everyone!) Once we finalize the voting, I was going to start up the new > > meeting times in the new year. Until then, we would stay at our normal > > time, Monday at 1800 UTC. > > > > I am still confused why you would not want to go with the consensus on > this. > > > > And, thanks again for everything that you do for us! > > Kurt Taylor (krtaylor) > > > > > > On Mon, Dec 15, 2014 at 9:23 AM, Anita Kuno > wrote: > >> > >> On 12/09/2014 11:55 AM, Anita Kuno wrote: > >>> On 12/09/2014 08:32 AM, Kurt Taylor wrote: > >>>> All of the feedback so far has supported moving the existing IRC > >>>> Third-party CI meeting to better fit a worldwide audience. 
> >>>> > >>>> The consensus is that we will have only 1 meeting per week at > >> alternating > >>>> times. You can see examples of other teams with alternating meeting > >> times > >>>> at: https://wiki.openstack.org/wiki/Meetings > >>>> > >>>> This way, one week we are good for one part of the world, the next > week > >> for > >>>> the other. You will not need to attend both meetings, just the meeting > >> time > >>>> every other week that fits your schedule. > >>>> > >>>> Proposed times in UTC are being voted on here: > >>>> https://www.google.com/moderator/#16/e=21b93c > >>>> > >>>> Please vote on the time that is best for you. I would like to finalize > >> the > >>>> new times this week. > >>>> > >>>> Thanks! > >>>> Kurt Taylor (krtaylor) > >>>> > >>>> > >>>> > >>>> _______________________________________________ > >>>> OpenStack-dev mailing list > >>>> OpenStack-dev at lists.openstack.org > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > >>> Note that Kurt is welcome to do as he pleases with his own time. > >>> > >>> I will be having meetings in the irc channel for the times that I have > >>> booked. > >>> > >>> Thanks, > >>> Anita. > >>> > >> Just in case anyone remains confused, I am chairing third party meetings > >> Mondays at 1500 UTC and Tuesdays at 0800 UTC in #openstack-meeting. > >> There is a meeting currently in progress. > >> > >> This is a great time for people who don't understand requirements to > >> show up and ask questions. > >> > >> Thank you, > >> Anita. 
> >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Neil.Jerram at metaswitch.com Mon Dec 15 17:53:09 2014 From: Neil.Jerram at metaswitch.com (Neil Jerram) Date: Mon, 15 Dec 2014 17:53:09 +0000 Subject: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change Message-ID: <87egs0re9m.fsf@metaswitch.com> Hi all, Following the approval for Neutron vendor code decomposition (https://review.openstack.org/#/c/134680/), I just wanted to comment that it appears to work fine to have an ML2 mechanism driver _entirely_ out of tree, so long as the vendor repository that provides the ML2 mechanism driver does something like this to register their driver as a neutron.ml2.mechanism_drivers entry point:

    setuptools.setup(
        ...,
        entry_points = {
            ...,
            'neutron.ml2.mechanism_drivers': [
                'calico = xyz.openstack.mech_xyz:XyzMechanismDriver',
            ],
        },
    )

(Please see https://github.com/Metaswitch/calico/commit/488dcd8a51d7c6a1a2f03789001c2139b16de85c for the complete change and detail, for the example that works for me.) Then Neutron and the vendor package can be separately installed, and the vendor's driver name configured in ml2_conf.ini, and everything works. Given that, I wonder: - is that what the architects of the decomposition are expecting? 
- other than for the reference OVS driver, are there any reasons in principle for keeping _any_ ML2 mechanism driver code in tree? Many thanks, Neil From me at not.mn Mon Dec 15 17:56:17 2014 From: me at not.mn (John Dickinson) Date: Mon, 15 Dec 2014 09:56:17 -0800 Subject: [openstack-dev] [all] Swift 2.2.1 rc (err "c") 1 is available Message-ID: All, I'm happy to say that the Swift 2.2.1 release candidate is available. http://tarballs.openstack.org/swift/swift-2.2.1c1.tar.gz Please take a look, and if nothing is found, we'll release this as the final 2.2.1 version at the end of the week. This release includes a lot of great improvements for operators. You can see the change log at https://github.com/openstack/swift/blob/master/CHANGELOG. One note about the tag name. The recent release of setuptools has started enforcing PEP440. According to that spec, 2.2.1rc1 (ie the old way we tagged things) is normalized to 2.2.1c1. See https://www.python.org/dev/peps/pep-0440/#pre-releases for the details. Since OpenStack infrastructure relies on setuptools parsing to determine the tarball name, the tags we use need to be already normalized so that the tag in the repo matches the tarball created. Therefore, the new tag name is 2.2.1c1. --John -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From doug at doughellmann.com Mon Dec 15 17:57:00 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 15 Dec 2014 12:57:00 -0500 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: References: <548EED0D.6020600@redhat.com> Message-ID: There may be a similar problem managing dependencies on libraries that live outside of either tree. I assume you already decided how to handle that. Are you doing the same thing, and adding the requirements to neutron?s lists? 
On Dec 15, 2014, at 12:16 PM, Doug Wiegley wrote: > Hi all, > > Ihar and I discussed this on IRC, and are going forward with option 2 > unless someone has a big problem with it. > > Thanks, > Doug > > > On 12/15/14, 8:22 AM, "Doug Wiegley" wrote: > >> Hi Ihar, >> >> I?m actually in favor of option 2, but it implies a few things about your >> time, and I wanted to chat with you before presuming. >> >> Maintenance can not involve breaking changes. At this point, the co-gate >> will block it. Also, oslo graduation changes will have to be made in the >> services repos first, and then Neutron. >> >> Thanks, >> doug >> >> >> On 12/15/14, 6:15 AM, "Ihar Hrachyshka" wrote: >> >>> -----BEGIN PGP SIGNED MESSAGE----- >>> Hash: SHA512 >>> >>> Hi all, >>> >>> the question arose recently in one of reviews for neutron-*aas repos >>> to remove all oslo-incubator code from those repos since it's >>> duplicated in neutron main repo. (You can find the link to the review >>> at the end of the email.) >>> >>> Brief hostory: neutron repo was recently split into 4 pieces (main, >>> neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split resulted >>> in each repository keeping their own copy of >>> neutron/openstack/common/... tree (currently unused in all >>> neutron-*aas repos that are still bound to modules from main repo). >>> >>> As a oslo liaison for the project, I wonder what's the best way to >>> manage oslo-incubator files. We have several options: >>> >>> 1. just kill all the neutron/openstack/common/ trees from neutron-*aas >>> repositories and continue using modules from main repo. >>> >>> 2. kill all duplicate modules from neutron-*aas repos and leave only >>> those that are used in those repos but not in main repo. >>> >>> 3. fully duplicate all those modules in each of four repos that use them. >>> >>> I think option 1. is a straw man, since we should be able to introduce >>> new oslo-incubator modules into neutron-*aas repos even if they are >>> not used in main repo. 
>>> >>> Option 2. is good when it comes to synching non-breaking bug fixes (or >>> security fixes) from oslo-incubator, in that it will require only one >>> sync patch instead of e.g. four. At the same time there may be >>> potential issues when synchronizing updates from oslo-incubator that >>> would break API and hence require changes to each of the modules that >>> use it. Since we don't support atomic merges for multiple projects in >>> gate, we will need to be cautious about those updates, and we will >>> still need to leave neutron-*aas repos broken for some time (though >>> the time may be mitigated with care). >>> >>> Option 3. is vice versa - in theory, you get total decoupling, meaning >>> no oslo-incubator updates in main repo are expected to break >>> neutron-*aas repos, but bug fixing becomes a huge PITA. >>> >>> I would vote for option 2., for two reasons: >>> - - most oslo-incubator syncs are non-breaking, and we may effectively >>> apply care to updates that may result in potential breakage (f.e. >>> being able to trigger an integrated run for each of neutron-*aas repos >>> with the main sync patch, if there are any concerns). >>> - - it will make oslo liaison life a lot easier. OK, I'm probably too >>> selfish on that. ;) >>> - - it will make stable maintainers life a lot easier. The main reason >>> why stable maintainers and distributions like recent oslo graduation >>> movement is that we don't need to track each bug fix we need in every >>> project, and waste lots of cycles on it. Being able to fix a bug in >>> one place only is *highly* anticipated. [OK, I'm quite selfish on that >>> one too.] >>> - - it's a delusion that there will be no neutron-main syncs that will >>> break neutron-*aas repos ever. There can still be problems due to >>> incompatibility between neutron main and neutron-*aas code resulted >>> EXACTLY because multiple parts of the same process use different >>> versions of the same module. 
>>> >>> That said, Doug Wiegley (lbaas core) seems to be in favour of option >>> 3. due to lower coupling that is achieved in that way. I know that >>> lbaas team had a bad experience due to tight coupling to neutron >>> project in the past, so I appreciate their concerns. >>> >>> All in all, we should come up with some standard solution for both >>> advanced services that are already split out, *and* upcoming vendor >>> plugin shrinking initiative. >>> >>> The initial discussion is captured at: >>> https://review.openstack.org/#/c/141427/ >>> >>> Thanks, >>> /Ihar >>> -----BEGIN PGP SIGNATURE----- >>> Version: GnuPG/MacGPG2 v2.0.22 (Darwin) >>> >>> iQEcBAEBCgAGBQJUju0NAAoJEC5aWaUY1u57n5YH/jA4l5DsLgRpw9gYsoSWVGvh >>> apmJ4UlnAKhxzc787XImz1VA+ztSyIwAUdEdcfq3gkinP58q7o48oIXOGjFXaBNq >>> 6qBePC1hflEqZ85Hm4/i5z51qutjW0dyi4y4C6FHgM5NsEkhbh0QIa/u8Hr4F1q6 >>> tkr0kDbCbDkiZ8IX1l74VGWQ3QvCNeJkANUg79BqGq+qIVP3BeOHyWqRmpLZFQ6E >>> QiUwhiYv5l4HekCEQN8PWisJoqnhbTNjvLBnLD82IitLd5vXnsXfSkxKhv9XeOg/ >>> czLUCyr/nJg4aw8Qm0DTjnZxS+BBe5De0Ke4zm2AGePgFYcai8YQPtuOfSJDbXk= >>> =D6Gn >>> -----END PGP SIGNATURE----- >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From pieter.c.kruithof-jr at hp.com Mon Dec 15 18:01:32 2014 From: pieter.c.kruithof-jr at hp.com (Kruithof, Piet) Date: Mon, 15 Dec 2014 18:01:32 +0000 Subject: [openstack-dev] Help improve the OpenStack Horizon user portal! Message-ID: You are invited to participate in an online card sort activity sponsored by the Horizon team. The purpose of this activity is to help us evaluate the current information architecture and find ways to improve it. We are specifically interested in individuals who use cloud services as a consumer (IaaS, PaaS, SaaS, etc). Activity Time: 15 minutes to complete the online activity by yourself ? no scheduling required! 
Link to card sort: http://ows.io/os/0v46l867 **Participants will be automatically added to a drawing for one of three HP 7 G2 tablets.** Feel free to forward the link to anyone else who might be interested. College students are welcome to participate. As always, the results will be shared with the community. Thanks, Piet Kruithof Sr. UX Architect ? HP Helion Cloud PS - This is a different from the usability study that was posted earlier. From mriedem at linux.vnet.ibm.com Mon Dec 15 18:30:22 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Mon, 15 Dec 2014 12:30:22 -0600 Subject: [openstack-dev] [infra] Job failures related to Setuptools 8.0 release In-Reply-To: <20141215161614.GN2497@yuggoth.org> References: <20141215161614.GN2497@yuggoth.org> Message-ID: <548F28BE.6050706@linux.vnet.ibm.com> On 12/15/2014 10:16 AM, Jeremy Stanley wrote: > On Saturday, December 13, Setuptools 8.0 released implementing the > new PEP 440[1] version specification and dropping support for a > variety of previously somewhat-supported versioning syntaxes. This > is impacting us in several ways... > > "Multiple range" expressions in requirements are no longer > interpreted the way we were relying on. The fix is fairly > straightforward, since the SQLAlchemy requirement in oslo.db (and > corresponding line in global requirements) is the only current > example. Fixes for this in multiple branches are in flight. > [2][3][4] (This last one is merged to oslo.db's master branch but > not released yet.) > > Arbitrary alphanumeric version subcomponents such as PBR's Git SHA > suffixes now sort earlier than all major version numbers. The fix is > still in progress[5][6], and resulted in a couple of brown-bag > releases over the weekend. 0.10.1 generated PEP 440 compliant > versions which ended up unparseable when included in requirements > files, so 0.10.2 is a roll-back identical to 0.10. 
> > The "1.2.3.rc1" we were using for release candidates is now > automatically normalized to "1.2.3c1" during sdist tarball and wheel > generation, causing tag text to no longer match the resulting file > names. This may simply require us to change our naming convention[7] > for these sorts of tags in the future. > > In the interim we've pinned setuptools<8 in our infrastructure[8] to > help keep things moving while these various solutions crystallize, > but if you run into this problem locally check your version of > setuptools and try an older one. > > [1] http://legacy.python.org/dev/peps/pep-0440/ > [2] https://review.openstack.org/141584 > [3] https://review.openstack.org/137583 > [4] https://review.openstack.org/140948 > [5] https://review.openstack.org/141666 > [6] https://review.openstack.org/141667 > [7] https://review.openstack.org/141831 > [8] https://review.openstack.org/141577 > I've opened three bugs for three elastic-recheck queries related to this, and am now waiting for logstash/e-r to catch up to see what's left to categorize. nova: https://bugs.launchpad.net/nova/+bug/1402751 glance: https://bugs.launchpad.net/glance/+bug/1402747 pbr: https://bugs.launchpad.net/pbr/+bug/1402759 -- Thanks, Matt Riedemann From lyz at princessleia.com Mon Dec 15 18:41:12 2014 From: lyz at princessleia.com (Elizabeth K. Joseph) Date: Mon, 15 Dec 2014 10:41:12 -0800 Subject: [openstack-dev] [Infra] Meeting Tuesday December 16th at 19:00 UTC Message-ID: Hi everyone, The OpenStack Infrastructure (Infra) team is hosting our weekly meeting on Tuesday December 16th, at 19:00 UTC in #openstack-meeting Meeting agenda available here: https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is welcome to add agenda items) Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.
Meeting log and minutes from the last meeting are available here: Minutes: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-09-19.02.html Minutes (text): http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-09-19.02.txt Log: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-09-19.02.log.html -- Elizabeth Krumbach Joseph || Lyz || pleia2 From sean at dague.net Mon Dec 15 18:50:38 2014 From: sean at dague.net (Sean Dague) Date: Mon, 15 Dec 2014 13:50:38 -0500 Subject: [openstack-dev] oslo.db 1.3.0 released In-Reply-To: <20141215170104.GO2497@yuggoth.org> References: <20141215170104.GO2497@yuggoth.org> Message-ID: <548F2D7E.2000809@dague.net> On 12/15/2014 12:01 PM, Jeremy Stanley wrote: > On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote: > [...] >> This release is primarily meant to update the SQLAlchemy dependency >> to resolve the issue with the new version of setuptools changing >> how it evaluates version range specifications. > [...] > > However note that I'm in the middle of forcing a refresh on a couple > of our PyPI mirror servers, so it may be a couple hours before we > see the effects of this throughout all of our infrastructure. > It looks like this change has broken the grenade jobs because now oslo.db 1.3.0 ends up being installed in stable/juno environments, which has incompatible requirements with the rest of stable juno. 
http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db'] -Sean -- Sean Dague http://dague.net From stefano at openstack.org Mon Dec 15 18:51:53 2014 From: stefano at openstack.org (Stefano Maffulli) Date: Mon, 15 Dec 2014 10:51:53 -0800 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: References: Message-ID: <548F2DC9.5070907@openstack.org> On 12/09/2014 04:11 PM, by wrote: >>>[vad] how about the documentation in this case?... because it needs some > place to document (a short desc and a link to vendor page) or list these > kind of out-of-tree plugins/drivers... just to make the user aware of > the availability of such plugins/drivers which are compatible with so and > so openstack release. > I checked with the documentation team and according to them, only the > following plugins/drivers will get documented... > 1) in-tree plugins/drivers (full documentation) > 2) third-party plugins/drivers (ie, one implements and follows this new > proposal, a.k.a partially-in-tree due to the integration module/code)... > > *** no listing/mention about such completely out-of-tree plugins/drivers*** Discoverability of documentation is a serious issue. As I commented on the docs spec [1], I think there are already too many places, mini-sites and random pages holding information that is relevant to users. We should make an effort to keep things discoverable, even if they are not necessarily maintained by the same, single team. I think the docs team means that they are not able to guarantee documentation for third-party modules *themselves* (and have not been able to). The docs team is already overworked as it is now; they can't take on more responsibilities.
So once Neutron's code is split, documentation for the users of all third-party modules should find a good place to live, indexed and searchable together with the rest of the docs. I'm hoping that we can find a place (ideally under docs.openstack.org?) where third-party documentation can live and be maintained by the teams responsible for the code, too. Thoughts? /stef [1] https://review.openstack.org/#/c/133372/ From donald at stufft.io Mon Dec 15 18:53:37 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 15 Dec 2014 13:53:37 -0500 Subject: [openstack-dev] oslo.db 1.3.0 released In-Reply-To: <548F2D7E.2000809@dague.net> References: <20141215170104.GO2497@yuggoth.org> <548F2D7E.2000809@dague.net> Message-ID: <2F7EE008-BA1F-4C89-B087-3D8EA8BF7484@stufft.io> > On Dec 15, 2014, at 1:50 PM, Sean Dague wrote: > > On 12/15/2014 12:01 PM, Jeremy Stanley wrote: >> On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote: >> [...] >>> This release is primarily meant to update the SQLAlchemy dependency >>> to resolve the issue with the new version of setuptools changing >>> how it evaluates version range specifications. >> [...] >> >> However note that I'm in the middle of forcing a refresh on a couple >> of our PyPI mirror servers, so it may be a couple hours before we >> see the effects of this throughout all of our infrastructure. >> > > It looks like this change has broken the grenade jobs because now > oslo.db 1.3.0 ends up being installed in stable/juno environments, which > has incompatible requirements with the rest of stable juno. > > http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz > > pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but > SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db'] > > -Sean It should probably use the specifier from Juno which matches the old specifier in functionality.
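For reference, the practical difference between the two branches' specifiers can be sketched with a hand-rolled check (toy numeric-tuple comparison of plain X.Y.Z versions, not the real pkg_resources/PEP 440 machinery):

```python
import re

def satisfies(version, spec):
    """Check a plain X.Y.Z version string against a comma-separated
    specifier set supporting only >=, <= and != clauses."""
    v = tuple(int(p) for p in version.split("."))
    for clause in spec.split(","):
        op, target = re.match(r"(>=|<=|!=)\s*(.+)$", clause.strip()).groups()
        t = tuple(int(p) for p in target.split("."))
        if op == ">=" and v < t:
            return False
        if op == "<=" and v > t:
            return False
        if op == "!=" and v == t:
            return False
    return True

# stable/juno's SQLAlchemy specifier vs. the one oslo.db 1.3.0 shipped
juno = ">=0.8.4,<=0.9.99,!=0.9.0,!=0.9.1,!=0.9.2,!=0.9.3,!=0.9.4,!=0.9.5,!=0.9.6"
master = ">=0.9.7,<=0.9.99"

print(satisfies("0.8.4", juno))    # True  -- juno's installed version is allowed
print(satisfies("0.8.4", master))  # False -- the VersionConflict in the gate logs
```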
--- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From sean at dague.net Mon Dec 15 18:57:23 2014 From: sean at dague.net (Sean Dague) Date: Mon, 15 Dec 2014 13:57:23 -0500 Subject: [openstack-dev] oslo.db 1.3.0 released In-Reply-To: <2F7EE008-BA1F-4C89-B087-3D8EA8BF7484@stufft.io> References: <20141215170104.GO2497@yuggoth.org> <548F2D7E.2000809@dague.net> <2F7EE008-BA1F-4C89-B087-3D8EA8BF7484@stufft.io> Message-ID: <548F2F13.7000901@dague.net> On 12/15/2014 01:53 PM, Donald Stufft wrote: > >> On Dec 15, 2014, at 1:50 PM, Sean Dague wrote: >> >> On 12/15/2014 12:01 PM, Jeremy Stanley wrote: >>> On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote: >>> [...] >>>> This release is primarily meant to update the SQLAlchemy dependency >>>> to resolve the issue with the new version of setuptools changing >>>> how it evaluates version range specifications. >>> [...] >>> >>> However note that I'm in the middle of forcing a refresh on a couple >>> of our PyPI mirror servers, so it may be a couple hours before we >>> see the effects of this throughout all of our infrastructure. >>> >> >> It looks like this change has broken the grenade jobs because now >> oslo.db 1.3.0 ends up being installed in stable/juno environments, which >> has incompatible requirements with the rest of stable juno. >> >> http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz >> >> pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but >> SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db'] >> >> -Sean > > It should probably use the specifier from Juno which matches the old > specifier in functionality. 
Probably, but that was specifically reverted here - https://review.openstack.org/#/c/138546/2/global-requirements.txt,cm -Sean -- Sean Dague http://dague.net From donald at stufft.io Mon Dec 15 19:02:13 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 15 Dec 2014 14:02:13 -0500 Subject: [openstack-dev] oslo.db 1.3.0 released In-Reply-To: <548F2F13.7000901@dague.net> References: <20141215170104.GO2497@yuggoth.org> <548F2D7E.2000809@dague.net> <2F7EE008-BA1F-4C89-B087-3D8EA8BF7484@stufft.io> <548F2F13.7000901@dague.net> Message-ID: > On Dec 15, 2014, at 1:57 PM, Sean Dague wrote: > > On 12/15/2014 01:53 PM, Donald Stufft wrote: >> >>> On Dec 15, 2014, at 1:50 PM, Sean Dague wrote: >>> >>> On 12/15/2014 12:01 PM, Jeremy Stanley wrote: >>>> On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote: >>>> [...] >>>>> This release is primarily meant to update the SQLAlchemy dependency >>>>> to resolve the issue with the new version of setuptools changing >>>>> how it evaluates version range specifications. >>>> [...] >>>> >>>> However note that I'm in the middle of forcing a refresh on a couple >>>> of our PyPI mirror servers, so it may be a couple hours before we >>>> see the effects of this throughout all of our infrastructure. >>>> >>> >>> It looks like this change has broken the grenade jobs because now >>> oslo.db 1.3.0 ends up being installed in stable/juno environments, which >>> has incompatible requirements with the rest of stable juno. >>> >>> http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz >>> >>> pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but >>> SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db'] >>> >>> -Sean >> >> It should probably use the specifier from Juno which matches the old >> specifier in functionality. 
> Probably, but that was specifically reverted here - > https://review.openstack.org/#/c/138546/2/global-requirements.txt,cm > Not sure I follow, that doesn't seem to contain any SQLAlchemy changes? I mean stable/juno has this -> SQLAlchemy>=0.8.4,<=0.9.99,!=0.9.0,!=0.9.1,!=0.9.2,!=0.9.3,!=0.9.4,!=0.9.5,!=0.9.6 and master has this -> SQLAlchemy>=0.9.7,<=0.9.99 I forget who it was but someone suggested just dropping 0.8 in global requirements over the weekend so that's what I did. It appears oslo.db used the SQLAlchemy specifier from master which means that it won't work with SQLAlchemy in the 0.8 series. So probably oslo.db should instead use the one from stable/juno? --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From doug at doughellmann.com Mon Dec 15 19:09:52 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 15 Dec 2014 14:09:52 -0500 Subject: [openstack-dev] oslo.db 1.3.0 released In-Reply-To: References: <20141215170104.GO2497@yuggoth.org> <548F2D7E.2000809@dague.net> <2F7EE008-BA1F-4C89-B087-3D8EA8BF7484@stufft.io> <548F2F13.7000901@dague.net> Message-ID: <61F31B60-A123-442B-91CB-53B85CAA7420@doughellmann.com> On Dec 15, 2014, at 2:02 PM, Donald Stufft wrote: > >> On Dec 15, 2014, at 1:57 PM, Sean Dague wrote: >> >> On 12/15/2014 01:53 PM, Donald Stufft wrote: >>> >>>> On Dec 15, 2014, at 1:50 PM, Sean Dague wrote: >>>> >>>> On 12/15/2014 12:01 PM, Jeremy Stanley wrote: >>>>> On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote: >>>>> [...] >>>>>> This release is primarily meant to update the SQLAlchemy dependency >>>>>> to resolve the issue with the new version of setuptools changing >>>>>> how it evaluates version range specifications. >>>>> [...] >>>>> >>>>> However note that I'm in the middle of forcing a refresh on a couple >>>>> of our PyPI mirror servers, so it may be a couple hours before we >>>>> see the effects of this throughout all of our infrastructure.
>>>>> >>>> >>>> It looks like this change has broken the grenade jobs because now >>>> oslo.db 1.3.0 ends up being installed in stable/juno environments, which >>>> has incompatible requirements with the rest of stable juno. >>>> >>>> http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz >>>> >>>> pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but >>>> SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db'] >>>> >>>> -Sean >>> >>> It should probably use the specifier from Juno which matches the old >>> specifier in functionality. >> >> Probably, but that was specifically reverted here - >> https://review.openstack.org/#/c/138546/2/global-requirements.txt,cm >> > > Not sure I follow, that doesn't seem to contain any SQLAlchemy changes? > > I mean stable/juno has this -> SQLAlchemy>=0.8.4,<=0.9.99,!=0.9.0,!=0.9.1,!=0.9.2,!=0.9.3,!=0.9.4,!=0.9.5,!=0.9.6 > and master has this -> SQLAlchemy>=0.9.7,<=0.9.99 > > I forget who it was but someone suggested just dropping 0.8 in global > requirements over the weekend so that's what I did. > > It appears oslo.db used the SQLAlchemy specifier from master which means that > it won't work with SQLAlchemy in the 0.8 series. So probably oslo.db should > instead use the one from stable/juno? Master oslo.db has to match the requirements list being used elsewhere in master, so it can't use the requirements spec from a stable branch. Can we cap oslo.db in juno to 1.2.0? That should work as a minimum version in the requirements list for master, which would let us maintain an overlapping requirements range to support rolling updates.
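Concretely, the overlap would look something like this in the two requirements lists (the 1.0.0 floor is made up for illustration; only the shared point matters):

```
# stable/juno global-requirements: cap at the overlap point
oslo.db>=1.0.0,<=1.2.0

# master global-requirements: floor at the same point
oslo.db>=1.2.0
```

With both ranges satisfiable by 1.2.0, a deployment can hold a single oslo.db version through a rolling juno-to-kilo upgrade.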
Doug > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From cboylan at sapwetik.org Mon Dec 15 19:11:42 2014 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 15 Dec 2014 11:11:42 -0800 Subject: [openstack-dev] oslo.db 1.3.0 released In-Reply-To: <61F31B60-A123-442B-91CB-53B85CAA7420@doughellmann.com> References: <20141215170104.GO2497@yuggoth.org> <548F2D7E.2000809@dague.net> <2F7EE008-BA1F-4C89-B087-3D8EA8BF7484@stufft.io> <548F2F13.7000901@dague.net> <61F31B60-A123-442B-91CB-53B85CAA7420@doughellmann.com> Message-ID: <1418670702.429643.203185881.0EC40628@webmail.messagingengine.com> On Mon, Dec 15, 2014, at 11:09 AM, Doug Hellmann wrote: > > On Dec 15, 2014, at 2:02 PM, Donald Stufft wrote: > > > > >> On Dec 15, 2014, at 1:57 PM, Sean Dague wrote: > >> > >> On 12/15/2014 01:53 PM, Donald Stufft wrote: > >>> > >>>> On Dec 15, 2014, at 1:50 PM, Sean Dague wrote: > >>>> > >>>> On 12/15/2014 12:01 PM, Jeremy Stanley wrote: > >>>>> On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote: > >>>>> [...] > >>>>>> This release is primarily meant to update the SQLAlchemy dependency > >>>>>> to resolve the issue with the new version of setuptools changing > >>>>>> how it evaluates version range specifications. > >>>>> [...] > >>>>> > >>>>> However note that I'm in the middle of forcing a refresh on a couple > >>>>> of our PyPI mirror servers, so it may be a couple hours before we > >>>>> see the effects of this throughout all of our infrastructure. > >>>>> > >>>> > >>>> It looks like this change has broken the grenade jobs because now > >>>> oslo.db 1.3.0 ends up being installed in stable/juno environments, which > >>>> has incompatible requirements with the rest of stable juno.
> >>>> > >>>> http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz > >>>> > >>>> pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but > >>>> SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db'] > >>>> > >>>> -Sean > >>> > >>> It should probably use the specifier from Juno which matches the old > >>> specifier in functionality. > >> > >> Probably, but that was specifically reverted here - > >> https://review.openstack.org/#/c/138546/2/global-requirements.txt,cm > >> > > > > Not sure I follow, that doesn't seem to contain any SQLAlchemy changes? > > > > I mean stable/juno has this -> SQLAlchemy>=0.8.4,<=0.9.99,!=0.9.0,!=0.9.1,!=0.9.2,!=0.9.3,!=0.9.4,!=0.9.5,!=0.9.6 > > and master has this -> SQLAlchemy>=0.9.7,<=0.9.99 > > > > I forget who it was but someone suggested just dropping 0.8 in global > > requirements over the weekend so that's what I did. > > > > It appears oslo.db used the SQLAlchemy specifier from master which means that > > it won't work with SQLAlchemy in the 0.8 series. So probably oslo.db should > > instead use the one from stable/juno? > > Master oslo.db has to match the requirements list being used elsewhere in > master, so it can't use the requirements spec from a stable branch. > > Can we cap oslo.db in juno to 1.2.0? That should work as a minimum > version in the requirements list for master, which would let us maintain > an overlapping requirements range to support rolling updates.
> > Doug > > > --- > > Donald Stufft > > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA I think you need a 1.2.1 release that doesn't have the broken requirement for sqlalchemy then cap on that in stable/juno. Clark From clint at fewbar.com Mon Dec 15 19:29:03 2014 From: clint at fewbar.com (Clint Byrum) Date: Mon, 15 Dec 2014 11:29:03 -0800 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <548EFB12.2090303@hp.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> <548B8480.9010506@redhat.com> <548EFB12.2090303@hp.com> Message-ID: <1418660028-sup-6503@fewbar.com> Excerpts from Anant Patil's message of 2014-12-15 07:15:30 -0800: > On 13-Dec-14 05:42, Zane Bitter wrote: > > On 12/12/14 05:29, Murugan, Visnusaran wrote: > >> > >> > >>> -----Original Message----- > >>> From: Zane Bitter [mailto:zbitter at redhat.com] > >>> Sent: Friday, December 12, 2014 6:37 AM > >>> To: openstack-dev at lists.openstack.org > >>> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > >>> showdown > >>> > >>> On 11/12/14 08:26, Murugan, Visnusaran wrote: >
>>>>>> [Murugan, Visnusaran] > >>>>>> In case of rollback where we have to cleanup earlier version of > >>>>>> resources, > >>>>> we could get the order from old template. We'd prefer not to have a > >>>>> graph table. > >>>>> > >>>>> In theory you could get it by keeping old templates around. But that > >>>>> means keeping a lot of templates, and it will be hard to keep track > >>>>> of when you want to delete them. It also means that when starting an > >>>>> update you'll need to load every existing previous version of the > >>>>> template in order to calculate the dependencies. It also leaves the > >>>>> dependencies in an ambiguous state when a resource fails, and > >>>>> although that can be worked around it will be a giant pain to implement. > >>>>> > >>>> > >>>> Agree that looking at all templates for a delete is not good. But > >>>> barring complexity, we feel we could achieve it by way of having an > >>>> update and a delete stream for a stack update operation. I will > >>>> elaborate in detail in the etherpad sometime tomorrow :) > >>>> > >>>>> I agree that I'd prefer not to have a graph table. After trying a > >>>>> couple of different things I decided to store the dependencies in the > >>>>> Resource table, where we can read or write them virtually for free > >>>>> because it turns out that we are always reading or updating the > >>>>> Resource itself at exactly the same time anyway. > >>>>> > >>>> > >>>> Not sure how this will work in an update scenario when a resource does > >>>> not change and its dependencies do. > >>> > >>> We'll always update the requirements, even when the properties don't > >>> change. > >>> > >> > >> Can you elaborate a bit on rollback. > > > > I didn't do anything special to handle rollback.
It's possible that we > > need to - obviously the difference in the UpdateReplace + rollback case > > is that the replaced resource is now the one we want to keep, and yet > > the replaced_by/replaces dependency will force the newer (replacement) > > resource to be checked for deletion first, which is an inversion of the > > usual order. > > > > This is where the version is so handy! For UpdateReplaced ones, there is > an older version to go back to. This version could just be the template ID, > as I mentioned in another e-mail. All resources are at the current > template ID if they are found in the current template, even if there is > no need to update them. Otherwise, they need to be cleaned up in the > order given in the previous templates. > > I think the template ID is used as the version as far as I can see in Zane's > PoC. If the resource template key doesn't match the current template > key, the resource is deleted. The version is a misnomer here, but that > field (template id) is used as though we had versions of resources. > > > However, I tried to think of a scenario where that would cause problems > > and I couldn't come up with one. Provided we know the actual, real-world > > dependencies of each resource I don't think the ordering of those two > > checks matters. > > > > In fact, I currently can't think of a case where the dependency order > > between replacement and replaced resources matters at all. It matters in > > the current Heat implementation because resources are artificially > > segmented into the current and backup stacks, but with a holistic view > > of dependencies that may well not be required. I tried taking that line > > out of the simulator code and all the tests still passed. If anybody can > > think of a scenario in which it would make a difference, I would be very > > interested to hear it.
> > In any event though, it should be no problem to reverse the direction of > > that one edge in these particular circumstances if it does turn out to > > be a problem. > > > >> We had an approach with depends_on > >> and needed_by columns in ResourceTable. But dropped it when we figured out > >> we had too many DB operations for Update. > > > > Yeah, I initially ran into this problem too - you have a bunch of nodes > > that are waiting on the current node, and now you have to go look them > > all up in the database to see what else they're waiting on in order to > > tell if they're ready to be triggered. > > > > It turns out the answer is to distribute the writes but centralise the > > reads. So at the start of the update, we read all of the Resources, > > obtain their dependencies and build one central graph[1]. We then make > > that graph available to each resource (either by passing it as a > > notification parameter, or storing it somewhere central in the DB that > > they will all have to read anyway, i.e. the Stack). But when we update a > > dependency we don't update the central graph, we update the individual > > Resource so there's no global lock required. > > > > [1] > > https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/stack.py#L166-L168 > > > > A centralized graph and decision making will make the implementation far > simpler than a distributed one. This looks academic, but the simplicity > beats everything! When each worker has to decide, there needs to be a > lock; DB transactions alone are not enough. In contrast, when the > decision making is centralized, that particular critical section can be > attempted with a transaction and re-attempted if needed. > I'm concerned that we're losing sight of the whole point of convergence. Yes, concurrency is hard, and state management is really the only thing hard about concurrency. What Zane is suggesting is a lock-free approach commonly called 'Read Copy Update' or "RCU".
It has high reader concurrency, but relies on there only being one writer. It is quite simple, and has proven itself enough to even be included in the Linux kernel: http://lwn.net/Articles/263130/ > With the distributed approach, I see the following drawbacks: > 1. Every time a resource is done, the peer resources (siblings) are > checked to see if they are done and the parent is propagated. This > happens for each resource. This is slightly inaccurate. Every time a resource is done, resources that depend on that resource are checked to see if they still have any unresolved dependencies. > 2. The worker has to run through all the resources to see if the stack > is done, to mark it as completed. If a worker completes a resource which has no dependent resources, it only needs to check to see if all of the other edges of the graph are complete to mark the state as complete. There is no full traversal unless you want to make sure nothing regressed, which is not the way any of the specs read. > 3. The decision to converge is made by each worker, resulting in a lot of > contention. The centralized graph restricts the contention point to one > place where we can use DB transactions. It is easier to maintain code > where particular decisions are made in one place rather than at many > places. Why not let multiple workers use DB transactions? The contention happens _only if it needs to happen to preserve transaction consistency_ instead of _always_. > 4. The complex part we are trying to solve is to decide on what to do > next when a resource is done. With a centralized graph, this is abstracted > out to the DB API. The API will return the next set of nodes. A smart > SQL query can reduce a lot of logic currently being coded in > worker/engine. Having seen many such "smart" SQL queries, I have to say, this is terrifying. Complexity in database access is by far the single biggest obstacle to scaling out. I don't really know why you'd want logic to move into the database.
It is the one place that you must keep simple in order to scale. We can scale out python like crazy, but SQL is generally a single-threaded, impossible to debug component. So make the schema obvious and accesses to it straightforward. I think we need to land somewhere between the two approaches though. Here is my idea for DB interaction; I realize now it's been in my head for a while but I never shared it:

    CREATE TABLE resource (
        id ...,            ! all the stuff now
        version int,
        replaced_by int,
        complete_order int,
        primary key (id, version),
        key idx_replaced_by (replaced_by));

    CREATE TABLE resource_dependencies (
        id ....,
        version int,
        needed_by ...,
        primary key (id, version, needed_by));

Then completion happens something like this:

    BEGIN
    SELECT @complete_order := MAX(complete_order)
        FROM resource WHERE stack_id = :stack_id:
    SET @complete_order := @complete_order + 1
    UPDATE resource SET complete_order = @complete_order, state='COMPLETE'
        WHERE id=:id: AND version=:version:;
    ! if there is a replaced_version
    UPDATE resource SET replaced_by=:version:
        WHERE id=:id: AND version=:replaced_version:;
    SELECT DISTINCT r.id FROM resource r
        INNER JOIN resource_dependencies rd
            ON r.id = rd.id AND r.version = rd.version
        WHERE r.version=:version: AND rd.needed_by=:id:
            AND r.state != 'COMPLETE'
    for id in results:
        convergequeue.submit(id)
    COMMIT

Perhaps I've missed some revelation that makes this hard or impossible. But I don't see a ton of database churn (one update per completion is meh). I also don't see a lot of complexity in the query. The complete_order can be used to handle deletes in the right order later (note that I know that is probably the wrong way to do it and sequences are a thing that can be used for this). > 5. What would be the starting point for resource clean-up? The clean-up > has to start when all the resources are updated.
With no centralized > graph, the DB has to be searched for all the resources with no > dependencies and with older versions (or having older template keys) and > start removing them. With a centralized graph, this would be simpler, > with SQL queries returning what needs to be done. The search space for > where to start with clean-up will be huge. "Searched" may be the wrong way. With the table structure above, you can find everything to delete with this query:

    ! Outright deletes
    SELECT r_old.id FROM resource r_old
        LEFT OUTER JOIN resource r_new
            ON r_old.id = r_new.id AND r_old.version = :cleanup_version:
        WHERE r_new.id IS NULL OR r_old.replaced_by IS NOT NULL
        ORDER BY r_old.complete_order DESC;

That should delete everything in more or less the right order. I think for that one you can just delete the rows as they're confirmed deleted from the plugins, no large transaction needed since we'd not expect these rows to be updated anymore. > 6. When engine restarts, the search space on where to start will be > huge. With a centralized graph, the abstracted API to get the next set of > nodes makes the implementation of the decision simpler. > > I am convinced enough that it is simpler to assign the responsibility to > the engine on what needs to be done next. No locks will be required, not > even resource locks! It is simpler from an implementation, understanding > and maintenance perspective. > I thought you started by saying you would need locks, but now are saying you won't. I agree no abstract locking is needed, just a consistent view of the graph in the DB. From Sean_Collins2 at cable.comcast.com Mon Dec 15 19:33:50 2014 From: Sean_Collins2 at cable.comcast.com (Collins, Sean) Date: Mon, 15 Dec 2014 19:33:50 +0000 Subject: [openstack-dev] [DevStack] A grenade for DevStack? Message-ID: <20141215193349.GD35938@HQSML-1081034> Hi, I've been bitten by a couple bugs lately on DevStack installs that have been long lived. One is built by Vagrant, and the other is bare metal hardware.
https://bugs.launchpad.net/devstack/+bug/1395776 In this instance, a commit was made to roll back the introduction of MariaDB in ubuntu, but it does not appear that there was anything done to revert the change in existing deployments, so I had to go and fix by hand on the bare metal lab because I didn't want to waste a lot of time rebuilding the whole lab from scratch just to fix it. I also got bit by this on my Vagrant node, but I nuked and paved to fix. https://bugs.launchpad.net/devstack/+bug/1402762 It'd be great to somehow make a long-lived dsvm node and job where DevStack is continually deployed to it and restacked, to check for these kinds of errors? -- Sean M. Collins From sean at dague.net Mon Dec 15 20:03:41 2014 From: sean at dague.net (Sean Dague) Date: Mon, 15 Dec 2014 15:03:41 -0500 Subject: [openstack-dev] [DevStack] A grenade for DevStack? In-Reply-To: <20141215193349.GD35938@HQSML-1081034> References: <20141215193349.GD35938@HQSML-1081034> Message-ID: <548F3E9D.20605@dague.net> On 12/15/2014 02:33 PM, Collins, Sean wrote: > Hi, > > I've been bitten by a couple bugs lately on DevStack installs that have > been long lived. One is built by Vagrant, and the other is bare metal > hardware. > > https://bugs.launchpad.net/devstack/+bug/1395776 > > In this instance, a commit was made to roll back the introduction of > MariaDB in ubuntu, but it does not appear that there was anything done > to revert the change in existing deployments, so I had to go and fix by > hand on the bare metal lab because I didn't want to waste a lot of time > rebuilding the whole lab from scratch just to fix it. > > I also got bit by this on my Vagrant node, but I nuked and paved to fix. > > https://bugs.launchpad.net/devstack/+bug/1402762 > > It'd be great to somehow make a long-lived dsvm node and job where > DevStack is continually deployed to it and restacked, to check for these > kinds of errors?
One of the things we don't test on the devstack side at all is that clean.sh takes us back down to baseline, which I think was the real issue here - https://review.openstack.org/#/c/141891/ I would not be opposed to adding cleanup testing at the end of any devstack run that ensures everything is shut down correctly and cleaned up to a base level. -Sean -- Sean Dague http://dague.net From doug at doughellmann.com Mon Dec 15 20:21:33 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 15 Dec 2014 15:21:33 -0500 Subject: [openstack-dev] oslo.db 1.2.1 release coming to fix stable/juno Message-ID: <27A76931-16EF-400B-8EDD-6C3E18F52053@doughellmann.com> The issue with stable/juno jobs failing because of the difference in the SQLAlchemy requirements between the older applications and the newer oslo.db is being addressed with a new release of the 1.2.x series. We will then cap the requirements for stable/juno to 1.2.1. We decided we did not need to raise the minimum version of oslo.db allowed in kilo, because the old versions of the library do work, if they are installed from packages and not through setuptools. Jeremy created a feature/1.2 branch for us, and I have 2 patches up [1][2] to apply the requirements fix. The change to the oslo.db version in stable/juno is [3]. After the changes in oslo.db merge, I will tag 1.2.1. 
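The cap itself is a one-line constraint in the stable/juno requirements list; a sketch of what it would look like (the lower bound shown here is illustrative, not taken from the actual review):

```
# stable/juno requirements fragment; lower bound is illustrative
oslo.db>=1.0.0,<=1.2.1
```

With a cap like this, dependency resolution for stable/juno refuses any oslo.db release newer than 1.2.1, while kilo remains uncapped.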
Doug [1] https://review.openstack.org/#/c/141893/ [2] https://review.openstack.org/#/c/141894/ [3] https://review.openstack.org/#/c/141896/ From aroostifer at yahoo.com Mon Dec 15 20:32:08 2014 From: aroostifer at yahoo.com (Ari Rubenstein) Date: Mon, 15 Dec 2014 20:32:08 +0000 (UTC) Subject: [openstack-dev] Unsafe Abandon In-Reply-To: <1418660028-sup-6503@fewbar.com> References: <1418660028-sup-6503@fewbar.com> Message-ID: <857945511.287944.1418675528377.JavaMail.yahoo@jws100112.mail.ne1.yahoo.com> Hi there, I'm new to the list, and trying to get more information about the following issue: https://bugs.launchpad.net/heat/+bug/1353670 Is there anyone on the list who can explain under what conditions a user might hit this? Workarounds? ETA for a fix? Thanks! - Ari On Monday, December 15, 2014 11:30 AM, Clint Byrum wrote: Excerpts from Anant Patil's message of 2014-12-15 07:15:30 -0800: > On 13-Dec-14 05:42, Zane Bitter wrote: > > On 12/12/14 05:29, Murugan, Visnusaran wrote: > >> > >> > >>> -----Original Message----- > >>> From: Zane Bitter [mailto:zbitter at redhat.com] > >>> Sent: Friday, December 12, 2014 6:37 AM > >>> To: openstack-dev at lists.openstack.org > >>> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > >>> showdown > >>> > >>> On 11/12/14 08:26, Murugan, Visnusaran wrote: > >>>>>> [Murugan, Visnusaran] > >>>>>> In case of rollback where we have to cleanup earlier version of > >>>>>> resources, > >>>>> we could get the order from old template. We'd prefer not to have a > >>>>> graph table. > >>>>> > >>>>> In theory you could get it by keeping old templates around. But that > >>>>> means keeping a lot of templates, and it will be hard to keep track > >>>>> of when you want to delete them. It also means that when starting an > >>>>> update you'll need to load every existing previous version of the > >>>>> template in order to calculate the dependencies.
It also leaves the > >>>>> dependencies in an ambiguous state when a resource fails, and > >>>>> although that can be worked around it will be a giant pain to implement. > >>>>> > >>>> > >>>> Agree that looking at all templates for a delete is not good. But > >>>> barring complexity, we feel we could achieve it by way of having an > >>>> update and a delete stream for a stack update operation. I will > >>>> elaborate in detail in the etherpad sometime tomorrow :) > >>>> > >>>>> I agree that I'd prefer not to have a graph table. After trying a > >>>>> couple of different things I decided to store the dependencies in the > >>>>> Resource table, where we can read or write them virtually for free > >>>>> because it turns out that we are always reading or updating the > >>>>> Resource itself at exactly the same time anyway. > >>>>> > >>>> > >>>> Not sure how this will work in an update scenario when a resource does > >>>> not change and its dependencies do. > >>> > >>> We'll always update the requirements, even when the properties don't > >>> change. > >>> > >> > >> Can you elaborate a bit on rollback? > > > > I didn't do anything special to handle rollback. It's possible that we > > need to - obviously the difference in the UpdateReplace + rollback case > > is that the replaced resource is now the one we want to keep, and yet > > the replaced_by/replaces dependency will force the newer (replacement) > > resource to be checked for deletion first, which is an inversion of the > > usual order. > > > > This is where the version is so handy! For UpdateReplaced ones, there is > an older version to go back to. This version could just be template ID, > as I mentioned in another e-mail. All resources are at the current > template ID if they are found in the current template, even if there is > no need to update them. Otherwise, they need to be cleaned up in the > order given in the previous templates.
> > I think the template ID is used as the version as far as I can see in Zane's > PoC. If the resource template key doesn't match the current template > key, the resource is deleted. The version is a misnomer here, but that > field (template id) is used as though we had versions of resources. > > > However, I tried to think of a scenario where that would cause problems > > and I couldn't come up with one. Provided we know the actual, real-world > > dependencies of each resource I don't think the ordering of those two > > checks matters. > > > > In fact, I currently can't think of a case where the dependency order > > between replacement and replaced resources matters at all. It matters in > > the current Heat implementation because resources are artificially > > segmented into the current and backup stacks, but with a holistic view > > of dependencies that may well not be required. I tried taking that line > > out of the simulator code and all the tests still passed. If anybody can > > think of a scenario in which it would make a difference, I would be very > > interested to hear it. > > > > In any event though, it should be no problem to reverse the direction of > > that one edge in these particular circumstances if it does turn out to > > be a problem. > > > >> We had an approach with depends_on > >> and needed_by columns in ResourceTable. But dropped it when we figured out > >> we had too many DB operations for Update. > > > > Yeah, I initially ran into this problem too - you have a bunch of nodes > > that are waiting on the current node, and now you have to go look them > > all up in the database to see what else they're waiting on in order to > > tell if they're ready to be triggered. > > > > It turns out the answer is to distribute the writes but centralise the > > reads. So at the start of the update, we read all of the Resources, > > obtain their dependencies and build one central graph[1].
We then make > > that graph available to each resource (either by passing it as a > > notification parameter, or storing it somewhere central in the DB that > > they will all have to read anyway, i.e. the Stack). But when we update a > > dependency we don't update the central graph, we update the individual > > Resource so there's no global lock required. > > > > [1] > > https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/stack.py#L166-L168 > > > > A centralized graph and decision making will make the implementation far > simpler than distributed. This looks academic, but the simplicity > beats everything! When each worker has to decide, there needs to be a > lock; DB transactions alone are not enough. In contrast, when the > decision making is centralized, that particular critical section can be > attempted with a transaction and re-attempted if needed. > I'm concerned that we're losing sight of the whole point of convergence. Yes, concurrency is hard, and state management is really the only thing hard about concurrency. What Zane is suggesting is a lock-free approach commonly called 'Read Copy Update' or "RCU". It has high reader concurrency, but relies on there only being one writer. It is quite simple, and has proven itself enough to even be included in the Linux kernel: http://lwn.net/Articles/263130/ > With the distributed approach, I see the following drawbacks: > 1. Every time a resource is done, the peer resources (siblings) are > checked to see if they are done and the parent is propagated. This > happens for each resource. This is slightly inaccurate. Every time a resource is done, resources that depend on that resource are checked to see if they still have any unresolved dependencies. > 2. The worker has to run through all the resources to see if the stack > is done, to mark it as completed.
If a worker completes a resource which has no dependent resources, it only needs to check to see if all of the other edges of the graph are complete to mark the state as complete. There is no full traversal unless you want to make sure nothing regressed, which is not the way any of the specs read. > 3. The decision to converge is made by each worker resulting in a lot of > contention. The centralized graph restricts the contention point to one > place where we can use DB transactions. It is easier to maintain code > where particular decisions are made in one place rather than in many > places. Why not let multiple workers use DB transactions? The contention happens _only if it needs to happen to preserve transaction consistency_ instead of _always_. > 4. The complex part we are trying to solve is to decide on what to do > next when a resource is done. With centralized graph, this is abstracted > out to the DB API. The API will return the next set of nodes. A smart > SQL query can reduce a lot of logic currently being coded in > worker/engine. Having seen many such "smart" SQL queries, I have to say, this is terrifying. Complexity in database access is by far the single biggest obstacle to scaling out. I don't really know why you'd want logic to move into the database. It is the one place that you must keep simple in order to scale. We can scale out python like crazy, but SQL is generally a single-threaded, impossible-to-debug component. So make the schema obvious and accesses to it straightforward. I think we need to land somewhere between the two approaches though. Here is my idea for DB interaction, I realize now it's been in my head for a while but I never shared:

CREATE TABLE resource (
    id ...,             ! all the stuff now
    version int,
    replaced_by int,
    complete_order int,
    primary key (id, version),
    key idx_replaced_by (replaced_by));

CREATE TABLE resource_dependencies (
    id ...,
    version int,
    needed_by ...,
    primary key (id, version, needed_by));

Then completion happens something like this:

BEGIN
SELECT @complete_order := MAX(complete_order) FROM resource
    WHERE stack_id = :stack_id:
SET @complete_order := @complete_order + 1
UPDATE resource SET complete_order = @complete_order, state='COMPLETE'
    WHERE id=:id: AND version=:version:;
! if there is a replaced_version
UPDATE resource SET replaced_by=:version:
    WHERE id=:id: AND version=:replaced_version:;
SELECT DISTINCT r.id FROM resource r
    INNER JOIN resource_dependencies rd
        ON r.id = rd.id AND r.version = rd.version
    WHERE r.version=:version: AND rd.needed_by=:id: AND r.state != 'COMPLETE'
for id in results:
    convergequeue.submit(id)
COMMIT

Perhaps I've missed some revelation that makes this hard or impossible. But I don't see a ton of database churn (one update per completion is meh). I also don't see a lot of complexity in the query. The complete_order can be used to handle deletes in the right order later (note that I know that is probably the wrong way to do it and sequences are a thing that can be used for this). > 5. What would be the starting point for resource clean-up? The clean-up > has to start when all the resources are updated. With no centralized > graph, the DB has to be searched for all the resources with no > dependencies and with older versions (or having older template keys) and > start removing them. With centralized graph, this would be simpler > with SQL queries returning what needs to be done. The search space for > where to start with clean-up will be huge. "Searched" may be the wrong way. With the table structure above, you can find everything to delete with this query:

! Outright deletes
SELECT r_old.id FROM resource r_old
LEFT OUTER JOIN resource r_new
    ON r_old.id = r_new.id AND r_old.version = :cleanup_version:
WHERE r_new.id IS NULL OR r_old.replaced_by IS NOT NULL
ORDER BY r_old.complete_order DESC;

That should delete everything in more or less the right order.
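The cleanup idea sketched above can be exercised end-to-end with a small self-contained SQLite script. This is only an illustration: the schema is abbreviated, the rows are invented, and the join condition is adjusted so that old rows are compared against the rows of the new version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Abbreviated form of the sketched resource table.
cur.execute("""CREATE TABLE resource (
    id INTEGER, version INTEGER, replaced_by INTEGER,
    complete_order INTEGER, PRIMARY KEY (id, version))""")
# Resource 1 was replaced in version 2; resource 2 vanished from the
# new template entirely.
cur.executemany("INSERT INTO resource VALUES (?, ?, ?, ?)", [
    (1, 1, 2, 1),     # old row, replaced by the version-2 row
    (2, 1, None, 2),  # old row with no counterpart in version 2
    (1, 2, None, 3),  # current row, must survive
])

cleanup_version = 2
cur.execute("""
    SELECT r_old.id, r_old.version FROM resource r_old
    LEFT OUTER JOIN resource r_new
        ON r_old.id = r_new.id AND r_new.version = ?
    WHERE r_old.version < ?
      AND (r_new.id IS NULL OR r_old.replaced_by IS NOT NULL)
    ORDER BY r_old.complete_order DESC
""", (cleanup_version, cleanup_version))
stale = cur.fetchall()
print(stale)  # rows safe to remove, most recently completed first
```

Run as-is this prints [(2, 1), (1, 1)]: both version-1 rows are stale, in reverse completion order, and the surviving (1, 2) row is untouched.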
I think for that one you can just delete the rows as they're confirmed deleted from the plugins, no large transaction needed since we'd not expect these rows to be updated anymore. > 6. When engine restarts, the search space on where to start will be > huge. With a centralized graph, the abstracted API to get next set of > nodes makes the implementation of decision simpler. > > I am convinced enough that it is simpler to assign the responsibility to > engine on what needs to be done next. No locks will be required, not > even resource locks! It is simpler from implementation, understanding > and maintenance perspective. > I thought you started saying you would need locks, but now saying you won't. I agree no abstract locking is needed, just a consistent view of the graph in the DB. _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marun at redhat.com Mon Dec 15 20:39:29 2014 From: marun at redhat.com (Maru Newby) Date: Mon, 15 Dec 2014 12:39:29 -0800 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <548B60D0.7090600@dague.net> References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> <548B60D0.7090600@dague.net> Message-ID: On Dec 12, 2014, at 1:40 PM, Sean Dague wrote: > On 12/12/2014 01:05 PM, Maru Newby wrote: >> >> On Dec 11, 2014, at 2:27 PM, Sean Dague wrote: >> >>> On 12/11/2014 04:16 PM, Jay Pipes wrote: >>>> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: >>>>> On Dec 11, 2014, at 1:04 PM, Jay Pipes wrote: >>>>>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: >>>>>>> >>>>>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau wrote: >>>>>>> >>>>>>>> On Thu, Dec 11, 2014, Mark McClain wrote: >>>>>>>>> >>>>>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >>>>>>>>> > wrote: >>>>>>>>>> >>>>>>>>>> I'm generally in favor of making name attributes opaque, utf-8 >>>>>>>>>> strings that >>>>>>>>>> are entirely user-defined and have no constraints on them. I >>>>>>>>>> consider the >>>>>>>>>> name to be just a tag that the user places on some resource. It >>>>>>>>>> is the >>>>>>>>>> resource's ID that is unique. >>>>>>>>>> >>>>>>>>>> I do realize that Nova takes a different approach to *some* >>>>>>>>>> resources, >>>>>>>>>> including the security group name. >>>>>>>>>> >>>>>>>>>> End of the day, it's probably just a personal preference whether >>>>>>>>>> names >>>>>>>>>> should be unique to a tenant/user or not. 
>>>>>>>>>> >>>>>>>>>> Maru had asked me my opinion on whether names should be unique and I >>>>>>>>>> answered my personal opinion that no, they should not be, and if >>>>>>>>>> Neutron >>>>>>>>>> needed to ensure that there was one and only one default security >>>>>>>>>> group for >>>>>>>>>> a tenant, that a way to accomplish such a thing in a race-free >>>>>>>>>> way, without >>>>>>>>>> use of SELECT FOR UPDATE, was to use the approach I put into the >>>>>>>>>> pastebin on >>>>>>>>>> the review above. >>>>>>>>>> >>>>>>>>> >>>>>>>>> I agree with Jay. We should not care about how a user names the >>>>>>>>> resource. >>>>>>>>> There are other ways to prevent this race and Jay's suggestion is a >>>>>>>>> good one. >>>>>>>> >>>>>>>> However we should open a bug against Horizon because the user >>>>>>>> experience there >>>>>>>> is terrible with duplicate security group names. >>>>>>> >>>>>>> The reason security group names are unique is that the ec2 api >>>>>>> supports source >>>>>>> rule specifications by tenant_id (user_id in amazon) and name, so >>>>>>> not enforcing >>>>>>> uniqueness means that invocation in the ec2 api will either fail or be >>>>>>> non-deterministic in some way. >>>>>> >>>>>> So we should couple our API evolution to EC2 API then? >>>>>> >>>>>> -jay >>>>> >>>>> No I was just pointing out the historical reason for uniqueness, and >>>>> hopefully >>>>> encouraging someone to find the best behavior for the ec2 api if we >>>>> are going >>>>> to keep the incompatibility there. Also I personally feel the ux is >>>>> better >>>>> with unique names, but it is only a slight preference.
>> >> While there is a good case for the UX of unique names - it also makes orchestration via tools like puppet a heck of a lot simpler - the fact is that most OpenStack resources do not require unique names. That being the case, why would we want security groups to deviate from this convention? > > Maybe the other ones are the broken ones? > > Honestly, any sanely usable system makes names unique inside a > container. Like files in a directory. In this case the tenant is the > container, which makes sense. > > It is one of many places that OpenStack is not consistent. But I'd > rather make things consistent and more usable than consistent and less. You might prefer less consistency for the sake of usability, but for me, consistency is a large enough factor in usability that allowing seemingly arbitrary deviation doesn't seem like a huge win. Regardless, I'd like to see us come to decisions on API usability on an OpenStack-wide basis, so the API working group is probably where this discussion should continue. Maru From sorlando at nicira.com Mon Dec 15 20:47:16 2014 From: sorlando at nicira.com (Salvatore Orlando) Date: Mon, 15 Dec 2014 20:47:16 +0000 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: References: <54899F92.2060900@gmail.com> <5489BFBB.50802@cisco.com> <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> <548B60D0.7090600@dague.net> Message-ID: I think the point made is that the behaviour is currently inconsistent and not user friendly. Regardless of that, I would like to point out that, technically, this kind of change is backward incompatible and so it should not be simply approved by popular acclamation. I will seek input from the API WG in the next meeting.
Salvatore On 15 December 2014 at 20:39, Maru Newby wrote: > > > On Dec 12, 2014, at 1:40 PM, Sean Dague wrote: > > > On 12/12/2014 01:05 PM, Maru Newby wrote: > >> > >> On Dec 11, 2014, at 2:27 PM, Sean Dague wrote: > >> > >>> On 12/11/2014 04:16 PM, Jay Pipes wrote: > >>>> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: > >>>>> On Dec 11, 2014, at 1:04 PM, Jay Pipes wrote: > >>>>>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: > >>>>>>> > >>>>>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau > wrote: > >>>>>>> > >>>>>>>> On Thu, Dec 11, 2014, Mark McClain wrote: > >>>>>>>>> > >>>>>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >>>>>>>>>> > wrote: > >>>>>>>>>> > >>>>>>>>>> I'm generally in favor of making name attributes opaque, utf-8 > >>>>>>>>>> strings that > >>>>>>>>>> are entirely user-defined and have no constraints on them. I > >>>>>>>>>> consider the > >>>>>>>>>> name to be just a tag that the user places on some resource. It > >>>>>>>>>> is the > >>>>>>>>>> resource's ID that is unique. > >>>>>>>>>> > >>>>>>>>>> I do realize that Nova takes a different approach to *some* > >>>>>>>>>> resources, > >>>>>>>>>> including the security group name. > >>>>>>>>>> > >>>>>>>>>> End of the day, it's probably just a personal preference whether > >>>>>>>>>> names > >>>>>>>>>> should be unique to a tenant/user or not. > >>>>>>>>>> > >>>>>>>>>> Maru had asked me my opinion on whether names should be unique > and I > >>>>>>>>>> answered my personal opinion that no, they should not be, and if > >>>>>>>>>> Neutron > >>>>>>>>>> needed to ensure that there was one and only one default > security > >>>>>>>>>> group for > >>>>>>>>>> a tenant, that a way to accomplish such a thing in a race-free > >>>>>>>>>> way, without > >>>>>>>>>> use of SELECT FOR UPDATE, was to use the approach I put into the > >>>>>>>>>> pastebin on > >>>>>>>>>> the review above. > >>>>>>>>>> > >>>>>>>>> > >>>>>>>>> I agree with Jay. We should not care about how a user names the > >>>>>>>>> resource. 
> >>>>>>>>> There are other ways to prevent this race and Jay's suggestion is a > >>>>>>>>> good one. > >>>>>>>> > >>>>>>>> However we should open a bug against Horizon because the user > >>>>>>>> experience there > >>>>>>>> is terrible with duplicate security group names. > >>>>>>> > >>>>>>> The reason security group names are unique is that the ec2 api > >>>>>>> supports source > >>>>>>> rule specifications by tenant_id (user_id in amazon) and name, so > >>>>>>> not enforcing > >>>>>>> uniqueness means that invocation in the ec2 api will either fail > or be > >>>>>>> non-deterministic in some way. > >>>>>> > >>>>>> So we should couple our API evolution to EC2 API then? > >>>>>> > >>>>>> -jay > >>>>> > >>>>> No I was just pointing out the historical reason for uniqueness, and > >>>>> hopefully > >>>>> encouraging someone to find the best behavior for the ec2 api if we > >>>>> are going > >>>>> to keep the incompatibility there. Also I personally feel the ux is > >>>>> better > >>>>> with unique names, but it is only a slight preference. > >>>> > >>>> Sorry for snapping, you made a fair point. > >>> > >>> Yeh, honestly, I agree with Vish. I do feel that the UX of that > >>> constraint is useful. Otherwise you get into having to show people > UUIDs > >>> in a lot more places. While those are good for consistency, they are > >>> kind of terrible to show to people. > >> > >> While there is a good case for the UX of unique names - it also makes > orchestration via tools like puppet a heck of a lot simpler - the fact is > that most OpenStack resources do not require unique names. That being the > case, why would we want security groups to deviate from this convention? > > > > Maybe the other ones are the broken ones? > > > > Honestly, any sanely usable system makes names unique inside a > > container. Like files in a directory. In this case the tenant is the > > container, which makes sense. > > > > It is one of many places that OpenStack is not consistent.
But I'd > > rather make things consistent and more usable than consistent and less. > > You might prefer less consistency for the sake of usability, but for me, > consistency is a large enough factor in usability that allowing seemingly > arbitrary deviation doesn't seem like a huge win. Regardless, I'd like to > see us come to decisions on API usability on an OpenStack-wide basis, so > the API working group is probably where this discussion should continue. > > > Maru > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marun at redhat.com Mon Dec 15 20:49:26 2014 From: marun at redhat.com (Maru Newby) Date: Mon, 15 Dec 2014 12:49:26 -0800 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: <936303430.2524172.1418663608280.JavaMail.zimbra@redhat.com> References: <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> <548B60D0.7090600@dague.net> <548F1235.2020502@redhat.com> <936303430.2524172.1418663608280.JavaMail.zimbra@redhat.com> Message-ID: <695A9601-ECFA-4A85-BCBF-B4C89F473875@redhat.com> On Dec 15, 2014, at 9:13 AM, Assaf Muller wrote: > > > ----- Original Message ----- >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA512 >> >> I was (rightfully) asked to share my comments on the matter that I >> left in gerrit here. See below.
>> >> On 12/12/14 22:40, Sean Dague wrote: >>> On 12/12/2014 01:05 PM, Maru Newby wrote: >>>> >>>> On Dec 11, 2014, at 2:27 PM, Sean Dague wrote: >>>> >>>>> On 12/11/2014 04:16 PM, Jay Pipes wrote: >>>>>> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: >>>>>>> On Dec 11, 2014, at 1:04 PM, Jay Pipes >>>>>>> wrote: >>>>>>>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: >>>>>>>>> >>>>>>>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> On Thu, Dec 11, 2014, Mark McClain >>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes >>>>>>>>>>>> > >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>> I'm generally in favor of making name attributes >>>>>>>>>>>> opaque, utf-8 strings that are entirely >>>>>>>>>>>> user-defined and have no constraints on them. I >>>>>>>>>>>> consider the name to be just a tag that the user >>>>>>>>>>>> places on some resource. It is the resource's ID >>>>>>>>>>>> that is unique. >>>>>>>>>>>> >>>>>>>>>>>> I do realize that Nova takes a different approach >>>>>>>>>>>> to *some* resources, including the security group >>>>>>>>>>>> name. >>>>>>>>>>>> >>>>>>>>>>>> End of the day, it's probably just a personal >>>>>>>>>>>> preference whether names should be unique to a >>>>>>>>>>>> tenant/user or not. >>>>>>>>>>>> >>>>>>>>>>>> Maru had asked me my opinion on whether names >>>>>>>>>>>> should be unique and I answered my personal >>>>>>>>>>>> opinion that no, they should not be, and if >>>>>>>>>>>> Neutron needed to ensure that there was one and >>>>>>>>>>>> only one default security group for a tenant, >>>>>>>>>>>> that a way to accomplish such a thing in a >>>>>>>>>>>> race-free way, without use of SELECT FOR UPDATE, >>>>>>>>>>>> was to use the approach I put into the pastebin >>>>>>>>>>>> on the review above. >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I agree with Jay. We should not care about how a >>>>>>>>>>> user names the resource. 
There are other ways to >>>>>>>>>>> prevent this race and Jay's suggestion is a good >>>>>>>>>>> one. >>>>>>>>>> >>>>>>>>>> However we should open a bug against Horizon because >>>>>>>>>> the user experience there is terrible with duplicate >>>>>>>>>> security group names. >>>>>>>>> >>>>>>>>> The reason security group names are unique is that the >>>>>>>>> ec2 api supports source rule specifications by >>>>>>>>> tenant_id (user_id in amazon) and name, so not >>>>>>>>> enforcing uniqueness means that invocation in the ec2 >>>>>>>>> api will either fail or be non-deterministic in some >>>>>>>>> way. >>>>>>>> >>>>>>>> So we should couple our API evolution to EC2 API then? >>>>>>>> >>>>>>>> -jay >>>>>>> >>>>>>> No I was just pointing out the historical reason for >>>>>>> uniqueness, and hopefully encouraging someone to find the >>>>>>> best behavior for the ec2 api if we are going to keep the >>>>>>> incompatibility there. Also I personally feel the ux is >>>>>>> better with unique names, but it is only a slight >>>>>>> preference. >>>>>> >>>>>> Sorry for snapping, you made a fair point. >>>>> >>>>> Yeh, honestly, I agree with Vish. I do feel that the UX of >>>>> that constraint is useful. Otherwise you get into having to >>>>> show people UUIDs in a lot more places. While those are good >>>>> for consistency, they are kind of terrible to show to people. >>>> >>>> While there is a good case for the UX of unique names - it also >>>> makes orchestration via tools like puppet a heck of a lot simpler >>>> - the fact is that most OpenStack resources do not require unique >>>> names. That being the case, why would we want security groups to >>>> deviate from this convention? >>> >>> Maybe the other ones are the broken ones? >>> >>> Honestly, any sanely usable system makes names unique inside a >>> container. Like files in a directory. >> >> Correct. Or take git: it does not use hashes to identify objects, right? >> >>> In this case the tenant is the container, which makes sense.
>>> >>> It is one of many places that OpenStack is not consistent. But I'd >>> rather make things consistent and more usable than consistent and >>> less. >> >> Are we only proposing to make the security group name unique? I assume >> that, since that's what we currently have in review. The change would >> make the API *more* inconsistent, not less, since other objects still use >> uuid for identification. >> >> You may say that we should move *all* neutron objects to the new >> identification system by name. But what's the real benefit? >> >> If there are problems in UX (client, horizon, ...), we should fix the >> view and not the data models used. If we decide we want users to avoid >> using objects with the same names, fine, let's add warnings in UI >> (probably with an option to disable it so that we don't push the >> validation into their throats). >> >> Finally, I have a concern about us changing user-visible object >> attributes like names during db migrations, as it's proposed in the >> patch discussed here. I think such behaviour can be quite unexpected >> for some users, if not breaking their workflow and/or scripts. >> >> My belief is that responsible upstream does not apply ad-hoc changes >> to API to fix a race condition that is easily solvable in other ways >> (see Assaf's proposal to introduce a new DefaultSecurityGroups table >> in patchset 12 comments). >> > > As usual you explain yourself better than I can... I think my main > original objection to the patch is that it feels like an accidental > API change to fix a bug. If you want unique naming: > 1) We need to be consistent across different resources > 2) It needs to be in a dedicated change, and perhaps a blueprint > > Since there are conceivable alternative solutions to the bug that aren't > substantially more costly or complicated, I don't see why we would pursue > the proposed approach.
+1 Regardless of the merits of security groups having unique names, I don't think it is a change that should be slipped in as part of a bugfix. If we want to see this kind of API-modifying change introduced in Neutron (or any other OpenStack project), there is a process that needs to be followed. Maru > >> As for the whole object identification scheme change, for this to >> work, it probably needs a spec and a long discussion on any possible >> complications (and benefits) when applying a change like that. >> >> For reference and convenience of readers, leaving the link to the >> patch below: https://review.openstack.org/#/c/135006/ >> >> >> >>> >>> -Sean >>> >> -----BEGIN PGP SIGNATURE----- >> Version: GnuPG/MacGPG2 v2.0.22 (Darwin) >> >> iQEcBAEBCgAGBQJUjxI1AAoJEC5aWaUY1u579M8H/RC+M7/9YYDClWRjhQLBNqEq >> 0pMxURZi8lTyDmi+cXA7wq1QzgOwUqWnJnOMYzq8nt9wLh8cgasjU5YXZokrqDLw >> zEu/a1Cd9Alp4iGYQ6upw94BptGrMvk+XwTedMX9zMLf0od1k8Gzp5xYH/GXInN3 >> E+wje40Huia0MmLu4i2GMr/13gD2aYhMeGxZtDMcxQsF0DBh0gy8OM9pfKgIiXVM >> T65nFbXUY1/PuAdzYwMto5leuWZH03YIddXlzkQbcZoH4PGgNEE3eKl1ctQSMGoo >> bz3l522VimQvVnP/XiM6xBjFqsnPM5Tc7Ylu942l+NfpfcAM5QB6Ihvw5kQI0uw= >> =gIsu >> -----END PGP SIGNATURE----- >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From maxime.leroy at 6wind.com Mon Dec 15 21:16:35 2014 From: maxime.leroy at 6wind.com (Maxime Leroy) Date: Mon, 15 Dec 2014 22:16:35 +0100 Subject: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver In-Reply-To: <20141212141601.GN32050@redhat.com> References: <20141209140411.GI29167@redhat.com> <20141210093101.GC6450@redhat.com> <20141211104137.GD23831@redhat.com>
<20141212094639.GE32050@redhat.com> <20141212141601.GN32050@redhat.com> Message-ID: On Fri, Dec 12, 2014 at 3:16 PM, Daniel P. Berrange wrote: > On Fri, Dec 12, 2014 at 03:05:28PM +0100, Maxime Leroy wrote: >> On Fri, Dec 12, 2014 at 10:46 AM, Daniel P. Berrange >> wrote: >> > On Fri, Dec 12, 2014 at 01:21:36PM +0900, Ryu Ishimoto wrote: >> >> On Thu, Dec 11, 2014 at 7:41 PM, Daniel P. Berrange >> >> wrote: >> >> >> [..] >> >> Port binding mechanisms could vary among different networking technologies, >> >> which is not nova's concern, so this proposal makes sense. Note that some >> >> vendors already provide port binding scripts that are currently executed >> >> directly from nova's vif.py ('mm-ctl' of midonet and 'ifc_ctl' for iovisor >> >> are two such examples), and this proposal makes it unnecessary to have >> >> these hard-coded in nova. The only question I have is, how would nova >> >> figure out the arguments for these scripts? Should nova dictate what they >> >> are? >> > >> > We could define some standard set of arguments & environment variables >> > to pass the information from the VIF to the script in a standard way. >> > >> >> A lot of information is used by the plug/unplug methods: vif_id, >> vif_address, ovs_interfaceid, firewall, net_id, tenant_id, vnic_type, >> instance_uuid... >> >> Not sure we can define a set of standard arguments. > > That's really not a problem. There will be some set of common info > needed for all. Then for any particular vif type we know what extra > specific fields are defined in the port binding metadata. We'll just > set env variables for each of those. > >> Maybe instead of using a script we should load some plug/unplug >> functions from a python module with importlib. So a vif_plug_module >> option instead of a vif_plug_script? > > No, we explicitly do *not* want any usage of the Nova python modules.
> That is all private internal Nova implementation detail that nothing > is permitted to rely on - this is why the VIF plugin feature was > removed in the first place. > >> There are several other problems to solve if we are going to use this >> vif_plug_script: >> >> - How to get the authorization to run this script (i.e. rootwrap)? > > Yes, rootwrap. > >> - How to test the plug/unplug functions from these scripts? >> Now, we have unit tests in nova/tests/unit/virt/libvirt/test_vif.py >> for the plug/unplug methods. > > Integration and/or functional tests run for the VIF impl would > exercise this code still. > >> - How will this script be installed? >> -> should it be included in the L2 agent package? Some L2 switches >> don't have an L2 agent. > > That's just a normal downstream packaging task which is easily > handled by people doing that work. If there's no L2 agent package > they can trivially just create a new package for the script(s) > that need installing on the compute node. They would have to be > doing exactly this anyway if you had the VIF plugin as a python > module instead. > Ok, thank you for the details. I will look at how to implement this feature. Regards, Maxime From morgan.fainberg at gmail.com Mon Dec 15 22:14:47 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Mon, 15 Dec 2014 17:14:47 -0500 Subject: [openstack-dev] [Keystone][python-keystoneclient][pycadf] Abandoning of inactive reviews In-Reply-To: References: Message-ID: The abandon sweep has been finished. Here is the list of the reviews that were abandoned.
Keystone: https://review.openstack.org/#/c/73907/ https://review.openstack.org/#/c/111312/ https://review.openstack.org/#/c/93480/ https://review.openstack.org/#/c/123862/ https://review.openstack.org/#/c/75708/ https://review.openstack.org/#/c/92727/ https://review.openstack.org/#/c/95282/ https://review.openstack.org/#/c/108592/ https://review.openstack.org/#/c/117380/ https://review.openstack.org/#/c/103368/ https://review.openstack.org/#/c/113236/ https://review.openstack.org/#/c/116464/ https://review.openstack.org/#/c/65428/ https://review.openstack.org/#/c/98836/ https://review.openstack.org/#/c/120031/ https://review.openstack.org/#/c/126217/ python-keystoneclient: https://review.openstack.org/#/c/114856/ https://review.openstack.org/#/c/107926/ https://review.openstack.org/#/c/107328/ https://review.openstack.org/#/c/122569/ https://review.openstack.org/#/c/122515/ https://review.openstack.org/#/c/95680/ https://review.openstack.org/#/c/118531/ https://review.openstack.org/#/c/120822/ https://review.openstack.org/#/c/112752/ https://review.openstack.org/#/c/111665/ https://review.openstack.org/#/c/113163/ https://review.openstack.org/#/c/112564/ https://review.openstack.org/#/c/66137/ https://review.openstack.org/#/c/92726/ https://review.openstack.org/#/c/93244/ https://review.openstack.org/#/c/91895/ Keystone middleware: https://review.openstack.org/#/c/114261/ Cheers, Morgan -- Morgan Fainberg On December 11, 2014 at 1:05:37 PM, Morgan Fainberg (morgan.fainberg at gmail.com) wrote: This is a notification that at the start of next week, all projects under the Identity Program are going to see a cleanup of old/lingering open reviews. I will be reviewing all reviews. If there is a negative score (this would be -1 or -2 from jenkins, -1 or -2 from a code reviewer, or -1 workflow) and the review has not seen an update in over 60 days (more than 'rebase'; commenting/responding to comments is an update) I will be administratively abandoning the change.
This will include reviews on: Keystone Keystone-specs python-keystoneclient keystonemiddleware pycadf python-keystoneclient-kerberos python-keystoneclient-federation Please take a look at your open reviews and get an update/response to negative scores to keep reviews active. You will always be able to un-abandon a review (as the author) or ask a Keystone-core member to unabandon a change. Cheers, Morgan Fainberg -- Morgan Fainberg -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Dec 15 22:58:49 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 15 Dec 2014 17:58:49 -0500 Subject: [openstack-dev] oslo.db 1.2.1 release coming to fix stable/juno In-Reply-To: <27A76931-16EF-400B-8EDD-6C3E18F52053@doughellmann.com> References: <27A76931-16EF-400B-8EDD-6C3E18F52053@doughellmann.com> Message-ID: <193BFB76-D89B-4B3A-94F8-DB6FEFEC5138@doughellmann.com> On Dec 15, 2014, at 3:21 PM, Doug Hellmann wrote: > The issue with stable/juno jobs failing because of the difference in the SQLAlchemy requirements between the older applications and the newer oslo.db is being addressed with a new release of the 1.2.x series. We will then cap the requirements for stable/juno to 1.2.1. We decided we did not need to raise the minimum version of oslo.db allowed in kilo, because the old versions of the library do work, if they are installed from packages and not through setuptools. > > Jeremy created a feature/1.2 branch for us, and I have 2 patches up [1][2] to apply the requirements fix. The change to the oslo.db version in stable/juno is [3]. > > After the changes in oslo.db merge, I will tag 1.2.1. After spending several hours exploring a bunch of options to make this actually work, some of which require making changes to test job definitions, grenade, or other long-term changes, I'm proposing a new approach: 1.
Undo the change in master that broke the compatibility with versions of SQLAlchemy by making master match juno: https://review.openstack.org/141927 2. Update oslo.db after ^^ lands. 3. Tag oslo.db 1.4.0 with a set of requirements compatible with Juno. 4. Change the requirements in stable/juno to skip oslo.db 1.1, 1.2, and 1.3. I'll proceed with that plan tomorrow morning (~15 hours from now) unless someone points out why that won't work in the meantime. Doug > > Doug > > [1] https://review.openstack.org/#/c/141893/ > [2] https://review.openstack.org/#/c/141894/ > [3] https://review.openstack.org/#/c/141896/ > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From vishvananda at gmail.com Mon Dec 15 22:59:03 2014 From: vishvananda at gmail.com (Vishvananda Ishaya) Date: Mon, 15 Dec 2014 14:59:03 -0800 Subject: [openstack-dev] [qa] Question about "nova boot --min-count " In-Reply-To: References: Message-ID: I suspect you are actually failing due to not having enough room in your cloud instead of not having enough quota. You will need to make instance sizes with less cpus/ram/disk or change your allocation ratios in the scheduler. Vish On Dec 13, 2014, at 8:43 AM, Danny Choi (dannchoi) wrote: > Hi, > > According to the help text, '--min-count <number>' boot at least <number> servers (limited by quota): > > --min-count <number> Boot at least <number> servers (limited by > quota). > > I used devstack to deploy OpenStack (version Kilo) in a multi-node setup: > 1 Controller/Network + 2 Compute nodes > > I updated the tenant demo default quota 'instances' and 'cores' from '10' and '20' to '100'
and '200': > localadmin at qa4:~/devstack$ nova quota-show --tenant 62fe9a8a2d58407d8aee860095f11550 --user eacb7822ccf545eab9398b332829b476 > +-----------------------------+-------+ > | Quota | Limit | > +-----------------------------+-------+ > | instances | 100 | <<<<< > | cores | 200 | <<<<< > | ram | 51200 | > | floating_ips | 10 | > | fixed_ips | -1 | > | metadata_items | 128 | > | injected_files | 5 | > | injected_file_content_bytes | 10240 | > | injected_file_path_bytes | 255 | > | key_pairs | 100 | > | security_groups | 10 | > | security_group_rules | 20 | > | server_groups | 10 | > | server_group_members | 10 | > +-----------------------------+-------+ > > When I boot 50 VMs using '--min-count 50', only 48 VMs come up. > > localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5b464333-bad0-4fc1-a2f0-310c47b77a17 --min-count 50 vm- > > There is no error in the logs, and it happens consistently. > > I also tried '--min-count 60' and only 48 VMs come up. > > In Horizon, left pane 'Admin' -> 'System' -> 'Hypervisors', it shows both Compute hosts, each with 32 total VCPUs for a grand total of 64, but only 48 used. > > Is this normal behavior or is there any other setting to change in order to use all 64 VCPUs? > > Thanks, > Danny > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From vishvananda at gmail.com Mon Dec 15 23:01:58 2014 From: vishvananda at gmail.com (Vishvananda Ishaya) Date: Mon, 15 Dec 2014 15:01:58 -0800 Subject: Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state? In-Reply-To: <548B6844.4020804@gmail.com> References: <548B6844.4020804@gmail.com> Message-ID: <2075A451-DE64-4AEE-9B86-6201A8612C5B@gmail.com> I have seen deadlocks in libvirt that could cause this.
When you are in this state, check to see if you can do a virsh list on the node. If not, libvirt is deadlocked, and ubuntu may need to pull in a fix/newer version. Vish On Dec 12, 2014, at 2:12 PM, pcrews wrote: > On 12/09/2014 03:54 PM, Ken'ichi Ohmichi wrote: >> Hi, >> >> This case is always tested by Tempest on the gate. >> >> https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_delete_server.py#L152 >> >> So I guess this problem wouldn't happen on the latest version at least. >> >> Thanks >> Ken'ichi Ohmichi >> >> --- >> >> 2014-12-10 6:32 GMT+09:00 Joe Gordon : >>> >>> >>> On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) >>> wrote: >>>> >>>> Hi, >>>> >>>> I have a VM which is in ERROR state. >>>> >>>> >>>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ >>>> >>>> | ID | Name >>>> | Status | Task State | Power State | Networks | >>>> >>>> >>>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ >>>> >>>> | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | >>>> cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR | - | NOSTATE >>>> | | >>>> >>>> >>>> I tried in both CLI 'nova delete' and Horizon 'terminate instance'. >>>> Both accepted the delete command without any error. >>>> However, the VM never got deleted. >>>> >>>> Is there a way to remove the VM? >>> >>> >>> What version of nova are you using? This is definitely a serious bug, you >>> should be able to delete an instance in error state. Can you file a bug that >>> includes steps on how to reproduce the bug along with all relevant logs.
>>> >>> bugs.launchpad.net/nova >>> >>>> >>>> >>>> Thanks, >>>> Danny >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > Hi, > > I've encountered this in my own testing and have found that it appears to be tied to libvirt. > > When I hit this, reset-state as the admin user reports success (and state is set), *but* things aren't really working as advertised and subsequent attempts to do anything with the errant vm's will send them right back into 'FLAIL' / can't delete / endless DELETING mode. > > restarting libvirt-bin on my machine fixes this - after restart, the deleting vm's are properly wiped without any further user input to nova/horizon and all seems right in the world. > > using: > devstack > ubuntu 14.04 > libvirtd (libvirt) 1.2.2 > > triggered via: > lots of random create/reboot/resize/delete requests of varying validity and sanity. > > Am in the process of cleaning up my test code so as not to hurt anyone's brain with the ugly and will file a bug once done, but thought this worth sharing. 
> > Thanks, > Patrick > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From douglas.mendizabal at RACKSPACE.COM Mon Dec 15 23:43:35 2014 From: douglas.mendizabal at RACKSPACE.COM (Douglas Mendizabal) Date: Mon, 15 Dec 2014 23:43:35 +0000 Subject: [openstack-dev] [barbican] Mid-Cycle Sprint Message-ID: Hi openstack-dev, The Barbican team is planning to have a mid-cycle sprint in Austin, TX on February 16-18, 2015. We'll be meeting at Capital Factory, a co-working space in downtown Austin. For more details and RSVP, please see: https://wiki.openstack.org/wiki/Sprints/BarbicanKiloSprint Thanks, -Doug Mendizábal -------------------- Douglas Mendizábal IRC: redrobot PGP Key: 245C 7B6F 70E9 D8F3 F5D5 0CC9 AD14 1F30 2D58 923C -------------- next part -------------- An HTML attachment was scrubbed... URL: From clint at fewbar.com Tue Dec 16 00:06:34 2014 From: clint at fewbar.com (Clint Byrum) Date: Mon, 15 Dec 2014 16:06:34 -0800 Subject: Re: [openstack-dev] Unsafe Abandon In-Reply-To: <857945511.287944.1418675528377.JavaMail.yahoo@jws100112.mail.ne1.yahoo.com> References: <1418660028-sup-6503@fewbar.com> <857945511.287944.1418675528377.JavaMail.yahoo@jws100112.mail.ne1.yahoo.com> Message-ID: <1418688234-sup-7495@fewbar.com> Excerpts from Ari Rubenstein's message of 2014-12-15 12:32:08 -0800: > Hi there, > I'm new to the list, and trying to get more information about the following issue: > > https://bugs.launchpad.net/heat/+bug/1353670 > Is there anyone on the list who can explain under what conditions a user might hit this? Workarounds? ETA for a fix? Hi Ari. Welcome, and thanks for your interest in OpenStack and Heat! A bit of etiquette first: Please do not reply to existing threads to start a new one.
That is known as a "hijack": https://wiki.openstack.org/wiki/MailingListEtiquette#Changing_Subject Also, for bugs, you'll find it's best to ask in the comments of the bug, as those who are most interested and able to answer should already be subscribed and can respond there. From slagun at mirantis.com Tue Dec 16 00:52:25 2014 From: slagun at mirantis.com (Stan Lagun) Date: Tue, 16 Dec 2014 03:52:25 +0300 Subject: Re: [openstack-dev] [Murano] Murano Agent In-Reply-To: References: <6e0a979ee08e436f8b2b374855c4b6c3@BY2PR42MB101.048d.mgd.msft.net> Message-ID: The Murano agent is required as long as you deploy applications that use it. You can take (or write) an application that uses Heat Software Configuration instead of the Murano agent and use an image without the agent. Sincerely yours, Stan Lagun Principal Software Engineer @ Mirantis On Mon, Dec 15, 2014 at 7:54 AM, Ruslan Kamaldinov wrote: > > On Mon, Dec 15, 2014 at 7:10 AM, wrote: > > Please let me know why Murano-agent is required and the components that > needs > > to be installed in it. > > You can find more details about murano agent at: > https://wiki.openstack.org/wiki/Murano/UnifiedAgent > > It can be installed with diskimage-builder: > http://git.openstack.org/cgit/stackforge/murano-agent/tree/README.rst#n34 > > - Ruslan > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stefano at openstack.org Tue Dec 16 01:07:20 2014 From: stefano at openstack.org (Stefano Maffulli) Date: Mon, 15 Dec 2014 17:07:20 -0800 Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: References: <547CCA69.4000909@anteaya.info> <837B116B6E5B934DA06D9AD0FD79C6A3018E9C60@SHSMSX104.ccr.corp.intel.com> <547EB5C9.8020008@rackspace.com> <547F8DC1.5000202@anteaya.info> Message-ID: <548F85C8.6020400@openstack.org> On 12/05/2014 07:08 AM, Kurt Taylor wrote: > 1. Meeting content: Having 2 meetings per week is more than is needed at > this stage of the working group. There just isn't enough meeting content > to justify having two meetings every week. I'd like to discuss this further: the stated objectives of the meetings are very wide and may allow for more than one slot per week. In particular I'm seeing the two below as good candidates for 'meet as many times as possible': * to provide a forum for the curious and for OpenStack programs who are not yet in this space but may be in the future * to encourage questions from third party folks and support the sourcing of answers https://wiki.openstack.org/wiki/Meetings/ThirdParty#Goals_for_Third_Party_meetings > 2. Decisions: Any decision made at one meeting will potentially be > undone at the next, or at least not fully explained. It will be > difficult to keep consistent direction with the overall work group. I think this needs to be clarified for all teams, not just third-party: I disagree that important decisions should be taken on IRC. IRC is the place where discussions happen and agreements may form among a group of people, but ultimately the communication and the actual *decision* needs to happen on the wider email list channel. > My proposal was to have only 1 meeting per week at alternating times, [...] 
I'm not going to go into the specifics, as I'm sure you're already aware of and have weighed the disadvantages of alternating times and multiple dates. I'd only ask you to clearly communicate the times and objectives on the team's and meetings wiki pages. There are now three slots listed: Mondays at 1500 UTC, chair Anita Mondays at 1800 UTC, chair (I'm assuming it's Kurt) Tuesdays at 0800 UTC, chair Anita I would suggest you make an effort to split the agenda across the three slots, if you think it's possible, or assign priority topics from the stated goals of the meetings so that people will know what to expect each time. The objective of the meetings should be to engage more people in different timezones, while avoiding confusion. Let me know if you need help editing the pages. As I mentioned above, probably one way to do this is to make some slots more focused on engaging newcomers and answering questions, more like serendipitous mentoring sessions with the less involved, while another slot could be dedicated to more focused and long term efforts, with more committed people? Cheers, stef PS let's continue the conversation only on -infra. I'm replying including -dev only because I think the point "2. Decisions" needed to be clarified for all. From dzimine at stackstorm.com Tue Dec 16 01:11:45 2014 From: dzimine at stackstorm.com (Dmitri Zimine) Date: Mon, 15 Dec 2014 17:11:45 -0800 Subject: Re: [openstack-dev] [Mistral] For-each In-Reply-To: References: Message-ID: <58CB487D-D8AB-401C-8ABC-CF3F088552DC@stackstorm.com> I had short user feedback sessions with Patrick and James; the short summary is: 1) simplify the syntax to optimize for the most common use case 2) 'concurrency' is the best word - but bring it out of for-each to task level, or task/policies 3) all-permutation - relatively rare case, either implement as a different construct - like 'for-every', or use a workaround.
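[Editorial aside] To make the 'concurrency' point above concrete: what for-each really provides is a concurrency-limited map over the input list — each item gets the same action, at most N run at once, and results come back in input order. A plain-Python sketch of those semantics (illustrative only; Mistral schedules task executions rather than Python callables):

```python
from concurrent.futures import ThreadPoolExecutor

def for_each(action, items, concurrency=2):
    # Apply `action` to every item with at most `concurrency` of them
    # in flight at once; results preserve input order, like map().
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(action, items))

print(for_each(lambda x: x * x, [1, 2, 3, 4], concurrency=2))  # [1, 4, 9, 16]
```

Raising `concurrency` to len(items) gives all-at-once behaviour, while `concurrency=1` degenerates to a sequential loop — one argument for exposing it as a task-level policy rather than baking it into for-each itself.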
Another piece of feedback is that 'for-each' as a term is confusing: people expect a different behavior (e.g., run sequentially, modify individual elements) while it is effectively a 'map' function. No good suggestion for a better name yet. Keep on looking. The details have been added to the document. DZ. On Dec 15, 2014, at 2:00 AM, Nikolay Makhotkin wrote: > Hi, > > Here is the doc with suggestions on specification for for-each feature. > > You are free to comment and ask questions. > > https://docs.google.com/document/d/1iw0OgQcU0LV_i3Lnbax9NqAJ397zSYA3PMvl6F_uqm0/edit?usp=sharing > > > > -- > Best Regards, > Nikolay -------------- next part -------------- An HTML attachment was scrubbed... URL: From henry4hly at gmail.com Tue Dec 16 01:11:52 2014 From: henry4hly at gmail.com (henry hly) Date: Tue, 16 Dec 2014 09:11:52 +0800 Subject: Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change In-Reply-To: <87egs0re9m.fsf@metaswitch.com> References: <87egs0re9m.fsf@metaswitch.com> Message-ID: On Tue, Dec 16, 2014 at 1:53 AM, Neil Jerram wrote: > Hi all, > > Following the approval for Neutron vendor code decomposition > (https://review.openstack.org/#/c/134680/), I just wanted to comment > that it appears to work fine to have an ML2 mechanism driver _entirely_ > out of tree, so long as the vendor repository that provides the ML2 > mechanism driver does something like this to register their driver as a > neutron.ml2.mechanism_drivers entry point: > > setuptools.setup( > ..., > entry_points = { > ..., > 'neutron.ml2.mechanism_drivers': [ > 'calico = xyz.openstack.mech_xyz:XyzMechanismDriver', > ], > }, > ) > > (Please see > https://github.com/Metaswitch/calico/commit/488dcd8a51d7c6a1a2f03789001c2139b16de85c > for the complete change and detail, for the example that works for me.) > > Then Neutron and the vendor package can be separately installed, and the > vendor's driver name configured in ml2_conf.ini, and everything works.
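[Editorial aside] For readers curious what that registration actually does: when ML2 looks up the configured driver name from ml2_conf.ini, the entry-point machinery (stevedore/pkg_resources in Neutron's case) resolves the 'module:attr' target on the right of the '=' to the driver class. The resolution step can be sketched with a hypothetical helper — this is not Neutron's loader, and a stdlib class stands in for the driver so the example runs anywhere:

```python
import importlib

def load_entry_point_target(spec):
    # Resolve a setuptools-style "module:attr" string, e.g.
    # "xyz.openstack.mech_xyz:XyzMechanismDriver", to the object it names.
    module_name, _, attr = spec.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

cls = load_entry_point_target("collections:OrderedDict")
print(cls.__name__)  # OrderedDict
```

The short name on the left of the '=' ('calico' above) is all the operator ever types into ml2_conf.ini; the target can live in any installed package, which is what makes a fully out-of-tree driver possible.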
> > Given that, I wonder: > > - is that what the architects of the decomposition are expecting? > > - other than for the reference OVS driver, are there any reasons in > principle for keeping _any_ ML2 mechanism driver code in tree? > Good questions. I'm also looking for the linux bridge MD, SRIOV MD... Who will be responsible for these drivers? The OVS driver is maintained by Neutron community, vendor specific hardware driver by vendor, SDN controllers driver by their own community or vendor. But there are also other drivers like SRIOV, which are general for a lot of vendor agonitsc backends, and can't be maintained by a certain vendor/community. So, it would be better to keep some "general backend" MD in tree besides SRIOV. There are also vif-type-tap, vif-type-vhostuser, hierarchy-binding-external-VTEP ... We can implement a very thin in-tree base MD that only handle "vif bind" which is backend agonitsc, then backend provider is free to implement their own service logic, either by an backend agent, or by a driver derived from the base MD for agentless scenery. Regards > Many thanks, > Neil > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From raodingyuan at chinacloud.com.cn Tue Dec 16 01:30:39 2014 From: raodingyuan at chinacloud.com.cn (Rao Dingyuan) Date: Tue, 16 Dec 2014 09:30:39 +0800 Subject: [openstack-dev] [Openstack] [Ceilometer] [API] Batch alarm creation Message-ID: <0e8901d018cf$e6eddeb0$b4c99c10$@chinacloud.com.cn> Yes Ryan, that's exactly what I'm thinking. Glad to know that we have the same opinion :) BR Kurt -----????----- ???: Ryan Brown [mailto:rybrown at redhat.com] ????: 2014?12?12? 
23:30 To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [Openstack] [Ceilometer] [API] Batch alarm creation On 12/12/2014 03:37 AM, Rao Dingyuan wrote: > Hi Eoghan and folks, > > I'm thinking of adding an API to create multiple alarms in a batch. > > I think adding an API to create multiple alarms is a good option to solve the problem that once an *alarm target* (a vm or a new group of vms) is created, multiple requests will be fired because multiple alarms are to be created. > > In our current project, this requirement is especially urgent since our alarm target is one VM, and 6 alarms are to be created when one VM is created. > > What do you guys think? > > > Best Regards, > Kurt Rao Allowing batch operations is definitely a good idea, though it may not be a solution to all of the problems you outlined. One way to batch object creations would be to give clients the option to POST a collection of alarms instead of a single alarm. Currently your API looks like[1]: POST /v2/alarms BODY: { "alarm_actions": ... ... } For batches you could modify your API to accept a body like: { "alarms": [ {"alarm_actions": ...}, {"alarm_actions": ...}, {"alarm_actions": ...}, {"alarm_actions": ...} ] } It could (pretty easily) be a backwards-compatible change since the schemata don't conflict, and you can even add some kind of a "batch":true flag to make it explicit that the user wants to create a collection. The API-WG has a spec[2] out right now explaining the rationale behind collection representations. [1]: http://docs.openstack.org/developer/ceilometer/webapi/v2.html#post--v2-alarms [2]: https://review.openstack.org/#/c/133660/11/guidelines/representation_structure.rst,unified > > > > ----- Original ----- > From: Eoghan Glynn [mailto:eglynn at redhat.com] > Sent: December 3, 2014
20:34 > To: Rao Dingyuan > Cc: openstack at lists.openstack.org > Subject: Re: [Openstack] [Ceilometer] looking for alarm best practice - > please help > > > >> Hi folks, >> >> >> >> I wonder if anyone could share some best practice regarding to the >> usage of ceilometer alarm. We are using the alarm >> evaluation/notification of ceilometer and we don't feel very good about >> the way we use it. Below is our >> problem: >> >> >> >> ============================ >> >> Scenario: >> >> When cpu usage or memory usage is above a certain threshold, alerts >> should be displayed on the admin's web page. There should be 3 levels of >> alerts according to meter value, namely notice, warning, fatal. >> Notice means the meter value is between 50% ~ 70%, warning means >> between 70% ~ 85% and fatal means above 85% >> >> For example: >> >> * when one vm's cpu usage is 72%, an alert message should be >> displayed saying >> "Warning: vm[d9b7018b-06c4-4fba-8221-37f67f6c6b8c] cpu usage is above 70%". >> >> * when one vm's memory usage is 90%, another alert message should be >> created saying "Fatal: vm[d9b7018b-06c4-4fba-8221-37f67f6c6b8c] >> memory usage is above 85%" >> >> >> >> Our current Solution: >> >> We used ceilometer alarm evaluation/notification to implement this. >> To distinguish which VM and which meter is above what value, we've >> created one alarm for each VM by each condition. So, to monitor 1 VM, >> 6 alarms will be created because there are 2 meters and for each meter there are 3 levels. >> That means, if there are 100 VMs to be monitored, 600 alarms will be >> created. >> >> >> >> Problems: >> >> * The first problem is, when the number of meters increases, the >> number of alarms will be multiplied. For example, customers may want >> alerts on disk and network IO rates, and if we do that, there will be >> 4*3=12 alarms for each VM. >> >> * The second problem is, when one VM is created, multiple alarms will >> be created, meaning multiple http requests will be fired.
In the case >> above, 6 HTTP requests will be needed once a VM is created. And this >> number also increases as the number of meters goes up. > One way of reducing both the number of alarms and the volume of notifications would be to group related VMs, if such a concept exists in your use-case. > > This is effectively how Heat autoscaling uses ceilometer, alarming on the average of some statistic over a set of instances (as opposed to triggering on individual instances). > > The VMs could be grouped by setting user-metadata of the form: > > nova boot ... --meta metering.my_server_group=foobar > > Any user-metadata prefixed with 'metering.' will be preserved by ceilometer in the resource_metadata.user_metadata stored for each sample, so that it can be used to select the statistics on which the alarm is based, e.g. > > ceilometer alarm-threshold-create --name cpu_high_foobar \ > --description 'warning: foobar instance group running hot' \ > --meter-name cpu_util --threshold 70.0 \ > --comparison-operator gt --statistic avg \ > ... > --query metadata.user_metadata.my_server_group=foobar > > This approach is of course predicated on there being some natural grouping relation between instances in your environment. > > Cheers, > Eoghan > > >> ============================= >> >> >> >> Does anyone have any suggestions? >> >> >> >> >> >> >> >> Best Regards! >> >> Kurt Rao >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dtroyer at gmail.com Tue Dec 16 01:32:24 2014 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 15 Dec 2014 19:32:24 -0600 Subject: [openstack-dev] [DevStack] A grenade for DevStack? In-Reply-To: <548F3E9D.20605@dague.net> References: <20141215193349.GD35938@HQSML-1081034> <548F3E9D.20605@dague.net> Message-ID: On Mon, Dec 15, 2014 at 2:03 PM, Sean Dague wrote: > > On 12/15/2014 02:33 PM, Collins, Sean wrote: > > It'd be great to somehow make a long lived dsvm node and job where > > DevStack is continually deployed to it and restacked, to check for these > > kinds of errors? > I want to be careful here to not let an expectation develop that DevStack should be used in any long-running deployment. Refreshing a VM or a new OS install on bare metal should be expected, often. IMO the only bits that you should expect to refresh quickly are the git repos. > One of the things we don't test on the devstack side at all is that > clean.sh takes us back down to baseline, which I think was the real > issue here - https://review.openstack.org/#/c/141891/ > > I would not be opposed to adding cleanup testing at the end of any > devstack run that ensures everything is shut down correctly and cleaned > up to a base level. > clean.sh makes an attempt to remove enough of OpenStack and its dependencies to be able to change the configuration, such as switching databases or adding/removing services, and re-run stack.sh. I'm not a fan of tracking installed OS package dependencies and Python packages so it can all be undone. But the above is a bug... dt -- Dean Troyer dtroyer at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wei.d.chen at intel.com Tue Dec 16 03:16:59 2014 From: wei.d.chen at intel.com (Chen, Wei D) Date: Tue, 16 Dec 2014 03:16:59 +0000 Subject: [openstack-dev] [cinder] Anyone knows whether there is freezing day of spec approval? Message-ID: Hi, I know nova has such a day around Dec. 18; is there a similar day in the Cinder project? Thanks! Best Regards, Dave Chen -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6648 bytes Desc: not available URL: From jsbryant at electronicjungle.net Tue Dec 16 03:23:08 2014 From: jsbryant at electronicjungle.net (Jay Bryant) Date: Mon, 15 Dec 2014 21:23:08 -0600 Subject: [openstack-dev] [cinder] Anyone knows whether there is freezing day of spec approval? In-Reply-To: References: Message-ID: Dave, Yes, we are not taking new specs/blueprints after 12/18. Jay On Dec 15, 2014 9:18 PM, "Chen, Wei D" wrote: > Hi, > > I know nova has such a day around Dec. 18; is there a similar day in the Cinder > project? Thanks! > > Best Regards, > Dave Chen > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From djhon9813 at gmail.com Tue Dec 16 03:36:00 2014 From: djhon9813 at gmail.com (david jhon) Date: Tue, 16 Dec 2014 08:36:00 +0500 Subject: [openstack-dev] SRIOV-error In-Reply-To: References: Message-ID: Hi Murali, Thanks for your response, I did the same, it has resolved errors apparently but 1) neutron agent-list shows no agent for sriov, 2) neutron port is created successfully but creating vm is erred in scheduling as follows: result from neutron agent-list: +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ | id | agent_type | host | alive | admin_state_up | binary | +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ | 2acc7044-e552-4601-b00b-00ba591b453f | Open vSwitch agent | blade08 | xxx | True | neutron-openvswitch-agent | | 595d07c6-120e-42ea-a950-6c77a6455f10 | Metadata agent | blade08 | :-) | True | neutron-metadata-agent | | a1f253a8-e02e-4498-8609-4e265285534b | DHCP agent | blade08 | :-) | True | neutron-dhcp-agent | | d46b29d8-4b5f-4838-bf25-b7925cb3e3a7 | L3 agent | blade08 | :-) | True | neutron-l3-agent | +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ 2014-12-15 19:30:44.546 40249 ERROR oslo.messaging.rpc.dispatcher [req-c7741cff-a7d8-422f-b605-6a1d976aeb09 ] Exception during message handling: PCI $ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last): 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 13$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher incoming.message)) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 17$ 2014-12-15 19:30:44.546 40249 TRACE 
oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 12$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139, i$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher return func(*args, **kwargs) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 175, in s$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher filter_properties) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line $ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher filter_properties) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line $ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher chosen_host.obj.consume_from_instance(instance_properties) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 246,$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher self.pci_stats.apply_requests(pci_requests.requests) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/pci/pci_stats.py", line 209, in apply$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher raise exception.PciDeviceRequestFailed(requests=requests) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher PciDeviceRequestFailed: PCI device request 
({'requests': [InstancePCIRequest(alias_$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher. Moreover, no /var/log/sriov-agent.log file exists. Please help me to fix this issue. Thanks everyone! On Mon, Dec 15, 2014 at 5:18 PM, Murali B wrote: > > Hi David, > > Please add as per the Irena suggestion > > FYI: refer the below configuration > > http://pastebin.com/DGmW7ZEg > > > Thanks > -Murali > -------------- next part -------------- An HTML attachment was scrubbed... URL: From djhon9813 at gmail.com Tue Dec 16 03:54:23 2014 From: djhon9813 at gmail.com (david jhon) Date: Tue, 16 Dec 2014 08:54:23 +0500 Subject: [openstack-dev] SRIOV-error In-Reply-To: References: Message-ID: Just to be more clear, command $lspci | grep -i Ethernet gives following output: 01:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 01:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 03:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 03:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 03:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:00.0 
Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 04:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 04:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) How can I make SR-IOV agent run and fix this bug? 
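[Editor's note: a hedged Juno-era configuration sketch of the pieces that commonly produce exactly this pair of symptoms (no SR-IOV agent in `neutron agent-list`, plus PciDeviceRequestFailed at scheduling). All interface names, physnet names, and the filter list are assumptions to adapt to the environment; 8086:10ed is the usual PCI vendor:product ID for the 82599 Virtual Function shown in the lspci output above.]

```ini
# nova.conf on the compute node -- whitelist the 82599 VFs and map them
# to a physical network so nova can hand them out:
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ed", "physical_network": "physnet1"}

# nova.conf on the controller -- the scheduler must include the PCI
# filter; without it (or with no matching whitelisted devices) boots
# fail with PciDeviceRequestFailed as in the trace above:
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter

# /etc/neutron/plugins/ml2/ml2_conf_sriov.ini -- the SR-IOV agent only
# appears in "neutron agent-list" (and writes its log) once the
# neutron-sriov-nic-agent service is actually started with this file:
[sriov_nic]
physical_device_mappings = physnet1:eth3
```

After changing nova.conf, both nova-compute and nova-scheduler need a restart before the whitelisted VFs are reported and schedulable.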
On Tue, Dec 16, 2014 at 8:36 AM, david jhon wrote: > > Hi Murali, > > Thanks for your response, I did the same, it has resolved errors > apparently but 1) neutron agent-list shows no agent for sriov, 2) neutron > port is created successfully but creating vm is erred in scheduling as > follows: > > result from neutron agent-list: > > +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ > | id | agent_type | host | > alive | admin_state_up | binary | > > +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ > | 2acc7044-e552-4601-b00b-00ba591b453f | Open vSwitch agent | blade08 | > xxx | True | neutron-openvswitch-agent | > | 595d07c6-120e-42ea-a950-6c77a6455f10 | Metadata agent | blade08 | > :-) | True | neutron-metadata-agent | > | a1f253a8-e02e-4498-8609-4e265285534b | DHCP agent | blade08 | > :-) | True | neutron-dhcp-agent | > | d46b29d8-4b5f-4838-bf25-b7925cb3e3a7 | L3 agent | blade08 | > :-) | True | neutron-l3-agent | > > +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ > > 2014-12-15 19:30:44.546 40249 ERROR oslo.messaging.rpc.dispatcher > [req-c7741cff-a7d8-422f-b605-6a1d976aeb09 ] Exception during message > handling: PCI $ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > Traceback (most recent call last): > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line > 13$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > incoming.message)) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line > 17$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > return self._do_dispatch(endpoint, method, ctxt, args) > 
2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line > 12$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > result = getattr(endpoint, method)(ctxt, **new_args) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139, > i$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > return func(*args, **kwargs) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 175, in > s$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > filter_properties) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line > $ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > filter_properties) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line > $ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > chosen_host.obj.consume_from_instance(instance_properties) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line > 246,$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > self.pci_stats.apply_requests(pci_requests.requests) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/pci/pci_stats.py", line 209, in > apply$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > raise exception.PciDeviceRequestFailed(requests=requests) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > PciDeviceRequestFailed: PCI device request ({'requests': > 
[InstancePCIRequest(alias_$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher. > > Moreover, no /var/log/sriov-agent.log file exists. Please help me to fix > this issue. Thanks everyone! > > On Mon, Dec 15, 2014 at 5:18 PM, Murali B wrote: >> >> Hi David, >> >> Please add as per the Irena suggestion >> >> FYI: refer the below configuration >> >> http://pastebin.com/DGmW7ZEg >> >> >> Thanks >> -Murali >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue Dec 16 04:11:41 2014 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 15 Dec 2014 23:11:41 -0500 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <548EF111.6000801@hp.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <5489363B.2060008@hp.com> <548A3E09.2040408@redhat.com> <548EF111.6000801@hp.com> Message-ID: <548FB0FD.20702@redhat.com> On 15/12/14 09:32, Anant Patil wrote: > On 12-Dec-14 06:29, Zane Bitter wrote: >> On 11/12/14 01:14, Anant Patil wrote: >>> On 04-Dec-14 10:49, Zane Bitter wrote: >>>> On 01/12/14 02:02, Anant Patil wrote: >>>>> On GitHub:https://github.com/anantpatil/heat-convergence-poc >>>> >>>> I'm trying to review this code at the moment, and finding some stuff I >>>> don't understand: >>>> >>>> https://github.com/anantpatil/heat-convergence-poc/blob/master/heat/engine/stack.py#L911-L916 >>>> >>>> This appears to loop through all of the resources *prior* to kicking off >>>> any actual updates to check if the resource will change. This is >>>> impossible to do in general, since a resource may obtain a property >>>> value from an attribute of another resource and there is no way to know >>>> whether an update to said other resource would cause a change in the >>>> attribute value. >>>> >>>> In addition, no attempt to catch UpdateReplace is made. 
Although that >>>> looks like a simple fix, I'm now worried about the level to which this >>>> code has been tested. >>>> >>> We were working on new branch and as we discussed on Skype, we have >>> handled all these cases. Please have a look at our current branch: >>> https://github.com/anantpatil/heat-convergence-poc/tree/graph-version >>> >>> When a new resource is taken for convergence, its children are loaded >>> and the resource definition is re-parsed. The frozen resource definition >>> will have all the "get_attr" resolved. >>> >>>> >>>> I'm also trying to wrap my head around how resources are cleaned up in >>>> dependency order. If I understand correctly, you store in the >>>> ResourceGraph table the dependencies between various resource names in >>>> the current template (presumably there could also be some left around >>>> from previous templates too?). For each resource name there may be a >>>> number of rows in the Resource table, each with an incrementing version. >>>> As far as I can tell though, there's nowhere that the dependency graph >>>> for _previous_ templates is persisted? So if the dependency order >>>> changes in the template we have no way of knowing the correct order to >>>> clean up in any more? (There's not even a mechanism to associate a >>>> resource version with a particular template, which might be one avenue >>>> by which to recover the dependencies.) >>>> >>>> I think this is an important case we need to be able to handle, so I >>>> added a scenario to my test framework to exercise it and discovered that >>>> my implementation was also buggy. Here's the fix: >>>> https://github.com/zaneb/heat-convergence-prototype/commit/786f367210ca0acf9eb22bea78fd9d51941b0e40 >>>> >>> >>> Thanks for pointing this out Zane. We too had a buggy implementation for >>> handling inverted dependency. I had a hard look at our algorithm where >>> we were continuously merging the edges from new template into the edges >>> from previous updates. 
It was an optimized way of traversing the graph >>> in both forward and reverse order with out missing any resources. But, >>> when the dependencies are inverted, this wouldn't work. >>> >>> We have changed our algorithm. The changes in edges are noted down in >>> DB, only the delta of edges from previous template is calculated and >>> kept. At any given point of time, the graph table has all the edges from >>> current template and delta from previous templates. Each edge has >>> template ID associated with it. >> >> The thing is, the cleanup dependencies aren't really about the template. >> The real resources really depend on other real resources. You can't >> delete a Volume before its VolumeAttachment, not because it says so in >> the template but because it will fail if you try. The template can give >> us a rough guide in advance to what those dependencies will be, but if >> that's all we keep then we are discarding information. >> >> There may be multiple versions of a resource corresponding to one >> template version. Even worse, the actual dependencies of a resource >> change on a smaller time scale than an entire stack update (this is the >> reason the current implementation updates the template one resource at a >> time as we go). >> > > Absolutely! The edges from the template are kept only for the reference > purposes. When we have a resource in new template, its template ID will > also be marked to current template. At any point of time, realized > resource will from current template, even if it were found in previous > templates. The template ID "moves" for a resource if it is found. In theory (disclaimer: I didn't implement this yet) it can change on an even smaller timescale than that. The existing plugins are something of a black box to us: if a failure occurs we don't necessarily know whether the real-world dependency is on the old or new version of another resource. 
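[Editor's sketch of the "real resources depend on real resources" point: each resource row records what it currently requires, independent of any template, and cleanup is refused while something live still requires it. The dict layout and function names are invented for illustration, not Heat's code.]

```python
# Hypothetical resource rows: each records the DB ids it currently
# requires in the real world, regardless of which template created it.
resources = {
    1: {"name": "volume",     "requires": set(), "alive": True},
    2: {"name": "attachment", "requires": {1},   "alive": True},
}

def can_delete(rid):
    """A resource may be cleaned up only when no live resource requires it."""
    return not any(r["alive"] and rid in r["requires"]
                   for r in resources.values())

def cleanup(rid):
    if can_delete(rid):
        resources[rid]["alive"] = False
        return True
    return False

# Deleting the Volume before its VolumeAttachment fails, just as it
# would against the real cloud; dependency order succeeds.
first_try = cleanup(1)    # False: attachment still requires the volume
second = cleanup(2)       # True: nothing requires the attachment
third = cleanup(1)        # True: now the volume is free to go
```

The template only predicted these edges in advance; the rows are what the real world actually enforces, which is why discarding them loses information.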
>> Given that our Resource entries in the DB are in 1:1 correspondence with >> actual resources (we create a new one whenever we need to replace the >> underlying resource), I found it makes the most conceptual and practical >> sense to store the requirements in the resource itself, and update them >> at the time they actually change in the real world (bonus: introduces no >> new locking issues and no extra DB writes). I settled on this after a >> legitimate attempt at trying other options, but they didn't work out: >> https://github.com/zaneb/heat-convergence-prototype/commit/a62958342e8583f74e2aca90f6239ad457ba984d >> > > I am okay with the notion of graph stored in resource table. > >>> For resource clean up, we start from the >>> first template (template which was completed and updates were made on >>> top of it, empty template otherwise), and move towards the current >>> template in the order in which the updates were issued, and for each >>> template the graph (edges if found for the template) is traversed in >>> reverse order and resources are cleaned-up. >> >> I'm pretty sure this is backwards - you'll need to clean up newer >> resources first because they may reference resources from older >> templates. Also if you have a stubborn old resource that won't delete >> you don't want that to block cleanups of anything newer. >> >> You're also serialising some of the work unnecessarily because you've >> discarded the information about dependencies that cross template >> versions, forcing you to clean up only one template version at a time. >> > > Since only the changed set is stored, it is going to be smaller in most > of the cases. Within each previous template, if there are any edges, > they are tried for clean-up in reverse order. If a resource is not > updated, but available in new template, the template ID will be current > template ID as I mentioned above. With the template, the process is > concurrent, but across the templates, it will be serial. 
I am okay with > this since in real world, the change in dependencies may not be as much > as we think. I agree the serialisation probably won't make a big difference in most cases, although autoscaling is an obvious and important one where serialising a lot of things unnecessarily could have a big impact on the user. >>> The process ends up with >>> current template being traversed in reverse order and resources being >>> cleaned up. All the update-replaced resources from the older templates >>> (older updates in concurrent updates) are cleaned up in the order in >>> which they are suppose to be. >>> >>> Resources are now tied to template, they have a template_id instead of >>> version. As we traverse the graph, we know which template we are working >>> on, and can take the relevant action on resource. >>> >>> For rollback, another update is issued with the last completed template >>> (it is designed to have an empty template as last completed template by >>> default). The current template being worked on becomes predecessor for >>> the new incoming template. In case of rollback, the last completed >>> template becomes incoming new template, the current becomes the new >>> template's predecessor and the successor of last completed template will >>> have no predecessor. All these changes are available in the >>> 'graph-version' branch. (The branch name is a misnomer though!) >>> >>> I think it is simpler to think about stack and concurrent updates when >>> we associate resources and edges with template, and stack with current >>> template and its predecessors (if any). >> >> It doesn't seem simple to me because it's trying to reconstruct reality >> from a lossy version of history. 
The simplest way to think about it, in >> my opinion is this: >> - When updating resources, respect their dependencies as given in the >> template >> - When checking resources to clean up, respect their actual, current >> real-world dependencies, and check replacement resources before the >> resources that they replaced. >> - Don't check a resource for clean up until it has been updated to the >> latest template. >> >>> I also think that we should decouple Resource from Stack. This is really >>> a hindrance when workers work on individual resources. The resource >>> should be abstracted enough from stack for the worker to work on the >>> resource alone. The worker should load the required resource plug-in and >>> start converging. >> >> I think that's a worthy goal, and it would be really nice if we could >> load a Resource completely independently of its Stack, and I know this >> has always been a design goal of yours (hence you're caching the >> resource definition in the Resource rather than getting it from the >> template). >> >> That said, I am convinced it's an unachievable goal, and I believe we >> should give up on it. >> >> - We'll always need to load _some_ central thing (e.g. to find out if >> the current traversal is still the valid one), so it might as well be >> the Stack. >> - Existing plugin abominations like HARestarter expect a working Stack >> object to be provided so it can go hunting for other resources. >> >> I think the best we can do is try to make heat.engine.stack.Stack as >> lazy as possible so that it only does extra work when strictly required, >> and just accept that the stack will always be loaded from the database. >> >> I am also strongly in favour of treating the idea of caching the >> unresolved resource definition in the Resource table as a straight >> performance optimisation that is completely separate to the convergence >> work. 
It's going to be inevitably ugly because there is no >> template-format-independent way to serialise a resource definition >> (while resource definition objects themselves are designed to be >> inherently template-format-independent). Once phase 1 is complete we can >> decide whether it's worth it based on measuring the actual performance >> improvement. >> > > The idea of keeping the resource definition in DB was to decouple from > the stack as much as possible. It is not for performance optimization. OK, that's fair, but then you said... > When a worker job is received, the worker won't have to load the entire > template from the stack, get the realized definitions etc. ...that's a performance optimisation. > The worker > simply loads the resource with definition with minimal stack (lazily > loaded stack), and starts working on it. This work is towards making > stack lazily loaded. > > Like you said, the stack has to be minimal when it is loaded. There will > be few information needed by the worker (like resource definition and > requirers/requirements) which can be obtained without loading the > template and recalculating the graph again and again. None of this changes the fundamental design, so let's get the important stuff done first and when that's finished we can see if this makes a meaningful (positive) difference. >> (Note that we _already_ store the _resolved_ properties of the resource, >> which is what the observer will be comparing against for phase 2, so >> there should be no reason for the observer to need to load the stack.) >> >>> The READEME.rst is really helpful for bringing up the minimal devstack >>> and test the PoC. I also has some notes on design. >>> >> [snip] >>> >>> Zane, I have few questions: >>> 1. Our current implementation is based on notifications from worker so >>> that the engine can take up next set of tasks. I don't see this in your >>> case. I think we should be doing this. It gels well with observer >>> notification mechanism. 
When the observer comes, it would send a >>> converge notification. Both, the provisioning of stack and the >>> continuous observation, happens with notifications (async message >>> passing). I see that the workers in your case pick up the parent when/if >>> it is done and schedules it or updates the sync point. >> >> I'm not quite sure what you're asking here, so forgive me if I'm >> misunderstanding. What I think you're saying is that where my prototype >> propagates notifications thus: >> >> worker -> worker >> -> worker >> >> (where -> is an async message) >> you would prefer it to do: >> >> worker -> engine -> worker >> -> worker >> >> Is that right? >> >> To me the distinction seems somewhat academic, given that we've decided >> that the engine and the worker will be the same process. I don't see a >> disadvantage to doing right away stuff that we know needs to be done >> right away. Obviously we should factor the code out tidily into a >> separate method where we can _also_ expose it as a notification that can >> be triggered by the continuous observer. >> >> You mentioned above that you thought the workers should not ever load >> the Stack, and I think that's probably the reason you favour this >> approach: the 'worker' would always load just the Resource and the >> 'engine' (even though they're really the same) would always load just >> the Stack, right? >> >> However, as I mentioned above, I think we'll want/have to load the Stack >> in the worker anyway, so eliminating the extra asynchronous call >> eliminates the performance penalty for having to do so. >> > > When we started, we had the following idea: > > > (provisions with > converge API) (Observe) > Engine ------------------> Worker --------------> Observer > | | | | > | | | | > ^ (done) v ^ (converge) v > ------------------------ ------------------------ > > The boundary has to be clearly demarcated, as done above. 
While > provisioning a stack, the Engine uses the converge facility, as does the > observer when it sees a need to converge a resource. > > Even though we have these logical things in same process, by not clearly > demarcating the responsibilities of each logical entity, we will end-up > with issues which we face when we mix the responsibilities. Sure, and that's why we don't write all the code in one big method, but it doesn't follow that it has to be in a different process. Or in this case, since they're actually the same process anyway, it doesn't imply that they have to be called over RPC. > As you mentioned, the worker should have a notification API exposed > which can be used not only by the Observer, but also by the engine to > provision (CRUD) a stack resource. yep > The engine decides on which resource > to be converged next when a resource is done. Who says it has to? Each resource knows what depends on it... let it send the notifications directly. The engine is just there to do whatever setup requires everything to be loaded at once and then kick things off by starting the worker tasks for resources that don't depend on anything. > The observer is only > responsible for once resource and issues a converge request if it has > to. > >>> 2. The dependency graph travels everywhere. IMHO, we can keep the graph >>> in DB and let the workers work on a resource, and engine decide which >>> one to be scheduled next by looking at the graph. There wouldn't be a >>> need for a lock here, in the engine, the DB transactions should take >>> care of concurrent DB updates. Our new PoC follows this model. >> >> I'm fine with keeping the graph in the DB instead of having it flow with >> the notifications. >> >>> 3. The request ID is passed down to check_*_complete. Would the check >>> method be interrupted if new request arrives? IMHO, the check method >>> should not get interrupted. 
It should return back when the resource has >>> reached a concrete state, either failed or completed. >> >> I agree, it should not be interrupted. >> >> I've started to think of phase 1 and phase 2 like this: >> >> 1) Make locks more granular: stack-level lock becomes resource-level >> 2) Get rid of locks altogether >> > > I thought that the DB transactions were enough for concurrency issues. > The resource's status is enough to know whether to trigger the update or > not. For resources currently in progress, the engine can wait for > notifications from them and trigger the update. With a completely > distributed graph and decision making it is hard to achieve. If the > graph is stored in the DB, the decision to trigger an update is in one place > (by looking at the graph only) and hence locks are not required. Right, right, but it's still a lock (maybe I should have said mutex) because only one traversal can be operating on each resource at a time. We're just arguing semantics now. It doesn't matter whether you do select-for-update on the Resource table to mark it as in progress or if you do select-for-update on a separate Lock table to mark it as in progress, only one update can be working on a given resource at a time, and this is a big improvement in granularity because before only one update could be working on the whole stack at a time, but not as big an improvement as phase 2 will be. >> So in phase 1 we'll lock the resources and like you said, it will return >> back when it has reached a concrete state. In phase 2 we'll be able to >> just update the goal state for the resource and the observe/converge >> process will be able to automagically find the best way to that state >> regardless of whether it was in the middle of transitioning to another >> state or not. Or something. But that's for the future :) >> > > I am thinking about not having any locks at all.
The concurrency is > handled by DB transaction and notifications gives us hint on deciding > when and whether to update the next set of resources. If a resource is > in progress currently, and an update is issued on that resource, the > graph API doesn't return it as a ready-to-be-converged resource as a > previous version is in progress. When it is done, and notification is > received, the graph DB API returns it as next to converge. So sort of a hybrid with Michal's approach. I think I get what you're saying here. If we reach a particular resource in the traverse and it is still in-progress from a previous update we can ignore it and use the notification from the previous update finishing to trigger it later, provided that the notifications go through the engine because the in-progress resource can't possibly know anything about traversals that started later. Or on the other hand we could just sleep for a while until it's done and then carry on. >>> 4. Lot of synchronization issues which we faced in our PoC cannot be >>> encountered with the framework. How do we evaluate what happens when >>> synchronization issues are encountered (like stack lock kind of issues >>> which we are replacing with DB transaction). >> >> Right, yeah, this is obviously the big known limitation of the >> simulator. I don't have a better answer other than to Think Very Hard >> about it. >> >> Designing software means solving for hundreds of constraints - too many >> for a human to hold in their brain at the same time. The purpose of >> prototyping is to fix enough of the responses to those constraints in a >> concrete form to allow reasoning about the remaining ones to become >> tractable. If you fix solutions for *all* of the constraints, then what >> you've built is by definition not a prototype but the final product. 
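[Editor's sketch of the hybrid described above: a traversal tries to claim a resource with a guarded UPDATE (standing in here for select-for-update -- schema, states, and function names are all invented); if the row is already in progress the traversal skips it, and the notification sent when the earlier update finishes re-triggers the check later.]

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE resource (id INTEGER PRIMARY KEY, status TEXT)")
db.execute("INSERT INTO resource VALUES (1, 'IN_PROGRESS')")  # older update running

deferred = []  # resources to re-trigger when a finish notification arrives

def try_claim(rid):
    """Guarded UPDATE: at most one traversal at a time can flip the row
    to IN_PROGRESS, which is what makes this a resource-level mutex."""
    cur = db.execute(
        "UPDATE resource SET status = 'IN_PROGRESS' "
        "WHERE id = ? AND status != 'IN_PROGRESS'", (rid,))
    db.commit()
    return cur.rowcount == 1

def check_resource(rid):
    if not try_claim(rid):
        deferred.append(rid)   # ignore for now; a notification will re-trigger
        return False
    return True

def on_finished(rid):
    """Notification that the previous update of this resource completed."""
    db.execute("UPDATE resource SET status = 'COMPLETE' WHERE id = ?", (rid,))
    db.commit()
    woken = [r for r in deferred if r == rid]
    deferred[:] = [r for r in deferred if r != rid]
    return [r for r in woken if check_resource(r)]

first = check_resource(1)   # skipped: the older update still holds the row
done = on_finished(1)       # the finish notification re-triggers the check
```

Note the skipped traversal never blocks: it just records its interest and moves on, which is the lock-free behaviour being argued for.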
>> >> One technique available to us is to encapsulate the parts of the >> algorithm that are subject to synchronisation issues behind abstractions >> that offer stronger guarantees. Then in order to have confidence in the >> design we need only satisfy ourselves that we have analysed the >> guarantees correctly and that a concrete implementation offering those >> same guarantees is possible. For example, the SyncPoints are shown to >> work under the assumption that they are not subject to race conditions, >> and the SyncPoint code is small enough that we can easily see that it >> can be implemented in an atomic fashion using the same DB primitives >> already proven to work by StackLock. Therefore we can have a very high >> confidence (but not proof) that the overall algorithm will work when >> implemented in the final product. >> >> Having Thought Very Hard about it, I'm as confident as I can be that I'm >> not relying on any synchronisation properties that can't be implemented >> using select-for-update on a single database row. There will of course >> be surprises at implementation time, but I hope that won't be one of >> them and anticipate that any changes required to the plan will be >> localised and not wide-ranging. >> >> (This is in contrast BTW to my centralised-graph branch, linked above, >> where it became very obvious that it would require some sort of external >> locking - so there is reason to think that this process can reveal >> architectural problems related to synchronisation where they are present.) >> >> cheers, >> Zane. 
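[Editorial note: the synchronisation argument above rests on one primitive - an atomic compare-and-set on a single database row. A minimal sketch follows. The `resource` table and its columns here are hypothetical stand-ins, not Heat's actual schema, and the compare-and-set UPDATE emulates on SQLite what SELECT ... FOR UPDATE provides on MySQL/PostgreSQL.]

```python
import sqlite3


def try_acquire(conn, resource_id, engine_id):
    """Atomically claim a single resource row for one engine.

    The UPDATE succeeds only if no engine currently holds the row
    (engine_id IS NULL), giving the same compare-and-set semantics as
    SELECT ... FOR UPDATE followed by a write. Hypothetical schema,
    for illustration only.
    """
    with conn:  # one transaction per claim attempt
        cur = conn.execute(
            "UPDATE resource SET engine_id = ? "
            "WHERE id = ? AND engine_id IS NULL",
            (engine_id, resource_id))
    return cur.rowcount == 1
```

With two engines racing for the same resource, the first claim succeeds and the second sees rowcount 0 and backs off, while claims on different resources of the same stack proceed in parallel - which is exactly the phase 1 granularity improvement being discussed.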
From zbitter at redhat.com Tue Dec 16 04:12:37 2014 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 15 Dec 2014 23:12:37 -0500 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <4641310AFBEE10419D0A020273367C140CA3A305@G1W3645.americas.hpqcorp.net> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> <548B8480.9010506@redhat.com> <4641310AFBEE10419D0A020273367C140CA3A305@G1W3645.americas.hpqcorp.net> Message-ID: <548FB135.30209@redhat.com> On 15/12/14 07:47, Murugan, Visnusaran wrote: > Hi Zane, > > We have been going through this chain for quite some time now and we still feel a disconnect in our understanding. Yes, I thought last week that we were on the same page, but now it looks like we're drifting off again :( > Can you put up a etherpad where we can understand your approach. Maybe you could put up an etherpad with your questions. Practically all of the relevant code is in Stack._create_or_update, Stack._dependencies and Converger.check_resource. That's 134 lines of code by my count. There's not a lot more I can usefully say about it without knowing which parts exactly you're stuck on, but I can definitely answer specific questions. > For example: for storing resource dependencies, > Are you storing its name, version tuple or just its ID. I'm storing a tuple of its name and database ID. The data structure is resource.GraphKey. 
I was originally using the name for something, but I suspect I could probably drop it now and just store the database ID; I haven't tried it yet. (Having the name in there definitely makes debugging more pleasant though ;) When I build the traversal graph, each node is a tuple of the GraphKey and a boolean to indicate whether it corresponds to an update or a cleanup operation (both can appear for a single resource in the same graph). > If I am correct, you are updating all resources on update regardless > of whether they changed, which will be inefficient if the stack contains a million resources. I'm calling update() on all resources regardless of change, but update() will only call handle_update() if something has changed (unless the plugin has overridden Resource._needs_update()). There's no way to know whether a resource needs to be updated before you're ready to update it, so I don't think of this as 'inefficient', just 'correct'. > We have similar questions regarding other > areas in your implementation, which we believe will become clear once we understand the outline of your implementation. It is difficult to get > a hold on your approach just by looking at code. Docstrings / Etherpad will help. > > > About streams: yes, in a million-resource stack the data will be huge, but less than the template. No way, it's O(n^3) (cubed!) in the worst case to store streams for each resource. > Also this stream is stored > only in IN_PROGRESS resources. Now I'm really confused. Where does it come from if the resource doesn't get it until it's already in progress? And how will that information help it? > The reason to have the entire dependency list is to reduce DB queries during a stack update. But we never need to know that. We only need to know what just happened and what to do next. > When you have a singular dependency on each resource, similar to your implementation, then we will end up loading > dependencies one at a time and altering almost all resources' dependencies regardless of their change.
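[Editorial note: the node structure Zane describes above can be written down in a few lines. The names mirror his description - resource.GraphKey holding the name plus database ID, paired with an update/cleanup flag - but the helper is an illustration, not code lifted from the prototype.]

```python
from collections import namedtuple

# GraphKey as described above: the resource name is retained mainly for
# debuggability; the database ID is what actually identifies the row.
GraphKey = namedtuple('GraphKey', ['name', 'database_id'])


def traversal_nodes(key, needs_cleanup):
    """Return the traversal-graph nodes for one resource.

    Each node is (GraphKey, forward), where forward=True is the update
    operation and forward=False the cleanup. Both can appear for a
    single resource in the same graph. Illustrative helper only.
    """
    nodes = [(key, True)]
    if needs_cleanup:
        nodes.append((key, False))
    return nodes
```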
> > Regarding a 2-template approach for delete, it is not actually 2 different templates. It's just that we have a delete stream > to be taken up post-update. That would be a regression from Heat's current behaviour, where we start cleaning up resources as soon as they have nothing depending on them. There's not even a reason to make it worse than what we already have, because it's actually a lot _easier_ to treat update and clean up as the same kind of operation and throw both into the same big graph. The dual implementations and all of the edge cases go away and you can just trust in the graph traversal to do the Right Thing in the most parallel way possible. > (Any post operation will be handled as an update.) This approach holds when Rollback==True. > We can always fall back to the regular stream (non-delete stream) if Rollback==False. I don't understand what you're saying here. > In our view we would like to have only one basic operation and that is UPDATE. > > 1. CREATE will be an update where the realized graph == Empty > 2. UPDATE will be an update where the realized graph == Full/Partial realized (possibly with a delete stream as a post operation if Rollback==True) > 3. DELETE will be just another update with an empty to_be_realized_graph. Yes, that goes without saying. In my implementation Stack._create_or_update() handles all three operations. > It would be great if we can freeze a stable approach by mid-week as the Christmas vacations are round the corner.
:) :) > >> -----Original Message----- >> From: Zane Bitter [mailto:zbitter at redhat.com] >> Sent: Saturday, December 13, 2014 5:43 AM >> To: openstack-dev at lists.openstack.org >> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept >> showdown >> >> On 12/12/14 05:29, Murugan, Visnusaran wrote: >>> >>> >>>> -----Original Message----- >>>> From: Zane Bitter [mailto:zbitter at redhat.com] >>>> Sent: Friday, December 12, 2014 6:37 AM >>>> To: openstack-dev at lists.openstack.org >>>> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept >>>> showdown >>>> >>>> On 11/12/14 08:26, Murugan, Visnusaran wrote: >>>>>>> [Murugan, Visnusaran] >>>>>>> In case of rollback where we have to cleanup earlier version of >>>>>>> resources, >>>>>> we could get the order from old template. We'd prefer not to have a >>>>>> graph table. >>>>>> >>>>>> In theory you could get it by keeping old templates around. But >>>>>> that means keeping a lot of templates, and it will be hard to keep >>>>>> track of when you want to delete them. It also means that when >>>>>> starting an update you'll need to load every existing previous >>>>>> version of the template in order to calculate the dependencies. It >>>>>> also leaves the dependencies in an ambiguous state when a resource >>>>>> fails, and although that can be worked around it will be a giant pain to >> implement. >>>>>> >>>>> >>>>> Agree that looking to all templates for a delete is not good. But >>>>> baring Complexity, we feel we could achieve it by way of having an >>>>> update and a delete stream for a stack update operation. I will >>>>> elaborate in detail in the etherpad sometime tomorrow :) >>>>> >>>>>> I agree that I'd prefer not to have a graph table. 
After trying a >>>>>> couple of different things I decided to store the dependencies in >>>>>> the Resource table, where we can read or write them virtually for >>>>>> free because it turns out that we are always reading or updating >>>>>> the Resource itself at exactly the same time anyway. >>>>>> >>>>> >>>>> Not sure how this will work in an update scenario when a resource >>>>> does not change and its dependencies do. >>>> >>>> We'll always update the requirements, even when the properties don't >>>> change. >>>> >>> >>> Can you elaborate a bit on rollback. >> >> I didn't do anything special to handle rollback. It's possible that we need to - >> obviously the difference in the UpdateReplace + rollback case is that the >> replaced resource is now the one we want to keep, and yet the >> replaced_by/replaces dependency will force the newer (replacement) >> resource to be checked for deletion first, which is an inversion of the usual >> order. >> >> However, I tried to think of a scenario where that would cause problems and >> I couldn't come up with one. Provided we know the actual, real-world >> dependencies of each resource I don't think the ordering of those two >> checks matters. >> >> In fact, I currently can't think of a case where the dependency order >> between replacement and replaced resources matters at all. It matters in the >> current Heat implementation because resources are artificially segmented >> into the current and backup stacks, but with a holistic view of dependencies >> that may well not be required. I tried taking that line out of the simulator >> code and all the tests still passed. If anybody can think of a scenario in which >> it would make a difference, I would be very interested to hear it. >> >> In any event though, it should be no problem to reverse the direction of that >> one edge in these particular circumstances if it does turn out to be a >> problem. 
>> >>> We had an approach with depends_on >>> and needed_by columns in ResourceTable. But dropped it when we >> figured >>> out we had too many DB operations for Update. >> >> Yeah, I initially ran into this problem too - you have a bunch of nodes that are >> waiting on the current node, and now you have to go look them all up in the >> database to see what else they're waiting on in order to tell if they're ready >> to be triggered. >> >> It turns out the answer is to distribute the writes but centralise the reads. So >> at the start of the update, we read all of the Resources, obtain their >> dependencies and build one central graph[1]. We then make that graph >> available to each resource (either by passing it as a notification parameter, or >> storing it somewhere central in the DB that they will all have to read anyway, >> i.e. the Stack). But when we update a dependency we don't update the >> central graph, we update the individual Resource so there's no global lock >> required. >> >> [1] >> https://github.com/zaneb/heat-convergence-prototype/blob/distributed- >> graph/converge/stack.py#L166-L168 >> >>>>> Also taking care of deleting resources in order will be an issue. >>>> >>>> It works fine. >>>> >>>>> This implies that there will be different versions of a resource >>>>> which will even complicate further. >>>> >>>> No it doesn't, other than the different versions we already have due >>>> to UpdateReplace. >>>> >>>>>>>> This approach reduces DB queries by waiting for completion >>>>>>>> notification >>>>>> on a topic. The drawback I see is that delete stack stream will be >>>>>> huge as it will have the entire graph. We can always dump such data >>>>>> in ResourceLock.data Json and pass a simple flag >>>>>> "load_stream_from_db" to converge RPC call as a workaround for >>>>>> delete >>>> operation.
>>>>>>> >>>>>>> This seems to be essentially equivalent to my 'SyncPoint' >>>>>>> proposal[1], with >>>>>> the key difference that the data is stored in-memory in a Heat >>>>>> engine rather than the database. >>>>>>> >>>>>>> I suspect it's probably a mistake to move it in-memory for similar >>>>>>> reasons to the argument Clint made against synchronising the >>>>>>> marking off >>>>>> of dependencies in-memory. The database can handle that and the >>>>>> problem of making the DB robust against failures of a single >>>>>> machine has already been solved by someone else. If we do it >>>>>> in-memory we are just creating a single point of failure for not >>>>>> much gain. (I guess you could argue it doesn't matter, since if any >>>>>> Heat engine dies during the traversal then we'll have to kick off >>>>>> another one anyway, but it does limit our options if that changes >>>>>> in the >>>>>> future.) [Murugan, Visnusaran] Resource completes, removes itself >>>>>> from resource_lock and notifies engine. Engine will acquire parent >>>>>> lock and initiate parent only if all its children are satisfied (no >>>>>> child entry in >>>> resource_lock). >>>>>> This will come in place of Aggregator. >>>>>> >>>>>> Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly >>>>>> what I >>>> did. >>>>>> The three differences I can see are: >>>>>> >>>>>> 1) I think you are proposing to create all of the sync points at >>>>>> the start of the traversal, rather than on an as-needed basis. This >>>>>> is probably a good idea. I didn't consider it because of the way my >>>>>> prototype evolved, but there's now no reason I can see not to do this. >>>>>> If we could move the data to the Resource table itself then we >>>>>> could even get it for free from an efficiency point of view. >>>>> >>>>> +1. But we will need engine_id to be stored somewhere for recovery >>>> purpose (easy to be queried format). 
>>>> Yeah, so I'm starting to think you're right, maybe the/a Lock table >>>> is the right thing to use there. We could probably do it within the >>>> resource table using the same select-for-update to set the engine_id, >>>> but I agree that we might be starting to jam too much into that one table. >>>> >>> >>> Yeah. Unrelated values in the resource table. Upon resource completion we >>> have to unset engine_id as well, as compared to dropping a row from >> the resource lock. >>> Both are good. Having engine_id in resource_table will reduce DB >>> operations by half. We should go with just the resource table along with >> engine_id. >> >> OK >> >>>>> Sync points are created as-needed. A single resource is enough to >>>>> restart >>>> that entire stream. >>>>> I think there is a disconnect in our understanding. I will detail it >>>>> as well in >>>> the etherpad. >>>> >>>> OK, that would be good. >>>> >>>>>> 2) You're using a single list from which items are removed, rather >>>>>> than two lists (one static, and one to which items are added) that >>>>>> get >>>> compared. >>>>>> Assuming (1) then this is probably a good idea too. >>>>> >>>>> Yeah. We have a single list per active stream which works by removing >>>>> complete/satisfied resources from it. >>>> >>>> I went to change this and then remembered why I did it this way: the >>>> sync point is also storing data about the resources that are >>>> triggering it. Part of this is the RefID and attributes, and we could >>>> replace that by storing that data in the Resource itself and querying >>>> it rather than having it passed in via the notification. But the >>>> other part is the ID/key of those resources, which we _need_ to know >>>> in order to update the requirements in case one of them has been >>>> replaced and thus the graph doesn't reflect it yet. (Or, for that >>>> matter, we need it to know where to go looking for the RefId and/or >>>> attributes if they're in the >>>> DB.)
So we have to store some data, we can't just remove items from >>>> the required list (although we could do that as well). >>>> >>>>>> 3) You're suggesting to notify the engine unconditionally and let >>>>>> the engine decide if the list is empty. That's probably not a good >>>>>> idea - not only does it require extra reads, it introduces a race >>>>>> condition that you then have to solve (it can be solved, it's just more >> work). >>>>>> Since the update to remove a child from the list is atomic, it's >>>>>> best to just trigger the engine only if the list is now empty. >>>>>> >>>>> >>>>> No. Notify only if stream has something to be processed. The newer >>>>> Approach based on db lock will be that the last resource will >>>>> initiate its >>>> parent. >>>>> This is opposite to what our Aggregator model had suggested. >>>> >>>> OK, I think we're on the same page on this one then. >>>> >>> >>> >>> Yeah. >>> >>>>>>> It's not clear to me how the 'streams' differ in practical terms >>>>>>> from just passing a serialisation of the Dependencies object, >>>>>>> other than being incomprehensible to me ;). The current >>>>>>> Dependencies implementation >>>>>>> (1) is a very generic implementation of a DAG, (2) works and has >>>>>>> plenty of >>>>>> unit tests, (3) has, with I think one exception, a pretty >>>>>> straightforward API, >>>>>> (4) has a very simple serialisation, returned by the edges() >>>>>> method, which can be passed back into the constructor to recreate >>>>>> it, and (5) has an API that is to some extent relied upon by >>>>>> resources, and so won't likely be removed outright in any event. >>>>>>> Whatever code we need to handle dependencies ought to just build >>>>>>> on >>>>>> this existing implementation. >>>>>>> [Murugan, Visnusaran] Our thought was to reduce payload size >>>>>> (template/graph). Just planning for worst case scenario (million >>>>>> resource >>>>>> stack) We could always dump them in ResourceLock.data to be loaded >>>>>> by Worker. 
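[Editorial note: point (4) above - that the output of edges() can be passed straight back into the constructor - is the whole serialisation argument, and a toy stand-in makes the round-trip concrete. This is a simplified sketch, not the real heat.engine.dependencies implementation.]

```python
class Dependencies(object):
    """Toy DAG keeping only what the round-trip argument needs.

    Simplified stand-in for Heat's Dependencies class: edges are
    (requirer, required) pairs, and edges() is the whole serialisation.
    """

    def __init__(self, edges=()):
        self._edges = set(tuple(e) for e in edges)

    def edges(self):
        # the serialisation: a plain list of edges, constructor-compatible
        return sorted(self._edges)

    def required_by(self, node):
        # nodes that must wait for `node` to complete
        return sorted(r for r, req in self._edges if req == node)
```

The round-trip is then just `Dependencies(d.edges())`, which is as small as a graph representation gets without an external lookup.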
>> >> With the latest updates to the Etherpad, I'm even more confused by streams >> than I was before. >> >> One thing I never understood is why do you need to store the whole path to >> reach each node in the graph? Surely you only need to know the nodes this >> one is waiting on, the nodes waiting on this one and the ones those are >> waiting on, not the entire history up to this point. The size of each stream is >> theoretically up to O(n^2) and you're storing n of them - that's going to get >> painful in this million-resource stack. >> >>>>>> If there's a smaller representation of a graph than a list of edges >>>>>> then I don't know what it is. The proposed stream structure >>>>>> certainly isn't it, unless you mean as an alternative to storing >>>>>> the entire graph once for each resource. A better alternative is to >>>>>> store it once centrally - in my current implementation it is passed >>>>>> down through the trigger messages, but since only one traversal can >>>>>> be in progress at a time it could just as easily be stored in the >>>>>> Stack table of the >>>> database at the slight cost of an extra write. >>>>>> >>>>> >>>>> Agree that edge is the smallest representation of a graph. But it >>>>> does not give us a complete picture without doing a DB lookup. Our >>>>> assumption was to store streams in IN_PROGRESS resource_lock.data >>>>> column. This could be in resource table instead. >>>> >>>> That's true, but I think in practice at any point where we need to >>>> look at this we will always have already loaded the Stack from the DB >>>> for some other reason, so we actually can get it for free. (See >>>> detailed discussion in my reply to Anant.) >>>> >>> >>> Aren't we planning to stop loading stack with all resource objects in >>> future to Address scalability concerns we currently have? 
>> >> We plan on not loading all of the Resource objects each time we load the >> Stack object, but I think we will always need to have loaded the Stack object >> (for example, we'll need to check the current traversal ID, amongst other >> reasons). So if the serialised dependency graph is stored in the Stack it will be >> no big deal. >> >>>>>> I'm not opposed to doing that, BTW. In fact, I'm really interested >>>>>> in your input on how that might help make recovery from failure >>>>>> more robust. I know Anant mentioned that not storing enough data to >>>>>> recover when a node dies was his big concern with my current >> approach. >>>>>> >>>>> >>>>> With streams, We feel recovery will be easier. All we need is a >>>>> trigger :) >>>>> >>>>>> I can see that by both creating all the sync points at the start of >>>>>> the traversal and storing the dependency graph in the database >>>>>> instead of letting it flow through the RPC messages, we would be >>>>>> able to resume a traversal where it left off, though I'm not sure >>>>>> what that buys >>>> us. >>>>>> >>>>>> And I guess what you're suggesting is that by having an explicit >>>>>> lock with the engine ID specified, we can detect when a resource is >>>>>> stuck in IN_PROGRESS due to an engine going down? That's actually >>>>>> pretty >>>> interesting. >>>>>> >>>>> >>>>> Yeah :) >>>>> >>>>>>> Based on our call on Thursday, I think you're taking the idea of >>>>>>> the Lock >>>>>> table too literally. The point of referring to locks is that we can >>>>>> use the same concepts as the Lock table relies on to do atomic >>>>>> updates on a particular row of the database, and we can use those >>>>>> atomic updates to prevent race conditions when implementing >>>>>> SyncPoints/Aggregators/whatever you want to call them. 
It's not >>>>>> that we'd actually use the Lock table itself, which implements a >>>>>> mutex and therefore offers only a much slower and more stateful way >>>>>> of doing what we want (lock mutex, change data, unlock mutex). >>>>>>> [Murugan, Visnusaran] Are you suggesting something like a >>>>>>> select-for- >>>>>> update in resource table itself without having a lock table? >>>>>> >>>>>> Yes, that's exactly what I was suggesting. >>>>> >>>>> DB is always good for sync. But we need to be careful not to overdo it. >>>> >>>> Yeah, I see what you mean now, it's starting to _feel_ like there'd >>>> be too many things mixed together in the Resource table. Are you >>>> aware of some concrete harm that might cause though? What happens if >>>> we overdo it? Is select-for-update on a huge row more expensive than >>>> the whole overhead of manipulating the Lock? >>>> >>>> Just trying to figure out if intuition is leading me astray here. >>>> >>> >>> You are right. There should be no difference apart from little bump In >>> memory usage. But I think it should be fine. >>> >>>>> Will update etherpad by tomorrow. >>>> >>>> OK, thanks. >>>> >>>> cheers, >>>> Zane. 
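[Editorial note: the scheme the thread converges on above - distribute the writes, and have the child whose completion empties the sync point's list be the one that triggers the parent - can be sketched against a single row. The sync_point table below is hypothetical, and SQLite's single-writer transaction stands in for the select-for-update that a real multi-engine deployment would need.]

```python
import json
import sqlite3


def satisfy(conn, sync_point_id, child):
    """Mark `child` as done; return True only if it was the last one.

    The read-modify-write happens in one transaction, so the parent is
    triggered exactly once, by whichever child empties the list - no
    separate check by every sibling, and no unconditional notification
    to the engine. Hypothetical schema, for illustration only.
    """
    with conn:
        (raw,) = conn.execute(
            "SELECT waiting FROM sync_point WHERE id = ?",
            (sync_point_id,)).fetchone()
        waiting = set(json.loads(raw))
        waiting.discard(child)
        conn.execute(
            "UPDATE sync_point SET waiting = ? WHERE id = ?",
            (json.dumps(sorted(waiting)), sync_point_id))
    return not waiting
```

The caller sends the trigger to the parent only when satisfy() returns True, matching the "last resource initiates its parent" behaviour agreed on above.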
From zbitter at redhat.com Tue Dec 16 04:18:05 2014 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 15 Dec 2014 23:18:05 -0500 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <548EFB12.2090303@hp.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> <548B8480.9010506@redhat.com> <548EFB12.2090303@hp.com> Message-ID: <548FB27D.4070505@redhat.com> On 15/12/14 10:15, Anant Patil wrote: > On 13-Dec-14 05:42, Zane Bitter wrote: >> On 12/12/14 05:29, Murugan, Visnusaran wrote: >>> >>> >>>> -----Original Message----- >>>> From: Zane Bitter [mailto:zbitter at redhat.com] >>>> Sent: Friday, December 12, 2014 6:37 AM >>>> To: openstack-dev at lists.openstack.org >>>> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept >>>> showdown >>>> >>>> On 11/12/14 08:26, Murugan, Visnusaran wrote: >>>>>>> [Murugan, Visnusaran] >>>>>>> In case of rollback where we have to cleanup earlier version of >>>>>>> resources, >>>>>> we could get the order from old template. We'd prefer not to have a >>>>>> graph table. >>>>>> >>>>>> In theory you could get it by keeping old templates around. But that >>>>>> means keeping a lot of templates, and it will be hard to keep track >>>>>> of when you want to delete them. It also means that when starting an >>>>>> update you'll need to load every existing previous version of the >>>>>> template in order to calculate the dependencies. 
It also leaves the >>>>>> dependencies in an ambiguous state when a resource fails, and >>>>>> although that can be worked around it will be a giant pain to implement. >>>>>> >>>>> >>>>> Agree that looking to all templates for a delete is not good. But >>>>> barring complexity, we feel we could achieve it by way of having an >>>>> update and a delete stream for a stack update operation. I will >>>>> elaborate in detail in the etherpad sometime tomorrow :) >>>>> >>>>>> I agree that I'd prefer not to have a graph table. After trying a >>>>>> couple of different things I decided to store the dependencies in the >>>>>> Resource table, where we can read or write them virtually for free >>>>>> because it turns out that we are always reading or updating the >>>>>> Resource itself at exactly the same time anyway. >>>>>> >>>>> >>>>> Not sure how this will work in an update scenario when a resource does >>>>> not change and its dependencies do. >>>> >>>> We'll always update the requirements, even when the properties don't >>>> change. >>>> >>> >>> Can you elaborate a bit on rollback? >> >> I didn't do anything special to handle rollback. It's possible that we >> need to - obviously the difference in the UpdateReplace + rollback case >> is that the replaced resource is now the one we want to keep, and yet >> the replaced_by/replaces dependency will force the newer (replacement) >> resource to be checked for deletion first, which is an inversion of the >> usual order. >> > > This is where the version is so handy! For UpdateReplaced ones, there is *sometimes* > an older version to go back to. This version could just be the template ID, > as I mentioned in another e-mail. All resources are at the current > template ID if they are found in the current template, even if there is > no need to update them. Otherwise, they need to be cleaned up in the > order given in the previous templates. > > I think the template ID is used as a version as far as I can see in Zane's PoC.
If the resource template key doesn't match the current template > key, the resource is deleted. The version is a misnomer here, but that > field (template id) is used as though we had versions of resources. Correct. Because if we had wanted to keep it, we would have updated it to the new template version already. >> However, I tried to think of a scenario where that would cause problems >> and I couldn't come up with one. Provided we know the actual, real-world >> dependencies of each resource I don't think the ordering of those two >> checks matters. >> >> In fact, I currently can't think of a case where the dependency order >> between replacement and replaced resources matters at all. It matters in >> the current Heat implementation because resources are artificially >> segmented into the current and backup stacks, but with a holistic view >> of dependencies that may well not be required. I tried taking that line >> out of the simulator code and all the tests still passed. If anybody can >> think of a scenario in which it would make a difference, I would be very >> interested to hear it. >> >> In any event though, it should be no problem to reverse the direction of >> that one edge in these particular circumstances if it does turn out to >> be a problem. >> >>> We had an approach with depends_on >>> and needed_by columns in ResourceTable. But dropped it when we figured out >>> we had too many DB operations for Update. >> >> Yeah, I initially ran into this problem too - you have a bunch of nodes >> that are waiting on the current node, and now you have to go look them >> all up in the database to see what else they're waiting on in order to >> tell if they're ready to be triggered. >> >> It turns out the answer is to distribute the writes but centralise the >> reads. So at the start of the update, we read all of the Resources, >> obtain their dependencies and build one central graph[1].
We then make >> that graph available to each resource (either by passing it as a >> notification parameter, or storing it somewhere central in the DB that >> they will all have to read anyway, i.e. the Stack). But when we update a >> dependency we don't update the central graph, we update the individual >> Resource so there's no global lock required. >> >> [1] >> https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/stack.py#L166-L168 >> > > A centralized graph and decision making will make the implementation far > simpler than a distributed one. The centralised graph is not in dispute; my approach has a centralised graph that gets passed down through oslo.messaging notifications, and I've already said I'm fine with putting that in the database instead. So I don't know what you're arguing against, but it isn't what I've been saying. And the decision-making is only centralised if you send all the notifications to a single engine. Maybe this is why I'm so confused, because I thought you had dropped that idea after Clint argued very convincingly against it. > This looks academic, but the simplicity > beats everything! When each worker has to decide, there needs to be a > lock; DB transactions alone are not enough. It needs select-for-update and that is all. I don't see how either locks or transactions come into it. > In contrast, when the > decision making is centralized, that particular critical section can be > attempted with a transaction and re-attempted if needed. That makes no sense... if you need to re-attempt a transaction it can only be because the decision is not centralised. > With the distributed approach, I see the following drawbacks: I have no idea what distributed approach you're referring to, or indeed what 'centralised' approach you're referring to. > 1. Every time a resource is done, the peer resources (siblings) are > checked to see if they are done and the parent is propagated. This > happens for each resource.
As Clint mentioned, this is not correct. In your terminology, each parent is checked to see if all of its children are done, and if they are then the notification is propagated. > 2. The worker has to run through all the resources to see if the stack > is done, to mark it as completed. Not true: the worker can send a notification to the Stack to indicate that everything it was waiting for is done, in exactly the same way that it can send notifications to the next Resource to say that everything it was waiting for is done. I already implemented this: https://github.com/zaneb/heat-convergence-prototype/commit/d2be14d2f711ece927cbf0f9382d744a3e881d4c (It has been refactored a little since then.) > 3. The decision to converge is made by each worker, resulting in a lot of > contention. The centralized graph restricts the contention point to one > place where we can use DB transactions. It is easier to maintain code > where particular decisions are made in one place rather than at many > places. That's backwards. Where everything is fighting for the same centralised point you have a lot of contention. Where only the things that can conflict ever meet at the same point, contention is reduced to the minimum possible. Furthermore, just because it is happening in different rows in the database doesn't mean it is happening in different parts of the code. E.g. in my implementation it always happens in sync_point.py. > 4. The complex part we are trying to solve is to decide what to do > next when a resource is done. With a centralized graph, this is abstracted > out to the DB API. The API will return the next set of nodes. A smart > SQL query can reduce a lot of logic currently being coded in > worker/engine. I guess that was meant to sound like a good thing, but no part of that sounded attractive to me. Fancy queries are good for when you have a large amount of data of which you need to work with a subset.
When you need all of the data then it's better to work with it in code, and when you already know the primary key of the data you need it's much much much better to just grab it. And that's how my code works. At the start of the update, we grab all of the data and figure out what to do with it in code, build the graph, and use the primary keys stored in the graph to traverse through it, at each step only grabbing the one row we need. > 5. What would be the starting point for resource clean-up? There's no fixed starting point, cleaning up is part of the graph and it happens when things it depends on are complete, which is different for each resource. > The clean-up > has to start when all the resources are updated. That's wrong. > With no centralized > graph, the DB has to be searched for all the resources with no > dependencies and with older versions (or having older template keys) and > start removing them. And we do this search by loading _all_ of the existing resources at the beginning of the Stack update (which we have to do anyway!), calculating the graph and storing it centrally. > With centralized graph, this would be a simpler > with a SQL queries returning what needs to be done. Nothing could be simpler than getting rows by key where you need them. > The search space for > where to start with clean-up will be huge. It will be 0, because nodes to be cleaned up are nodes like any other in the graph and we don't have to search for them, we have their IDs. > 6. When engine restarts, the search space on where to start will be > huge. With a centralized graph, the abstracted API to get next set of > nodes makes the implementation of decision simpler. If an engine dies I'm fine with starting again at the beginning. Or we can just search for resources that are IN_PROGRESS with the engine_id of the engine that died. > I am convinced enough that it is simpler to assign the responsibility to > engine on what needs to be done next. 
No locks will be required, not > even resource locks! It is simpler from implementation, understanding > and maintenance perspective. Again, this is just semantics. The only thing I'm proposing we use is select-for-update to do atomic updates. >>>>> Also taking care of deleting resources in order will be an issue. >>>> >>>> It works fine. >>>> >>>>> This implies that there will be different versions of a resource which >>>>> will even complicate further. >>>> >>>> No it doesn't, other than the different versions we already have due to >>>> UpdateReplace. >>>> >>>>>>>> This approach reduces DB queries by waiting for completion >>>>>>>> notification >>>>>> on a topic. The drawback I see is that delete stack stream will be >>>>>> huge as it will have the entire graph. We can always dump such data >>>>>> in ResourceLock.data Json and pass a simple flag >>>>>> "load_stream_from_db" to converge RPC call as a workaround for delete >>>> operation. >>>>>>> >>>>>>> This seems to be essentially equivalent to my 'SyncPoint' >>>>>>> proposal[1], with >>>>>> the key difference that the data is stored in-memory in a Heat engine >>>>>> rather than the database. >>>>>>> >>>>>>> I suspect it's probably a mistake to move it in-memory for similar >>>>>>> reasons to the argument Clint made against synchronising the marking >>>>>>> off >>>>>> of dependencies in-memory. The database can handle that and the >>>>>> problem of making the DB robust against failures of a single machine >>>>>> has already been solved by someone else. If we do it in-memory we are >>>>>> just creating a single point of failure for not much gain. (I guess >>>>>> you could argue it doesn't matter, since if any Heat engine dies >>>>>> during the traversal then we'll have to kick off another one anyway, >>>>>> but it does limit our options if that changes in the >>>>>> future.) [Murugan, Visnusaran] Resource completes, removes itself >>>>>> from resource_lock and notifies engine. 
Engine will acquire parent >>>>>> lock and initiate parent only if all its children are satisfied (no child entry in >>>> resource_lock). >>>>>> This will come in place of Aggregator. >>>>>> >>>>>> Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly what I >>>> did. >>>>>> The three differences I can see are: >>>>>> >>>>>> 1) I think you are proposing to create all of the sync points at the >>>>>> start of the traversal, rather than on an as-needed basis. This is >>>>>> probably a good idea. I didn't consider it because of the way my >>>>>> prototype evolved, but there's now no reason I can see not to do this. >>>>>> If we could move the data to the Resource table itself then we could >>>>>> even get it for free from an efficiency point of view. >>>>> >>>>> +1. But we will need engine_id to be stored somewhere for recovery >>>> purpose (easy to be queried format). >>>> >>>> Yeah, so I'm starting to think you're right, maybe the/a Lock table is the right >>>> thing to use there. We could probably do it within the resource table using >>>> the same select-for-update to set the engine_id, but I agree that we might >>>> be starting to jam too much into that one table. >>>> >>> >>> yeah. Unrelated values in resource table. Upon resource completion we have to >>> unset engine_id as well as compared to dropping a row from resource lock. >>> Both are good. Having engine_id in resource_table will reduce db operations >>> in half. We should go with just resource table along with engine_id. >> >> OK >> >>>>> Sync points are created as-needed. Single resource is enough to restart >>>> that entire stream. >>>>> I think there is a disconnect in our understanding. I will detail it as well in >>>> the etherpad. >>>> >>>> OK, that would be good. >>>> >>>>>> 2) You're using a single list from which items are removed, rather >>>>>> than two lists (one static, and one to which items are added) that get >>>> compared.
>>>>>> Assuming (1) then this is probably a good idea too. >>>>> >>>>> Yeah. We have a single list per active stream which work by removing >>>>> Complete/satisfied resources from it. >>>> >>>> I went to change this and then remembered why I did it this way: the sync >>>> point is also storing data about the resources that are triggering it. Part of this >>>> is the RefID and attributes, and we could replace that by storing that data in >>>> the Resource itself and querying it rather than having it passed in via the >>>> notification. But the other part is the ID/key of those resources, which we >>>> _need_ to know in order to update the requirements in case one of them >>>> has been replaced and thus the graph doesn't reflect it yet. (Or, for that >>>> matter, we need it to know where to go looking for the RefId and/or >>>> attributes if they're in the >>>> DB.) So we have to store some data, we can't just remove items from the >>>> required list (although we could do that as well). >>>> >>>>>> 3) You're suggesting to notify the engine unconditionally and let the >>>>>> engine decide if the list is empty. That's probably not a good idea - >>>>>> not only does it require extra reads, it introduces a race condition >>>>>> that you then have to solve (it can be solved, it's just more work). >>>>>> Since the update to remove a child from the list is atomic, it's best >>>>>> to just trigger the engine only if the list is now empty. >>>>>> >>>>> >>>>> No. Notify only if stream has something to be processed. The newer >>>>> Approach based on db lock will be that the last resource will initiate its >>>> parent. >>>>> This is opposite to what our Aggregator model had suggested. >>>> >>>> OK, I think we're on the same page on this one then. >>>> >>> >>> >>> Yeah. >>> >>>>>>> It's not clear to me how the 'streams' differ in practical terms >>>>>>> from just passing a serialisation of the Dependencies object, other >>>>>>> than being incomprehensible to me ;). 
The current Dependencies >>>>>>> implementation >>>>>>> (1) is a very generic implementation of a DAG, (2) works and has >>>>>>> plenty of >>>>>> unit tests, (3) has, with I think one exception, a pretty >>>>>> straightforward API, >>>>>> (4) has a very simple serialisation, returned by the edges() method, >>>>>> which can be passed back into the constructor to recreate it, and (5) >>>>>> has an API that is to some extent relied upon by resources, and so >>>>>> won't likely be removed outright in any event. >>>>>>> Whatever code we need to handle dependencies ought to just build on >>>>>> this existing implementation. >>>>>>> [Murugan, Visnusaran] Our thought was to reduce payload size >>>>>> (template/graph). Just planning for worst case scenario (million >>>>>> resource >>>>>> stack) We could always dump them in ResourceLock.data to be loaded by >>>>>> Worker. >> >> With the latest updates to the Etherpad, I'm even more confused by >> streams than I was before. >> >> One thing I never understood is why do you need to store the whole path >> to reach each node in the graph? Surely you only need to know the nodes >> this one is waiting on, the nodes waiting on this one and the ones those >> are waiting on, not the entire history up to this point. The size of >> each stream is theoretically up to O(n^2) and you're storing n of them - >> that's going to get painful in this million-resource stack. >> >>>>>> If there's a smaller representation of a graph than a list of edges >>>>>> then I don't know what it is. The proposed stream structure certainly >>>>>> isn't it, unless you mean as an alternative to storing the entire >>>>>> graph once for each resource. 
A better alternative is to store it >>>>>> once centrally - in my current implementation it is passed down >>>>>> through the trigger messages, but since only one traversal can be in >>>>>> progress at a time it could just as easily be stored in the Stack table of the >>>> database at the slight cost of an extra write. >>>>>> >>>>> >>>>> Agree that edge is the smallest representation of a graph. But it does >>>>> not give us a complete picture without doing a DB lookup. Our >>>>> assumption was to store streams in IN_PROGRESS resource_lock.data >>>>> column. This could be in resource table instead. >>>> >>>> That's true, but I think in practice at any point where we need to look at this >>>> we will always have already loaded the Stack from the DB for some other >>>> reason, so we actually can get it for free. (See detailed discussion in my reply >>>> to Anant.) >>>> >>> >>> Aren't we planning to stop loading stack with all resource objects in future to >>> Address scalability concerns we currently have? >> >> We plan on not loading all of the Resource objects each time we load the >> Stack object, but I think we will always need to have loaded the Stack >> object (for example, we'll need to check the current traversal ID, >> amongst other reasons). So if the serialised dependency graph is stored >> in the Stack it will be no big deal. >> >>>>>> I'm not opposed to doing that, BTW. In fact, I'm really interested in >>>>>> your input on how that might help make recovery from failure more >>>>>> robust. I know Anant mentioned that not storing enough data to >>>>>> recover when a node dies was his big concern with my current approach. >>>>>> >>>>> >>>>> With streams, We feel recovery will be easier. 
All we need is a >>>>> trigger :) >>>>> >>>>>> I can see that by both creating all the sync points at the start of >>>>>> the traversal and storing the dependency graph in the database >>>>>> instead of letting it flow through the RPC messages, we would be able >>>>>> to resume a traversal where it left off, though I'm not sure what that buys >>>> us. >>>>>> >>>>>> And I guess what you're suggesting is that by having an explicit lock >>>>>> with the engine ID specified, we can detect when a resource is stuck >>>>>> in IN_PROGRESS due to an engine going down? That's actually pretty >>>> interesting. >>>>>> >>>>> >>>>> Yeah :) >>>>> >>>>>>> Based on our call on Thursday, I think you're taking the idea of the >>>>>>> Lock >>>>>> table too literally. The point of referring to locks is that we can >>>>>> use the same concepts as the Lock table relies on to do atomic >>>>>> updates on a particular row of the database, and we can use those >>>>>> atomic updates to prevent race conditions when implementing >>>>>> SyncPoints/Aggregators/whatever you want to call them. It's not that >>>>>> we'd actually use the Lock table itself, which implements a mutex and >>>>>> therefore offers only a much slower and more stateful way of doing >>>>>> what we want (lock mutex, change data, unlock mutex). >>>>>>> [Murugan, Visnusaran] Are you suggesting something like a >>>>>>> select-for- >>>>>> update in resource table itself without having a lock table? >>>>>> >>>>>> Yes, that's exactly what I was suggesting. >>>>> >>>>> DB is always good for sync. But we need to be careful not to overdo it. >>>> >>>> Yeah, I see what you mean now, it's starting to _feel_ like there'd be too >>>> many things mixed together in the Resource table. Are you aware of some >>>> concrete harm that might cause though? What happens if we overdo it? Is >>>> select-for-update on a huge row more expensive than the whole overhead >>>> of manipulating the Lock? 
>>>> >>>> Just trying to figure out if intuition is leading me astray here. >>>> >>> You are right. There should be no difference apart from little bump >>> in memory usage. But I think it should be fine. >>> >>>>> Will update etherpad by tomorrow. >>>> >>>> OK, thanks. >>>> >>>> cheers, >>>> Zane. >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sumitnaiksatam at gmail.com Tue Dec 16 04:22:16 2014 From: sumitnaiksatam at gmail.com (Sumit Naiksatam) Date: Mon, 15 Dec 2014 20:22:16 -0800 Subject: [openstack-dev] [Neutron][Advanced Services] Suspending weekly IRC meetings Message-ID: Hi All, Since the split of the Neutron services (FWaaS, LBaaS, VPNaaS) into individual repositories is done, and the follow-up activities are progressing, I am proposing that we suspend the weekly IRC Advanced Services' meeting [1] until we need it again. Thanks, ~Sumit. [1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices -------------- next part -------------- An HTML attachment was scrubbed... URL: From clint at fewbar.com Tue Dec 16 04:26:06 2014 From: clint at fewbar.com (Clint Byrum) Date: Mon, 15 Dec 2014 20:26:06 -0800 Subject: [openstack-dev] [TripleO] mid-cycle details -- CONFIRMED Feb.
18 - 20 In-Reply-To: <1417474177-sup-8420@fewbar.com> References: <1417474177-sup-8420@fewbar.com> Message-ID: <1418703212-sup-8801@fewbar.com> I'm happy to announce we've cleared the schedule and the Mid-Cycle is confirmed for February 18 - 20 in Seattle, WA at HP's downtown offices. Please refer to the etherpad linked below for details including address and instructions for access to the building. PLEASE make sure you add yourself to the list of confirmed attendees on the etherpad *BEFORE* booking travel. We have a hard limit of 30 participants, so if you are not certain you have a spot, please contact me before booking travel. Excerpts from Clint Byrum's message of 2014-12-01 14:58:58 -0800: > Hello! I've received confirmation that our venue, the HP offices in > downtown Seattle, will be available for the most-often-preferred > least-often-cannot week of Feb 16 - 20. > > Our venue has a maximum of 20 participants, but I only have 16 possible > attendees now. Please add yourself to that list _now_ if you will be > joining us. > > I've asked our office staff to confirm Feb 18 - 20 (Wed-Fri). When they > do, I will reply to this thread to let everyone know so you can all > start to book travel. See the etherpad for travel details. 
> > https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup From cbkyeoh at gmail.com Tue Dec 16 05:07:31 2014 From: cbkyeoh at gmail.com (Christopher Yeoh) Date: Tue, 16 Dec 2014 05:07:31 +0000 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group References: <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> <548B60D0.7090600@dague.net> <548F1235.2020502@redhat.com> <936303430.2524172.1418663608280.JavaMail.zimbra@redhat.com> <695A9601-ECFA-4A85-BCBF-B4C89F473875@redhat.com> Message-ID: So I think this is something we really should get agreement on across the open stack API first before flipping back and forth on a case by case basis. Personally I think we should be using uuids for uniqueness and leave any extra restrictions to a ui layer if really required. If we try to have name uniqueness then "test " should be considered the same as " test" as " test " and it introduces all sorts of slightly different combos that look the same except under very close comparison. Add unicode for extra fun. Chris On Tue, 16 Dec 2014 at 7:24 am, Maru Newby wrote: > > On Dec 15, 2014, at 9:13 AM, Assaf Muller wrote: > > > > > > > ----- Original Message ----- > >> -----BEGIN PGP SIGNED MESSAGE----- > >> Hash: SHA512 > >> > >> I was (rightfully) asked to share my comments on the matter that I > >> left in gerrit here. See below. 
> >> > >> On 12/12/14 22:40, Sean Dague wrote: > >>> On 12/12/2014 01:05 PM, Maru Newby wrote: > >>>> > >>>> On Dec 11, 2014, at 2:27 PM, Sean Dague wrote: > >>>> > >>>>> On 12/11/2014 04:16 PM, Jay Pipes wrote: > >>>>>> On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote: > >>>>>>> On Dec 11, 2014, at 1:04 PM, Jay Pipes > >>>>>>> wrote: > >>>>>>>> On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote: > >>>>>>>>> > >>>>>>>>> On Dec 11, 2014, at 8:00 AM, Henry Gessau > >>>>>>>>> wrote: > >>>>>>>>> > >>>>>>>>>> On Thu, Dec 11, 2014, Mark McClain > >>>>>>>>>> wrote: > >>>>>>>>>>> > >>>>>>>>>>>> On Dec 11, 2014, at 8:43 AM, Jay Pipes > >>>>>>>>>>>> > > >>>>>>>>>>>> wrote: > >>>>>>>>>>>> > >>>>>>>>>>>> I'm generally in favor of making name attributes > >>>>>>>>>>>> opaque, utf-8 strings that are entirely > >>>>>>>>>>>> user-defined and have no constraints on them. I > >>>>>>>>>>>> consider the name to be just a tag that the user > >>>>>>>>>>>> places on some resource. It is the resource's ID > >>>>>>>>>>>> that is unique. > >>>>>>>>>>>> > >>>>>>>>>>>> I do realize that Nova takes a different approach > >>>>>>>>>>>> to *some* resources, including the security group > >>>>>>>>>>>> name. > >>>>>>>>>>>> > >>>>>>>>>>>> End of the day, it's probably just a personal > >>>>>>>>>>>> preference whether names should be unique to a > >>>>>>>>>>>> tenant/user or not. > >>>>>>>>>>>> > >>>>>>>>>>>> Maru had asked me my opinion on whether names > >>>>>>>>>>>> should be unique and I answered my personal > >>>>>>>>>>>> opinion that no, they should not be, and if > >>>>>>>>>>>> Neutron needed to ensure that there was one and > >>>>>>>>>>>> only one default security group for a tenant, > >>>>>>>>>>>> that a way to accomplish such a thing in a > >>>>>>>>>>>> race-free way, without use of SELECT FOR UPDATE, > >>>>>>>>>>>> was to use the approach I put into the pastebin > >>>>>>>>>>>> on the review above. > >>>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>> I agree with Jay. 
We should not care about how a > >>>>>>>>>>> user names the resource. There are other ways to > >>>>>>>>>>> prevent this race and Jay's suggestion is a good > >>>>>>>>>>> one. > >>>>>>>>>> > >>>>>>>>>> However we should open a bug against Horizon because > >>>>>>>>>> the user experience there is terrible with duplicate > >>>>>>>>>> security group names. > >>>>>>>>> > >>>>>>>>> The reason security group names are unique is that the > >>>>>>>>> ec2 api supports source rule specifications by > >>>>>>>>> tenant_id (user_id in amazon) and name, so not > >>>>>>>>> enforcing uniqueness means that invocation in the ec2 > >>>>>>>>> api will either fail or be non-deterministic in some > >>>>>>>>> way. > >>>>>>>> > >>>>>>>> So we should couple our API evolution to EC2 API then? > >>>>>>>> > >>>>>>>> -jay > >>>>>>> > >>>>>>> No I was just pointing out the historical reason for > >>>>>>> uniqueness, and hopefully encouraging someone to find the > >>>>>>> best behavior for the ec2 api if we are going to keep the > >>>>>>> incompatibility there. Also I personally feel the ux is > >>>>>>> better with unique names, but it is only a slight > >>>>>>> preference. > >>>>>> > >>>>>> Sorry for snapping, you made a fair point. > >>>>> > >>>>> Yeh, honestly, I agree with Vish. I do feel that the UX of > >>>>> that constraint is useful. Otherwise you get into having to > >>>>> show people UUIDs in a lot more places. While those are good > >>>>> for consistency, they are kind of terrible to show to people. > >>>> > >>>> While there is a good case for the UX of unique names - it also > >>>> makes orchestration via tools like puppet a heck of a lot simpler > >>>> - the fact is that most OpenStack resources do not require unique > >>>> names. That being the case, why would we want security groups to > >>>> deviate from this convention? > >>> > >>> Maybe the other ones are the broken ones? > >>> > >>> Honestly, any sanely usable system makes names unique inside a > >>> container.
Like files in a directory. > >> > >> Correct. Or take git: it does not use hashes to identify objects, right? > >> > >>> In this case the tenant is the container, which makes sense. > >>> > >>> It is one of many places that OpenStack is not consistent. But I'd > >>> rather make things consistent and more usable than consistent and > >>> less. > >> > >> Are we only proposing to make security group name unique? I assume > >> that, since that's what we currently have in review. The change would > >> make API *more* inconsistent, not less, since other objects still use > >> uuid for identification. > >> > >> You may say that we should move *all* neutron objects to the new > >> identification system by name. But what's the real benefit? > >> > >> If there are problems in UX (client, horizon, ...), we should fix the > >> view and not data models used. If we decide we want users to avoid > >> using objects with the same names, fine, let's add warnings in UI > >> (probably with an option to disable it so that we don't push the > >> validation into their throats). > >> > >> Finally, I have concern about us changing user visible object > >> attributes like names during db migrations, as it's proposed in the > >> patch discussed here. I think such behaviour can be quite unexpected > >> for some users, if not breaking their workflow and/or scripts. > >> > >> My belief is that responsible upstream does not apply ad-hoc changes > >> to API to fix a race condition that is easily solvable in other ways > >> (see Assaf's proposal to introduce a new DefaultSecurityGroups table > >> in patchset 12 comments). > >> > > > > As usual you explain yourself better than I can... I think my main > > original objection to the patch is that it feels like an accidental > > API change to fix a bug. 
If you want unique naming: > > 1) We need to be consistent across different resources > > 2) It needs to be in a dedicated change, and perhaps a blueprint > > > > Since there are conceivable alternative solutions to the bug that aren't > > substantially more costly or complicated, I don't see why we would pursue > > the proposed approach. > > > +1 > > Regardless of the merits of security groups having unique names, I don't > think it is a change that should be slipped in as part of a bugfix. If we > want to see this kind of API-modifying change introduced in Neutron (or any > other OpenStack project), there is a process that needs to be followed. > > > > Maru > > > > > >> As for the whole object identification scheme change, for this to > >> work, it probably needs a spec and a long discussion on any possible > >> complications (and benefits) when applying a change like that. > >> > >> For reference and convenience of readers, leaving the link to the > >> patch below: https://review.openstack.org/#/c/135006/ > >> > >> > >> > >>> > >>> -Sean > >>> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > >
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From irenab at mellanox.com Tue Dec 16 05:42:06 2014 From: irenab at mellanox.com (Irena Berezovsky) Date: Tue, 16 Dec 2014 05:42:06 +0000 Subject: [openstack-dev] SRIOV-error In-Reply-To: References: Message-ID: Hi David, Your error is not related to the agent. I would suggest checking: 1. nova.conf at your compute node for pci whitelist configuration 2. Neutron server configuration for correct physical_network label matching the label in pci whitelist 3. Nova DB tables containing PCI devices entries: "#echo 'use nova;select hypervisor_hostname,pci_stats from compute_nodes;' | mysql -u root" You should not run the SR-IOV agent in your setup. The SR-IOV agent is optional and currently does not add value if you use an Intel NIC.
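[A hedged sketch of what the whitelist in point 1 often looks like for this hardware. Values are illustrative only: 8086:10ed is the PCI ID of the Intel 82599 Virtual Function seen in the lspci output elsewhere in this thread, and "physnet1" is a placeholder that must match the physical_network label in the Neutron ML2/SR-IOV configuration, per point 2.]

```ini
# /etc/nova/nova.conf on the compute node -- illustrative sketch, not a
# verbatim recommendation; adjust the IDs and network label to your site.
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ed", "physical_network": "physnet1"}
```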
Regards, Irena From: david jhon [mailto:djhon9813 at gmail.com] Sent: Tuesday, December 16, 2014 5:54 AM To: Murali B Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] SRIOV-error Just to be more clear, command $lspci | grep -i Ethernet gives following output: 01:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 01:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 03:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 03:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 03:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 04:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 04:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual 
Function (rev 01) 04:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) How can I make SR-IOV agent run and fix this bug? On Tue, Dec 16, 2014 at 8:36 AM, david jhon > wrote: Hi Murali, Thanks for your response, I did the same, it has resolved errors apparently but 1) neutron agent-list shows no agent for sriov, 2) neutron port is created successfully but creating vm is erred in scheduling as follows: result from neutron agent-list: +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ | id | agent_type | host | alive | admin_state_up | binary | +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ | 2acc7044-e552-4601-b00b-00ba591b453f | Open vSwitch agent | blade08 | xxx | True | neutron-openvswitch-agent | | 595d07c6-120e-42ea-a950-6c77a6455f10 | Metadata agent | blade08 | :-) | True | neutron-metadata-agent | | a1f253a8-e02e-4498-8609-4e265285534b | DHCP agent | blade08 | :-) | True | neutron-dhcp-agent | | d46b29d8-4b5f-4838-bf25-b7925cb3e3a7 | L3 agent | blade08 | :-) | True | neutron-l3-agent | +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ 2014-12-15 19:30:44.546 40249 ERROR oslo.messaging.rpc.dispatcher [req-c7741cff-a7d8-422f-b605-6a1d976aeb09 ] Exception during message handling: PCI $ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last): 2014-12-15 
19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 13$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher incoming.message)) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 17$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 12$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139, i$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher return func(*args, **kwargs) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 175, in s$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher filter_properties) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line $ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher filter_properties) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line $ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher chosen_host.obj.consume_from_instance(instance_properties) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 246,$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher 
self.pci_stats.apply_requests(pci_requests.requests) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/pci/pci_stats.py", line 209, in apply$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher raise exception.PciDeviceRequestFailed(requests=requests) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher PciDeviceRequestFailed: PCI device request ({'requests': [InstancePCIRequest(alias_$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher. Moreover, no /var/log/sriov-agent.log file exists. Please help me to fix this issue. Thanks everyone! On Mon, Dec 15, 2014 at 5:18 PM, Murali B > wrote: Hi David, Please add as per the Irena suggestion FYI: refer the below configuration http://pastebin.com/DGmW7ZEg Thanks -Murali -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald.d.dugger at intel.com Tue Dec 16 06:24:59 2014 From: donald.d.dugger at intel.com (Dugger, Donald D) Date: Tue, 16 Dec 2014 06:24:59 +0000 Subject: [openstack-dev] [gantt] Scheduler sub-group meeting agenda 12/16 Message-ID: <6AF484C0160C61439DE06F17668F3BCB5344F948@ORSMSX114.amr.corp.intel.com> Meeting on #openstack-meeting at 1500 UTC (8:00AM MST) 1) Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo -- Don Dugger "Censeo Toto nos in Kansa esse decisse." - D. Gale Ph: 303/443-3786 -------------- next part -------------- An HTML attachment was scrubbed... 
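For anyone chasing the PciDeviceRequestFailed trace above: the scheduler fails a host when none of its PCI stats pools can satisfy the instance's PCI requests. A deliberately simplified sketch of that pool-consumption step (hypothetical names; the real logic lives in nova/pci/pci_stats.py and matches on more fields):

```python
class PciDeviceRequestFailed(Exception):
    """Raised when no pool can satisfy a PCI device request."""

def apply_requests(pools, requests):
    """Consume devices from PCI stats pools.

    Each pool is a dict like {"count": 8, "vendor_id": "8086",
    "product_id": "10ed", "physical_network": "ext-net"}; each request
    carries a spec whose every attribute must equal the pool's value.
    """
    for req in requests:
        for pool in pools:
            if (all(pool.get(k) == v for k, v in req["spec"].items())
                    and pool["count"] >= req["count"]):
                pool["count"] -= req["count"]
                break
        else:
            # No pool matched: this is the condition that bubbles up as
            # PciDeviceRequestFailed in the scheduler traceback above.
            raise PciDeviceRequestFailed(req)
```

A request whose spec (for example its physical_network label) matches no pool fails exactly this way on every host, which is why comparing the whitelist label against the Neutron network label is the first thing to check.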
URL: From irenab at mellanox.com Tue Dec 16 06:29:30 2014 From: irenab at mellanox.com (Irena Berezovsky) Date: Tue, 16 Dec 2014 06:29:30 +0000 Subject: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change In-Reply-To: References: <87egs0re9m.fsf@metaswitch.com> Message-ID: -----Original Message----- From: henry hly [mailto:] Sent: Tuesday, December 16, 2014 3:12 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change On Tue, Dec 16, 2014 at 1:53 AM, Neil Jerram wrote: > Hi all, > > Following the approval for Neutron vendor code decomposition > (https://review.openstack.org/#/c/134680/), I just wanted to comment > that it appears to work fine to have an ML2 mechanism driver > _entirely_ out of tree, so long as the vendor repository that provides > the ML2 mechanism driver does something like this to register their > driver as a neutron.ml2.mechanism_drivers entry point: > > setuptools.setup( > ..., > entry_points = { > ..., > 'neutron.ml2.mechanism_drivers': [ > 'calico = xyz.openstack.mech_xyz:XyzMechanismDriver', > ], > }, > ) > > (Please see > https://github.com/Metaswitch/calico/commit/488dcd8a51d7c6a1a2f0378900 > 1c2139b16de85c for the complete change and detail, for the example > that works for me.) > > Then Neutron and the vendor package can be separately installed, and > the vendor's driver name configured in ml2_conf.ini, and everything works. > > Given that, I wonder: > > - is that what the architects of the decomposition are expecting? > > - other than for the reference OVS driver, are there any reasons in > principle for keeping _any_ ML2 mechanism driver code in tree? > Good questions. I'm also looking for the linux bridge MD, SRIOV MD... Who will be responsible for these drivers? Excellent question. In my opinion, 'technology' specific but not vendor specific MD (like SRIOV) should not be maintained by specific vendor. 
It should be accessible to all interested parties for contribution. The OVS driver is maintained by the Neutron community, vendor-specific hardware drivers by their vendors, and SDN controller drivers by their own community or vendor. But there are also other drivers like SRIOV, which are generic across a lot of vendor-agnostic backends and can't be maintained by any single vendor/community. So, it would be better to keep some "general backend" MDs in tree besides SRIOV. There are also vif-type-tap, vif-type-vhostuser, hierarchy-binding-external-VTEP ... We can implement a very thin in-tree base MD that only handles "vif bind" and is backend agnostic; the backend provider is then free to implement its own service logic, either via a backend agent or via a driver derived from the base MD for agentless scenarios. Keeping general backend MDs in tree sounds reasonable. Regards > Many thanks, > Neil > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From trinath.somanchi at freescale.com Tue Dec 16 06:37:03 2014 From: trinath.somanchi at freescale.com (trinath.somanchi at freescale.com) Date: Tue, 16 Dec 2014 06:37:03 +0000 Subject: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change In-Reply-To: References: <87egs0re9m.fsf@metaswitch.com> Message-ID: But then, is this decomposition only for the ML2 MDs, or for all vendor-specific plugins/drivers in all the subdivisions around Neutron? Any comments on the same.
-- Trinath Somanchi - B39208 trinath.somanchi at freescale.com | extn: 4048 -----Original Message----- From: Irena Berezovsky [mailto:irenab at mellanox.com] Sent: Tuesday, December 16, 2014 12:00 PM To: OpenStack Development Mailing List (not for usage questions) Cc: henry4hly at gmail.com Subject: Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change -----Original Message----- From: henry hly [mailto:] Sent: Tuesday, December 16, 2014 3:12 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change On Tue, Dec 16, 2014 at 1:53 AM, Neil Jerram wrote: > Hi all, > > Following the approval for Neutron vendor code decomposition > (https://review.openstack.org/#/c/134680/), I just wanted to comment > that it appears to work fine to have an ML2 mechanism driver > _entirely_ out of tree, so long as the vendor repository that provides > the ML2 mechanism driver does something like this to register their > driver as a neutron.ml2.mechanism_drivers entry point: > > setuptools.setup( > ..., > entry_points = { > ..., > 'neutron.ml2.mechanism_drivers': [ > 'calico = xyz.openstack.mech_xyz:XyzMechanismDriver', > ], > }, > ) > > (Please see > https://github.com/Metaswitch/calico/commit/488dcd8a51d7c6a1a2f0378900 > 1c2139b16de85c for the complete change and detail, for the example > that works for me.) > > Then Neutron and the vendor package can be separately installed, and > the vendor's driver name configured in ml2_conf.ini, and everything works. > > Given that, I wonder: > > - is that what the architects of the decomposition are expecting? > > - other than for the reference OVS driver, are there any reasons in > principle for keeping _any_ ML2 mechanism driver code in tree? > Good questions. I'm also looking for the linux bridge MD, SRIOV MD... Who will be responsible for these drivers? Excellent question. 
In my opinion, a 'technology'-specific but not vendor-specific MD (like SRIOV) should not be maintained by a specific vendor. It should be accessible to all interested parties for contribution. The OVS driver is maintained by the Neutron community, vendor-specific hardware drivers by their vendors, and SDN controller drivers by their own community or vendor. But there are also other drivers like SRIOV, which are generic across a lot of vendor-agnostic backends and can't be maintained by any single vendor/community. So, it would be better to keep some "general backend" MDs in tree besides SRIOV. There are also vif-type-tap, vif-type-vhostuser, hierarchy-binding-external-VTEP ... We can implement a very thin in-tree base MD that only handles "vif bind" and is backend agnostic; the backend provider is then free to implement its own service logic, either via a backend agent or via a driver derived from the base MD for agentless scenarios. Keeping general backend MDs in tree sounds reasonable. Regards > Many thanks, > Neil > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From armamig at gmail.com Tue Dec 16 06:36:42 2014 From: armamig at gmail.com (Armando M.)
Date: Mon, 15 Dec 2014 22:36:42 -0800 Subject: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change In-Reply-To: <87egs0re9m.fsf@metaswitch.com> References: <87egs0re9m.fsf@metaswitch.com> Message-ID: On 15 December 2014 at 09:53, Neil Jerram wrote: > > Hi all, > > Following the approval for Neutron vendor code decomposition > (https://review.openstack.org/#/c/134680/), I just wanted to comment > that it appears to work fine to have an ML2 mechanism driver _entirely_ > out of tree, so long as the vendor repository that provides the ML2 > mechanism driver does something like this to register their driver as a > neutron.ml2.mechanism_drivers entry point: > > setuptools.setup( > ..., > entry_points = { > ..., > 'neutron.ml2.mechanism_drivers': [ > 'calico = xyz.openstack.mech_xyz:XyzMechanismDriver', > ], > }, > ) > > (Please see > > https://github.com/Metaswitch/calico/commit/488dcd8a51d7c6a1a2f03789001c2139b16de85c > for the complete change and detail, for the example that works for me.) > > Then Neutron and the vendor package can be separately installed, and the > vendor's driver name configured in ml2_conf.ini, and everything works. > > Given that, I wonder: > > - is that what the architects of the decomposition are expecting? > - other than for the reference OVS driver, are there any reasons in > principle for keeping _any_ ML2 mechanism driver code in tree? > The approach you outlined is reasonable, and new plugins/drivers, like yours, may find it easier to approach Neutron integration this way. However, to ensure a smoother migration path for existing plugins and drivers, it was deemed more sensible to go down the path being proposed in the spec referenced above. 
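As a side note on Neil's out-of-tree registration: the reason it "just works" is that entry points registered by any installed package are discoverable by name. A minimal sketch of that lookup using only the standard library (Neutron itself loads drivers through stevedore, which wraps the same machinery; the group name below mirrors Neil's example):

```python
from importlib import metadata

def load_driver(name, group="neutron.ml2.mechanism_drivers"):
    """Return the class registered under `name` in the entry-point
    group, or None when nothing is registered under that name."""
    eps = metadata.entry_points()
    if hasattr(eps, "select"):            # Python 3.10+
        candidates = eps.select(group=group)
    else:                                 # Python 3.8/3.9 (dict-style API)
        candidates = eps.get(group, [])
    for ep in candidates:
        if ep.name == name:
            return ep.load()
    return None
```

With Neil's setup.py installed, `load_driver("calico")` would import and return `XyzMechanismDriver` without Neutron ever needing the vendor code in tree.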
> > Many thanks, > Neil > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbirru at gmail.com Tue Dec 16 06:45:13 2014 From: mbirru at gmail.com (Murali B) Date: Tue, 16 Dec 2014 12:15:13 +0530 Subject: [openstack-dev] SRIOV-error In-Reply-To: References: Message-ID: Hi David, >From the neutron agent-list I can see that you are not running the sriov-agent. Please run the sriov-agent after adding the below configuration. Add the below configuration at controller in /etc/neutron/plugins/ml2/ml2_conf_sriov.ini [ml2_sriov] # (ListOpt) Comma-separated list of # supported Vendor PCI Devices, in format vendor_id:product_id # #* supported_pci_vendor_devs = 15b3:1004, 8086:10c9* *supported_pci_vendor_devs = 8086:10c9* # Example: supported_pci_vendor_devs = 15b3:1004 # # (BoolOpt) Requires running SRIOV neutron agent for port binding agent_required = True Thanks -Murali On Tue, Dec 16, 2014 at 9:06 AM, david jhon wrote: > > Hi Murali, > > Thanks for your response, I did the same, it has resolved errors > apparently but 1) neutron agent-list shows no agent for sriov, 2) neutron > port is created successfully but creating vm is erred in scheduling as > follows: > > result from neutron agent-list: > > +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ > | id | agent_type | host | > alive | admin_state_up | binary | > > +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ > | 2acc7044-e552-4601-b00b-00ba591b453f | Open vSwitch agent | blade08 | > xxx | True | neutron-openvswitch-agent | > | 595d07c6-120e-42ea-a950-6c77a6455f10 | Metadata agent | blade08 | > :-) | True | neutron-metadata-agent | > | 
a1f253a8-e02e-4498-8609-4e265285534b | DHCP agent | blade08 | > :-) | True | neutron-dhcp-agent | > | d46b29d8-4b5f-4838-bf25-b7925cb3e3a7 | L3 agent | blade08 | > :-) | True | neutron-l3-agent | > > +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ > > 2014-12-15 19:30:44.546 40249 ERROR oslo.messaging.rpc.dispatcher > [req-c7741cff-a7d8-422f-b605-6a1d976aeb09 ] Exception during message > handling: PCI $ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > Traceback (most recent call last): > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line > 13$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > incoming.message)) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line > 17$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > return self._do_dispatch(endpoint, method, ctxt, args) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line > 12$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > result = getattr(endpoint, method)(ctxt, **new_args) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139, > i$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > return func(*args, **kwargs) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 175, in > s$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > filter_properties) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > 
"/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line > $ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > filter_properties) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line > $ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > chosen_host.obj.consume_from_instance(instance_properties) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line > 246,$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > self.pci_stats.apply_requests(pci_requests.requests) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/pci/pci_stats.py", line 209, in > apply$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > raise exception.PciDeviceRequestFailed(requests=requests) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > PciDeviceRequestFailed: PCI device request ({'requests': > [InstancePCIRequest(alias_$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher. > > Moreover, no /var/log/sriov-agent.log file exists. Please help me to fix > this issue. Thanks everyone! > > On Mon, Dec 15, 2014 at 5:18 PM, Murali B wrote: >> >> Hi David, >> >> Please add as per the Irena suggestion >> >> FYI: refer the below configuration >> >> http://pastebin.com/DGmW7ZEg >> >> >> Thanks >> -Murali >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rakhmerov at mirantis.com Tue Dec 16 06:53:52 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Tue, 16 Dec 2014 12:53:52 +0600 Subject: [openstack-dev] [Mistral] For-each In-Reply-To: <58CB487D-D8AB-401C-8ABC-CF3F088552DC@stackstorm.com> References: <58CB487D-D8AB-401C-8ABC-CF3F088552DC@stackstorm.com> Message-ID: Thanks Nikolay, I also left my comments and tend to like Alt2 better than the others. Agree with Dmitri that the "all-permutations" thing can just be a different construct in the language, and "concurrency" should be a policy rather than a property of "for-each", because it doesn't have any impact on the workflow logic itself; it only influences the way the engine runs a task. So again, policies are engine capabilities, not workflow ones. One tricky question that's still in the air is how to deal with publishing. In terms of requirements it's pretty clear: we need to apply "publish" once after all iterations and be able to access an array of iteration results as $. But technically, it may be a problem to implement such behavior; need to think about it more. Renat Akhmerov @ Mirantis Inc. From armamig at gmail.com Tue Dec 16 06:54:24 2014 From: armamig at gmail.com (Armando M.) Date: Mon, 15 Dec 2014 22:54:24 -0800 Subject: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change In-Reply-To: References: <87egs0re9m.fsf@metaswitch.com> Message-ID: > > > > Good questions. I'm also looking for the linux bridge MD, SRIOV MD... > Who will be responsible for these drivers? > > Excellent question. In my opinion, 'technology' specific but not vendor > specific MD (like SRIOV) should not be maintained by specific vendor. It > should be accessible for all interested parties for contribution. > I don't think that anyone is making the suggestion of making these drivers develop in silos; rather, one of the objectives is to allow them to evolve more rapidly, and in the open, where anyone can participate.
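Returning to Renat's Mistral note above: treating "concurrency" as an engine policy rather than part of the for-each semantics can be illustrated with a thread pool, where the worker limit changes scheduling but not the published result — "publish" always sees the full ordered array of iteration results. A hypothetical illustration, not Mistral engine code:

```python
from concurrent.futures import ThreadPoolExecutor

def run_for_each(action, items, concurrency=2):
    """Run `action` once per item.

    `concurrency` caps how many iterations run at once, but the return
    value is always the ordered list of all iteration outputs -- the
    array a final `publish` step would see as $.
    """
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(action, items))
```

Changing `concurrency` here changes wall-clock behaviour only; the published array is identical either way, which is Renat's point that the policy has no impact on workflow logic.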
> > The OVS driver is maintained by Neutron community, vendor specific > hardware driver by vendor, SDN controllers driver by their own community or > vendor. But there are also other drivers like SRIOV, which are general for > a lot of vendor agnostic backends, and can't be maintained by a certain > vendor/community. > Certain technologies, like the ones mentioned above, may require specific hardware; even though they may not be particularly associated with a specific vendor, some sort of vendor support is indeed required, like 3rd party CI. So, grouping them together under a hw-accelerated umbrella, or whichever other name sticks, may make sense long term should the number of drivers really ramp up as hinted below. > > So, it would be better to keep some "general backend" MD in tree besides > SRIOV. There are also vif-type-tap, vif-type-vhostuser, > hierarchy-binding-external-VTEP ... We can implement a very thin in-tree > base MD that only handles "vif bind" which is backend agnostic, then backend > provider is free to implement their own service logic, either by a backend > agent, or by a driver derived from the base MD for agentless scenarios. > > Keeping general backend MDs in tree sounds reasonable. > Regards > > > Many thanks, > > Neil > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From djhon9813 at gmail.com Tue Dec 16 07:43:32 2014 From: djhon9813 at gmail.com (david jhon) Date: Tue, 16 Dec 2014 12:43:32 +0500 Subject: [openstack-dev] SRIOV-error In-Reply-To: References: Message-ID: Hi Irena and Murali, Thanks a lot for your reply! Here is the output from pci_devices table of nova db: select * from pci_devices; +---------------------+------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+-----------------------------------+---------------+------------+ | created_at | updated_at | deleted_at | deleted | id | compute_node_id | address | product_id | vendor_id | dev_type | dev_id | label | status | extra_info | instance_uuid | request_id | +---------------------+------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+-----------------------------------+---------------+------------+ | 2014-12-15 12:10:52 | NULL | NULL | 0 | 1 | 1 | 0000:03:10.0 | 10ed | 8086 | type-VF | pci_0000_03_10_0 | label_8086_10ed | available | {"phys_function": "0000:03:00.0"} | NULL | NULL | | 2014-12-15 12:10:52 | NULL | NULL | 0 | 2 | 1 | 0000:03:10.2 | 10ed | 8086 | type-VF | pci_0000_03_10_2 | label_8086_10ed | available | {"phys_function": "0000:03:00.0"} | NULL | NULL | | 2014-12-15 12:10:52 | NULL | NULL | 0 | 3 | 1 | 0000:03:10.4 | 10ed | 8086 | type-VF | pci_0000_03_10_4 | label_8086_10ed | available | {"phys_function": "0000:03:00.0"} | NULL | NULL | | 2014-12-15 12:10:52 | NULL | NULL | 0 | 4 | 1 | 0000:03:10.6 | 10ed | 8086 | type-VF | pci_0000_03_10_6 | label_8086_10ed | available | {"phys_function": "0000:03:00.0"} | NULL | NULL | | 2014-12-15 12:10:53 | NULL | NULL | 0 | 5 | 1 | 0000:03:10.1 | 10ed | 8086 | type-VF | pci_0000_03_10_1 | label_8086_10ed | available | {"phys_function": "0000:03:00.1"} | NULL | NULL | | 2014-12-15 12:10:53 | NULL | 
NULL | 0 | 6 | 1 | 0000:03:10.3 | 10ed | 8086 | type-VF | pci_0000_03_10_3 | label_8086_10ed | available | {"phys_function": "0000:03:00.1"} | NULL | NULL | | 2014-12-15 12:10:53 | NULL | NULL | 0 | 7 | 1 | 0000:03:10.5 | 10ed | 8086 | type-VF | pci_0000_03_10_5 | label_8086_10ed | available | {"phys_function": "0000:03:00.1"} | NULL | NULL | | 2014-12-15 12:10:53 | NULL | NULL | 0 | 8 | 1 | 0000:03:10.7 | 10ed | 8086 | type-VF | pci_0000_03_10_7 | label_8086_10ed | available | {"phys_function": "0000:03:00.1"} | NULL | NULL | +---------------------+------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+-----------------------------------+---------------+------------+ output from select hypervisor_hostname,pci_stats from compute_nodes; is: +---------------------+-------------------------------------------------------------------------------------------+ | hypervisor_hostname | pci_stats | +---------------------+-------------------------------------------------------------------------------------------+ | blade08 | [{"count": 8, "vendor_id": "8086", "physical_network": "ext-net", "product_id": "10ed"}] | +---------------------+-------------------------------------------------------------------------------------------+ Moreover, I have set agent_required = True in /etc/neutron/plugins/ml2/ml2_conf_sriov.ini. but still found no sriov agent running. 
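For reference, the pci_stats value shown above is just the pci_devices rows aggregated into pools. A rough sketch of that aggregation (simplified; Nova's actual pooling in nova/pci/pci_stats.py keys on more attributes, and the physical_network comes from the whitelist rather than the device row itself):

```python
from collections import Counter

def build_pci_stats(devices):
    """Aggregate available PCI devices into pools keyed by
    (vendor_id, product_id, physical_network), which is roughly what
    ends up serialized into compute_nodes.pci_stats."""
    pools = Counter(
        (d["vendor_id"], d["product_id"], d.get("physical_network"))
        for d in devices
        if d["status"] == "available"
    )
    return [
        {"vendor_id": v, "product_id": p, "physical_network": net, "count": n}
        for (v, p, net), n in pools.items()
    ]
```

The eight available VFs in the table above collapse into the single `count: 8` pool shown in the compute_nodes query, so the pool side of David's setup looks healthy; the mismatch must be on the request side.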
# Defines configuration options for SRIOV NIC Switch MechanismDriver # and Agent [ml2_sriov] # (ListOpt) Comma-separated list of # supported Vendor PCI Devices, in format vendor_id:product_id # #supported_pci_vendor_devs = 8086:10ca, 8086:10ed supported_pci_vendor_devs = 8086:10ed # Example: supported_pci_vendor_devs = 15b3:1004 # # (BoolOpt) Requires running SRIOV neutron agent for port binding agent_required = True [sriov_nic] # (ListOpt) Comma-separated list of : # tuples mapping physical network names to the agent's node-specific # physical network device interfaces of SR-IOV physical function to be used # for VLAN networks. All physical networks listed in network_vlan_ranges on # the server should have mappings to appropriate interfaces on each agent. # physical_device_mappings = ext-net:br-ex # Example: physical_device_mappings = physnet1:eth1 # # (ListOpt) Comma-separated list of : # tuples, mapping network_device to the agent's node-specific list of virtual # functions that should not be used for virtual networking. # vfs_to_exclude is a semicolon-separated list of virtual # functions to exclude from network_device. The network_device in the # mapping should appear in the physical_device_mappings list. # exclude_devices = # Example: exclude_devices = eth1:0000:07:00.2; 0000:07:00.3 ======================================================================================== pci_passthrough_whitelist from /etc/nova/nova.conf: pci_passthrough_whitelist = {"address":"*:03:10.*","physical_network":"ext-net"} ==================================================== /etc/neutron/plugins/ml2/ml2_conf.ini: [ml2] # (ListOpt) List of network type driver entrypoints to be loaded from # the neutron.ml2.type_drivers namespace. # #type_drivers = local,flat,vlan,gre,vxlan #Example: type_drivers = flat,vlan,gre,vxlan #type_drivers = flat,gre, vlan type_drivers = flat,vlan # (ListOpt) Ordered list of network_types to allocate as tenant # networks. 
The default value 'local' is useful for single-box testing # but provides no connectivity between hosts. # # tenant_network_types = local # Example: tenant_network_types = vlan,gre,vxlan #tenant_network_types = gre, vlan tenant_network_types = vlan # (ListOpt) Ordered list of networking mechanism driver entrypoints # to be loaded from the neutron.ml2.mechanism_drivers namespace. mechanism_drivers = openvswitch, sriovnicswitch # Example: mechanism_drivers = openvswitch,mlnx # Example: mechanism_drivers = arista # Example: mechanism_drivers = cisco,logger # Example: mechanism_drivers = openvswitch,brocade # Example: mechanism_drivers = linuxbridge,brocade # (ListOpt) Ordered list of extension driver entrypoints # to be loaded from the neutron.ml2.extension_drivers namespace. # extension_drivers = # Example: extension_drivers = anewextensiondriver [ml2_type_flat] # (ListOpt) List of physical_network names with which flat networks # can be created. Use * to allow flat networks with arbitrary # physical_network names. # flat_networks = ext-net # Example:flat_networks = physnet1,physnet2 # Example:flat_networks = * [ml2_type_vlan] # (ListOpt) List of [::] tuples # specifying physical_network names usable for VLAN provider and # tenant networks, as well as ranges of VLAN tags on each # physical_network available for allocation as tenant networks. network_vlan_ranges = ext-net:2:100 # Example: network_vlan_ranges = physnet1:1000:2999,physnet2 [ml2_type_gre] # (ListOpt) Comma-separated list of : tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation #tunnel_id_ranges = 1:1000 [ml2_type_vxlan] # (ListOpt) Comma-separated list of : tuples enumerating # ranges of VXLAN VNI IDs that are available for tenant network allocation. # # vni_ranges = # (StrOpt) Multicast group for the VXLAN interface. When configured, will # enable sending all broadcast traffic to this multicast group. When left # unconfigured, will disable multicast VXLAN mode. 
# # vxlan_group = # Example: vxlan_group = 239.1.1.1 [securitygroup] # Controls if neutron security group is enabled or not. # It should be false when you use nova security group. enable_security_group = True # Use ipset to speed-up the iptables security groups. Enabling ipset support # requires that ipset is installed on L2 agent node. enable_ipset = True firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver [ovs] local_ip = controller enable_tunneling = True bridge_mappings = external:br-ex [agent] tunnel_types = vlan [ml2_sriov] agent_required = True Please tell me what is wrong in there, plus what exactly "physnet1" should be? Thanks again for all your help and suggestions.... Regards, On Tue, Dec 16, 2014 at 10:42 AM, Irena Berezovsky wrote: > > Hi David, > > Your error is not related to the agent. > > I would suggest to check: > > 1. nova.conf at your compute node for pci whitelist configuration > > 2. Neutron server configuration for correct physical_network label > matching the label in pci whitelist > > 3. Nova DB tables containing PCI devices entries: > > "#echo 'use nova;select hypervisor_hostname,pci_stats from compute_nodes;' > | mysql -u root" > > You should not run the SR-IOV agent in your setup. The SR-IOV agent is > optional and currently does not add value if you use an Intel NIC.
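On Irena's first check: whether a VF ends up in the pools at all depends on the pci_passthrough_whitelist address pattern matching the lspci addresses. The wildcard form used above ("*:03:10.*") behaves like a shell glob, so a quick local sanity check could look like this (illustrative only; Nova's real whitelist matching in nova/pci is more involved):

```python
from fnmatch import fnmatch

def whitelisted(address, pattern="*:03:10.*"):
    """True when a full PCI address like '0000:03:10.1' matches the
    shell-style wildcard from pci_passthrough_whitelist."""
    return fnmatch(address, pattern)
```

Running the VF addresses from the pci_devices table through this shows that the 03:10.* functions match but the 04:10.* functions do not, so only the first NIC's VFs would be whitelisted with this pattern.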
> > > > > > Regards, > > Irena > > *From:* david jhon [mailto:djhon9813 at gmail.com] > *Sent:* Tuesday, December 16, 2014 5:54 AM > *To:* Murali B > *Cc:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] SRIOV-error > > > > Just to be more clear, command $lspci | grep -i Ethernet gives following > output: > > 01:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port > Backplane Connection (rev 01) > 01:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port > Backplane Connection (rev 01) > 03:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port > Backplane Connection (rev 01) > 03:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port > Backplane Connection (rev 01) > 03:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 03:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 03:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 03:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 03:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 03:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 03:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 03:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 04:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port > Backplane Connection (rev 01) > 04:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port > Backplane Connection (rev 01) > 04:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 04:10.1 Ethernet controller: Intel Corporation 82599 Ethernet 
Controller > Virtual Function (rev 01) > 04:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 04:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 04:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 04:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 04:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > 04:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller > Virtual Function (rev 01) > > How can I make SR-IOV agent run and fix this bug? > > > > > > On Tue, Dec 16, 2014 at 8:36 AM, david jhon wrote: > > Hi Murali, > > Thanks for your response, I did the same, it has resolved errors > apparently but 1) neutron agent-list shows no agent for sriov, 2) neutron > port is created successfully but creating vm is erred in scheduling as > follows: > > result from neutron agent-list: > > +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ > | id | agent_type | host | > alive | admin_state_up | binary | > > +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ > | 2acc7044-e552-4601-b00b-00ba591b453f | Open vSwitch agent | blade08 | > xxx | True | neutron-openvswitch-agent | > | 595d07c6-120e-42ea-a950-6c77a6455f10 | Metadata agent | blade08 | > :-) | True | neutron-metadata-agent | > | a1f253a8-e02e-4498-8609-4e265285534b | DHCP agent | blade08 | > :-) | True | neutron-dhcp-agent | > | d46b29d8-4b5f-4838-bf25-b7925cb3e3a7 | L3 agent | blade08 | > :-) | True | neutron-l3-agent | > > +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ > > 2014-12-15 19:30:44.546 40249 ERROR 
oslo.messaging.rpc.dispatcher > [req-c7741cff-a7d8-422f-b605-6a1d976aeb09 ] Exception during message > handling: PCI $ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > Traceback (most recent call last): > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line > 13$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > incoming.message)) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line > 17$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > return self._do_dispatch(endpoint, method, ctxt, args) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line > 12$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > result = getattr(endpoint, method)(ctxt, **new_args) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139, > i$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > return func(*args, **kwargs) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 175, in > s$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > filter_properties) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line > $ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > filter_properties) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line > $ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > 
chosen_host.obj.consume_from_instance(instance_properties) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line > 246,$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > self.pci_stats.apply_requests(pci_requests.requests) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/pci/pci_stats.py", line 209, in > apply$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > raise exception.PciDeviceRequestFailed(requests=requests) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > PciDeviceRequestFailed: PCI device request ({'requests': > [InstancePCIRequest(alias_$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher. > > Moreover, no /var/log/sriov-agent.log file exists. Please help me to fix > this issue. Thanks everyone! > > > > On Mon, Dec 15, 2014 at 5:18 PM, Murali B wrote: > > Hi David, > > > > Please add as per the Irena suggestion > > > > FYI: refer the below configuration > > > > http://pastebin.com/DGmW7ZEg > > > > > > Thanks > > -Murali > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eduard.matei at cloudfounders.com Tue Dec 16 08:41:06 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Tue, 16 Dec 2014 10:41:06 +0200 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: References: <5486D947.4090209@hp.com> Message-ID: Hi, Can someone point me to some working documentation on how to setup third party CI? 
(joinfu's instructions don't seem to work, and manually running devstack-gate scripts fails: Running gate_hook Job timeout set to: 163 minutes timeout: failed to run command '/opt/stack/new/devstack-gate/devstack-vm-gate.sh': No such file or directory ERROR: the main setup script run by this job failed - exit code: 127 please look at the relevant log files to determine the root cause Cleaning up host ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz) Build step 'Execute shell' marked build as failure.) I have a working Jenkins slave with devstack and our internal libraries, I have the Gerrit Trigger Plugin working and triggering on patches created, I just need the actual job contents so that it can get to comment with the test results. Thanks, Eduard On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei wrote: > Hi Darragh, thanks for your input > > I double checked the job settings and fixed it: > - build triggers is set to Gerrit event > - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin > and tested separately) > - Trigger on: Patchset Created > - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: > Type: Path, Pattern: ** (was Type Plain on both) > Now the job is triggered by commit on openstack-dev/sandbox :) > > Regarding the Query and Trigger Gerrit Patches, I found my patch using > query: status:open project:openstack-dev/sandbox change:139585 and I can > trigger it manually and it executes the job. > > But I still have the problem: what should the job do? It doesn't actually > do anything, it doesn't run tests or comment on the patch. > Do you have an example of a job? > > Thanks, > Eduard > > On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh wrote: > >> Hi Eduard, >> >> >> I would check the trigger settings in the job, particularly which "type" >> of pattern matching is being used for the branches. 
Found it tends to be >> the spot that catches most people out when configuring jobs with the >> Gerrit Trigger plugin. If you're looking to trigger against all branches >> then you would want "Type: Path" and "Pattern: **" appearing in the UI. >> >> If you have sufficient access, using the 'Query and Trigger Gerrit >> Patches' page accessible from the main view will make it easier to >> confirm that your Jenkins instance can actually see changes in gerrit >> for the given project (which should mean that it can see the >> corresponding events as well). Can also use the same page to re-trigger >> for PatchsetCreated events to see if you've set the patterns on the job >> correctly. >> >> Regards, >> Darragh Bailey >> >> "Nothing is foolproof to a sufficiently talented fool" - Unknown >> >> On 08/12/14 14:33, Eduard Matei wrote: >> > Resending this to dev ML as it seems I get a quicker response :) >> > >> > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: >> > Patchset Created", chose as server the configured Gerrit server that >> > was previously tested, then added the project openstack-dev/sandbox >> > and saved. >> > I made a change on dev sandbox repo but couldn't trigger my job. >> > >> > Any ideas? >> > >> > Thanks, >> > Eduard >> > >> > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei >> > > > > wrote: >> > >> > Hello everyone, >> > >> > Thanks to the latest changes to the creation of service accounts >> > process we're one step closer to setting up our own CI platform >> > for Cinder. >> > >> > So far we've got: >> > - Jenkins master (with Gerrit plugin) and slave (with DevStack and >> > our storage solution) >> > - Service account configured and tested (can manually connect to >> > review.openstack.org and get events >> > and publish comments) >> > >> > Next step would be to set up a job to do the actual testing, this >> > is where we're stuck. 
>> > Can someone please point us to a clear example on how a job should >> > look like (preferably for testing Cinder on Kilo)? Most links >> > we've found are broken, or tools/scripts are no longer working. >> > Also, we cannot change the Jenkins master too much (it's owned by >> > Ops team and they need a list of tools/scripts to review before >> > installing/running so we're not allowed to experiment). >> > >> > Thanks, >> > Eduard >> > >> > -- >> > >> > *Eduard Biceri Matei, Senior Software Developer* >> > www.cloudfounders.com >> > | eduard.matei at cloudfounders.com >> > >> > >> > >> > >> > *CloudFounders, The Private Cloud Software Company* >> > >> > Disclaimer: >> > This email and any files transmitted with it are confidential and >> > intended solely for the use of the individual or entity to whom >> > they are addressed.If you are not the named addressee or an >> > employee or agent responsible for delivering this message to the >> > named addressee, you are hereby notified that you are not >> > authorized to read, print, retain, copy or disseminate this >> > message or any part of it. If you have received this email in >> > error we request you to notify us by reply e-mail and to delete >> > all electronic files of the message. If you are not the intended >> > recipient you are notified that disclosing, copying, distributing >> > or taking any action in reliance on the contents of this >> > information is strictly prohibited. E-mail transmission cannot be >> > guaranteed to be secure or error free as information could be >> > intercepted, corrupted, lost, destroyed, arrive late or >> > incomplete, or contain viruses. The sender therefore does not >> > accept liability for any errors or omissions in the content of >> > this message, and shall have no liability for any loss or damage >> > suffered by the user, which arise as a result of e-mail >> transmission. 
>> > >> > >> > >> > -- >> > *Eduard Biceri Matei, Senior Software Developer* >> > www.cloudfounders.com >> > | eduard.matei at cloudfounders.com >> > >> > >> > >> > *CloudFounders, The Private Cloud Software Company* 
>> > >> > >> > _______________________________________________ >> > OpenStack-Infra mailing list >> > OpenStack-Infra at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra >> >> > > > -- > > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > *CloudFounders, The Private Cloud Software Company* > > -- *Eduard Biceri Matei, Senior Software Developer* www.cloudfounders.com | eduard.matei at cloudfounders.com *CloudFounders, The Private Cloud Software Company* -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Dec 16 09:20:05 2014 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 16 Dec 2014 10:20:05 +0100 Subject: [openstack-dev] Cross-Project meeting, Tue December 16th, 21:00 UTC Message-ID: <548FF945.8080406@openstack.org> Dear PTLs, cross-project liaisons and anyone else interested, We'll have a cross-project meeting Tuesday at 21:00 UTC, with the following agenda: * Next steps for cascading (joehuang) * Providing an alternative to shipping default config file (ttx) * operators thread at [1] * https://bugs.launchpad.net/nova/+bug/1301519 * tools/config/generate_sample.sh -b . -p nova -o etc/nova * Open discussion & announcements [1] http://lists.openstack.org/pipermail/openstack-operators/2014-December/005658.html See you there ! 
For more details on this meeting, please see: https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting -- Thierry Carrez (ttx) From joehuang at huawei.com Tue Dec 16 09:33:14 2014 From: joehuang at huawei.com (joehuang) Date: Tue, 16 Dec 2014 09:33:14 +0000 Subject: [openstack-dev] Cross-Project meeting, Tue December 16th, 21:00 UTC In-Reply-To: <548FF945.8080406@openstack.org> References: <548FF945.8080406@openstack.org> Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF541FF2DA@szxema505-mbs.china.huawei.com> Hello, I'll attend the IRC meeting if the network is not blocked. Best Regards Chaoyi Huang ( Joe Huang ) -----Original Message----- From: Thierry Carrez [mailto:thierry at openstack.org] Sent: Tuesday, December 16, 2014 5:20 PM To: OpenStack Development Mailing List Subject: [openstack-dev] Cross-Project meeting, Tue December 16th, 21:00 UTC Dear PTLs, cross-project liaisons and anyone else interested, We'll have a cross-project meeting Tuesday at 21:00 UTC, with the following agenda: * Next steps for cascading (joehuang) * Providing an alternative to shipping default config file (ttx) * operators thread at [1] * https://bugs.launchpad.net/nova/+bug/1301519 * tools/config/generate_sample.sh -b . -p nova -o etc/nova * Open discussion & announcements [1] http://lists.openstack.org/pipermail/openstack-operators/2014-December/005658.html See you there ! 
For more details on this meeting, please see: https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting -- Thierry Carrez (ttx) _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From raghavendra.lad at accenture.com Tue Dec 16 09:39:37 2014 From: raghavendra.lad at accenture.com (raghavendra.lad at accenture.com) Date: Tue, 16 Dec 2014 09:39:37 +0000 Subject: [openstack-dev] [Murano] Murano Agent Message-ID: <70bff7c448b84794a05a2383daba26d5@BY2PR42MB101.048d.mgd.msft.net> Hi Team, I am installing Murano on the Ubuntu 14.04 Juno setup and would like to know what components need to be installed in a separate VM for Murano agent. Please let me know why Murano-agent is required and which components need to be installed in it. Warm Regards, Raghavendra Lad ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. ______________________________________________________________________________________ www.accenture.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Neil.Jerram at metaswitch.com Tue Dec 16 10:05:12 2014 From: Neil.Jerram at metaswitch.com (Neil Jerram) Date: Tue, 16 Dec 2014 10:05:12 +0000 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: <548F2DC9.5070907@openstack.org> (Stefano Maffulli's message of "Mon, 15 Dec 2014 10:51:53 -0800") References: <548F2DC9.5070907@openstack.org> Message-ID: <878ui7rjtz.fsf@metaswitch.com> Stefano Maffulli writes: > On 12/09/2014 04:11 PM, by wrote: >>>>[vad] how about the documentation in this case?... bcos it needs some >> place to document (a short desc and a link to vendor page) or list these >> kind of out-of-tree plugins/drivers... just to make the user aware of >> the availability of such plugins/drivers which is compatible with so and >> so openstack release. >> I checked with the documentation team and according to them, only the >> following plugins/drivers will get documented... >> 1) in-tree plugins/drivers (full documentation) >> 2) third-party plugins/drivers (ie, one implements and follows this new >> proposal, a.k.a partially-in-tree due to the integration module/code)... >> >> *** no listing/mention about such completely out-of-tree plugins/drivers*** > > Discoverability of documentation is a serious issue. As I commented on > docs spec [1], I think there are already too many places, mini-sites and > random pages holding information that is relevant to users. We should > make an effort to keep things discoverable, even if not maintained > necessarily by the same, single team. > > I think the docs team means that they are not able to guarantee > documentation for third-party *themselves* (and has not been able to). > The docs team is already overworked as it is now, they can't take on > more responsibilities. > > So once Neutron's code will be split, documentation for the users of all > third-party modules should find a good place to live in, indexed and > searchable together where the rest of the docs are. 
I'm hoping that we > can find a place (ideally under docs.openstack.org?) where third-party > documentation can live and be maintained by the teams responsible for > the code, too. > > Thoughts? I suggest a simple table, under docs.openstack.org, where each row has the plugin/driver name, and then links to the documentation and code. There should ideally be a very lightweight process for vendors to add their row(s) to this table, and to edit those rows. I don't think it makes sense for the vendor documentation itself to be under docs.openstack.org, while the code is out of tree. Regards, Neil From ihrachys at redhat.com Tue Dec 16 10:13:41 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 16 Dec 2014 11:13:41 +0100 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: References: <548EED0D.6020600@redhat.com> Message-ID: <549005D5.7070707@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 15/12/14 18:57, Doug Hellmann wrote: > There may be a similar problem managing dependencies on libraries > that live outside of either tree. I assume you already decided how > to handle that. Are you doing the same thing, and adding the > requirements to neutron's lists? I guess the idea is to keep in neutron-*aas only those oslo-incubator modules that are used there solely (=not used in main repo). I think requirements are a bit easier and should track all direct dependencies in each of the repos, so that in case main repo decides to drop one, neutron-*aas repos are not broken. For requirements, it's different because there is no major burden due to duplicate entries in repos. > > On Dec 15, 2014, at 12:16 PM, Doug Wiegley > wrote: > >> Hi all, >> >> Ihar and I discussed this on IRC, and are going forward with >> option 2 unless someone has a big problem with it. 
>> >> Thanks, Doug >> >> >> On 12/15/14, 8:22 AM, "Doug Wiegley" >> wrote: >> >>> Hi Ihar, >>> >>> I'm actually in favor of option 2, but it implies a few things >>> about your time, and I wanted to chat with you before >>> presuming. >>> >>> Maintenance can not involve breaking changes. At this point, >>> the co-gate will block it. Also, oslo graduation changes will >>> have to be made in the services repos first, and then Neutron. >>> >>> Thanks, doug >>> >>> >>> On 12/15/14, 6:15 AM, "Ihar Hrachyshka" >>> wrote: >>> > Hi all, > > the question arose recently in one of reviews for neutron-*aas > repos to remove all oslo-incubator code from those repos since > it's duplicated in neutron main repo. (You can find the link to the > review at the end of the email.) > > Brief history: neutron repo was recently split into 4 pieces > (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split > resulted in each repository keeping their own copy of > neutron/openstack/common/... tree (currently unused in all > neutron-*aas repos that are still bound to modules from main > repo). > > As an oslo liaison for the project, I wonder what's the best way to > manage oslo-incubator files. We have several options: > > 1. just kill all the neutron/openstack/common/ trees from > neutron-*aas repositories and continue using modules from main > repo. > > 2. kill all duplicate modules from neutron-*aas repos and leave > only those that are used in those repos but not in main repo. > > 3. fully duplicate all those modules in each of four repos that use > them. > > I think option 1. is a straw man, since we should be able to > introduce new oslo-incubator modules into neutron-*aas repos even > if they are not used in main repo. > > Option 2. is good when it comes to synching non-breaking bug fixes > (or security fixes) from oslo-incubator, in that it will require > only one sync patch instead of e.g. four. 
At the same time there > may be potential issues when synchronizing updates from > oslo-incubator that would break API and hence require changes to > each of the modules that use it. Since we don't support atomic > merges for multiple projects in gate, we will need to be cautious > about those updates, and we will still need to leave neutron-*aas > repos broken for some time (though the time may be mitigated with > care). > > Option 3. is vice versa - in theory, you get total decoupling, > meaning no oslo-incubator updates in main repo are expected to > break neutron-*aas repos, but bug fixing becomes a huge PITA. > > I would vote for option 2., for two reasons: - most oslo-incubator > syncs are non-breaking, and we may effectively apply care to > updates that may result in potential breakage (f.e. being able to > trigger an integrated run for each of neutron-*aas repos with the > main sync patch, if there are any concerns). - it will make oslo > liaison life a lot easier. OK, I'm probably too selfish on that. > ;) - it will make stable maintainers life a lot easier. The main > reason why stable maintainers and distributions like recent oslo > graduation movement is that we don't need to track each bug fix we > need in every project, and waste lots of cycles on it. Being able > to fix a bug in one place only is *highly* anticipated. [OK, I'm > quite selfish on that one too.] - it's a delusion that there will > be no neutron-main syncs that will break neutron-*aas repos ever. > There can still be problems due to incompatibility between neutron > main and neutron-*aas code resulted EXACTLY because multiple parts > of the same process use different versions of the same module. > > That said, Doug Wiegley (lbaas core) seems to be in favour of > option 3. due to lower coupling that is achieved in that way. I > know that lbaas team had a bad experience due to tight coupling to > neutron project in the past, so I appreciate their concerns. 
> > All in all, we should come up with some standard solution for both > advanced services that are already split out, *and* upcoming > vendor plugin shrinking initiative. > > The initial discussion is captured at: > https://review.openstack.org/#/c/141427/ > > Thanks, /Ihar >>> >> >> _______________________________________________ OpenStack-dev >> mailing list OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUkAXVAAoJEC5aWaUY1u57vmsIAJ9tz7MSqQ+j/nfmRyXARfwu KISUKfDHS5V31BaMtgvmHQRx0wUCA45EXAuqrau3hEfMt/u9mpP7rR69FDRluQeZ pZiz9t3igud5e+UWHgsu3Ja5h6MZoF55CjG7jUY6im+NzvvQdi7PeIMHrE8gZ9P4 UHey/QlNEntwUYefDacCMr3aZSCb4y++Cq0wRbZPI0uMAdEBQUYMNP+/eJNfhjny LUn0vEX2zjKaGLap8uuksvptom9HkRo2v6MlCtalrblxtn0MVg38UyVRv/ik7MOD 381tVXfUbDeZVi+v7tUcYdFmW806GfrUm939w4ryY0oGqUElD4Fch0XKsowD51I= =o8HW -----END PGP SIGNATURE----- From ihrachys at redhat.com Tue Dec 16 10:22:38 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 16 Dec 2014 11:22:38 +0100 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: References: <548EED0D.6020600@redhat.com> Message-ID: <549007EE.1060205@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 15/12/14 17:22, Doug Wiegley wrote: > Hi Ihar, > > I'm actually in favor of option 2, but it implies a few things > about your time, and I wanted to chat with you before presuming. I think the split didn't mean moving project trees under separate governance, so I assume oslo (doc, qa, ...) liaisons should not be split either. > > Maintenance can not involve breaking changes. At this point, the > co-gate will block it. Also, oslo graduation changes will have to > be made in the services repos first, and then Neutron. Do you mean that a change to oslo-incubator modules is co-gated (not just co-checked with no vote) with each of advanced services? 
As I pointed out in my previous email, sometimes breakages are inescapable. Consider a change to a neutron oslo-incubator module used commonly in all repos that breaks API (they are quite rare, but still have a chance of happening once in a while). If we co-gate main neutron repo changes with services, it will mean that we won't be able to merge the change. That would probably suggest that we go forward with option 3 and manage all incubator files separately in each of the trees, though, again, breakages are still possible in that scenario via introducing incompatibility between versions of incubator modules in separate repos. So we should be realistic about it and plan for how we deal with potential breakages that *may* occur. As for oslo library graduations, the order is not really significant. What is significant is that we drop an oslo-incubator module from the main neutron repo only after all other neutron-*aas repos migrate to the appropriate oslo.* library. The neutron migration itself may occur in parallel (by postponing the module drop until later). > > Thanks, doug > > > On 12/15/14, 6:15 AM, "Ihar Hrachyshka" > wrote: > > Hi all, > > the question arose recently in one of reviews for neutron-*aas > repos to remove all oslo-incubator code from those repos since > it's duplicated in neutron main repo. (You can find the link to the > review at the end of the email.) > > Brief history: neutron repo was recently split into 4 pieces > (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split > resulted in each repository keeping their own copy of > neutron/openstack/common/... tree (currently unused in all > neutron-*aas repos that are still bound to modules from main > repo). > > As an oslo liaison for the project, I wonder what's the best way to > manage oslo-incubator files. We have several options: > > 1. just kill all the neutron/openstack/common/ trees from > neutron-*aas repositories and continue using modules from main > repo. > > 2. 
kill all duplicate modules from neutron-*aas repos and leave > only those that are used in those repos but not in main repo. > > 3. fully duplicate all those modules in each of four repos that use > them. > > I think option 1. is a straw man, since we should be able to > introduce new oslo-incubator modules into neutron-*aas repos even > if they are not used in main repo. > > Option 2. is good when it comes to synching non-breaking bug fixes > (or security fixes) from oslo-incubator, in that it will require > only one sync patch instead of e.g. four. At the same time there > may be potential issues when synchronizing updates from > oslo-incubator that would break API and hence require changes to > each of the modules that use it. Since we don't support atomic > merges for multiple projects in gate, we will need to be cautious > about those updates, and we will still need to leave neutron-*aas > repos broken for some time (though the time may be mitigated with > care). > > Option 3. is vice versa - in theory, you get total decoupling, > meaning no oslo-incubator updates in main repo are expected to > break neutron-*aas repos, but bug fixing becomes a huge PITA. > > I would vote for option 2., for two reasons: - most oslo-incubator > syncs are non-breaking, and we may effectively apply care to > updates that may result in potential breakage (f.e. being able to > trigger an integrated run for each of neutron-*aas repos with the > main sync patch, if there are any concerns). - it will make oslo > liaison life a lot easier. OK, I'm probably too selfish on that. > ;) - it will make stable maintainers life a lot easier. The main > reason why stable maintainers and distributions like recent oslo > graduation movement is that we don't need to track each bug fix we > need in every project, and waste lots of cycles on it. Being able > to fix a bug in one place only is *highly* anticipated. [OK, I'm > quite selfish on that one too.] 
- it's a delusion that there will > be no neutron-main syncs that will break neutron-*aas repos ever. > There can still be problems due to incompatibility between neutron > main and neutron-*aas code resulted EXACTLY because multiple parts > of the same process use different versions of the same module. > > That said, Doug Wiegley (lbaas core) seems to be in favour of > option 3. due to lower coupling that is achieved in that way. I > know that lbaas team had a bad experience due to tight coupling to > neutron project in the past, so I appreciate their concerns. > > All in all, we should come up with some standard solution for both > advanced services that are already split out, *and* upcoming > vendor plugin shrinking initiative. > > The initial discussion is captured at: > https://review.openstack.org/#/c/141427/ > > Thanks, /Ihar > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUkAfuAAoJEC5aWaUY1u57HnwH/3FtSQ+ul/zVqj21OVsy/3H2 7QHKp8BuszSRn0bRg7lkzGaV7OZnonXLWcmC/2k7LNrrSRy79mO20Zl4yMVx2TEz WYXUo3RI7MMCQRJcZk/BqNdB3zcfST70l4s1i0wHAJQ972kidVku3CqNECg11+KH RjMLskcT3sdmwiD5BwmXIqJtNrK02mjjrcQhkm/R8Mcc0hNk/oGbVy9s++Koplnw iod21WV7ndlMXsAwKaCLfpjwsS4DxTK/UPPdXobmaM8EKaSB7xesldmxwp4HwO0c P7NNgH+kSJXvrMeaSnfDjc5zM6bFlcc16+/hXPCKTKOz5sdgUjfsQbNMiO7TlEI= =qtHJ -----END PGP SIGNATURE----- From rgerganov at vmware.com Tue Dec 16 10:27:15 2014 From: rgerganov at vmware.com (Radoslav Gerganov) Date: Tue, 16 Dec 2014 12:27:15 +0200 Subject: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets Message-ID: <54900903.6090008@vmware.com> I never liked how Gerrit is displaying inline comments and I find it hard to follow discussions on changes with many patch sets and inline comments. So I tried to hack together an html view which displays all comments grouped by patch set, file and commented line. You can see the result at http://gerrit-mirror.appspot.com/. 
Some examples: http://gerrit-mirror.appspot.com/127283 http://gerrit-mirror.appspot.com/128508 http://gerrit-mirror.appspot.com/83207 There is room for many improvements (my CSS skills are very limited) but I am just curious if someone else finds the idea useful. The frontend is using the same APIs as the Gerrit UI and the backend running on GoogleAppEngine is just proxying the requests to review.openstack.org. So in theory, if we serve the HTML page from our Gerrit, it will work. You can find all sources here: https://github.com/rgerganov/gerrit-hacks. Let me know what you think. Thanks, Rado From pasquale.porreca at dektech.com.au Tue Dec 16 10:41:31 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Tue, 16 Dec 2014 11:41:31 +0100 Subject: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled Message-ID: <54900C5B.6020405@dektech.com.au> Is there a specific reason for which a fixed IP is bound to a port on a subnet where DHCP is disabled? It is confusing to have this info shown when the instance doesn't actually have an IP on that port. Should I file a bug report, or is this the intended behavior? -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr From apevec at gmail.com Tue Dec 16 10:59:25 2014 From: apevec at gmail.com (Alan Pevec) Date: Tue, 16 Dec 2014 11:59:25 +0100 Subject: [openstack-dev] Do all OpenStack daemons support sd_notify? In-Reply-To: <1418658934-sup-597@fewbar.com> References: <548D4E35.6070101@debian.org> <548EFC60.2020505@redhat.com> <1418658934-sup-597@fewbar.com> Message-ID: 2014-12-15 17:00 GMT+01:00 Clint Byrum : > Excerpts from Ihar Hrachyshka's message of 2014-12-15 07:21:04 -0800: >> I guess Type=notify is supposed to be used with daemons that use >> Service class from oslo-incubator that provides systemd notification >> mechanism, or call to systemd.notify_once() otherwise. ...
>> BTW now that more distributions are interested in shipping unit files >> for services, should we upstream them and ship the same thing in all >> interested distributions? > > Since we can expect the five currently implemented OS's in TripleO to all > have systemd by default soon (Debian, Fedora, openSUSE, RHEL, Ubuntu), > it would make a lot of sense for us to make the systemd unit files that > TripleO generates set Type=notify wherever possible. So hopefully we can > actually make such a guarantee upstream sometime in the not-so-distant > future, especially since our CI will run two of the more distinct forks, > Ubuntu and Fedora. There's one issue with Type=notify that Dan Prince discovered, where Type=simple works better for his use case: if service startup takes more than DefaultTimeoutStartSec (90s by default) and systemd does not get a notification from the service, it will consider it failed and try to restart it if Restart is defined in the service unit file. I tried to fix that by disabling the timeout (example in the Nova package: https://review.gerrithub.io/13054 ) but then systemctl start blocks until the notification is received. Copying Dan's review comment: "This still isn't quite right. It is better in that systemctl doesn't think the service fails... rather it just seems to hang indefinitely on 'systemctl start openstack-nova-compute', as in I never get my terminal back. My test case is: 1) Stop Nova compute. 2) Stop RabbitMQ. 3) Try to start Nova compute via systemctl. Could the RabbitMQ retry loop be preventing the notification?" The current implementation in the oslo service sends the notification only just before entering the wait loop, because at that point all initialization should be done and the service is ready to serve. Does anyone have a suggestion for a better place to notify service readiness? Or should Dan's test-case step 3) just be modified to: 3) Start Nova compute via systemctl start ... & (i.e. in the background)?
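For context, the readiness notification is just a datagram sent to the socket systemd passes in $NOTIFY_SOCKET. A minimal sketch of what the oslo systemd.notify_once() call boils down to (the function name below is illustrative, not the oslo API):

```python
import os
import socket

def notify_ready():
    """Tell systemd (Type=notify) that the service is ready.

    Returns False (a no-op) when not running under systemd,
    i.e. when NOTIFY_SOCKET is unset.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):
        # Abstract-namespace socket: '@' stands for a leading NUL byte.
        addr = "\0" + addr[1:]
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.sendto(b"READY=1", addr)
    finally:
        sock.close()
    return True
```

Moving this call earlier than the wait loop would silence the start-up timeout, but at the cost of telling systemd the service is ready before it actually is -- which is presumably why oslo notifies as late as it does.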
Cheers, Alan From jp at jamezpolley.com Tue Dec 16 11:06:43 2014 From: jp at jamezpolley.com (James Polley) Date: Tue, 16 Dec 2014 12:06:43 +0100 Subject: [openstack-dev] [TripleO] mid-cycle details -- CONFIRMED Feb. 18 - 20 In-Reply-To: <1418703212-sup-8801@fewbar.com> References: <1417474177-sup-8420@fewbar.com> <1418703212-sup-8801@fewbar.com> Message-ID: Is there a group hotel booking being arranged? On Tue, Dec 16, 2014 at 5:26 AM, Clint Byrum wrote: > > I'm happy to announce we've cleared the schedule and the Mid-Cycle is > confirmed for February 18 - 20 in Seattle, WA at HP's downtown offices. > > Please refer to the etherpad linked below for details including address > and instructions for access to the building. > > PLEASE make sure you add yourself to the list of confirmed attendees > on the etherpad *BEFORE* booking travel. We have a hard limit of 30 > participants, so if you are not certain you have a spot, please contact > me before booking travel. > > Excerpts from Clint Byrum's message of 2014-12-01 14:58:58 -0800: > > Hello! I've received confirmation that our venue, the HP offices in > > downtown Seattle, will be available for the most-often-preferred > > least-often-cannot week of Feb 16 - 20. > > > > Our venue has a maximum of 20 participants, but I only have 16 possible > > attendees now. Please add yourself to that list _now_ if you will be > > joining us. > > > > I've asked our office staff to confirm Feb 18 - 20 (Wed-Fri). When they > > do, I will reply to this thread to let everyone know so you can all > > start to book travel. See the etherpad for travel details. > > > > https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sorlando at nicira.com Tue Dec 16 11:25:22 2014 From: sorlando at nicira.com (Salvatore Orlando) Date: Tue, 16 Dec 2014 11:25:22 +0000 Subject: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled In-Reply-To: <54900C5B.6020405@dektech.com.au> References: <54900C5B.6020405@dektech.com.au> Message-ID: In Neutron, IP address management and distribution are separate concepts. IP addresses are assigned to ports even when DHCP is disabled, and that IP address is indeed used to configure anti-spoofing rules and security groups. It is however understandable that one wonders why an IP address is assigned to a port if there is no DHCP server to communicate that address. Operators might decide to use different tools to ensure the IP address is then assigned to the instance's ports. On XenServer, for instance, one could use a guest agent reading network configuration from XenStore; as another example, older versions of OpenStack used to inject network configuration into the instance file system; I reckon that today's config drive might also be used to configure the instance's networking. Summarising, I don't think this is a bug. Nevertheless, if you have any ideas regarding improvements to the API UX, feel free to file a bug report. Salvatore On 16 December 2014 at 10:41, Pasquale Porreca < pasquale.porreca at dektech.com.au> wrote: > > Is there a specific reason for which a fixed IP is bound to a port on a > subnet where DHCP is disabled? It is confusing to have this info shown > when the instance doesn't actually have an IP on that port. > Should I file a bug report, or is this the intended behavior?
> > -- > Pasquale Porreca > > DEK Technologies > Via dei Castelli Romani, 22 > 00040 Pomezia (Roma) > > Mobile +39 3394823805 > Skype paskporr > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From irenab at mellanox.com Tue Dec 16 11:40:08 2014 From: irenab at mellanox.com (Irena Berezovsky) Date: Tue, 16 Dec 2014 11:40:08 +0000 Subject: [openstack-dev] SRIOV-error In-Reply-To: References: Message-ID: Hi David, As I mentioned before, you do not need to run the SR-IOV agent in your setup; just set agent_required=False in your neutron-server configuration. Initially it may be easier to make things work this way. I also cannot understand why you have two neutron config files that contain the same sections with different settings. You can find me on the #openstack-neutron IRC channel, I can try to help. BR, Irena From: david jhon [mailto:djhon9813 at gmail.com] Sent: Tuesday, December 16, 2014 9:44 AM To: Irena Berezovsky Cc: OpenStack Development Mailing List (not for usage questions); Murali B Subject: Re: [openstack-dev] SRIOV-error Hi Irena and Murali, Thanks a lot for your reply!
Here is the output from pci_devices table of nova db: select * from pci_devices; +---------------------+------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+-----------------------------------+---------------+------------+ | created_at | updated_at | deleted_at | deleted | id | compute_node_id | address | product_id | vendor_id | dev_type | dev_id | label | status | extra_info | instance_uuid | request_id | +---------------------+------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+-----------------------------------+---------------+------------+ | 2014-12-15 12:10:52 | NULL | NULL | 0 | 1 | 1 | 0000:03:10.0 | 10ed | 8086 | type-VF | pci_0000_03_10_0 | label_8086_10ed | available | {"phys_function": "0000:03:00.0"} | NULL | NULL | | 2014-12-15 12:10:52 | NULL | NULL | 0 | 2 | 1 | 0000:03:10.2 | 10ed | 8086 | type-VF | pci_0000_03_10_2 | label_8086_10ed | available | {"phys_function": "0000:03:00.0"} | NULL | NULL | | 2014-12-15 12:10:52 | NULL | NULL | 0 | 3 | 1 | 0000:03:10.4 | 10ed | 8086 | type-VF | pci_0000_03_10_4 | label_8086_10ed | available | {"phys_function": "0000:03:00.0"} | NULL | NULL | | 2014-12-15 12:10:52 | NULL | NULL | 0 | 4 | 1 | 0000:03:10.6 | 10ed | 8086 | type-VF | pci_0000_03_10_6 | label_8086_10ed | available | {"phys_function": "0000:03:00.0"} | NULL | NULL | | 2014-12-15 12:10:53 | NULL | NULL | 0 | 5 | 1 | 0000:03:10.1 | 10ed | 8086 | type-VF | pci_0000_03_10_1 | label_8086_10ed | available | {"phys_function": "0000:03:00.1"} | NULL | NULL | | 2014-12-15 12:10:53 | NULL | NULL | 0 | 6 | 1 | 0000:03:10.3 | 10ed | 8086 | type-VF | pci_0000_03_10_3 | label_8086_10ed | available | {"phys_function": "0000:03:00.1"} | NULL | NULL | | 2014-12-15 12:10:53 | NULL | NULL | 0 | 7 | 1 | 0000:03:10.5 | 10ed | 8086 | type-VF | pci_0000_03_10_5 
| label_8086_10ed | available | {"phys_function": "0000:03:00.1"} | NULL | NULL | | 2014-12-15 12:10:53 | NULL | NULL | 0 | 8 | 1 | 0000:03:10.7 | 10ed | 8086 | type-VF | pci_0000_03_10_7 | label_8086_10ed | available | {"phys_function": "0000:03:00.1"} | NULL | NULL | +---------------------+------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+-----------------------------------+---------------+------------+ output from select hypervisor_hostname,pci_stats from compute_nodes; is: +---------------------+-------------------------------------------------------------------------------------------+ | hypervisor_hostname | pci_stats | +---------------------+-------------------------------------------------------------------------------------------+ | blade08 | [{"count": 8, "vendor_id": "8086", "physical_network": "ext-net", "product_id": "10ed"}] | +---------------------+-------------------------------------------------------------------------------------------+ Moreover, I have set agent_required = True in /etc/neutron/plugins/ml2/ml2_conf_sriov.ini. but still found no sriov agent running. # Defines configuration options for SRIOV NIC Switch MechanismDriver # and Agent [ml2_sriov] # (ListOpt) Comma-separated list of # supported Vendor PCI Devices, in format vendor_id:product_id # #supported_pci_vendor_devs = 8086:10ca, 8086:10ed supported_pci_vendor_devs = 8086:10ed # Example: supported_pci_vendor_devs = 15b3:1004 # # (BoolOpt) Requires running SRIOV neutron agent for port binding agent_required = True [sriov_nic] # (ListOpt) Comma-separated list of : # tuples mapping physical network names to the agent's node-specific # physical network device interfaces of SR-IOV physical function to be used # for VLAN networks. All physical networks listed in network_vlan_ranges on # the server should have mappings to appropriate interfaces on each agent. 
# physical_device_mappings = ext-net:br-ex # Example: physical_device_mappings = physnet1:eth1 # # (ListOpt) Comma-separated list of : # tuples, mapping network_device to the agent's node-specific list of virtual # functions that should not be used for virtual networking. # vfs_to_exclude is a semicolon-separated list of virtual # functions to exclude from network_device. The network_device in the # mapping should appear in the physical_device_mappings list. # exclude_devices = # Example: exclude_devices = eth1:0000:07:00.2; 0000:07:00.3 ======================================================================================== pci_passthrough_whitelist from /etc/nova/nova.conf: pci_passthrough_whitelist = {"address":"*:03:10.*","physical_network":"ext-net"} ==================================================== /etc/neutron/plugins/ml2/ml2_conf.ini: [ml2] # (ListOpt) List of network type driver entrypoints to be loaded from # the neutron.ml2.type_drivers namespace. # #type_drivers = local,flat,vlan,gre,vxlan #Example: type_drivers = flat,vlan,gre,vxlan #type_drivers = flat,gre, vlan type_drivers = flat,vlan # (ListOpt) Ordered list of network_types to allocate as tenant # networks. The default value 'local' is useful for single-box testing # but provides no connectivity between hosts. # # tenant_network_types = local # Example: tenant_network_types = vlan,gre,vxlan #tenant_network_types = gre, vlan tenant_network_types = vlan # (ListOpt) Ordered list of networking mechanism driver entrypoints # to be loaded from the neutron.ml2.mechanism_drivers namespace. mechanism_drivers = openvswitch, sriovnicswitch # Example: mechanism_drivers = openvswitch,mlnx # Example: mechanism_drivers = arista # Example: mechanism_drivers = cisco,logger # Example: mechanism_drivers = openvswitch,brocade # Example: mechanism_drivers = linuxbridge,brocade # (ListOpt) Ordered list of extension driver entrypoints # to be loaded from the neutron.ml2.extension_drivers namespace. 
# extension_drivers = # Example: extension_drivers = anewextensiondriver [ml2_type_flat] # (ListOpt) List of physical_network names with which flat networks # can be created. Use * to allow flat networks with arbitrary # physical_network names. # flat_networks = ext-net # Example:flat_networks = physnet1,physnet2 # Example:flat_networks = * [ml2_type_vlan] # (ListOpt) List of [::] tuples # specifying physical_network names usable for VLAN provider and # tenant networks, as well as ranges of VLAN tags on each # physical_network available for allocation as tenant networks. network_vlan_ranges = ext-net:2:100 # Example: network_vlan_ranges = physnet1:1000:2999,physnet2 [ml2_type_gre] # (ListOpt) Comma-separated list of : tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation #tunnel_id_ranges = 1:1000 [ml2_type_vxlan] # (ListOpt) Comma-separated list of : tuples enumerating # ranges of VXLAN VNI IDs that are available for tenant network allocation. # # vni_ranges = # (StrOpt) Multicast group for the VXLAN interface. When configured, will # enable sending all broadcast traffic to this multicast group. When left # unconfigured, will disable multicast VXLAN mode. # # vxlan_group = # Example: vxlan_group = 239.1.1.1 [securitygroup] # Controls if neutron security group is enabled or not. # It should be false when you use nova security group. enable_security_group = True # Use ipset to speed-up the iptables security groups. Enabling ipset support # requires that ipset is installed on L2 agent node. enable_ipset = True firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver [ovs] local_ip = controller enable_tunneling = True bridge_mappings = external:br-ex [agent] tunnel_types = vlan [ml2_sriov] agent_required = True Please tell me what is wrong in there plus what exactly "physnet1" should be? Thanks again for all your help and suggestion.... 
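For what it's worth, the usual pitfall with this kind of setup is that the physical_network label must line up end to end between nova's PCI whitelist and neutron's network definition. A hedged sketch of mutually consistent settings -- the label physnet_sriov is illustrative, not taken from this thread:

```ini
# nova.conf on the compute node: whitelist the VFs and tag them with the
# physical network label that the SR-IOV network will be created on.
[DEFAULT]
pci_passthrough_whitelist = {"address": "*:03:10.*", "physical_network": "physnet_sriov"}

# ml2_conf.ini on the neutron server: the same label must appear in the
# VLAN ranges, since sriovnicswitch binds ports on networks whose
# physical_network matches the whitelist entry.
[ml2_type_vlan]
network_vlan_ranges = physnet_sriov:2:100
```

"physnet1" in the sample files is just such a label (an arbitrary physical network name that must match across the configs), not a device or interface name.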
Regards, On Tue, Dec 16, 2014 at 10:42 AM, Irena Berezovsky > wrote: Hi David, You error is not related to agent. I would suggest to check: 1. nova.conf at your compute node for pci whitelist configuration 2. Neutron server configuration for correct physical_network label matching the label in pci whitelist 3. Nova DB tables containing PCI devices entries: "#echo 'use nova;select hypervisor_hostname,pci_stats from compute_nodes;' | mysql -u root" You should not run SR-IOV agent in you setup. SR-IOV agent is an optional and currently does not add value if you use Intel NIC. Regards, Irena From: david jhon [mailto:djhon9813 at gmail.com] Sent: Tuesday, December 16, 2014 5:54 AM To: Murali B Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] SRIOV-error Just to be more clear, command $lspci | grep -i Ethernet gives following output: 01:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 01:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 03:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 03:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 03:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 
03:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 04:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 04:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 04:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) How can I make SR-IOV agent run and fix this bug? 
On Tue, Dec 16, 2014 at 8:36 AM, david jhon > wrote: Hi Murali, Thanks for your response, I did the same, it has resolved errors apparently but 1) neutron agent-list shows no agent for sriov, 2) neutron port is created successfully but creating vm is erred in scheduling as follows: result from neutron agent-list: +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ | id | agent_type | host | alive | admin_state_up | binary | +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ | 2acc7044-e552-4601-b00b-00ba591b453f | Open vSwitch agent | blade08 | xxx | True | neutron-openvswitch-agent | | 595d07c6-120e-42ea-a950-6c77a6455f10 | Metadata agent | blade08 | :-) | True | neutron-metadata-agent | | a1f253a8-e02e-4498-8609-4e265285534b | DHCP agent | blade08 | :-) | True | neutron-dhcp-agent | | d46b29d8-4b5f-4838-bf25-b7925cb3e3a7 | L3 agent | blade08 | :-) | True | neutron-l3-agent | +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ 2014-12-15 19:30:44.546 40249 ERROR oslo.messaging.rpc.dispatcher [req-c7741cff-a7d8-422f-b605-6a1d976aeb09 ] Exception during message handling: PCI $ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last): 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 13$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher incoming.message)) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 17$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 12$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139, i$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher return func(*args, **kwargs) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 175, in s$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher filter_properties) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line $ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher filter_properties) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line $ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher chosen_host.obj.consume_from_instance(instance_properties) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 246,$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher self.pci_stats.apply_requests(pci_requests.requests) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/pci/pci_stats.py", line 209, in apply$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher raise exception.PciDeviceRequestFailed(requests=requests) 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher PciDeviceRequestFailed: PCI device request ({'requests': [InstancePCIRequest(alias_$ 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher. Moreover, no /var/log/sriov-agent.log file exists. 
Please help me to fix this issue. Thanks everyone! On Mon, Dec 15, 2014 at 5:18 PM, Murali B > wrote: Hi David, Please add the settings as per Irena's suggestion. FYI: refer to the configuration below: http://pastebin.com/DGmW7ZEg Thanks -Murali -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuxinguo at huawei.com Tue Dec 16 11:40:13 2014 From: liuxinguo at huawei.com (liuxinguo) Date: Tue, 16 Dec 2014 11:40:13 +0000 Subject: [openstack-dev] [cinder driver] A question about Kilo merge point Message-ID: If a cinder driver cannot be merged into Kilo before Kilo-1, does that mean the driver has very little chance of being merged into Kilo at all? And, of all the drivers that will eventually be merged into Kilo, what percentage are merged before Kilo-1? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Dec 16 11:50:01 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 16 Dec 2014 06:50:01 -0500 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: <549005D5.7070707@redhat.com> References: <548EED0D.6020600@redhat.com> <549005D5.7070707@redhat.com> Message-ID: <88210AFF-9388-488F-A443-3480682F0D51@doughellmann.com> On Dec 16, 2014, at 5:13 AM, Ihar Hrachyshka wrote: > Signed PGP part > On 15/12/14 18:57, Doug Hellmann wrote: > > There may be a similar problem managing dependencies on libraries > > that live outside of either tree. I assume you already decided how > > to handle that. Are you doing the same thing, and adding the > > requirements to neutron's lists? > > I guess the idea is to keep in neutron-*aas only those oslo-incubator > modules that are used there solely (=not used in main repo). How are the *aas packages installed? Are they separate libraries or applications that are installed on top of neutron? Or are their files copied into the neutron namespace?
> > I think requirements are a bit easier and should track all direct > dependencies in each of the repos, so that in case main repo decides > to drop one, neutron-*aas repos are not broken. > > For requirements, it's different because there is no major burden due > to duplicate entries in repos. > > > > > On Dec 15, 2014, at 12:16 PM, Doug Wiegley > > wrote: > > > >> Hi all, > >> > >> Ihar and I discussed this on IRC, and are going forward with > >> option 2 unless someone has a big problem with it. > >> > >> Thanks, Doug > >> > >> > >> On 12/15/14, 8:22 AM, "Doug Wiegley" > >> wrote: > >> > >>> Hi Ihar, > >>> > >>> I?m actually in favor of option 2, but it implies a few things > >>> about your time, and I wanted to chat with you before > >>> presuming. > >>> > >>> Maintenance can not involve breaking changes. At this point, > >>> the co-gate will block it. Also, oslo graduation changes will > >>> have to be made in the services repos first, and then Neutron. > >>> > >>> Thanks, doug > >>> > >>> > >>> On 12/15/14, 6:15 AM, "Ihar Hrachyshka" > >>> wrote: > >>> > > Hi all, > > > > the question arose recently in one of reviews for neutron-*aas > > repos to remove all oslo-incubator code from those repos since > > it's duplicated in neutron main repo. (You can find the link to the > > review at the end of the email.) > > > > Brief hostory: neutron repo was recently split into 4 pieces > > (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split > > resulted in each repository keeping their own copy of > > neutron/openstack/common/... tree (currently unused in all > > neutron-*aas repos that are still bound to modules from main > > repo). > > > > As a oslo liaison for the project, I wonder what's the best way to > > manage oslo-incubator files. We have several options: > > > > 1. just kill all the neutron/openstack/common/ trees from > > neutron-*aas repositories and continue using modules from main > > repo. > > > > 2. 
kill all duplicate modules from neutron-*aas repos and leave > > only those that are used in those repos but not in main repo. > > > > 3. fully duplicate all those modules in each of four repos that use > > them. > > > > I think option 1. is a straw man, since we should be able to > > introduce new oslo-incubator modules into neutron-*aas repos even > > if they are not used in main repo. > > > > Option 2. is good when it comes to synching non-breaking bug fixes > > (or security fixes) from oslo-incubator, in that it will require > > only one sync patch instead of e.g. four. At the same time there > > may be potential issues when synchronizing updates from > > oslo-incubator that would break API and hence require changes to > > each of the modules that use it. Since we don't support atomic > > merges for multiple projects in gate, we will need to be cautious > > about those updates, and we will still need to leave neutron-*aas > > repos broken for some time (though the time may be mitigated with > > care). > > > > Option 3. is vice versa - in theory, you get total decoupling, > > meaning no oslo-incubator updates in main repo are expected to > > break neutron-*aas repos, but bug fixing becomes a huge PITA. > > > > I would vote for option 2., for two reasons: - most oslo-incubator > > syncs are non-breaking, and we may effectively apply care to > > updates that may result in potential breakage (f.e. being able to > > trigger an integrated run for each of neutron-*aas repos with the > > main sync patch, if there are any concerns). - it will make oslo > > liaison life a lot easier. OK, I'm probably too selfish on that. > > ;) - it will make stable maintainers life a lot easier. The main > > reason why stable maintainers and distributions like recent oslo > > graduation movement is that we don't need to track each bug fix we > > need in every project, and waste lots of cycles on it. Being able > > to fix a bug in one place only is *highly* anticipated. 
[OK, I'm > > quite selfish on that one too.] - it's a delusion that there will > > be no neutron-main syncs that will break neutron-*aas repos ever. > > There can still be problems due to incompatibility between neutron > > main and neutron-*aas code resulted EXACTLY because multiple parts > > of the same process use different versions of the same module. > > > > That said, Doug Wiegley (lbaas core) seems to be in favour of > > option 3. due to lower coupling that is achieved in that way. I > > know that lbaas team had a bad experience due to tight coupling to > > neutron project in the past, so I appreciate their concerns. > > > > All in all, we should come up with some standard solution for both > > advanced services that are already split out, *and* upcoming > > vendor plugin shrinking initiative. > > > > The initial discussion is captured at: > > https://review.openstack.org/#/c/141427/ > > > > Thanks, /Ihar > >>> > >> > >> _______________________________________________ OpenStack-dev > >> mailing list OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From doug at doughellmann.com Tue Dec 16 11:52:18 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 16 Dec 2014 06:52:18 -0500 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: <549007EE.1060205@redhat.com> References: <548EED0D.6020600@redhat.com> <549007EE.1060205@redhat.com> Message-ID: <38253D73-25BD-4E24-9676-6450ABD6CEBE@doughellmann.com> On Dec 16, 2014, at 5:22 AM, Ihar Hrachyshka wrote: > Signed PGP part > On 15/12/14 17:22, Doug Wiegley wrote: > > Hi Ihar, > > > > I?m actually in favor of option 2, but it implies a few things > > about your time, and I wanted to chat with you before presuming. > > I think split didn't mean moving project trees under separate > governance, so I assume oslo (doc, qa, ...) liaisons should not be > split either. 
> > > > Maintenance can not involve breaking changes. At this point, the > > co-gate will block it. Also, oslo graduation changes will have to > > be made in the services repos first, and then Neutron. > > Do you mean that a change to oslo-incubator modules is co-gated (not > just co-checked with no vote) with each of the advanced services? > > As I pointed out in my previous email, sometimes breakages are inescapable. > > Consider a change to a neutron oslo-incubator module used commonly in > all repos that breaks API (they are quite rare, but still have a > chance of happening once in a while). If we co-gate main neutron > repo changes with services, it will mean that we won't be able to > merge the change. > > That would probably suggest that we go forward with option 3 and > manage all incubator files separately in each of the trees, though, > again, breakages are still possible in that scenario via introducing > incompatibility between versions of incubator modules in separate repos. > > So we should be realistic about it and plan forward how we deal with > potential breakages that *may* occur. > > As for oslo library graduations, the order is not really significant. > What is significant is that we drop an oslo-incubator module from the main > neutron repo only after all other neutron-*aas repos migrate to the > appropriate oslo.* library. The neutron migration itself may occur in > parallel (by postponing the module drop until later). Don't assume that it's safe to combine the incubated version and library version of a module. We've had some examples where the APIs change or global state changes in a way that makes the two incompatible. We definitely don't take any care to ensure that the two copies can be run together.
> > > > > Thanks, doug > > > > > > On 12/15/14, 6:15 AM, "Ihar Hrachyshka" > > wrote: > > > > Hi all, > > > > the question arose recently in one of reviews for neutron-*aas > > repos to remove all oslo-incubator code from those repos since > > it's duplicated in neutron main repo. (You can find the link to the > > review at the end of the email.) > > > > Brief hostory: neutron repo was recently split into 4 pieces > > (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split > > resulted in each repository keeping their own copy of > > neutron/openstack/common/... tree (currently unused in all > > neutron-*aas repos that are still bound to modules from main > > repo). > > > > As a oslo liaison for the project, I wonder what's the best way to > > manage oslo-incubator files. We have several options: > > > > 1. just kill all the neutron/openstack/common/ trees from > > neutron-*aas repositories and continue using modules from main > > repo. > > > > 2. kill all duplicate modules from neutron-*aas repos and leave > > only those that are used in those repos but not in main repo. > > > > 3. fully duplicate all those modules in each of four repos that use > > them. > > > > I think option 1. is a straw man, since we should be able to > > introduce new oslo-incubator modules into neutron-*aas repos even > > if they are not used in main repo. > > > > Option 2. is good when it comes to synching non-breaking bug fixes > > (or security fixes) from oslo-incubator, in that it will require > > only one sync patch instead of e.g. four. At the same time there > > may be potential issues when synchronizing updates from > > oslo-incubator that would break API and hence require changes to > > each of the modules that use it. Since we don't support atomic > > merges for multiple projects in gate, we will need to be cautious > > about those updates, and we will still need to leave neutron-*aas > > repos broken for some time (though the time may be mitigated with > > care). 
> > > > Option 3. is vice versa - in theory, you get total decoupling, > > meaning no oslo-incubator updates in main repo are expected to > > break neutron-*aas repos, but bug fixing becomes a huge PITA. > > > > I would vote for option 2., for two reasons: - most oslo-incubator > > syncs are non-breaking, and we may effectively apply care to > > updates that may result in potential breakage (f.e. being able to > > trigger an integrated run for each of neutron-*aas repos with the > > main sync patch, if there are any concerns). - it will make oslo > > liaison life a lot easier. OK, I'm probably too selfish on that. > > ;) - it will make stable maintainers life a lot easier. The main > > reason why stable maintainers and distributions like recent oslo > > graduation movement is that we don't need to track each bug fix we > > need in every project, and waste lots of cycles on it. Being able > > to fix a bug in one place only is *highly* anticipated. [OK, I'm > > quite selfish on that one too.] - it's a delusion that there will > > be no neutron-main syncs that will break neutron-*aas repos ever. > > There can still be problems due to incompatibility between neutron > > main and neutron-*aas code resulted EXACTLY because multiple parts > > of the same process use different versions of the same module. > > > > That said, Doug Wiegley (lbaas core) seems to be in favour of > > option 3. due to lower coupling that is achieved in that way. I > > know that lbaas team had a bad experience due to tight coupling to > > neutron project in the past, so I appreciate their concerns. > > > > All in all, we should come up with some standard solution for both > > advanced services that are already split out, *and* upcoming > > vendor plugin shrinking initiative. 
> > > > The initial discussion is captured at: > > https://review.openstack.org/#/c/141427/ > > > > Thanks, /Ihar > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ihrachys at redhat.com Tue Dec 16 12:27:43 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 16 Dec 2014 13:27:43 +0100 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: <88210AFF-9388-488F-A443-3480682F0D51@doughellmann.com> References: <548EED0D.6020600@redhat.com> <549005D5.7070707@redhat.com> <88210AFF-9388-488F-A443-3480682F0D51@doughellmann.com> Message-ID: <5490253F.7020802@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 16/12/14 12:50, Doug Hellmann wrote: > > On Dec 16, 2014, at 5:13 AM, Ihar Hrachyshka > wrote: > >> Signed PGP part On 15/12/14 18:57, Doug Hellmann wrote: >>> There may be a similar problem managing dependencies on >>> libraries that live outside of either tree. I assume you >>> already decided how to handle that. Are you doing the same >>> thing, and adding the requirements to neutron?s lists? >> >> I guess the idea is to keep in neutron-*aas only those >> oslo-incubator modules that are used there solely (=not used in >> main repo). > > How are the *aas packages installed? Are they separate libraries or > applications that are installed on top of neutron? Or are their > files copied into the neutron namespace? They are separate libraries with their own setup.py, dependencies, tarballs, all that, but they are free to use (public) code from main neutron package. > >> >> I think requirements are a bit easier and should track all >> direct dependencies in each of the repos, so that in case main >> repo decides to drop one, neutron-*aas repos are not broken. 
>> >> For requirements, it's different because there is no major burden >> due to duplicate entries in repos. >> >>> >>> On Dec 15, 2014, at 12:16 PM, Doug Wiegley >>> wrote: >>> >>>> Hi all, >>>> >>>> Ihar and I discussed this on IRC, and are going forward with >>>> option 2 unless someone has a big problem with it. >>>> >>>> Thanks, Doug >>>> >>>> >>>> On 12/15/14, 8:22 AM, "Doug Wiegley" >>>> wrote: >>>> >>>>> Hi Ihar, >>>>> >>>>> I?m actually in favor of option 2, but it implies a few >>>>> things about your time, and I wanted to chat with you >>>>> before presuming. >>>>> >>>>> Maintenance can not involve breaking changes. At this >>>>> point, the co-gate will block it. Also, oslo graduation >>>>> changes will have to be made in the services repos first, >>>>> and then Neutron. >>>>> >>>>> Thanks, doug >>>>> >>>>> >>>>> On 12/15/14, 6:15 AM, "Ihar Hrachyshka" >>>>> wrote: >>>>> >>> Hi all, >>> >>> the question arose recently in one of reviews for neutron-*aas >>> repos to remove all oslo-incubator code from those repos since >>> it's duplicated in neutron main repo. (You can find the link to >>> the review at the end of the email.) >>> >>> Brief hostory: neutron repo was recently split into 4 pieces >>> (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The >>> split resulted in each repository keeping their own copy of >>> neutron/openstack/common/... tree (currently unused in all >>> neutron-*aas repos that are still bound to modules from main >>> repo). >>> >>> As a oslo liaison for the project, I wonder what's the best way >>> to manage oslo-incubator files. We have several options: >>> >>> 1. just kill all the neutron/openstack/common/ trees from >>> neutron-*aas repositories and continue using modules from main >>> repo. >>> >>> 2. kill all duplicate modules from neutron-*aas repos and >>> leave only those that are used in those repos but not in main >>> repo. >>> >>> 3. fully duplicate all those modules in each of four repos that >>> use them. 
>>> >>> I think option 1. is a straw man, since we should be able to >>> introduce new oslo-incubator modules into neutron-*aas repos >>> even if they are not used in main repo. >>> >>> Option 2. is good when it comes to synching non-breaking bug >>> fixes (or security fixes) from oslo-incubator, in that it will >>> require only one sync patch instead of e.g. four. At the same >>> time there may be potential issues when synchronizing updates >>> from oslo-incubator that would break API and hence require >>> changes to each of the modules that use it. Since we don't >>> support atomic merges for multiple projects in gate, we will >>> need to be cautious about those updates, and we will still need >>> to leave neutron-*aas repos broken for some time (though the >>> time may be mitigated with care). >>> >>> Option 3. is vice versa - in theory, you get total decoupling, >>> meaning no oslo-incubator updates in main repo are expected to >>> break neutron-*aas repos, but bug fixing becomes a huge PITA. >>> >>> I would vote for option 2., for two reasons: - most >>> oslo-incubator syncs are non-breaking, and we may effectively >>> apply care to updates that may result in potential breakage >>> (f.e. being able to trigger an integrated run for each of >>> neutron-*aas repos with the main sync patch, if there are any >>> concerns). - it will make oslo liaison life a lot easier. OK, >>> I'm probably too selfish on that. ;) - it will make stable >>> maintainers life a lot easier. The main reason why stable >>> maintainers and distributions like recent oslo graduation >>> movement is that we don't need to track each bug fix we need in >>> every project, and waste lots of cycles on it. Being able to >>> fix a bug in one place only is *highly* anticipated. [OK, I'm >>> quite selfish on that one too.] - it's a delusion that there >>> will be no neutron-main syncs that will break neutron-*aas >>> repos ever. 
There can still be problems due to incompatibility >>> between neutron main and neutron-*aas code resulted EXACTLY >>> because multiple parts of the same process use different >>> versions of the same module. >>> >>> That said, Doug Wiegley (lbaas core) seems to be in favour of >>> option 3. due to lower coupling that is achieved in that way. >>> I know that lbaas team had a bad experience due to tight >>> coupling to neutron project in the past, so I appreciate their >>> concerns. >>> >>> All in all, we should come up with some standard solution for >>> both advanced services that are already split out, *and* >>> upcoming vendor plugin shrinking initiative. >>> >>> The initial discussion is captured at: >>> https://review.openstack.org/#/c/141427/ >>> >>> Thanks, /Ihar >>>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUkCU/AAoJEC5aWaUY1u57XPgH/jNEpZCqWP0R3CluOLuUHhWp yzJNSN8hscdfi3+cn65VDhH6iy2lnOsg8SAr/SZ8jk3JRa/ic7QhKQqXatTdcq58 iAmKBij+slngLhTcv0GJtbLPUdUyiKPnE0+TA88P7ijgMrj6OoF3PCzFpYHGv/ra z0clRdwv9CSnG1S/+wAZlexawt6qnm/M2da6wgHUrVmoNLMsimxtWGN8r9TZISaZ mf43DMh4+XDt2rFgZ3Pb3tvgyzUshA5rWykQJ6PBXhyQgaNojVrvBsfQpiP1PuTK BzidEL9jNNoi6BVq3DkMAnVIPHX1bYhO928svWgUCVLEhr9DFnKI2sNIhyPaWqY= =mt2Z -----END PGP SIGNATURE----- From pasquale.porreca at dektech.com.au Tue Dec 16 12:30:18 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Tue, 16 Dec 2014 13:30:18 +0100 Subject: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled In-Reply-To: References: <54900C5B.6020405@dektech.com.au> Message-ID: <549025DA.2060408@dektech.com.au> I understood and I agree that assigning the ip address to the port is not a bug, however showing it to the user, at least in Horizon dashboard where it 
pops up in the main instance screen without a specific search, can be very confusing. On 12/16/14 12:25, Salvatore Orlando wrote: > In Neutron IP address management and distribution are separated concepts. > IP addresses are assigned to ports even when DHCP is disabled. That IP > address is indeed used to configure anti-spoofing rules and security groups. > > It is however understandable that one wonders why an IP address is assigned > to a port if there is no DHCP server to communicate that address. Operators > might decide to use different tools to ensure the IP address is then > assigned to the instance's ports. On XenServer for instance one could use a > guest agent reading network configuration from XenStore; as another > example, older versions of Openstack used to inject network configuration > into the instance file system; I reckon that today's configdrive might also > be used to configure instance's networking. > > Summarising I don't think this is a bug. Nevertheless if you have any idea > regarding improvements on the API UX feel free to file a bug report. > > Salvatore > > On 16 December 2014 at 10:41, Pasquale Porreca < > pasquale.porreca at dektech.com.au> wrote: >> >> Is there a specific reason for which a fixed ip is bound to a port on a >> subnet where dhcp is disabled? it is confusing to have this info shown >> when the instance doesn't have actually an ip on that port. >> Should I fill a bug report, or is this a wanted behavior? 
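Salvatore's distinction between address *management* and address *distribution* can be sketched in a few lines of illustrative Python (a simplified model with invented names, not Neutron's real IPAM code): allocation happens for every port regardless of the subnet's `enable_dhcp` flag, because the fixed IP still backs anti-spoofing rules and security groups even if no DHCP server ever advertises it.

```python
from ipaddress import ip_network

class Subnet:
    """Toy subnet: an address pool plus a DHCP flag (illustrative
    names, not Neutron's actual data model)."""

    def __init__(self, cidr, enable_dhcp):
        self.enable_dhcp = enable_dhcp
        self._pool = ip_network(cidr).hosts()

class Port:
    def __init__(self, subnet):
        # Management (IPAM): the fixed IP is allocated unconditionally;
        # anti-spoofing rules and security groups are built from it.
        self.fixed_ip = str(next(subnet._pool))
        # Distribution (DHCP): a separate concern that only decides
        # whether a DHCP server will advertise the address to the guest.
        self.advertised_by_dhcp = subnet.enable_dhcp

subnet = Subnet('10.0.0.0/24', enable_dhcp=False)
port = Port(subnet)
print(port.fixed_ip, port.advertised_by_dhcp)  # 10.0.0.1 False
```

With DHCP disabled the guest has to learn the address some other way (config drive, guest agent, static configuration), which is why the API and Horizon still report a fixed IP for the port.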
>> >> -- >> Pasquale Porreca >> >> DEK Technologies >> Via dei Castelli Romani, 22 >> 00040 Pomezia (Roma) >> >> Mobile +39 3394823805 >> Skype paskporr >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr From doug at doughellmann.com Tue Dec 16 12:41:07 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 16 Dec 2014 07:41:07 -0500 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: <5490253F.7020802@redhat.com> References: <548EED0D.6020600@redhat.com> <549005D5.7070707@redhat.com> <88210AFF-9388-488F-A443-3480682F0D51@doughellmann.com> <5490253F.7020802@redhat.com> Message-ID: On Dec 16, 2014, at 7:27 AM, Ihar Hrachyshka wrote: > Signed PGP part > On 16/12/14 12:50, Doug Hellmann wrote: > > > > On Dec 16, 2014, at 5:13 AM, Ihar Hrachyshka > > wrote: > > > >> Signed PGP part On 15/12/14 18:57, Doug Hellmann wrote: > >>> There may be a similar problem managing dependencies on > >>> libraries that live outside of either tree. I assume you > >>> already decided how to handle that. Are you doing the same > >>> thing, and adding the requirements to neutron?s lists? > >> > >> I guess the idea is to keep in neutron-*aas only those > >> oslo-incubator modules that are used there solely (=not used in > >> main repo). > > > > How are the *aas packages installed? Are they separate libraries or > > applications that are installed on top of neutron? Or are their > > files copied into the neutron namespace? 
> > They are separate libraries with their own setup.py, dependencies, > tarballs, all that, but they are free to use (public) code from main > neutron package. OK. If they don?t have copies of all of the incubated modules they use, how are they tested? Is neutron a dependency? > > > > >> > >> I think requirements are a bit easier and should track all > >> direct dependencies in each of the repos, so that in case main > >> repo decides to drop one, neutron-*aas repos are not broken. > >> > >> For requirements, it's different because there is no major burden > >> due to duplicate entries in repos. > >> > >>> > >>> On Dec 15, 2014, at 12:16 PM, Doug Wiegley > >>> wrote: > >>> > >>>> Hi all, > >>>> > >>>> Ihar and I discussed this on IRC, and are going forward with > >>>> option 2 unless someone has a big problem with it. > >>>> > >>>> Thanks, Doug > >>>> > >>>> > >>>> On 12/15/14, 8:22 AM, "Doug Wiegley" > >>>> wrote: > >>>> > >>>>> Hi Ihar, > >>>>> > >>>>> I?m actually in favor of option 2, but it implies a few > >>>>> things about your time, and I wanted to chat with you > >>>>> before presuming. > >>>>> > >>>>> Maintenance can not involve breaking changes. At this > >>>>> point, the co-gate will block it. Also, oslo graduation > >>>>> changes will have to be made in the services repos first, > >>>>> and then Neutron. > >>>>> > >>>>> Thanks, doug > >>>>> > >>>>> > >>>>> On 12/15/14, 6:15 AM, "Ihar Hrachyshka" > >>>>> wrote: > >>>>> > >>> Hi all, > >>> > >>> the question arose recently in one of reviews for neutron-*aas > >>> repos to remove all oslo-incubator code from those repos since > >>> it's duplicated in neutron main repo. (You can find the link to > >>> the review at the end of the email.) > >>> > >>> Brief hostory: neutron repo was recently split into 4 pieces > >>> (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The > >>> split resulted in each repository keeping their own copy of > >>> neutron/openstack/common/... 
tree (currently unused in all > >>> neutron-*aas repos that are still bound to modules from main > >>> repo). > >>> > >>> As a oslo liaison for the project, I wonder what's the best way > >>> to manage oslo-incubator files. We have several options: > >>> > >>> 1. just kill all the neutron/openstack/common/ trees from > >>> neutron-*aas repositories and continue using modules from main > >>> repo. > >>> > >>> 2. kill all duplicate modules from neutron-*aas repos and > >>> leave only those that are used in those repos but not in main > >>> repo. > >>> > >>> 3. fully duplicate all those modules in each of four repos that > >>> use them. > >>> > >>> I think option 1. is a straw man, since we should be able to > >>> introduce new oslo-incubator modules into neutron-*aas repos > >>> even if they are not used in main repo. > >>> > >>> Option 2. is good when it comes to synching non-breaking bug > >>> fixes (or security fixes) from oslo-incubator, in that it will > >>> require only one sync patch instead of e.g. four. At the same > >>> time there may be potential issues when synchronizing updates > >>> from oslo-incubator that would break API and hence require > >>> changes to each of the modules that use it. Since we don't > >>> support atomic merges for multiple projects in gate, we will > >>> need to be cautious about those updates, and we will still need > >>> to leave neutron-*aas repos broken for some time (though the > >>> time may be mitigated with care). > >>> > >>> Option 3. is vice versa - in theory, you get total decoupling, > >>> meaning no oslo-incubator updates in main repo are expected to > >>> break neutron-*aas repos, but bug fixing becomes a huge PITA. > >>> > >>> I would vote for option 2., for two reasons: - most > >>> oslo-incubator syncs are non-breaking, and we may effectively > >>> apply care to updates that may result in potential breakage > >>> (f.e. 
being able to trigger an integrated run for each of > >>> neutron-*aas repos with the main sync patch, if there are any > >>> concerns). - it will make oslo liaison life a lot easier. OK, > >>> I'm probably too selfish on that. ;) - it will make stable > >>> maintainers life a lot easier. The main reason why stable > >>> maintainers and distributions like recent oslo graduation > >>> movement is that we don't need to track each bug fix we need in > >>> every project, and waste lots of cycles on it. Being able to > >>> fix a bug in one place only is *highly* anticipated. [OK, I'm > >>> quite selfish on that one too.] - it's a delusion that there > >>> will be no neutron-main syncs that will break neutron-*aas > >>> repos ever. There can still be problems due to incompatibility > >>> between neutron main and neutron-*aas code resulted EXACTLY > >>> because multiple parts of the same process use different > >>> versions of the same module. > >>> > >>> That said, Doug Wiegley (lbaas core) seems to be in favour of > >>> option 3. due to lower coupling that is achieved in that way. > >>> I know that lbaas team had a bad experience due to tight > >>> coupling to neutron project in the past, so I appreciate their > >>> concerns. > >>> > >>> All in all, we should come up with some standard solution for > >>> both advanced services that are already split out, *and* > >>> upcoming vendor plugin shrinking initiative. 
> >>> > >>> The initial discussion is captured at: > >>> https://review.openstack.org/#/c/141427/ > >>> > >>> Thanks, /Ihar > >>>>> > >>>> > >>>> _______________________________________________ > >>>> OpenStack-dev mailing list OpenStack-dev at lists.openstack.org > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > > > From markmc at redhat.com Tue Dec 16 12:41:37 2014 From: markmc at redhat.com (Mark McLoughlin) Date: Tue, 16 Dec 2014 12:41:37 +0000 Subject: [openstack-dev] [oslo] interesting problem with config filter In-Reply-To: <4BCA7D02-38D6-4B5B-A496-8EB259C1A792@doughellmann.com> References: <4BCA7D02-38D6-4B5B-A496-8EB259C1A792@doughellmann.com> Message-ID: <1418733697.16928.106.camel@sorcha> Hi Doug, On Mon, 2014-12-08 at 15:58 -0500, Doug Hellmann wrote: > As we've discussed a few times, we want to isolate applications from > the configuration options defined by libraries. One way we have of > doing that is the ConfigFilter class in oslo.config. When a regular > ConfigOpts instance is wrapped with a filter, a library can register > new options on the filter that are not visible to anything that > doesn't have the filter object. Or to put it more simply, the configuration options registered by the library should not be part of the public API of the library. > Unfortunately, the Neutron team has identified an issue with this > approach. We have a bug report [1] from them about the way we're using > config filters in oslo.concurrency specifically, but the issue applies > to their use everywhere. > > The neutron tests set the default for oslo.concurrency's lock_path > variable to "$state_path/lock", and the state_path option is defined > in their application. With the filter in place, interpolation of > $state_path to generate the lock_path value fails because state_path > is not known to the ConfigFilter instance. 
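The failure Doug describes can be reproduced in miniature without oslo.config at all. The classes below are a deliberately simplified stand-in (hypothetical names, not the real oslo.config API) that model the one essential property: a filter resolves `$var` references only against the options it can see.

```python
import string

class ToyConfigOpts:
    """Simplified stand-in for oslo.config's ConfigOpts: a flat option
    namespace that resolves $var references against its own options."""

    def __init__(self):
        self._opts = {}

    def register(self, name, default):
        self._opts[name] = default

    def resolve(self, name):
        # Substitution only sees this namespace's options -- the crux
        # of the bug when the namespace is a filter.
        return string.Template(self._opts[name]).substitute(self._opts)

class ToyConfigFilter(ToyConfigOpts):
    """Options registered here stay invisible to the parent, but the
    filter also cannot see the parent's options during substitution."""

    def __init__(self, parent):
        super().__init__()
        self._parent = parent  # held, but never consulted for substitution

conf = ToyConfigOpts()
conf.register('state_path', '/var/lib/neutron')

filtered = ToyConfigFilter(conf)
filtered.register('lock_path', '$state_path/lock')  # neutron's default

try:
    filtered.resolve('lock_path')
except KeyError as exc:
    print('interpolation failed, unknown option:', exc)
```

Registering `state_path` directly on the filter would make the example resolve, which is exactly the kind of workaround a deployer should not have to know about.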
It seems that Neutron sets this default in its etc/neutron.conf file in its git tree: lock_path = $state_path/lock I think we should be aiming for defaults like this to be set in code, and for the sample config files to contain nothing but comments. So, neutron should do: lockutils.set_defaults(lock_path="$state_path/lock") That's a side detail, however. > The reverse would also happen (if the value of state_path was somehow > defined to depend on lock_path), This dependency wouldn't/shouldn't be code - because Neutron *code* shouldn't know about the existence of library config options. Neutron deployers absolutely will be aware of lock_path however. > and that's actually a bigger concern to me. A deployer should be able > to use interpolation anywhere, and not worry about whether the options > are in parts of the code that can see each other. The values are all > in one file, as far as they know, and so interpolation should "just > work". Yes, if a deployer looks at a sample configuration file, all options listed in there seem like they're in play for substitution use within the value of another option. For string substitution only, I'd say there should be a global namespace where all options are registered. Now ... one caveat on all of this ... I do think the string substitution feature is pretty obscure and mostly just used in default values. > I see a few solutions: > > 1. Don't use the config filter at all. > 2. Make the config filter able to add new options and still see > everything else that is already defined (only filter in one > direction). > 3. Leave things as they are, and make the error message better. 4. Just tackle this specific case by making lock_path implicitly relative to a base path the application can set via an API, so Neutron would do: lockutils.set_base_path(CONF.state_path) at startup. 5. Make the toplevel ConfigOpts aware of all filters hanging off it, and somehow cycle through all of those filters just when doing string substitution. 
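Option 2 - filtering in one direction only - can be sketched with a small toy model (made-up class names, not the real oslo.config API): substitution looks up through the parent namespace, while the filter's own options remain hidden from the application.

```python
import string

class ToyConfigOpts:
    """Simplified option namespace, standing in for oslo.config's
    ConfigOpts (illustrative only)."""

    def __init__(self):
        self._opts = {}

    def register(self, name, default):
        self._opts[name] = default

    def visible_opts(self):
        return dict(self._opts)

    def resolve(self, name):
        tmpl = string.Template(self._opts[name])
        return tmpl.substitute(self.visible_opts())

class OneWayFilter(ToyConfigOpts):
    """Option 2 in miniature: the filter's options stay private, but
    substitution can see everything already defined on the parent."""

    def __init__(self, parent):
        super().__init__()
        self._parent = parent

    def visible_opts(self):
        merged = self._parent.visible_opts()  # parent options first...
        merged.update(self._opts)             # ...private ones shadow them
        return merged

conf = ToyConfigOpts()
conf.register('state_path', '/var/lib/neutron')

filtered = OneWayFilter(conf)
filtered.register('lock_path', '$state_path/lock')

print(filtered.resolve('lock_path'))       # /var/lib/neutron/lock
print('lock_path' in conf.visible_opts())  # False: still hidden from the app
```

Interpolation in the other direction (an application option referencing `$lock_path`) would still fail, which is the residual concern with this option.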
> Because of the deployment implications of using the filter, I'm > inclined to go with choice 1 or 2. However, choice 2 leaves open the > possibility of a deployer wanting to use the value of an option > defined by one filtered set of code when defining another. I don't > know how frequently that might come up, but it seems like the error > would be very confusing, especially if both options are set in the > same config file. > > I think that leaves option 1, which means our plans for hiding options > from applications need to be rethought. > > Does anyone else see another solution that I'm missing? I'd do something like (3) and (4), then wait to see if it crops up multiple times in the future before tackling a more general solution. With option (1), the basic thing to think about is how to maintain API compatibility - if we expose the options through the API, how do we deal with future moves, removals, renames, and changing semantics of those config options. Mark. From ihrachys at redhat.com Tue Dec 16 12:42:28 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 16 Dec 2014 13:42:28 +0100 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: <38253D73-25BD-4E24-9676-6450ABD6CEBE@doughellmann.com> References: <548EED0D.6020600@redhat.com> <549007EE.1060205@redhat.com> <38253D73-25BD-4E24-9676-6450ABD6CEBE@doughellmann.com> Message-ID: <549028B4.5000407@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 16/12/14 12:52, Doug Hellmann wrote: > > On Dec 16, 2014, at 5:22 AM, Ihar Hrachyshka > wrote: > >> Signed PGP part On 15/12/14 17:22, Doug Wiegley wrote: >>> Hi Ihar, >>> >>> I'm actually in favor of option 2, but it implies a few things >>> about your time, and I wanted to chat with you before >>> presuming. >> >> I think split didn't mean moving project trees under separate >> governance, so I assume oslo (doc, qa, ...) liaisons should not >> be split either. 
>> >>> >>> Maintenance can not involve breaking changes. At this point, >>> the co-gate will block it. Also, oslo graduation changes will >>> have to be made in the services repos first, and then Neutron. >> >> Do you mean that a change to oslo-incubator modules is co-gated >> (not just co-checked with no vote) with each of advanced >> services? >> >> As I pointed in my previous email, sometimes breakages are >> inescapable. >> >> Consider a change to neutron oslo-incubator module used commonly >> in all repos that breaks API (they are quite rare, but still have >> a chance of happening once in a while). If we would co-gate main >> neutron repo changes with services, it will mean that we won't be >> able to merge the change. >> >> That would probably suggest that we go forward with option 3 and >> manage all incubator files separately in each of the trees, >> though, again, breakages are still possible in that scenario via >> introducing incompatibility between versions of incubator modules >> in separate repos. >> >> So we should be realistic about it and plan forward how we deal >> potential breakages that *may* occur. >> >> As for oslo library graduations, the order is not really >> significant. What is significant is that we drop oslo-incubator >> module from main neutron repo only after all other neutron-*aas >> repos migrate to appropriate oslo.* library. The neutron >> migration itself may occur in parallel (by postponing module drop >> later). > > Don?t assume that it?s safe to combine the incubated version and > library version of a module. We?ve had some examples where the APIs > change or global state changes in a way that make the two > incompatible. We definitely don?t take any care to ensure that the > two copies can be run together. Hm. Does it leave us with option 3 only? In that case, should we care about incompatibilities between different versions of incubator modules running in the same process (one for core code, and another one for a service)? 
That sounds more like we're not left with safe options. > >> >>> >>> Thanks, doug >>> >>> >>> On 12/15/14, 6:15 AM, "Ihar Hrachyshka" >>> wrote: >>> >>> Hi all, >>> >>> the question arose recently in one of reviews for neutron-*aas >>> repos to remove all oslo-incubator code from those repos since >>> it's duplicated in neutron main repo. (You can find the link to >>> the review at the end of the email.) >>> >>> Brief hostory: neutron repo was recently split into 4 pieces >>> (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The >>> split resulted in each repository keeping their own copy of >>> neutron/openstack/common/... tree (currently unused in all >>> neutron-*aas repos that are still bound to modules from main >>> repo). >>> >>> As a oslo liaison for the project, I wonder what's the best way >>> to manage oslo-incubator files. We have several options: >>> >>> 1. just kill all the neutron/openstack/common/ trees from >>> neutron-*aas repositories and continue using modules from main >>> repo. >>> >>> 2. kill all duplicate modules from neutron-*aas repos and >>> leave only those that are used in those repos but not in main >>> repo. >>> >>> 3. fully duplicate all those modules in each of four repos that >>> use them. >>> >>> I think option 1. is a straw man, since we should be able to >>> introduce new oslo-incubator modules into neutron-*aas repos >>> even if they are not used in main repo. >>> >>> Option 2. is good when it comes to synching non-breaking bug >>> fixes (or security fixes) from oslo-incubator, in that it will >>> require only one sync patch instead of e.g. four. At the same >>> time there may be potential issues when synchronizing updates >>> from oslo-incubator that would break API and hence require >>> changes to each of the modules that use it. 
Since we don't >>> support atomic merges for multiple projects in gate, we will >>> need to be cautious about those updates, and we will still need >>> to leave neutron-*aas repos broken for some time (though the >>> time may be mitigated with care). >>> >>> Option 3. is vice versa - in theory, you get total decoupling, >>> meaning no oslo-incubator updates in main repo are expected to >>> break neutron-*aas repos, but bug fixing becomes a huge PITA. >>> >>> I would vote for option 2., for two reasons: - most >>> oslo-incubator syncs are non-breaking, and we may effectively >>> apply care to updates that may result in potential breakage >>> (f.e. being able to trigger an integrated run for each of >>> neutron-*aas repos with the main sync patch, if there are any >>> concerns). - it will make oslo liaison life a lot easier. OK, >>> I'm probably too selfish on that. ;) - it will make stable >>> maintainers life a lot easier. The main reason why stable >>> maintainers and distributions like recent oslo graduation >>> movement is that we don't need to track each bug fix we need in >>> every project, and waste lots of cycles on it. Being able to >>> fix a bug in one place only is *highly* anticipated. [OK, I'm >>> quite selfish on that one too.] - it's a delusion that there >>> will be no neutron-main syncs that will break neutron-*aas >>> repos ever. There can still be problems due to incompatibility >>> between neutron main and neutron-*aas code resulted EXACTLY >>> because multiple parts of the same process use different >>> versions of the same module. >>> >>> That said, Doug Wiegley (lbaas core) seems to be in favour of >>> option 3. due to lower coupling that is achieved in that way. >>> I know that lbaas team had a bad experience due to tight >>> coupling to neutron project in the past, so I appreciate their >>> concerns. 
>>> >>> All in all, we should come up with some standard solution for >>> both advanced services that are already split out, *and* >>> upcoming vendor plugin shrinking initiative. >>> >>> The initial discussion is captured at: >>> https://review.openstack.org/#/c/141427/ >>> >>> Thanks, /Ihar >>> >> >> >> _______________________________________________ OpenStack-dev >> mailing list OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUkCi0AAoJEC5aWaUY1u57lBoH/3sSr2yF4PfrMjTS4dyplgQ7 ZW+tSewctNf1JRYfh4l9+eYnv+R9YB4xWT7AvfBVO28WP2BuK5CmZ5ja49M8fzmO jeRsgTInnZ7Hm3RkyAxpHdsiLuVRKN0syuEPN81BVJI0gLBd3kVf/0Anc6raC/Op RBlYOL9pUoCiSki+a6Pg4j2Zn/yKUAcOmGWblCoB7zpFNgeWNAoXCH06/6bKtDFg u0DHArKyOC/ZgmNs5BD/i2EFr71dqR5kitryuRbV02nVkm6U2iO6QfCgQx6PG61q vQHN3bLJ2JLuA2weisZL+20yDeSb9kAuXTwdstG/rhNTWH89CZy1nsuWdbXcsqE= =NE/f -----END PGP SIGNATURE----- From henry4hly at gmail.com Tue Dec 16 12:56:28 2014 From: henry4hly at gmail.com (Henry) Date: Tue, 16 Dec 2014 20:56:28 +0800 Subject: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change In-Reply-To: References: <87egs0re9m.fsf@metaswitch.com> Message-ID: Sent from my iPad On 2014-12-16, at ??2:54, "Armando M." wrote: > > > Good questions. I'm also looking for the linux bridge MD, SRIOV MD... > Who will be responsible for these drivers? > > Excellent question. In my opinion, 'technology' specific but not vendor specific MD (like SRIOV) should not be maintained by specific vendor. It should be accessible for all interested parties for contribution. 
> > I don't think that anyone is making the suggestion of making these drivers develop in silos, but instead one of the objectives is to allow them to evolve more rapidly, and in the open, where anyone can participate. > > > The OVS driver is maintained by the Neutron community, vendor-specific hardware drivers by vendors, SDN controller drivers by their own community or vendor. But there are also other drivers like SRIOV, which are general for a lot of vendor-agnostic backends, and can't be maintained by a certain vendor/community. > > Certain technologies, like the ones mentioned above, may require specific hardware; even though they may not be particularly associated with a specific vendor, some sort of vendor support is indeed required, like 3rd party CI. So, grouping them together under an hw-accelerated umbrella, or whichever other name that sticks, may make sense long term should the number of drivers really ramp up as hinted below. There are also MDs not related to hardware, like vif-tap and vif-vhostuser. Even for SRIOV, a stub agent for testing is enough, no need for real hardware. All these MDs should be very thin, only handling port binding. > > > So, it would be better to keep some "general backend" MDs in tree besides SRIOV. There are also vif-type-tap, vif-type-vhostuser, hierarchy-binding-external-VTEP ... We can implement a very thin in-tree base MD that only handles "vif bind" and is backend agnostic; the backend provider is then free to implement its own service logic, either with a backend agent, or with a driver derived from the base MD for the agentless scenario. > > Keeping general backend MDs in tree sounds reasonable.
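The "very thin base MD that only handles vif bind" idea can be sketched in a few lines. In-tree, such a driver would subclass neutron.plugins.ml2.driver_api.MechanismDriver; the sketch below is deliberately self-contained so it runs stand-alone — the PortContext stub and all names are simplified assumptions, not Neutron's real classes:

```python
class PortContextStub:
    """Loose stand-in for ML2's PortContext (assumption, not the real API)."""
    def __init__(self, segments):
        self.segments_to_bind = segments
        self.binding = None

    def set_binding(self, segment_id, vif_type, vif_details):
        self.binding = (segment_id, vif_type, vif_details)


class ThinTapMechanismDriver:
    """Backend-agnostic driver: bind the port, do nothing else.

    A backend provider would subclass this (agentless case) or pair it
    with its own agent for the actual wiring.
    """
    VIF_TYPE = "tap"  # could equally be "vhostuser", etc.

    def initialize(self):
        pass  # no backend state needed

    def bind_port(self, context):
        for segment in context.segments_to_bind:
            # A real driver would check segment type/physnet here.
            context.set_binding(segment["id"], self.VIF_TYPE,
                                {"port_filter": False})
            return


driver = ThinTapMechanismDriver()
driver.initialize()
ctx = PortContextStub([{"id": "seg-1", "network_type": "flat"}])
driver.bind_port(ctx)
```

The point of the sketch is how little such a driver has to do: everything backend-specific lives outside the binding decision.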
> Regards > > > Many thanks, > > Neil > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Tue Dec 16 12:59:35 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 16 Dec 2014 13:59:35 +0100 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: References: <548EED0D.6020600@redhat.com> <549005D5.7070707@redhat.com> <88210AFF-9388-488F-A443-3480682F0D51@doughellmann.com> <5490253F.7020802@redhat.com> Message-ID: <54902CB7.8080507@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 16/12/14 13:41, Doug Hellmann wrote: > > On Dec 16, 2014, at 7:27 AM, Ihar Hrachyshka > wrote: > >> Signed PGP part On 16/12/14 12:50, Doug Hellmann wrote: >>> >>> On Dec 16, 2014, at 5:13 AM, Ihar Hrachyshka >>> wrote: >>> >>>> Signed PGP part On 15/12/14 18:57, Doug Hellmann wrote: >>>>> There may be a similar problem managing dependencies on >>>>> libraries that live outside of either tree. I assume you >>>>> already decided how to handle that. Are you doing the same >>>>> thing, and adding the requirements to neutron?s lists? >>>> >>>> I guess the idea is to keep in neutron-*aas only those >>>> oslo-incubator modules that are used there solely (=not used >>>> in main repo). >>> >>> How are the *aas packages installed? 
Are they separate >>> libraries or applications that are installed on top of neutron? >>> Or are their files copied into the neutron namespace? >> >> They are separate libraries with their own setup.py, >> dependencies, tarballs, all that, but they are free to use >> (public) code from main neutron package. > > OK. > > If they don?t have copies of all of the incubated modules they use, > how are they tested? Is neutron a dependency? Yes, neutron is in their requirements.txt files. > >> >>> >>>> >>>> I think requirements are a bit easier and should track all >>>> direct dependencies in each of the repos, so that in case >>>> main repo decides to drop one, neutron-*aas repos are not >>>> broken. >>>> >>>> For requirements, it's different because there is no major >>>> burden due to duplicate entries in repos. >>>> >>>>> >>>>> On Dec 15, 2014, at 12:16 PM, Doug Wiegley >>>>> wrote: >>>>> >>>>>> Hi all, >>>>>> >>>>>> Ihar and I discussed this on IRC, and are going forward >>>>>> with option 2 unless someone has a big problem with it. >>>>>> >>>>>> Thanks, Doug >>>>>> >>>>>> >>>>>> On 12/15/14, 8:22 AM, "Doug Wiegley" >>>>>> wrote: >>>>>> >>>>>>> Hi Ihar, >>>>>>> >>>>>>> I?m actually in favor of option 2, but it implies a >>>>>>> few things about your time, and I wanted to chat with >>>>>>> you before presuming. >>>>>>> >>>>>>> Maintenance can not involve breaking changes. At this >>>>>>> point, the co-gate will block it. Also, oslo >>>>>>> graduation changes will have to be made in the services >>>>>>> repos first, and then Neutron. >>>>>>> >>>>>>> Thanks, doug >>>>>>> >>>>>>> >>>>>>> On 12/15/14, 6:15 AM, "Ihar Hrachyshka" >>>>>>> wrote: >>>>>>> >>>>> Hi all, >>>>> >>>>> the question arose recently in one of reviews for >>>>> neutron-*aas repos to remove all oslo-incubator code from >>>>> those repos since it's duplicated in neutron main repo. >>>>> (You can find the link to the review at the end of the >>>>> email.) 
>>>>> >>>>> Brief history: neutron repo was recently split into 4 >>>>> pieces (main, neutron-fwaas, neutron-lbaas, and >>>>> neutron-vpnaas). The split resulted in each repository >>>>> keeping their own copy of neutron/openstack/common/... tree >>>>> (currently unused in all neutron-*aas repos that are still >>>>> bound to modules from main repo). >>>>> >>>>> As an oslo liaison for the project, I wonder what's the best >>>>> way to manage oslo-incubator files. We have several >>>>> options: >>>>> >>>>> 1. just kill all the neutron/openstack/common/ trees from >>>>> neutron-*aas repositories and continue using modules from >>>>> main repo. >>>>> >>>>> 2. kill all duplicate modules from neutron-*aas repos and >>>>> leave only those that are used in those repos but not in >>>>> main repo. >>>>> >>>>> 3. fully duplicate all those modules in each of four repos >>>>> that use them. >>>>> >>>>> I think option 1. is a straw man, since we should be able >>>>> to introduce new oslo-incubator modules into neutron-*aas >>>>> repos even if they are not used in main repo. >>>>> >>>>> Option 2. is good when it comes to syncing non-breaking >>>>> bug fixes (or security fixes) from oslo-incubator, in that >>>>> it will require only one sync patch instead of e.g. four. >>>>> At the same time there may be potential issues when >>>>> synchronizing updates from oslo-incubator that would break >>>>> API and hence require changes to each of the modules that >>>>> use it. Since we don't support atomic merges for multiple >>>>> projects in gate, we will need to be cautious about those >>>>> updates, and we will still need to leave neutron-*aas repos >>>>> broken for some time (though the time may be mitigated with >>>>> care).
>>>>> >>>>> I would vote for option 2., for two reasons: - most >>>>> oslo-incubator syncs are non-breaking, and we may >>>>> effectively apply care to updates that may result in >>>>> potential breakage (f.e. being able to trigger an >>>>> integrated run for each of neutron-*aas repos with the main >>>>> sync patch, if there are any concerns). - it will make oslo >>>>> liaison life a lot easier. OK, I'm probably too selfish on >>>>> that. ;) - it will make stable maintainers life a lot >>>>> easier. The main reason why stable maintainers and >>>>> distributions like recent oslo graduation movement is that >>>>> we don't need to track each bug fix we need in every >>>>> project, and waste lots of cycles on it. Being able to fix >>>>> a bug in one place only is *highly* anticipated. [OK, I'm >>>>> quite selfish on that one too.] - it's a delusion that >>>>> there will be no neutron-main syncs that will break >>>>> neutron-*aas repos ever. There can still be problems due to >>>>> incompatibility between neutron main and neutron-*aas code >>>>> resulted EXACTLY because multiple parts of the same process >>>>> use different versions of the same module. >>>>> >>>>> That said, Doug Wiegley (lbaas core) seems to be in favour >>>>> of option 3. due to lower coupling that is achieved in that >>>>> way. I know that lbaas team had a bad experience due to >>>>> tight coupling to neutron project in the past, so I >>>>> appreciate their concerns. >>>>> >>>>> All in all, we should come up with some standard solution >>>>> for both advanced services that are already split out, >>>>> *and* upcoming vendor plugin shrinking initiative. 
>>>>> >>>>> The initial discussion is captured at: >>>>> https://review.openstack.org/#/c/141427/ >>>>> >>>>> Thanks, /Ihar >>>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> OpenStack-dev mailing list >>>>>> OpenStack-dev at lists.openstack.org >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>> >> > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUkCy3AAoJEC5aWaUY1u57Fk4IAOM2BpwcqlOgfxMtvcjfNpHB IfFQgAalsI0uzycL9hbArYP66ZcTzDSSIX7DNndTIAHwDAvoEtOJ+WVqFm3jH9Tr WHiTy0tx1nkzyE4oGZdp1DAYg2LEPuNn3tbC8ROqUnIqFrZ0voMKuhTGOCe4cNWL L+lljW6H1r5DZVm56gk9HsJHYwrmMYfY8YiQ5AH+j6w5rlu2a4Y6VtlDsWGZWBL4 kmnfhzjZLUnuJ3CBlbClApsJOh54dDjVgJkHxoLgGnKVzLptoXEn+0IcMippe4AR SFGcF9NAuugXgJqJlICfDVcFF6VgsQXmoC99Cq4L1EOaGdsF91SYvrEZ3JTlOTM= =d/+5 -----END PGP SIGNATURE----- From doug at doughellmann.com Tue Dec 16 13:15:24 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 16 Dec 2014 08:15:24 -0500 Subject: [openstack-dev] oslo.db 1.2.1 release coming to fix stable/juno In-Reply-To: <193BFB76-D89B-4B3A-94F8-DB6FEFEC5138@doughellmann.com> References: <27A76931-16EF-400B-8EDD-6C3E18F52053@doughellmann.com> <193BFB76-D89B-4B3A-94F8-DB6FEFEC5138@doughellmann.com> Message-ID: <31C68845-C96B-4744-B106-11B1CDDCED36@doughellmann.com> On Dec 15, 2014, at 5:58 PM, Doug Hellmann wrote: > > On Dec 15, 2014, at 3:21 PM, Doug Hellmann wrote: > >> The issue with stable/juno jobs failing because of the difference in the SQLAlchemy requirements between the older applications and the newer oslo.db is being addressed with a new release of the 1.2.x series. We will then cap the requirements for stable/juno to 1.2.1. We decided we did not need to raise the minimum version of oslo.db allowed in kilo, because the old versions of the library do work, if they are installed from packages and not through setuptools. 
>> >> Jeremy created a feature/1.2 branch for us, and I have 2 patches up [1][2] to apply the requirements fix. The change to the oslo.db version in stable/juno is [3]. >> >> After the changes in oslo.db merge, I will tag 1.2.1. > After spending several hours exploring a bunch of options to make this actually work, some of which require making changes to test job definitions, grenade, or other long-term changes, I'm proposing a new approach:
>
> 1. Undo the change in master that broke the compatibility with versions of SQLAlchemy by making master match juno: https://review.openstack.org/141927
> 2. Update oslo.db after ^^ lands.
> 3. Tag oslo.db 1.4.0 with a set of requirements compatible with Juno.
> 4. Change the requirements in stable/juno to skip oslo.db 1.1, 1.2, and 1.3.
>
> I'll proceed with that plan tomorrow morning (~15 hours from now) unless someone points out why that won't work in the mean time. I just reset a few approved patches that were not going to land because of this issue to kick them out of the gate to expedite landing part of the fix. I did this by modifying their commit messages. I tried to limit the changes to simple cosmetic tweaks, so if you see a weird change to one of your patches that's probably why.
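The "skip these versions" step maps onto pip exclusion specifiers in the stable/juno requirements file. A hypothetical fragment — the bounds here are illustrative only, not the merged change, and the pip of that era needed each released version excluded explicitly rather than a `.*` wildcard:

```text
# stable/juno requirements (hypothetical illustration, not the merged change)
oslo.db>=1.0.1,!=1.1.0,!=1.2.0,!=1.2.1,!=1.3.0
```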
> > Doug > >> >> Doug >> >> [1] https://review.openstack.org/#/c/141893/ >> [2] https://review.openstack.org/#/c/141894/ >> [3] https://review.openstack.org/#/c/141896/ >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Tue Dec 16 13:20:51 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 16 Dec 2014 08:20:51 -0500 Subject: [openstack-dev] [oslo] interesting problem with config filter In-Reply-To: <1418733697.16928.106.camel@sorcha> References: <4BCA7D02-38D6-4B5B-A496-8EB259C1A792@doughellmann.com> <1418733697.16928.106.camel@sorcha> Message-ID: <1B377DFB-FB0A-443E-AB46-249BC201EBB6@doughellmann.com> On Dec 16, 2014, at 7:41 AM, Mark McLoughlin wrote: > Hi Doug, > > On Mon, 2014-12-08 at 15:58 -0500, Doug Hellmann wrote: >> As we've discussed a few times, we want to isolate applications from >> the configuration options defined by libraries. One way we have of >> doing that is the ConfigFilter class in oslo.config. When a regular >> ConfigOpts instance is wrapped with a filter, a library can register >> new options on the filter that are not visible to anything that >> doesn't have the filter object. > > Or to put it more simply, the configuration options registered by the > library should not be part of the public API of the library. > >> Unfortunately, the Neutron team has identified an issue with this >> approach. We have a bug report [1] from them about the way we're using >> config filters in oslo.concurrency specifically, but the issue applies >> to their use everywhere.
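The issue the thread goes on to describe can be modelled without oslo.config itself: if `$other_opt` substitution only consults the option namespace that a given view can see, a value referencing an option registered elsewhere cannot resolve. This is a toy model, not the real ConfigOpts/ConfigFilter API:

```python
import string

class ToyNamespace:
    """Toy model of an isolated option namespace; NOT the real oslo.config API."""
    def __init__(self):
        self.opts = {}

    def register(self, name, default):
        self.opts[name] = default

    def get(self, name):
        raw = self.opts[name]
        # $other_opt substitution resolves against this namespace only,
        # mirroring how a filtered view cannot see the application's options.
        return string.Template(raw).substitute(self.opts)


app = ToyNamespace()                  # the application's options
app.register("state_path", "/var/lib/neutron")

filtered = ToyNamespace()             # the library's filtered view
filtered.register("lock_path", "$state_path/lock")

try:
    filtered.get("lock_path")         # state_path is invisible here
    interpolation_failed = False
except KeyError:
    interpolation_failed = True
```

The deployer sees both options in one config file, so the boundary between the two namespaces is invisible to them — which is exactly why the resulting error is confusing.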
>> >> The neutron tests set the default for oslo.concurrency's lock_path >> variable to '$state_path/lock', and the state_path option is defined >> in their application. With the filter in place, interpolation of >> $state_path to generate the lock_path value fails because state_path >> is not known to the ConfigFilter instance. > It seems that Neutron sets this default in its etc/neutron.conf file in > its git tree: > > lock_path = $state_path/lock > > I think we should be aiming for defaults like this to be set in code, > and for the sample config files to contain nothing but comments. So, > neutron should do: > > lockutils.set_defaults(lock_path="$state_path/lock") > > That's a side detail, however. > >> The reverse would also happen (if the value of state_path was somehow >> defined to depend on lock_path), > > This dependency wouldn't/shouldn't be code - because Neutron *code* > shouldn't know about the existence of library config options. > Neutron deployers absolutely will be aware of lock_path however. > >> and that's actually a bigger concern to me. A deployer should be able >> to use interpolation anywhere, and not worry about whether the options >> are in parts of the code that can see each other. The values are all >> in one file, as far as they know, and so interpolation should "just >> work". > > Yes, if a deployer looks at a sample configuration file, all options > listed in there seem like they're in-play for substitution use within > the value of another option. For string substitution only, I'd say there > should be a global namespace where all options are registered. > > Now ... one caveat on all of this ... I do think the string substitution > feature is pretty obscure and mostly just used in default values. > >> I see a few solutions: >> >> 1. Don't use the config filter at all. >> 2. Make the config filter able to add new options and still see >> everything else that is already defined (only filter in one >> direction). >> 3.
Leave things as they are, and make the error message better. > > 4. Just tackle this specific case by making lock_path implicitly > relative to a base path the application can set via an API, so Neutron > would do: > > lockutils.set_base_path(CONF.state_path) > > at startup. > > 5. Make the toplevel ConfigOpts aware of all filters hanging off it, and > somehow cycle through all of those filters just when doing string > substitution. We would have to allow the reverse as well, since the filter object doesn't see options not explicitly imported by the code creating the filter. In either case, it only works if the filter object has been instantiated. I wonder if we have a similar problem with runtime option registration. I'll have to test that. > >> Because of the deployment implications of using the filter, I'm >> inclined to go with choice 1 or 2. However, choice 2 leaves open the >> possibility of a deployer wanting to use the value of an option >> defined by one filtered set of code when defining another. I don't >> know how frequently that might come up, but it seems like the error >> would be very confusing, especially if both options are set in the >> same config file. >> >> I think that leaves option 1, which means our plans for hiding options >> from applications need to be rethought. >> >> Does anyone else see another solution that I'm missing? > > I'd do something like (3) and (4), then wait to see if it crops up > multiple times in the future before tackling a more general solution. Option 3 prevents neutron from adopting oslo.concurrency, and option 4 is a backwards-incompatible change to the way lock path is set. > > With option (1), the basic thing to think about is how to maintain API > compatibility - if we expose the options through the API, how do we deal > with future moves, removals, renames, and changing semantics of those > config options.
The option is exposed through the existing set_defaults() method, so we can make that handle any backwards compatibility issues if we change it. > > Mark. > From gilmeir at mellanox.com Tue Dec 16 13:33:32 2014 From: gilmeir at mellanox.com (Gil Meir) Date: Tue, 16 Dec 2014 13:33:32 +0000 Subject: [openstack-dev] [Fuel] Message-ID: A performance issue was found when using OVS mechanism (we deduced it's on VM RX side) - we get very limited BW, tested with iperf. When using Mellanox SR-IOV the problem does not occur, this also points on OVS mechanism driver problem. LP bug with all details: https://bugs.launchpad.net/fuel/+bug/1403047/ For further questions Sasha from Mellanox who reported the bug is now on #fuel-dev with nick = t-sasha, and is also Cc here. Regards, Gil Meir SW Cloud Solutions Mellanox Technologies -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.morin at orange.com Tue Dec 16 13:40:08 2014 From: thomas.morin at orange.com (Thomas Morin) Date: Tue, 16 Dec 2014 14:40:08 +0100 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration In-Reply-To: <891761EAFA335D44AD1FFDB9B4A8C063DA025B@G9W0733.americas.hpqcorp.net> References: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> <891761EAFA335D44AD1FFDB9B4A8C063DA025B@G9W0733.americas.hpqcorp.net> Message-ID: <54903638.20608@orange.com> Hi Keshava, 2014-12-15 11:52, A, Keshava : > I have been thinking of "Starting MPLS right from CN" for L2VPN/EVPN scenario also. > > Below are my queries w.r.t supporting MPLS from OVS : > 1. MPLS will be used even for VM-VM traffic across CNs generated by OVS ? If E-VPN is used only to interconnect outside of a Neutron domain, then MPLS does not have to be used for traffic between VMs. 
If E-VPN is used inside one DC for VM-VM traffic, then MPLS is *one* of the possible encapsulation only: E-VPN specs have been defined to use VXLAN (handy because there is native kernel support), MPLS/GRE or MPLS/UDP are other possibilities. > 2. MPLS will be originated right from OVS and will be mapped at Gateway (it may be NN/Hardware router ) to SP network ? > So MPLS will carry 2 Labels ? (one for hop-by-hop, and other one for end to identify network ?) On "will carry 2 Labels ?" : this would be one possibility, but not the one we target. We would actually favor MPLS/GRE (GRE used instead of what you call the MPLS "hop-by-hop" label) inside the DC -- this requires only one label. At the DC edge gateway, depending on the interconnection techniques to connect the WAN, different options can be used (RFC4364 section 10): Option A with back-to-back VRFs (no MPLS label, but typically VLANs), or option B (with one MPLS label), a mix of A/B is also possible and sometimes called option D (one label) ; option C also exists, but is not a good fit here. Inside one DC, if vswitches see each other across an Ethernet segment, we can also use MPLS with just one label (the VPN label) without a GRE encap. In a way, you can say that in Option B, the label are "mapped" at the DC/WAN gateway(s), but this is really just MPLS label swaping, not to be misunderstood as mapping a DC label space to a WAN label space (see below, the label space is local to each device). > 3. MPLS will go over even the "network physical infrastructure" also ? The use of MPLS/GRE means we are doing an overlay, just like your typical VXLAN-based solution, and the network physical infrastructure does not need to be MPLS-aware (it just needs to be able to carry IP traffic) > 4. How the Labels will be mapped a/c virtual and physical world ? (I don't get the question, I'm not sure what you mean by "mapping labels") > 5. Who manages the label space ? Virtual world or physical world or both ? (OpenStack + ODL ?) 
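The reply that follows hinges on downstream assignment: each receiving device allocates labels from its own local space, so no global manager is needed. A toy sketch of that model (device and VPN names invented):

```python
import itertools

class Device:
    """Toy model of a downstream-assigned MPLS label space (local per device)."""
    def __init__(self, name):
        self.name = name
        self._next = itertools.count(42)   # local space; may start anywhere
        self.label_to_vrf = {}

    def advertise_label(self, vrf):
        # Allocate a local label for the VRF; in BGP VPNs this value is
        # then advertised to senders along with the route.
        label = next(self._next)
        self.label_to_vrf[label] = vrf
        return label

    def receive(self, label):
        # The forwarding decision is purely local to the receiving device.
        return self.label_to_vrf[label]


a, b = Device("A"), Device("B")
label_for_x_at_a = a.advertise_label("VPN-X")   # 42 on device A
label_for_y_at_b = b.advertise_label("VPN-Y")   # also 42 on device B, no clash
```

The same label value meaning different VPNs on different receivers is not a collision — it is the normal consequence of the space being per-device.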
In MPLS*, the label space is local to each device : a label is "downstream-assigned", i.e. allocated by the receiving device for a specific purpose (e.g. forwarding in a VRF). It is then (typically) avertized in a routing protocol; the sender device will use this label to send traffic to the receiving device for this specific purpose. As a result a sender device may then use label 42 to forward traffic in the context of VPN X to a receiving device A, and the same label 42 to forward traffic in the context of another VPN Y to another receiving device B, and locally use label 42 to receive traffic for VPN Z. There is no global label space to manage. So, while you can design a solution where the label space is managed in a centralized fashion, this is not required. You could design an SDN controller solution where the controller would manage one label space common to all nodes, or all the label spaces of all forwarding devices, but I think its hard to derive any interesting property from such a design choice. In our BaGPipe distributed design (and this is also true in OpenContrail for instance) the label space is managed locally on each compute node (or network node if the BGP speaker is on a network node). More precisely in VPN implementation. If you take a step back, the only naming space that has to be "managed" in BGP VPNs is the Route Target space. This is only in the control plane. It is a very large space (48 bits), and it is structured (each AS has its own 32 bit space, and there are private AS numbers). The mapping to the dataplane to MPLS labels is per-device and purely local. (*: MPLS also allows "upstream-assigned" labels, it is more recent and only used in specific cases where downstream assigned does not work well) > 6. The labels are nested (i.e. Like L3 VPN end to end MPLS connectivity ) will be established ? In solutions where MPLS/GRE is used the label stack typically has only one label (the VPN label). > 7. 
Or it will be label stitching between Virtual-Physical network ? > How the end-to-end path will be setup ? > > Let me know your opinion for the same. > How the end-to-end path is set up may depend on the interconnection choice. With an inter-AS option B or A+B, you would have the following:
- ingress DC overlay: one MPLS-over-GRE hop from vswitch to DC edge
- ingress DC edge to WAN: one MPLS label (VPN label advertised by eBGP)
- inside the WAN: (typically) two labels (e.g. LDP label to reach remote edge, and VPN label advertised via iBGP)
- WAN to egress DC edge: one MPLS label (VPN label advertised by eBGP)
- egress DC overlay: one MPLS-over-GRE hop from DC edge to vswitch
Not sure how the above answers your questions; please keep asking if it does not ! ;) -Thomas > -----Original Message----- > From: Mathieu Rohon [mailto:mathieu.rohon at gmail.com] > Sent: Monday, December 15, 2014 3:46 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration > > Hi Ryan, > > We have been working on similar Use cases to announce /32 with the Bagpipe BGPSpeaker that supports EVPN. > Please have a look at use case B in [1][2]. > Note also that the L2population Mechanism driver for ML2, that is compatible with OVS, Linuxbridge and ryu ofagent, is inspired by EVPN, and I'm sure it could help in your use case > > [1]http://fr.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe > [2]https://www.youtube.com/watch?v=q5z0aPrUZYc&sns > [3]https://blueprints.launchpad.net/neutron/+spec/l2-population > > Mathieu > > On Thu, Dec 4, 2014 at 12:02 AM, Ryan Clevenger wrote: >> Hi, >> >> At Rackspace, we have a need to create a higher level networking >> service primarily for the purpose of creating a Floating IP solution >> in our environment.
The current solutions for Floating IPs, being tied >> to plugin implementations, does not meet our needs at scale for the following reasons: >> >> 1. Limited endpoint H/A mainly targeting failover only and not >> multi-active endpoints, 2. Lack of noisy neighbor and DDOS mitigation, >> 3. IP fragmentation (with cells, public connectivity is terminated >> inside each cell leading to fragmentation and IP stranding when cell >> CPU/Memory use doesn't line up with allocated IP blocks. Abstracting >> public connectivity away from nova installations allows for much more >> efficient use of those precious IPv4 blocks). >> 4. Diversity in transit (multiple encapsulation and transit types on a >> per floating ip basis). >> >> We realize that network infrastructures are often unique and such a >> solution would likely diverge from provider to provider. However, we >> would love to collaborate with the community to see if such a project >> could be built that would meet the needs of providers at scale. We >> believe that, at its core, this solution would boil down to >> terminating north<->south traffic temporarily at a massively >> horizontally scalable centralized core and then encapsulating traffic >> east<->west to a specific host based on the association setup via the current L3 router's extension's 'floatingips' >> resource. >> >> Our current idea, involves using Open vSwitch for header rewriting and >> tunnel encapsulation combined with a set of Ryu applications for management: >> >> https://i.imgur.com/bivSdcC.png >> >> The Ryu application uses Ryu's BGP support to announce up to the >> Public Routing layer individual floating ips (/32's or /128's) which >> are then summarized and announced to the rest of the datacenter. If a >> particular floating ip is experiencing unusually large traffic (DDOS, >> slashdot effect, etc.), the Ryu application could change the >> announcements up to the Public layer to shift that traffic to >> dedicated hosts setup for that purpose. 
It also announces a single /32 >> "Tunnel Endpoint" ip downstream to the TunnelNet Routing system which >> provides transit to and from the cells and their hypervisors. Since >> traffic from either direction can then end up on any of the FLIP >> hosts, a simple flow table to modify the MAC and IP in either the SRC >> or DST fields (depending on traffic direction) allows the system to be >> completely stateless. We have proven this out (with static routing and >> flows) to work reliably in a small lab setup. >> >> On the hypervisor side, we currently plumb networks into separate OVS >> bridges. Another Ryu application would control the bridge that handles >> overlay networking to selectively divert traffic destined for the >> default gateway up to the FLIP NAT systems, taking into account any >> configured logical routing and local L2 traffic to pass out into the >> existing overlay fabric undisturbed. >> >> Adding in support for L2VPN EVPN >> (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN >> Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) >> to the Ryu BGP speaker will allow the hypervisor side Ryu application >> to advertise up to the FLIP system reachability information to take >> into account VM failover, live-migrate, and supported encapsulation >> types. We believe that decoupling the tunnel endpoint discovery from >> the control plane >> (Nova/Neutron) will provide for a more robust solution as well as >> allow for use outside of openstack if desired. 
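The stateless rewrite described above — DST fields modified on the way toward the VM, SRC fields on the way out, with no per-connection state — can be modelled as a pure function of the floating-IP association table. A toy sketch; all addresses are invented, and a real flow table would also rewrite the outbound source MAC:

```python
# Toy model of the stateless FLIP rewrite (invented addresses).
FLIP_TABLE = {
    # floating ip  -> (vm mac, vm fixed ip)
    "203.0.113.10": ("fa:16:3e:00:00:01", "10.0.0.5"),
}
REVERSE = {fixed: flip for flip, (_, fixed) in FLIP_TABLE.items()}

def rewrite(pkt):
    """Rewrite DST for inbound (north->south) or SRC for outbound traffic."""
    if pkt["dst_ip"] in FLIP_TABLE:            # inbound: toward the VM
        mac, fixed = FLIP_TABLE[pkt["dst_ip"]]
        return dict(pkt, dst_mac=mac, dst_ip=fixed)
    if pkt["src_ip"] in REVERSE:               # outbound: from the VM
        return dict(pkt, src_ip=REVERSE[pkt["src_ip"]])
    return pkt                                 # unrelated traffic passes

inbound = {"src_ip": "198.51.100.7", "dst_ip": "203.0.113.10",
           "dst_mac": "aa:bb:cc:dd:ee:ff"}
out = rewrite(inbound)
```

Because the rewrite depends only on the association table, any FLIP host holding the same table can handle any packet in either direction, which is what makes the multi-active, stateless design possible.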
>> >> From thomas.morin at orange.com Tue Dec 16 13:52:45 2014 From: thomas.morin at orange.com (Thomas Morin) Date: Tue, 16 Dec 2014 14:52:45 +0100 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration In-Reply-To: References: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> Message-ID: <5490392D.8040300@orange.com> Hi Ryan, Mathieu Rohon : > We have been working on similar Use cases to announce /32 with the > Bagpipe BGPSpeaker that supports EVPN. Btw, the code for the BGP E-VPN implementation is at https://github.com/Orange-OpenSource/bagpipe-bgp It reuses parts of ExaBGP (to which we contributed encodings for E-VPN and IP VPNs) and relies on the VXLAN native Linux kernel implementation for the E-VPN dataplane. -Thomas > Please have a look at use case B in [1][2]. > Note also that the L2population Mechanism driver for ML2, that is > compatible with OVS, Linuxbridge and ryu ofagent, is inspired by EVPN, > and I'm sure it could help in your use case > > [1]http://fr.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe > [2]https://www.youtube.com/watch?v=q5z0aPrUZYc&sns > [3]https://blueprints.launchpad.net/neutron/+spec/l2-population > > Mathieu > > On Thu, Dec 4, 2014 at 12:02 AM, Ryan Clevenger > wrote: >> Hi, >> >> At Rackspace, we have a need to create a higher level networking service >> primarily for the purpose of creating a Floating IP solution in our >> environment. The current solutions for Floating IPs, being tied to plugin >> implementations, does not meet our needs at scale for the following reasons: >> >> 1. Limited endpoint H/A mainly targeting failover only and not multi-active >> endpoints, >> 2. Lack of noisy neighbor and DDOS mitigation, >> 3. IP fragmentation (with cells, public connectivity is terminated inside >> each cell leading to fragmentation and IP stranding when cell CPU/Memory use >> doesn't line up with allocated IP blocks. 
Abstracting public connectivity >> away from nova installations allows for much more efficient use of those >> precious IPv4 blocks). >> 4. Diversity in transit (multiple encapsulation and transit types on a per >> floating ip basis). >> >> We realize that network infrastructures are often unique and such a solution >> would likely diverge from provider to provider. However, we would love to >> collaborate with the community to see if such a project could be built that >> would meet the needs of providers at scale. We believe that, at its core, >> this solution would boil down to terminating north<->south traffic >> temporarily at a massively horizontally scalable centralized core and then >> encapsulating traffic east<->west to a specific host based on the >> association setup via the current L3 router's extension's 'floatingips' >> resource. >> >> Our current idea, involves using Open vSwitch for header rewriting and >> tunnel encapsulation combined with a set of Ryu applications for management: >> >> https://i.imgur.com/bivSdcC.png >> >> The Ryu application uses Ryu's BGP support to announce up to the Public >> Routing layer individual floating ips (/32's or /128's) which are then >> summarized and announced to the rest of the datacenter. If a particular >> floating ip is experiencing unusually large traffic (DDOS, slashdot effect, >> etc.), the Ryu application could change the announcements up to the Public >> layer to shift that traffic to dedicated hosts setup for that purpose. It >> also announces a single /32 "Tunnel Endpoint" ip downstream to the TunnelNet >> Routing system which provides transit to and from the cells and their >> hypervisors. Since traffic from either direction can then end up on any of >> the FLIP hosts, a simple flow table to modify the MAC and IP in either the >> SRC or DST fields (depending on traffic direction) allows the system to be >> completely stateless. 
We have proven this out (with static routing and >> flows) to work reliably in a small lab setup. >> >> On the hypervisor side, we currently plumb networks into separate OVS >> bridges. Another Ryu application would control the bridge that handles >> overlay networking to selectively divert traffic destined for the default >> gateway up to the FLIP NAT systems, taking into account any configured >> logical routing and local L2 traffic to pass out into the existing overlay >> fabric undisturbed. >> >> Adding in support for L2VPN EVPN >> (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN >> Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to the >> Ryu BGP speaker will allow the hypervisor side Ryu application to advertise >> up to the FLIP system reachability information to take into account VM >> failover, live-migrate, and supported encapsulation types. We believe that >> decoupling the tunnel endpoint discovery from the control plane >> (Nova/Neutron) will provide for a more robust solution as well as allow for >> use outside of openstack if desired. >> From gkotton at vmware.com Tue Dec 16 13:57:38 2014 From: gkotton at vmware.com (Gary Kotton) Date: Tue, 16 Dec 2014 13:57:38 +0000 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: References: <548EED0D.6020600@redhat.com> <549005D5.7070707@redhat.com> <88210AFF-9388-488F-A443-3480682F0D51@doughellmann.com> <5490253F.7020802@redhat.com> Message-ID: On 12/16/14, 2:41 PM, "Doug Hellmann" wrote: > >On Dec 16, 2014, at 7:27 AM, Ihar Hrachyshka wrote: > >> Signed PGP part >> On 16/12/14 12:50, Doug Hellmann wrote: >> > >> > On Dec 16, 2014, at 5:13 AM, Ihar Hrachyshka >> > wrote: >> > >> >> Signed PGP part On 15/12/14 18:57, Doug Hellmann wrote: >> >>> There may be a similar problem managing dependencies on >> >>> libraries that live outside of either tree. I assume you >> >>> already decided how to handle that. 
Are you doing the same >> >>> thing, and adding the requirements to neutron's lists? >> >> >> >> I guess the idea is to keep in neutron-*aas only those >> >> oslo-incubator modules that are used there solely (=not used in >> >> main repo). >> > >> > How are the *aas packages installed? Are they separate libraries or >> > applications that are installed on top of neutron? Or are their >> > files copied into the neutron namespace? >> >> They are separate libraries with their own setup.py, dependencies, >> tarballs, all that, but they are free to use (public) code from main >> neutron package. > >OK. > >If they don't have copies of all of the incubated modules they use, how >are they tested? Is neutron a dependency? This is/was one of my concerns with the decomposition proposal. It is not clear if neutron is a dependency. My two cents is that it should be. > >> >> > >> >> >> >> I think requirements are a bit easier and should track all >> >> direct dependencies in each of the repos, so that in case main >> >> repo decides to drop one, neutron-*aas repos are not broken. >> >> >> >> For requirements, it's different because there is no major burden >> >> due to duplicate entries in repos. >> >> >> >>> >> >>> On Dec 15, 2014, at 12:16 PM, Doug Wiegley >> >>> wrote: >> >>> >> >>>> Hi all, >> >>>> >> >>>> Ihar and I discussed this on IRC, and are going forward with >> >>>> option 2 unless someone has a big problem with it. >> >>>> >> >>>> Thanks, Doug >> >>>> >> >>>> >> >>>> On 12/15/14, 8:22 AM, "Doug Wiegley" >> >>>> wrote: >> >>>> >> >>>>> Hi Ihar, >> >>>>> >> >>>>> I'm actually in favor of option 2, but it implies a few >> >>>>> things about your time, and I wanted to chat with you >> >>>>> before presuming. >> >>>>> >> >>>>> Maintenance can not involve breaking changes. At this >> >>>>> point, the co-gate will block it. Also, oslo graduation >> >>>>> changes will have to be made in the services repos first, >> >>>>> and then Neutron.
>> >>>>> >> >>>>> Thanks, doug >> >>>>> >> >>>>> >> >>>>> On 12/15/14, 6:15 AM, "Ihar Hrachyshka" >> >>>>> wrote: >> >>>>> >> >>> Hi all, >> >>> >> >>> the question arose recently in one of reviews for neutron-*aas >> >>> repos to remove all oslo-incubator code from those repos since >> >>> it's duplicated in neutron main repo. (You can find the link to >> >>> the review at the end of the email.) >> >>> >> >>> Brief history: neutron repo was recently split into 4 pieces >> >>> (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The >> >>> split resulted in each repository keeping their own copy of >> >>> neutron/openstack/common/... tree (currently unused in all >> >>> neutron-*aas repos that are still bound to modules from main >> >>> repo). >> >>> >> >>> As an oslo liaison for the project, I wonder what's the best way >> >>> to manage oslo-incubator files. We have several options: >> >>> >> >>> 1. just kill all the neutron/openstack/common/ trees from >> >>> neutron-*aas repositories and continue using modules from main >> >>> repo. >> >>> >> >>> 2. kill all duplicate modules from neutron-*aas repos and >> >>> leave only those that are used in those repos but not in main >> >>> repo. >> >>> >> >>> 3. fully duplicate all those modules in each of four repos that >> >>> use them. >> >>> >> >>> I think option 1. is a straw man, since we should be able to >> >>> introduce new oslo-incubator modules into neutron-*aas repos >> >>> even if they are not used in main repo. >> >>> >> >>> Option 2. is good when it comes to syncing non-breaking bug >> >>> fixes (or security fixes) from oslo-incubator, in that it will >> >>> require only one sync patch instead of e.g. four. At the same >> >>> time there may be potential issues when synchronizing updates >> >>> from oslo-incubator that would break API and hence require >> >>> changes to each of the modules that use it.
Since we don't >> >>> support atomic merges for multiple projects in gate, we will >> >>> need to be cautious about those updates, and we will still need >> >>> to leave neutron-*aas repos broken for some time (though the >> >>> time may be mitigated with care). >> >>> >> >>> Option 3. is vice versa - in theory, you get total decoupling, >> >>> meaning no oslo-incubator updates in main repo are expected to >> >>> break neutron-*aas repos, but bug fixing becomes a huge PITA. >> >>> >> >>> I would vote for option 2., for two reasons: - most >> >>> oslo-incubator syncs are non-breaking, and we may effectively >> >>> apply care to updates that may result in potential breakage >> >>> (f.e. being able to trigger an integrated run for each of >> >>> neutron-*aas repos with the main sync patch, if there are any >> >>> concerns). - it will make oslo liaison life a lot easier. OK, >> >>> I'm probably too selfish on that. ;) - it will make stable >> >>> maintainers life a lot easier. The main reason why stable >> >>> maintainers and distributions like recent oslo graduation >> >>> movement is that we don't need to track each bug fix we need in >> >>> every project, and waste lots of cycles on it. Being able to >> >>> fix a bug in one place only is *highly* anticipated. [OK, I'm >> >>> quite selfish on that one too.] - it's a delusion that there >> >>> will be no neutron-main syncs that will break neutron-*aas >> >>> repos ever. There can still be problems due to incompatibility >> >>> between neutron main and neutron-*aas code resulted EXACTLY >> >>> because multiple parts of the same process use different >> >>> versions of the same module. >> >>> >> >>> That said, Doug Wiegley (lbaas core) seems to be in favour of >> >>> option 3. due to lower coupling that is achieved in that way. >> >>> I know that lbaas team had a bad experience due to tight >> >>> coupling to neutron project in the past, so I appreciate their >> >>> concerns. 
>> >>> >> >>> All in all, we should come up with some standard solution for >> >>> both advanced services that are already split out, *and* >> >>> upcoming vendor plugin shrinking initiative. >> >>> >> >>> The initial discussion is captured at: >> >>> https://review.openstack.org/#/c/141427/ >> >>> >> >>> Thanks, /Ihar >> >>>>> >> >>>> >> >>>> _______________________________________________ >> >>>> OpenStack-dev mailing list OpenStack-dev at lists.openstack.org >> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >>> >> >> >> > >> > > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dpyzhov at mirantis.com Tue Dec 16 13:57:31 2014 From: dpyzhov at mirantis.com (Dmitry Pyzhov) Date: Tue, 16 Dec 2014 17:57:31 +0400 Subject: [openstack-dev] [Fuel] Logs format on UI (High/6.0) In-Reply-To: References: Message-ID: Guys, thank you for your feedback. As a quick and dirty solution we continue to hide extra information from UI. It will not break existing user experience. Roman, there were attempts to get rid of our current web logs page and use Logstash. As usual, it's all about time and resources. It is our backlog, but it is not in our current roadmap. On Mon, Dec 15, 2014 at 6:11 PM, Roman Prykhodchenko wrote: > > Hi folks! > > In most production environments I've seen, bare logs as they are shown now > in Fuel web UI were pretty useless. If someone has an infrastructure that > consists of more than 5 servers and 5 services running on them they are > most likely to use logstash, loggly or any other log management system. > There are options for forwarding these logs to a remote log server and > that's what is likely to be used IRL. > > Therefore for production environments formatting logs in Fuel web UI or > even showing them is a cool but pretty useless feature.
In addition to > being useless in production environments it also creates additional load to > the user interface. > > However, I can see that developers actually use it for debugging or > troubleshooting, so my proposal is to introduce an option for disabling > this feature completely. > > > - romcheg > > > On 15 Dec 2014, at 12:40, Tomasz Napierala > wrote: > > > > Also +1 here. > > In huge envs we already have problems with parsing performance. In the long > term we need to think about another log management solution > > > > > >> On 12 Dec 2014, at 23:17, Igor Kalnitsky > wrote: > >> > >> +1 to stop parsing logs on UI and show them "as is". I think it's more > >> than enough for all users. > >> > >> On Fri, Dec 12, 2014 at 8:35 PM, Dmitry Pyzhov > wrote: > >>> We have a high priority bug in 6.0: > >>> https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story. > >>> > >>> Our openstack services used to send logs in a strange format with an extra > copy of > >>> the timestamp and loglevel: > >>> ==> ./neutron-metadata-agent.log <== > >>> 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 > INFO > >>> neutron.common.config [-] Logging enabled! > >>> > >>> And we have a workaround for this. We hide the extra timestamp and use the > second > >>> loglevel. > >>> > >>> In Juno some of the services have updated oslo.logging and now send logs in a > >>> simple format: > >>> ==> ./nova-api.log <== > >>> 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from > >>> /etc/nova/api-paste.ini > >>> > >>> In order to keep backward compatibility and deal with both formats we > have a > >>> dirty workaround for our workaround: > >>> https://review.openstack.org/#/c/141450/ > >>> > >>> As I see, our best choice here is to throw away all workarounds and > show > >>> logs on UI as is. If a service sends duplicated data - we should show > >>> duplicated data. > >>> > >>> The long-term fix here is to update oslo.logging in all packages. We can > do it > >>> in 6.1.
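For illustration, the two formats quoted above can be told apart with a pair of regexes along these lines. This is only a sketch, not the actual Fuel parsing code; the "prefer the inner loglevel" choice mirrors the workaround described in the thread:

```python
import re

# Old (duplicated) format: syslog timestamp + level, then a second
# timestamp, pid, and level emitted by oslo logging itself.
DUPLICATED = re.compile(
    r"^(?P<ts>\S+) (?P<level>\w+): "
    r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ \d+ "
    r"(?P<olevel>[A-Z]+) (?P<msg>.*)$")

# New (simple) format: one timestamp and level, then the message.
SIMPLE = re.compile(r"^(?P<ts>\S+) (?P<level>\w+): (?P<msg>.*)$")

def parse(line):
    """Return (timestamp, level, message), or None if unrecognized."""
    m = DUPLICATED.match(line)
    if m:
        # Keep the inner (oslo) level, as the old workaround did.
        return m.group("ts"), m.group("olevel").lower(), m.group("msg")
    m = SIMPLE.match(line)
    if m:
        return m.group("ts"), m.group("level"), m.group("msg")
    return None
```

Showing the lines "as is", as proposed above, would make both regexes unnecessary, which is part of the appeal.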
> >>> > >>> _______________________________________________ > >>> OpenStack-dev mailing list > >>> OpenStack-dev at lists.openstack.org > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > > Tomasz 'Zen' Napierala > > Sr. OpenStack Engineer > > tnapierala at mirantis.com > > > > > > > > > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Dec 16 13:59:06 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 16 Dec 2014 13:59:06 +0000 Subject: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets In-Reply-To: <54900903.6090008@vmware.com> References: <54900903.6090008@vmware.com> Message-ID: <20141216135906.GR2497@yuggoth.org> On 2014-12-16 12:27:15 +0200 (+0200), Radoslav Gerganov wrote: [...] > the backend running on GoogleAppEngine is just proxying the > requests to review.openstack.org. So in theory if we serve the > html page from our Gerrit it will work. [...] I'm having trouble locating the source code for Google App Engine, and can instead only find source code for its SDK. How would we run a GAE instance? (Please remember that our Infra team doesn't host content backed by proprietary services, but do encourage Google to release this under a free license if they haven't already.) 
-- Jeremy Stanley From doug at doughellmann.com Tue Dec 16 14:14:25 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 16 Dec 2014 09:14:25 -0500 Subject: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split In-Reply-To: <549028B4.5000407@redhat.com> References: <548EED0D.6020600@redhat.com> <549007EE.1060205@redhat.com> <38253D73-25BD-4E24-9676-6450ABD6CEBE@doughellmann.com> <549028B4.5000407@redhat.com> Message-ID: <1A22F470-1B44-44CA-95CB-4D62A4C6A8E2@doughellmann.com> On Dec 16, 2014, at 7:42 AM, Ihar Hrachyshka wrote: > Signed PGP part > On 16/12/14 12:52, Doug Hellmann wrote: > > > > On Dec 16, 2014, at 5:22 AM, Ihar Hrachyshka > > wrote: > > > >> Signed PGP part On 15/12/14 17:22, Doug Wiegley wrote: > >>> Hi Ihar, > >>> > >>> I'm actually in favor of option 2, but it implies a few things > >>> about your time, and I wanted to chat with you before > >>> presuming. > >> > >> I think split didn't mean moving project trees under separate > >> governance, so I assume oslo (doc, qa, ...) liaisons should not > >> be split either. > >> > >>> > >>> Maintenance can not involve breaking changes. At this point, > >>> the co-gate will block it. Also, oslo graduation changes will > >>> have to be made in the services repos first, and then Neutron. > >> > >> Do you mean that a change to oslo-incubator modules is co-gated > >> (not just co-checked with no vote) with each of advanced > >> services? > >> > >> As I pointed in my previous email, sometimes breakages are > >> inescapable. > >> > >> Consider a change to neutron oslo-incubator module used commonly > >> in all repos that breaks API (they are quite rare, but still have > >> a chance of happening once in a while). If we would co-gate main > >> neutron repo changes with services, it will mean that we won't be > >> able to merge the change.
>> >> That would probably suggest that we go forward with option 3 and >> manage all incubator files separately in each of the trees, >> though, again, breakages are still possible in that scenario via >> introducing incompatibility between versions of incubator modules >> in separate repos. >> >> So we should be realistic about it and plan for how we deal with >> potential breakages that *may* occur. >> >> As for oslo library graduations, the order is not really >> significant. What is significant is that we drop oslo-incubator >> module from main neutron repo only after all other neutron-*aas >> repos migrate to appropriate oslo.* library. The neutron >> migration itself may occur in parallel (by postponing module drop >> later). > > Don't assume that it's safe to combine the incubated version and > library version of a module. We've had some examples where the APIs > change or global state changes in a way that make the two > incompatible. We definitely don't take any care to ensure that the > two copies can be run together. > > Hm. Does it leave us with option 3 only? In that case, should we care > about incompatibilities between different versions of incubator > modules running in the same process (one for core code, and another > one for a service)? That sounds more like we're not left with safe > options. I think you only want to have one copy of the Oslo modules active in a process at any given point. That probably means having the *aas projects use whatever incubated Oslo modules are in the main neutron repository instead of their own copy, but as you point out that will break those projects when neutron adopts a new library. You might end up having to build shims in neutron to hide the Oslo change during the transition. OTOH, it may not be a big deal. We don't go out of our way to break compatibility, so you might find that it works fine in a lot of cases.
I think context won't, because it holds global state, but some of the others should be fine. FWIW, usually when we hit a dependency problem like this, the solution is to split one of the projects up so there is a library that can be used by all of the consumers. It sounds like neutron is trying to be both an application and a library. > > > > >> > >>> > >>> Thanks, doug > >>> > >>> > >>> On 12/15/14, 6:15 AM, "Ihar Hrachyshka" > >>> wrote: > >>> > >>> Hi all, > >>> > >>> the question arose recently in one of reviews for neutron-*aas > >>> repos to remove all oslo-incubator code from those repos since > >>> it's duplicated in neutron main repo. (You can find the link to > >>> the review at the end of the email.) > >>> > >>> Brief history: neutron repo was recently split into 4 pieces > >>> (main, neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The > >>> split resulted in each repository keeping their own copy of > >>> neutron/openstack/common/... tree (currently unused in all > >>> neutron-*aas repos that are still bound to modules from main > >>> repo). > >>> > >>> As an oslo liaison for the project, I wonder what's the best way > >>> to manage oslo-incubator files. We have several options: > >>> > >>> 1. just kill all the neutron/openstack/common/ trees from > >>> neutron-*aas repositories and continue using modules from main > >>> repo. > >>> > >>> 2. kill all duplicate modules from neutron-*aas repos and > >>> leave only those that are used in those repos but not in main > >>> repo. > >>> > >>> 3. fully duplicate all those modules in each of four repos that > >>> use them. > >>> > >>> I think option 1. is a straw man, since we should be able to > >>> introduce new oslo-incubator modules into neutron-*aas repos > >>> even if they are not used in main repo. > >>> > >>> Option 2. is good when it comes to syncing non-breaking bug > >>> fixes (or security fixes) from oslo-incubator, in that it will > >>> require only one sync patch instead of e.g. four.
At the same > >>> time there may be potential issues when synchronizing updates > >>> from oslo-incubator that would break API and hence require > >>> changes to each of the modules that use it. Since we don't > >>> support atomic merges for multiple projects in gate, we will > >>> need to be cautious about those updates, and we will still need > >>> to leave neutron-*aas repos broken for some time (though the > >>> time may be mitigated with care). > >>> > >>> Option 3. is vice versa - in theory, you get total decoupling, > >>> meaning no oslo-incubator updates in main repo are expected to > >>> break neutron-*aas repos, but bug fixing becomes a huge PITA. > >>> > >>> I would vote for option 2., for two reasons: - most > >>> oslo-incubator syncs are non-breaking, and we may effectively > >>> apply care to updates that may result in potential breakage > >>> (f.e. being able to trigger an integrated run for each of > >>> neutron-*aas repos with the main sync patch, if there are any > >>> concerns). - it will make oslo liaison life a lot easier. OK, > >>> I'm probably too selfish on that. ;) - it will make stable > >>> maintainers life a lot easier. The main reason why stable > >>> maintainers and distributions like recent oslo graduation > >>> movement is that we don't need to track each bug fix we need in > >>> every project, and waste lots of cycles on it. Being able to > >>> fix a bug in one place only is *highly* anticipated. [OK, I'm > >>> quite selfish on that one too.] - it's a delusion that there > >>> will be no neutron-main syncs that will break neutron-*aas > >>> repos ever. There can still be problems due to incompatibility > >>> between neutron main and neutron-*aas code resulted EXACTLY > >>> because multiple parts of the same process use different > >>> versions of the same module. > >>> > >>> That said, Doug Wiegley (lbaas core) seems to be in favour of > >>> option 3. due to lower coupling that is achieved in that way. 
>> >>> I know that lbaas team had a bad experience due to tight > >>> coupling to neutron project in the past, so I appreciate their > >>> concerns. > >>> > >>> All in all, we should come up with some standard solution for > >>> both advanced services that are already split out, *and* > >>> upcoming vendor plugin shrinking initiative. > >>> > >>> The initial discussion is captured at: > >>> https://review.openstack.org/#/c/141427/ > >>> > >>> Thanks, /Ihar > >>> > >> > >> > >> _______________________________________________ OpenStack-dev > >> mailing list OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > >> > > > > _______________________________________________ OpenStack-dev > > mailing list OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ppouliot at microsoft.com Tue Dec 16 15:14:25 2014 From: ppouliot at microsoft.com (Peter Pouliot) Date: Tue, 16 Dec 2014 15:14:25 +0000 Subject: [openstack-dev] Hyper-V Meeting Message-ID: Hi All, We're postponing the meeting this week due to everyone being swamped with higher priority tasks. If people have direct needs, please email us directly or contact us in the IRC channels. p -------------- next part -------------- An HTML attachment was scrubbed... URL: From lsurette at redhat.com Tue Dec 16 15:16:25 2014 From: lsurette at redhat.com (Liz Blanchard) Date: Tue, 16 Dec 2014 10:16:25 -0500 Subject: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon In-Reply-To: References: Message-ID: <5654E7A3-FF43-4B41-A9AE-37FD67E9E2AB@redhat.com> On Dec 12, 2014, at 2:26 PM, David Lyle wrote: > works for me, less complexity +1 Sorry I'm a bit late to the game here…
+1 to this though from my perspective! Liz > > On Fri, Dec 12, 2014 at 11:09 AM, Timur Sufiev wrote: > It seems to me that the consensus on keeping the simpler approach -- to make Bootstrap data-backdrop="static" as the default behavior -- has been reached. Am I right? > > On Thu, Dec 4, 2014 at 10:59 PM, Kruithof, Piet wrote: > My preference would be "change the default behavior to 'static'" for the following reasons: > > - There are plenty of ways to close the modal, so there's not really a need for this feature. > - There are no visual cues, such as an "X" or a Cancel button, that selecting outside of the modal closes it. > - Downside is losing all of your data. > > My two cents… > > Begin forwarded message: > > From: "Rob Cresswell (rcresswe)" > > To: "OpenStack Development Mailing List (not for usage questions)" > > Date: December 3, 2014 at 5:21:51 AM PST > Subject: Re: [openstack-dev] [horizon] [ux] Changing how the modals are closed in Horizon > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > > > +1 to changing the behaviour to 'static'. Modal inside a modal is potentially slightly more useful, but looks messy and inconsistent, which I think outweighs the functionality. > > Rob > > > On 2 Dec 2014, at 12:21, Timur Sufiev > wrote: > > Hello, Horizoneers and UX-ers! > > The default behavior of modals in Horizon (defined in turn by Bootstrap defaults) regarding their closing is to simply close the modal once the user clicks somewhere outside of it (on the backdrop element below and around the modal). This is not very convenient for the modal forms containing a lot of input - when it is closed without a warning all the data the user has already provided is lost.
Keeping this in mind, I've made a patch [1] changing the default Bootstrap 'modal_backdrop' parameter to 'static', which means that forms are not closed once the user clicks on a backdrop, while it's still possible to close them by pressing 'Esc' or clicking on the 'X' link at the top right border of the form. Also the patch [1] allows customizing this behavior (between 'true'-current one/'false' - no backdrop element/'static') on a per-form basis. > > What I didn't know at the moment I was uploading my patch is that David Lyle had been working on a similar solution [2] some time ago. It's a bit more elaborate than mine: if the user has already filled some inputs in the form, then a confirmation dialog is shown, otherwise the form is silently dismissed as it happens now. > > The whole point of writing about this in the ML is to gather opinions on which approach is better: > * stick to the current behavior; > * change the default behavior to 'static'; > * use David's solution with confirmation dialog (once it's rebased to the current codebase). > > What do you think? > > [1] https://review.openstack.org/#/c/113206/ > [2] https://review.openstack.org/#/c/23037/ > > P.S. I remember that I promised to write this email a week ago, but better late than never :).
> > -- > Timur Sufiev > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- > Timur Sufiev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rgerganov at vmware.com Tue Dec 16 15:19:55 2014 From: rgerganov at vmware.com (Radoslav Gerganov) Date: Tue, 16 Dec 2014 17:19:55 +0200 Subject: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets In-Reply-To: <20141216135906.GR2497@yuggoth.org> References: <54900903.6090008@vmware.com> <20141216135906.GR2497@yuggoth.org> Message-ID: <54904D9B.2000204@vmware.com> On 12/16/2014 03:59 PM, Jeremy Stanley wrote: > I'm having trouble locating the source code for Google App Engine, > and can instead only find source code for its SDK. How would we run > a GAE instance? (Please remember that our Infra team doesn't host > content backed by proprietary services, but do encourage Google to > release this under a free license if they haven't already.) > Hi Jeremy, We don't need GoogleAppEngine if we decide that this is useful. 
We simply need to put the html page which renders the view on https://review.openstack.org. It is all javascript which talks asynchronously to the Gerrit backend. I am using GAE to simply illustrate the idea without having to spin up an entire Gerrit server. I guess I can also submit a patch to the infra project and see how this works on https://review-dev.openstack.org if you want. Thanks, Rado From openstack at nemebean.com Tue Dec 16 15:32:52 2014 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 16 Dec 2014 09:32:52 -0600 Subject: [openstack-dev] [oslo] interesting problem with config filter In-Reply-To: <1B377DFB-FB0A-443E-AB46-249BC201EBB6@doughellmann.com> References: <4BCA7D02-38D6-4B5B-A496-8EB259C1A792@doughellmann.com> <1418733697.16928.106.camel@sorcha> <1B377DFB-FB0A-443E-AB46-249BC201EBB6@doughellmann.com> Message-ID: <549050A4.5080100@nemebean.com> On 12/16/2014 07:20 AM, Doug Hellmann wrote: > > On Dec 16, 2014, at 7:41 AM, Mark McLoughlin wrote: > >> Hi Doug, >> >> On Mon, 2014-12-08 at 15:58 -0500, Doug Hellmann wrote: >>> As we've discussed a few times, we want to isolate applications from >>> the configuration options defined by libraries. One way we have of >>> doing that is the ConfigFilter class in oslo.config. When a regular >>> ConfigOpts instance is wrapped with a filter, a library can register >>> new options on the filter that are not visible to anything that >>> doesn't have the filter object. >> >> Or to put it more simply, the configuration options registered by the >> library should not be part of the public API of the library. >> >>> Unfortunately, the Neutron team has identified an issue with this >>> approach. We have a bug report [1] from them about the way we're using >>> config filters in oslo.concurrency specifically, but the issue applies >>> to their use everywhere.
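The visibility problem at issue here — an application option such as state_path being interpolated into a library option such as lock_path, when the two are registered on config objects that cannot see each other — can be illustrated with plain configparser from the standard library, which behaves analogously. This is an illustration only, not oslo.config code; configparser's ExtendedInterpolation uses ${...} where oslo.config uses $...:

```python
import configparser

# One parser that sees both options: interpolation works.
whole = configparser.ConfigParser(
    interpolation=configparser.ExtendedInterpolation())
whole.read_string(
    "[DEFAULT]\n"
    "state_path = /var/lib/neutron\n"
    "lock_path = ${state_path}/lock\n")
assert whole["DEFAULT"]["lock_path"] == "/var/lib/neutron/lock"

# A "filtered" parser that only knows lock_path: resolving the value
# fails, just as $state_path is invisible behind a ConfigFilter.
filtered = configparser.ConfigParser(
    interpolation=configparser.ExtendedInterpolation())
filtered.read_string("[DEFAULT]\nlock_path = ${state_path}/lock\n")
try:
    filtered["DEFAULT"]["lock_path"]
except configparser.InterpolationMissingOptionError:
    print("state_path not visible -> interpolation error")
```

The deployer sees a single flat config file either way, which is why the failure is so surprising from their side.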
>>> >>> The neutron tests set the default for oslo.concurrency's lock_path >>> variable to "$state_path/lock", and the state_path option is defined >>> in their application. With the filter in place, interpolation of >>> $state_path to generate the lock_path value fails because state_path >>> is not known to the ConfigFilter instance. >> >> It seems that Neutron sets this default in its etc/neutron.conf file in >> its git tree: >> >> lock_path = $state_path/lock >> >> I think we should be aiming for defaults like this to be set in code, >> and for the sample config files to contain nothing but comments. So, >> neutron should do: >> >> lockutils.set_defaults(lock_path="$state_path/lock") >> >> That's a side detail, however. >> >>> The reverse would also happen (if the value of state_path was somehow >>> defined to depend on lock_path), >> >> This dependency wouldn't/shouldn't be code - because Neutron *code* >> shouldn't know about the existence of library config options. >> Neutron deployers absolutely will be aware of lock_path however. >> >>> and that's actually a bigger concern to me. A deployer should be able >>> to use interpolation anywhere, and not worry about whether the options >>> are in parts of the code that can see each other. The values are all >>> in one file, as far as they know, and so interpolation should "just >>> work". >> >> Yes, if a deployer looks at a sample configuration file, all options >> listed in there seem like they're in-play for substitution use within >> the value of another option. For string substitution only, I'd say there >> should be a global namespace where all options are registered. >> >> Now ... one caveat on all of this ... I do think the string substitution >> feature is pretty obscure and mostly just used in default values. >> >>> I see a few solutions: >>> >>> 1. Don't use the config filter at all. >>> 2.
Make the config filter able to add new options and still see >>> everything else that is already defined (only filter in one >>> direction). >>> 3. Leave things as they are, and make the error message better. >> >> 4. Just tackle this specific case by making lock_path implicitly >> relative to a base path the application can set via an API, so Neutron >> would do: >> >> lockutils.set_base_path(CONF.state_path) >> >> at startup. >> >> 5. Make the toplevel ConfigOpts aware of all filters hanging off it, and >> somehow cycle through all of those filters just when doing string >> substitution. > > We would have to allow the reverse as well, since the filter object doesn't see options not explicitly imported by the code creating the filter. This doesn't seem like it should be difficult to do though. The ConfigFilter already takes a conf object when it gets initialized so it should have access to all of the globally registered opts. I'm a little surprised it doesn't already. I'm actually not 100% sure it makes sense to allow application opts to reference library opts since the application shouldn't depend on a library setting, but since the config file is flat I don't know that we can enforce that separation so _somebody_ is going to try to do it and be confused why it doesn't work. So I guess I feel like making opt interpolation work in both directions is the "right" way to do this, but it's kind of a moot point if runtime registration breaks this anyway (which it probably does :-/). Improving the error message to explain why a particular value can't be used for interpolation might be the only not insanely complicated way to completely address this interpolation issue. > > In either case, it only works if the filter object has been instantiated. I wonder if we have a similar problem with runtime option registration. I'll have to test that. > > >> >>> Because of the deployment implications of using the filter, I'm
However, choice 2 leaves open the >>> possibility of a deployer wanting to use the value of an option >>> defined by one filtered set of code when defining another. I don't >>> know how frequently that might come up, but it seems like the error >>> would be very confusing, especially if both options are set in the >>> same config file. >>> >>> I think that leaves option 1, which means our plans for hiding options >>> from applications need to be rethought. >>> >>> Does anyone else see another solution that I'm missing? >> >> I'd do something like (3) and (4), then wait to see if it crops up >> multiple times in the future before tackling a more general solution. > > Option 3 prevents neutron from adopting oslo.concurrency, and option 4 is a backwards-incompatible change to the way lock path is set. > >> >> With option (1), the basic thing to think about is how to maintain API >> compatibility - if we expose the options through the API, how do we deal >> with future moves, removals, renames, and changing semantics of those >> config options. > > The option is exposed through the existing set_defaults() method, so we can make that handle any backwards compatibility issues if we change it. > >> >> Mark. >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mestery at mestery.com Tue Dec 16 15:33:01 2014 From: mestery at mestery.com (Kyle Mestery) Date: Tue, 16 Dec 2014 09:33:01 -0600 Subject: [openstack-dev] [all] New release of python-neutronclient: 2.3.10 Message-ID: The neutron team is pleased to announce the release of a new version of python-neutronclient. This primarily has bug fixes, including a regression with how "--enable-dhcp" was modified. See bug 1401555 [1] for more details.
In addition, the following changes are also in this release: [kmestery at fedora-mac python-neutronclient]$ git log --abbrev-commit --pretty=oneline --no-merges 2.3.9..2.3.10 fea8706 subnet: allow --enable-dhcp=False/True syntax, again 66612c9 Router create distributed accepts lower case 89271b1 Add unit tests for agent related commands 56892bb Make help for agent-update more verbose c5d8557 Use discovery fixture 497bb55 Cleanup copy and pasted token 8a77718 fix the firewall rule arg split error a65f385 Updated from global requirements 3ed2a5e Disable name support for lb-healthmonitor-* commands 02c108f Fix mixed usage of _ 12a87f2 Fixes neutronclient lb-member-show command 5d2bafa neutron port-list -f csv outputs poorly formatted JSON strings 9ed73c0 Updated from global requirements 81fe0c7 Don't allow update of ipv6-ra-mode and ipv6-address-mode 1ac542c Updated from global requirements 9c464ba Use graduated oslo libraries d046a95 Fix E113 hacking check 64b2d8a Fix E129 hacking check d812227 Updated from global requirements 092e668 Add InvalidIpForNetworkClient exception 0f7741d Add missing parameters to Client's docstring 72afc0f Leverage neutronclient.openstack.common.importutils import_class 27f02ac Remove extraneous vim editor configuration comments 4d2133c Fix E128 hacking check 2eba58a Don't get keystone session if using noauth c02e782 Bump hacking to 0.9.x series e3e0915 Change "healthmonitor" to "health monitor" in help info bb4a0dc Correct 4xx/5xx response management in SessionClient 0fedd33 Change ipsecpolicies to 2 separate words: IPsec policies a1a8a0e handles keyboard interrupt 1ab4335 Use six.moves cStringIO instead of cStringIO 8115c02 Updated from global requirements 9d8ab0d Replace httpretty with requests_mock a9ed96f Fix to ensure endpoint_type is used by make_client() 42731a2 Work toward Python 3.4 support and testing 2840bdb Adds tty password entry for neutronclient [kmestery at fedora-mac python-neutronclient]$ For more info, please see the LP 
page [2], and report any issues found on the python-neutronclient LP page as a bug [3]. Thanks! Kyle [1] https://bugs.launchpad.net/python-neutronclient/+bug/1401555 [2] https://launchpad.net/python-neutronclient/+milestone/2.3.10 [3] https://bugs.launchpad.net/python-neutronclient -------------- next part -------------- An HTML attachment was scrubbed... URL: From anant.patil at hp.com Tue Dec 16 15:36:58 2014 From: anant.patil at hp.com (Anant Patil) Date: Tue, 16 Dec 2014 21:06:58 +0530 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <1418660028-sup-6503@fewbar.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> <548B8480.9010506@redhat.com> <548EFB12.2090303@hp.com> <1418660028-sup-6503@fewbar.com> Message-ID: <5490519A.3090804@hp.com> On 16-Dec-14 00:59, Clint Byrum wrote: > Excerpts from Anant Patil's message of 2014-12-15 07:15:30 -0800: >> On 13-Dec-14 05:42, Zane Bitter wrote: >>> On 12/12/14 05:29, Murugan, Visnusaran wrote: >>>> >>>> >>>>> -----Original Message----- >>>>> From: Zane Bitter [mailto:zbitter at redhat.com] >>>>> Sent: Friday, December 12, 2014 6:37 AM >>>>> To: openstack-dev at lists.openstack.org >>>>> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept >>>>> showdown >>>>> >>>>> On 11/12/14 08:26, Murugan, Visnusaran wrote: >>>>>>>> [Murugan, Visnusaran] >>>>>>>> In case of rollback where we have to cleanup earlier version of >>>>>>>> resources, >>>>>>> we could get the order from old template. 
We'd prefer not to have a >>>>>>> graph table. >>>>>>> >>>>>>> In theory you could get it by keeping old templates around. But that >>>>>>> means keeping a lot of templates, and it will be hard to keep track >>>>>>> of when you want to delete them. It also means that when starting an >>>>>>> update you'll need to load every existing previous version of the >>>>>>> template in order to calculate the dependencies. It also leaves the >>>>>>> dependencies in an ambiguous state when a resource fails, and >>>>>>> although that can be worked around it will be a giant pain to implement. >>>>>>> >>>>>> >>>>>> Agree that looking to all templates for a delete is not good. But >>>>>> baring Complexity, we feel we could achieve it by way of having an >>>>>> update and a delete stream for a stack update operation. I will >>>>>> elaborate in detail in the etherpad sometime tomorrow :) >>>>>> >>>>>>> I agree that I'd prefer not to have a graph table. After trying a >>>>>>> couple of different things I decided to store the dependencies in the >>>>>>> Resource table, where we can read or write them virtually for free >>>>>>> because it turns out that we are always reading or updating the >>>>>>> Resource itself at exactly the same time anyway. >>>>>>> >>>>>> >>>>>> Not sure how this will work in an update scenario when a resource does >>>>>> not change and its dependencies do. >>>>> >>>>> We'll always update the requirements, even when the properties don't >>>>> change. >>>>> >>>> >>>> Can you elaborate a bit on rollback. >>> >>> I didn't do anything special to handle rollback. It's possible that we >>> need to - obviously the difference in the UpdateReplace + rollback case >>> is that the replaced resource is now the one we want to keep, and yet >>> the replaced_by/replaces dependency will force the newer (replacement) >>> resource to be checked for deletion first, which is an inversion of the >>> usual order. >>> >> >> This is where the version is so handy! 
For UpdateReplaced ones, there is >> an older version to go back to. This version could just be the template ID, >> as I mentioned in another e-mail. All resources are at the current >> template ID if they are found in the current template, even if there is >> no need to update them. Otherwise, they need to be cleaned up in the >> order given in the previous templates. >> >> I think the template ID is used as the version as far as I can see in Zane's >> PoC. If the resource template key doesn't match the current template >> key, the resource is deleted. The version is a misnomer here, but that >> field (template id) is used as though we had versions of resources. >> >>> However, I tried to think of a scenario where that would cause problems >>> and I couldn't come up with one. Provided we know the actual, real-world >>> dependencies of each resource I don't think the ordering of those two >>> checks matters. >>> >>> In fact, I currently can't think of a case where the dependency order >>> between replacement and replaced resources matters at all. It matters in >>> the current Heat implementation because resources are artificially >>> segmented into the current and backup stacks, but with a holistic view >>> of dependencies that may well not be required. I tried taking that line >>> out of the simulator code and all the tests still passed. If anybody can >>> think of a scenario in which it would make a difference, I would be very >>> interested to hear it. >>> >>> In any event though, it should be no problem to reverse the direction of >>> that one edge in these particular circumstances if it does turn out to >>> be a problem. >>> >>>> We had an approach with depends_on >>>> and needed_by columns in ResourceTable. But dropped it when we figured out >>>> we had too many DB operations for Update.
>>> >>> Yeah, I initially ran into this problem too - you have a bunch of nodes >>> that are waiting on the current node, and now you have to go look them >>> all up in the database to see what else they're waiting on in order to >>> tell if they're ready to be triggered. >>> >>> It turns out the answer is to distribute the writes but centralise the >>> reads. So at the start of the update, we read all of the Resources, >>> obtain their dependencies and build one central graph[1]. We then make >>> that graph available to each resource (either by passing it as a >>> notification parameter, or storing it somewhere central in the DB that >>> they will all have to read anyway, i.e. the Stack). But when we update a >>> dependency we don't update the central graph, we update the individual >>> Resource so there's no global lock required. >>> >>> [1] >>> https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/stack.py#L166-L168 >>> >> >> A centralized graph and decision making will make the implementation far >> simpler than a distributed one. This looks academic, but the simplicity >> beats everything! When each worker has to decide, there needs to be a >> lock; DB transactions alone are not enough. In contrast, when the >> decision making is centralized, that particular critical section can be >> attempted with a transaction and re-attempted if needed. >> > > I'm concerned that we're losing sight of the whole point of convergence. > > Yes, concurrency is hard, and state management is really the only thing > hard about concurrency. > > What Zane is suggesting is a lock-free approach commonly called 'Read > Copy Update' or "RCU". It has high reader concurrency, but relies on > there only being one writer. It is quite simple, and has proven itself > enough to even be included in the Linux kernel: > > http://lwn.net/Articles/263130/ > I am afraid Zane is saying the other way around: "distribute the writes but centralize the reads".
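[Editor's note: the "distribute the writes but centralise the reads" shape under discussion can be sketched with nothing but the standard library. This is a toy illustration — invented resource names, an in-memory set standing in for the per-resource DB writes — not Heat code:]

```python
# Toy model of the convergence graph handling: `graph` maps each
# resource to the set of resources it depends on.  The whole graph is
# read once up front (the centralised read); completion marks are then
# written one resource at a time (the distributed writes).

graph = {
    "server": {"port", "volume"},
    "port": {"network"},
    "volume": set(),
    "network": set(),
}
complete = set()

def ready(graph, complete):
    """Resources not yet complete whose dependencies are all complete."""
    return {r for r, deps in graph.items()
            if r not in complete and deps <= complete}

order = []
while len(complete) < len(graph):
    batch = ready(graph, complete)
    order.append(sorted(batch))
    complete |= batch  # in the PoC this is one small DB write per resource

print(order)  # [['network', 'volume'], ['port'], ['server']]
```

No writer ever rewrites the shared graph itself; each completion is an independent small write, which is what keeps a global lock unnecessary.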
Nevertheless, what I meant by having centralized decision making is the same thing you are pointing to. By centralized I mean: 1. Have the graph in the DB, not the edges distributed in the resource table. Also, having the graph in the DB doesn't mean we need to lock it each time we update traversal information or compute the next set of ready resources/tasks. 2. Take the responsibility of what-to-do-next out of the worker. This is the "writer" part, where upon receiving a notification the engine will update the graph in the DB and compute the next set of resources to be converged (having all dependencies resolved). I agree that there will be multiple instances of the engine which could execute this section of code (this code will become the critical section), so in that sense it is not centralized. But this approach differs from workers making the decision, where each worker reads from the DB and updates the sync point. Instead, if the workers send a "done" notification to the engine, the engine can update the graph traversal and issue requests for the next set of tasks. All the workers and observers read from the DB and send notifications to the engine, while the engine writes to the DB. This may not be strictly followed. Please also note that updating a graph doesn't mean that we have to lock anything. As the notifications arrive, the graph is updated in a TX, and re-attempted if two notifications try to update the same row and the TX fails. The workers are *part of* the engine but are not really *the engine*. Given an instance of the engine, and multiple workers (greenlet threads), I can see that it turns out to be what you are suggesting about RCU. Please correct me if I am wrong. I do not insist that it has to be the way I am saying. I am merely brainstorming to see if the "centralized" write makes sense in this case. I also admit that I do not have performance data, and I am no DB expert. >> With the distributed approach, I see the following drawbacks: >> 1.
Every time a resource is done, the peer resources (siblings) are >> checked to see if they are done and the parent is propagated. This >> happens for each resource. > > This is slightly inaccurate. > > Every time a resource is done, resources that depend on that resource > are checked to see if they still have any unresolved dependencies. > Yes. If we have a graph in the DB, we should be able to easily compute this. If the edges are distributed and kept along with the resources in the resource table, we might have to execute multiple queries or keep the graph in memory. >> 2. The worker has to run through all the resources to see if the stack >> is done, to mark it as completed. > > If a worker completes a resource which has no dependent resources, it > only needs to check to see if all of the other edges of the graph are > complete to mark the state as complete. There is no full traversal > unless you want to make sure nothing regressed, which is not the way any > of the specs read. > I agree. For this same reason I favor having the graph in the DB. I also noted that Zane is not against keeping the graph in the DB, but only wants to store the *actual* dependencies in resources (maybe the physical resource ID?). This is fine I think, though we will be looking at the graph table sometimes (create/update) and looking at these dependencies in the resource table at other times (delete/clean-up?). The graph can instantly tell if it is done or not when we look at it, but for clean-ups and deletes we would have to rely on the resource table as well. >> 3. The decision to converge is made by each worker resulting in a lot of >> contention. The centralized graph restricts the contention point to one >> place where we can use DB transactions. It is easier to maintain code >> where particular decisions are made in one place rather than at many >> places. > > Why not let multiple workers use DB transactions?
The contention happens > _only if it needs to happen to preserve transaction consistency_ instead > of _always_. > Sure! When the workers are done with the current resource they can update the DB and pick-up the parent if it is ready. The DB interactions can happen as a TX. But then there would be no one-writer-multiple-reader if we follow this. All the workers write and read. >> 4. The complex part we are trying to solve is to decide on what to do >> next when a resource is done. With centralized graph, this is abstracted >> out to the DB API. The API will return the next set of nodes. A smart >> SQL query can reduce a lot of logic currently being coded in >> worker/engine. > > Having seen many such "smart" SQL queries, I have to say, this is > terrifying. Complexity in database access is by far the single biggest > obstacle to scaling out. > > I don't really know why you'd want logic to move into the database. It > is the one place that you must keep simple in order to scale. We can > scale out python like crazy.. but SQL is generally a single threaded, > impossible to debug component. So make the schema obvious and accesses > to it straight forward. > > I think we need to land somewhere between the two approaches though. > Here is my idea for DB interaction, I realize now it's been in my head > for a while but I never shared: > > CREATE TABLE resource ( > id ..., > ! all the stuff now > version int, > replaced_by int, > complete_order int, > primary key (id, version), > key idx_replaced_by (replaced_by)); > > CREATE TABLE resource_dependencies ( > id ...., > version int, > needed_by ... > primary key (id, version, needed_by)); > > Then completion happens something like this: > > BEGIN > SELECT @complete_order := MAX(complete_order) FROM resource WHERE stack_id = :stack_id: > SET @complete_order := @complete_order + 1 > UPDATE resource SET complete_order = @complete_order, state='COMPLETE' WHERE id=:id: AND version=:version:; > ! 
if there is a replaced_version > UPDATE resource SET replaced_by=:version: WHERE id=:id: AND version=:replaced_version:; > SELECT DISTINCT r.id FROM resource r INNER JOIN resource_dependencies rd > ON r.id = rd.resource_id AND r.version = rd.version > WHERE r.version=:version: AND rd.needed_by=:id: AND r.state != 'COMPLETE' > > for id in results: > convergequeue.submit(id) > > COMMIT > > Perhaps I've missed some revelation that makes this hard or impossible. > But I don't see a ton of database churn (one update per completion is > meh). I also don't see a lot of complexity in the query. The > complete_order can be used to handle deletes in the right order later > (note that I know that is probably the wrong way to do it and sequences > are a thing that can be used for this). > >> 5. What would be the starting point for resource clean-up? The clean-up >> has to start when all the resources are updated. With no centralized >> graph, the DB has to be searched for all the resources with no >> dependencies and with older versions (or having older template keys) and >> start removing them. With centralized graph, this would be a simpler >> with a SQL queries returning what needs to be done. The search space for >> where to start with clean-up will be huge. > > "Searched" may be the wrong way. With the table structure above, you can > find everything to delete with this query: > > ! Outright deletes > SELECT r_old.id > FROM resource r_old LEFT OUTER JOIN resource r_new > ON r_old.id = r_new.id AND r_old.version = :cleanup_version: > WHERE r_new.id IS NULL OR r_old.replaced_by IS NOT NULL > ORDER BY DESC r.complete_order; > > That should delete everything in more or less the right order. I think > for that one you can just delete the rows as they're confirmed deleted > from the plugins, no large transaction needed since we'd not expect > these rows to be updated anymore. 
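[Editor's note: the schema above can be poked at with the stdlib sqlite3 module. The fragment below is a rough approximation, not the exact MySQL posted — column types are simplified, the MySQL user variables (@complete_order) are replaced with a COALESCE(MAX(...)) lookup, and the resource and dependency rows are invented toy data:]

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE resource (
    id TEXT, version INTEGER, state TEXT,
    complete_order INTEGER, replaced_by INTEGER, stack_id TEXT,
    PRIMARY KEY (id, version));
CREATE TABLE resource_dependencies (
    id TEXT, version INTEGER, needed_by TEXT,
    PRIMARY KEY (id, version, needed_by));
""")

# Toy stack: port depends on network, server depends on port.
for rid in ("network", "port", "server"):
    db.execute("INSERT INTO resource VALUES (?, 1, 'PENDING', NULL, NULL, 's1')",
               (rid,))
db.executemany("INSERT INTO resource_dependencies VALUES (?, 1, ?)",
               [("port", "network"), ("server", "port")])

def complete(rid, version=1):
    # One transaction per completion: stamp an ordering, flip the state,
    # then find dependents that may now be unblocked.
    order = db.execute("SELECT COALESCE(MAX(complete_order), 0) + 1 "
                       "FROM resource WHERE stack_id = 's1'").fetchone()[0]
    db.execute("UPDATE resource SET complete_order = ?, state = 'COMPLETE' "
               "WHERE id = ? AND version = ?", (order, rid, version))
    rows = db.execute(
        "SELECT DISTINCT r.id FROM resource r "
        "JOIN resource_dependencies rd ON r.id = rd.id AND r.version = rd.version "
        "WHERE rd.needed_by = ? AND r.state != 'COMPLETE'", (rid,)).fetchall()
    db.commit()
    return [r[0] for r in rows]

print(complete("network"))  # ['port'] -- port is now a candidate
print(complete("port"))     # ['server']
print(complete("server"))   # []
```

Note the final SELECT returns candidate dependents only; in the scheme being discussed each candidate would still verify that all of its other dependencies are complete before being queued.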
> Well, I meant simple SQL queries where we JOIN the graph table and resource table to see if a resource can be taken up for convergence. It is possible that the graph says a resource is ready since it has all its dependencies satisfied, but it may have a previous version still in progress from a previous update. By smart, I never meant any complex queries, but only the ones which solve _our problem_. The queries which you have suggested above are what I meant. I was not sure we were using the DB in the way you are suggesting, hence I called for utilizing it in a better way. >> 6. When the engine restarts, the search space on where to start will be >> huge. With a centralized graph, the abstracted API to get the next set of >> nodes makes the implementation of the decision simpler. >> >> I am convinced enough that it is simpler to assign the responsibility to >> the engine on what needs to be done next. No locks will be required, not >> even resource locks! It is simpler from an implementation, understanding >> and maintenance perspective. >> > > I thought you started saying you would need locks, but now you're saying you won't. No. We wanted to get rid of the stack lock we were using to avoid the concurrency issues, and you had suggested using DB transactions. We had other ideas but DB TX looked cleaner to us and we are proceeding with it. > I agree no abstract locking is needed, just a consistent view of > the graph in the DB. > +1.
> _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Tue Dec 16 15:39:58 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 16 Dec 2014 10:39:58 -0500 Subject: [openstack-dev] [oslo] interesting problem with config filter In-Reply-To: <549050A4.5080100@nemebean.com> References: <4BCA7D02-38D6-4B5B-A496-8EB259C1A792@doughellmann.com> <1418733697.16928.106.camel@sorcha> <1B377DFB-FB0A-443E-AB46-249BC201EBB6@doughellmann.com> <549050A4.5080100@nemebean.com> Message-ID: <5AEC68B8-3674-4515-B56D-FC84B3312CF8@doughellmann.com> On Dec 16, 2014, at 10:32 AM, Ben Nemec wrote: > On 12/16/2014 07:20 AM, Doug Hellmann wrote: >> >> On Dec 16, 2014, at 7:41 AM, Mark McLoughlin wrote: >> >>> Hi Doug, >>> >>> On Mon, 2014-12-08 at 15:58 -0500, Doug Hellmann wrote: >>>> As we've discussed a few times, we want to isolate applications from >>>> the configuration options defined by libraries. One way we have of >>>> doing that is the ConfigFilter class in oslo.config. When a regular >>>> ConfigOpts instance is wrapped with a filter, a library can register >>>> new options on the filter that are not visible to anything that >>>> doesn't have the filter object. >>> >>> Or to put it more simply, the configuration options registered by the >>> library should not be part of the public API of the library. >>> >>>> Unfortunately, the Neutron team has identified an issue with this >>>> approach. We have a bug report [1] from them about the way we're using >>>> config filters in oslo.concurrency specifically, but the issue applies >>>> to their use everywhere. >>>> >>>> The neutron tests set the default for oslo.concurrency's lock_path >>>> variable to '$state_path/lock', and the state_path option is defined >>>> in their application.
With the filter in place, interpolation of >>>> $state_path to generate the lock_path value fails because state_path >>>> is not known to the ConfigFilter instance. >>> >>> It seems that Neutron sets this default in its etc/neutron.conf file in >>> its git tree: >>> >>> lock_path = $state_path/lock >>> >>> I think we should be aiming for defaults like this to be set in code, >>> and for the sample config files to contain nothing but comments. So, >>> neutron should do: >>> >>> lockutils.set_defaults(lock_path="$state_path/lock") >>> >>> That's a side detail, however. >>> >>>> The reverse would also happen (if the value of state_path was somehow >>>> defined to depend on lock_path), >>> >>> This dependency wouldn't/shouldn't be code - because Neutron *code* >>> shouldn't know about the existence of library config options. >>> Neutron deployers absolutely will be aware of lock_path however. >>> >>>> and that's actually a bigger concern to me. A deployer should be able >>>> to use interpolation anywhere, and not worry about whether the options >>>> are in parts of the code that can see each other. The values are all >>>> in one file, as far as they know, and so interpolation should 'just >>>> work'. >>> >>> Yes, if a deployer looks at a sample configuration file, all options >>> listed in there seem like they're in-play for substitution use within >>> the value of another option. For string substitution only, I'd say there >>> should be a global namespace where all options are registered. >>> >>> Now ... one caveat on all of this ... I do think the string substitution >>> feature is pretty obscure and mostly just used in default values. >>> >>>> I see a few solutions: >>>> >>>> 1. Don't use the config filter at all. >>>> 2. Make the config filter able to add new options and still see >>>> everything else that is already defined (only filter in one >>>> direction). >>>> 3. Leave things as they are, and make the error message better. >>> >>> 4.
Just tackle this specific case by making lock_path implicitly >>> relative to a base path the application can set via an API, so Neutron >>> would do: >>> >>> lockutils.set_base_path(CONF.state_path) >>> >>> at startup. >>> >>> 5. Make the toplevel ConfigOpts aware of all filters hanging off it, and >>> somehow cycle through all of those filters just when doing string >>> substitution. >> >> We would have to allow the reverse as well, since the filter object doesn't see options not explicitly imported by the code creating the filter. > > This doesn't seem like it should be difficult to do though. The > ConfigFilter already takes a conf object when it gets initialized so it > should have access to all of the globally registered opts. I'm a little > surprised it doesn't already. > > I'm actually not 100% sure it makes sense to allow application opts to > reference library opts since the application shouldn't depend on a > library setting, but since the config file is flat I don't know that we > can enforce that separation so _somebody_ is going to try to do it and > be confused why it doesn't work. > > So I guess I feel like making opt interpolation work in both directions > is the "right" way to do this, but it's kind of a moot point if runtime > registration breaks this anyway (which it probably does :-/). If it does, we should probably change the interpolation code to use any option values it finds as a literal string without interpreting or validating it. That means changing the implementation to go through a different lookup path, but it sounds like we need that anyway. > Improving > the error message to explain why a particular value can't be used for > interpolation might be the only not insanely complicated way to > completely address this interpolation issue. https://review.openstack.org/#/c/140143/ > >> >> In either case, it only works if the filter object has been instantiated. I wonder if we have a similar problem with runtime option registration.
I'll have to test that. >> >> >>> >>>> Because of the deployment implications of using the filter, I'm >>>> inclined to go with choice 1 or 2. However, choice 2 leaves open the >>>> possibility of a deployer wanting to use the value of an option >>>> defined by one filtered set of code when defining another. I don't >>>> know how frequently that might come up, but it seems like the error >>>> would be very confusing, especially if both options are set in the >>>> same config file. >>>> >>>> I think that leaves option 1, which means our plans for hiding options >>>> from applications need to be rethought. >>>> >>>> Does anyone else see another solution that I'm missing? >>> >>> I'd do something like (3) and (4), then wait to see if it crops up >>> multiple times in the future before tackling a more general solution. >> >> Option 3 prevents neutron from adopting oslo.concurrency, and option 4 is a backwards-incompatible change to the way lock path is set. >> >>> >>> With option (1), the basic thing to think about is how to maintain API >>> compatibility - if we expose the options through the API, how do we deal >>> with future moves, removals, renames, and changing semantics of those >>> config options. >> >> The option is exposed through the existing set_defaults() method, so we can make that handle any backwards compatibility issues if we change it. >> >>> >>> Mark.
>>> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Tue Dec 16 15:45:01 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 16 Dec 2014 15:45:01 +0000 Subject: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets In-Reply-To: <54904D9B.2000204@vmware.com> References: <54900903.6090008@vmware.com> <20141216135906.GR2497@yuggoth.org> <54904D9B.2000204@vmware.com> Message-ID: <20141216154501.GU2497@yuggoth.org> On 2014-12-16 17:19:55 +0200 (+0200), Radoslav Gerganov wrote: > We don't need GoogleAppEngine if we decide that this is useful. We > simply need to put the html page which renders the view on > https://review.openstack.org. It is all javascript which talks > asynchronously to the Gerrit backend. > > I am using GAE to simply illustrate the idea without having to > spin up an entire Gerrit server. That makes a lot more sense--thanks for the clarification! > I guess I can also submit a patch to the infra project and see how > this works on https://review-dev.openstack.org if you want. If there's a general desire from the developer community for it, then that's probably the next step. However, ultimately this seems like something better suited as an upstream feature request for Gerrit (there may even already be thread-oriented improvements in the works for the new change screen--I haven't kept up with their progress lately). 
-- Jeremy Stanley From sgordon at redhat.com Tue Dec 16 16:13:46 2014 From: sgordon at redhat.com (Steve Gordon) Date: Tue, 16 Dec 2014 11:13:46 -0500 (EST) Subject: [openstack-dev] [Telco][NFV] Meeting Reminder - Wednesday December 17th @ 1400 UTC in #openstack-meeting-alt In-Reply-To: <62420988.2067575.1418743794299.JavaMail.zimbra@redhat.com> Message-ID: <1545004568.2104129.1418746426731.JavaMail.zimbra@redhat.com> Hi all, Just a reminder that the Telco Working Group will be meeting @ 1400 UTC in #openstack-meeting on Wednesday December 17th. Draft agenda is available here: https://etherpad.openstack.org/p/nfv-meeting-agenda Please feel free to add items. Note that I would also like to propose that we skip the meetings which would have fallen on December 24th and December 31st due to it being a holiday period for many participants. This would make the next meeting following this one January 7th. Thanks, Steve From svasilenko at mirantis.com Tue Dec 16 16:28:25 2014 From: svasilenko at mirantis.com (Sergey Vasilenko) Date: Tue, 16 Dec 2014 20:28:25 +0400 Subject: [openstack-dev] [Fuel] In-Reply-To: References: Message-ID: Guys, this is a big and complicated architecture issue. An issue like this was carefully researched about a month ago (while P***). Root cause of the issue: - We currently use OVS to build the virtual network topology on each node. - OVS suffers performance degradation when passing a large volume of small network packets. - We can't abandon OVS entirely, because it is the most popular Neutron solution. - We can't partially abandon OVS right now either, because the low-level modules aren't ready for that yet. I started a blueprint ( https://blueprints.launchpad.net/fuel/+spec/l23network-refactror-to-provider-based-resources) aimed at making it possible to use OVS for Neutron purposes while not using it for management, storage, etc. purposes.
We, together with the L2 support team, the Neutron team, and other network experts, tuned one of the existing production-like environments after deployment and achieved the following values on bonds of two 10G cards: - vm-to-vm speed (on different compute nodes): 2.56 Gbits/sec (GRE segmentation) - node-to-node speed: 17.6 Gbits/s These values are close to the theoretical maximum for OVS 1.xx with GRE. Some performance improvements may also be achieved by upgrading Open vSwitch to the latest LTS branch (2.3.1 at this time) and using the "megaflow" feature ( http://networkheresy.com/2014/11/13/accelerating-open-vswitch-to-ludicrous-speed/ ). After this research we concluded: - OVS can't pass large volumes of small packets without network performance degradation - to fix this we should re-design the network topology on the environment nodes - even a re-designed network topology can't fix this issue completely. Some network parameters, like MTU, disabling offloading for NICs, buffers, etc. can be tuned only on a real environment. My opinion: in Fuel we should add a new component (or extend the existing network-checker). This component should test network performance on the customer's pre-configured environment using different (already defined) performance test cases, and recommend a better setup BEFORE the main deployment cycle runs. /sv -------------- next part -------------- An HTML attachment was scrubbed... 
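Sergey's proposal above - extending network-checker to measure performance on the pre-configured environment before the main deployment runs - can be prototyped with nothing more than the standard library. Below is a minimal sketch (hypothetical code, not part of Fuel; all names are illustrative) that measures small-packet UDP throughput over loopback. A real checker would run the sender and receiver on different nodes and compare the packets-per-second figure against a threshold:

```python
import socket
import threading
import time

def _count_datagrams(sock, stop, counter):
    # Receiver loop: count datagrams until asked to stop.
    sock.settimeout(0.2)
    while not stop.is_set():
        try:
            sock.recv(2048)
            counter[0] += 1
        except socket.timeout:
            pass

def small_packet_rate(payload=64, duration=0.5):
    """Blast `payload`-byte UDP datagrams for `duration` seconds and
    return (sent, received, received_per_second)."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))          # let the kernel pick a free port
    addr = rx.getsockname()
    stop, counter = threading.Event(), [0]
    receiver = threading.Thread(target=_count_datagrams,
                                args=(rx, stop, counter))
    receiver.start()
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    data = b"x" * payload
    sent, deadline = 0, time.time() + duration
    while time.time() < deadline:
        tx.sendto(data, addr)
        sent += 1
    stop.set()
    receiver.join()
    tx.close()
    rx.close()
    return sent, counter[0], counter[0] / duration

if __name__ == "__main__":
    sent, received, pps = small_packet_rate()
    print("sent=%d received=%d rate=%.0f pkt/s" % (sent, received, pps))
```

On a real environment the interesting comparison is this per-packet rate across the OVS datapath versus across plain Linux networking on the same bond; the gap between `sent` and `received` under load is exactly the small-packet degradation described above.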
URL: From mgagne at iweb.com Tue Dec 16 16:35:30 2014 From: mgagne at iweb.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=) Date: Tue, 16 Dec 2014 11:35:30 -0500 Subject: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group In-Reply-To: References: <548A06C5.2060900@gmail.com> <6A529F76-5B1C-43ED-A8DC-4AA5DB0E93C8@gmail.com> <548A09B2.3040909@gmail.com> <548A1A34.40105@dague.net> <548B60D0.7090600@dague.net> <548F1235.2020502@redhat.com> <936303430.2524172.1418663608280.JavaMail.zimbra@redhat.com> <695A9601-ECFA-4A85-BCBF-B4C89F473875@redhat.com> Message-ID: <54905F52.5090903@iweb.com> On 2014-12-16 12:07 AM, Christopher Yeoh wrote: > So I think this is something we really should get agreement on across > the open stack API first before flipping back and forth on a case by > case basis. > > Personally I think we should be using uuids for uniqueness and leave any > extra restrictions to a ui layer if really required. If we try to have > name uniqueness then "test " should be considered the same as " test" as > " test " and it introduces all sorts of slightly different combos that > look the same except under very close comparison. Add unicode for extra fun. > Leaving such uniqueness validation to the UI layer is a *huge no-no*. The problem I had in production occurred in a non-UI system. Please consider making it great for all users, not just the one (Horizon) provided by OpenStack. -- Mathieu From dguryanov at parallels.com Tue Dec 16 16:49:42 2014 From: dguryanov at parallels.com (Dmitry Guryanov) Date: Tue, 16 Dec 2014 19:49:42 +0300 Subject: [openstack-dev] [Nova] question about "Get Guest Info" row in HypervisorSupportMatrix In-Reply-To: References: Message-ID: <1720684.YrPHgtedvt@dblinov.sw.ru> On Tuesday 09 December 2014 18:15:01 Markus Zoeller wrote: > > > On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote: > > > > > > Hello! 
> > > > > > There is a feature in HypervisorSupportMatrix > > > (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called "Get > > Guest > > > > Info". Does anybody know, what does it mean? I haven't found anything > > like > > > > this neither in nova api nor in horizon and nova command line. > > I think this maps to the nova driver function "get_info": > https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4054 > > I believe (and didn't double-check) that this is used e.g. by the > Nova CLI via `nova show [--minimal] ` command. > It seems Driver.get_info is used only for obtaining the instance's power state. That's strange. I think we can clean up the code, rename get_info to get_power_state, and return only the power state from this function. > I tried to map the features of the hypervisor support matrix to > specific nova driver functions on this wiki page: > https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DriverAPI > Thanks! > > On Tue Dec 9 15:39:35 UTC 2014, Daniel P. Berrange wrote: > > I've pretty much no idea what the intention was for that field. I've > > been working on formally documenting all those things, but draw a blank > > for that > > > > FYI: > > > > https://review.openstack.org/#/c/136380/1/doc/hypervisor-support.ini > > > > Regards, Daniel > > Nice! 
I will keep an eye on that :) > > > Regards, > Markus Zoeller > IRC: markus_z > Launchpad: mzoeller > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Dmitry Guryanov From anne at openstack.org Tue Dec 16 16:57:54 2014 From: anne at openstack.org (Anne Gentle) Date: Tue, 16 Dec 2014 10:57:54 -0600 Subject: [openstack-dev] [Neutron] Core/Vendor code decomposition In-Reply-To: <878ui7rjtz.fsf@metaswitch.com> References: <548F2DC9.5070907@openstack.org> <878ui7rjtz.fsf@metaswitch.com> Message-ID: On Tue, Dec 16, 2014 at 4:05 AM, Neil Jerram wrote: > > Stefano Maffulli writes: > > > On 12/09/2014 04:11 PM, by wrote: > >>>>[vad] how about the documentation in this case?... bcos it needs some > >> place to document (a short desc and a link to vendor page) or list these > >> kind of out-of-tree plugins/drivers... just to make the user aware of > >> the availability of such plugins/driers which is compatible with so and > >> so openstack release. > >> I checked with the documentation team and according to them, only the > >> following plugins/drivers only will get documented... > >> 1) in-tree plugins/drivers (full documentation) > >> 2) third-party plugins/drivers (ie, one implements and follows this new > >> proposal, a.k.a partially-in-tree due to the integration module/code)... > >> > >> *** no listing/mention about such completely out-of-tree > plugins/drivers*** > > > > Discoverability of documentation is a serious issue. As I commented on > > docs spec [1], I think there are already too many places, mini-sites and > > random pages holding information that is relevant to users. We should > > make an effort to keep things discoverable, even if not maintained > > necessarily by the same, single team. 
> > > > I think the docs team means that they are not able to guarantee > > documentation for third-party *themselves* (and has not been able, too). > > The docs team is already overworked as it is now, they can't take on > > more responsibilities. > > > > So once Neutron's code will be split, documentation for the users of all > > third-party modules should find a good place to live in, indexed and > > searchable together where the rest of the docs are. I'm hoping that we > > can find a place (ideally under docs.openstack.org?) where third-party > > documentation can live and be maintained by the teams responsible for > > the code, too. > > > > Thoughts? > > I suggest a simple table, under docs.openstack.org, where each row has > the plugin/driver name, and then links to the documentation and code. > There should ideally be a very lightweight process for vendors to add > their row(s) to this table, and to edit those rows. > > I don't think it makes sense for the vendor documentation itself to be > under docs.openstack.org, while the code is out of tree. > > Stef has suggested docs.openstack.org/third-party as a potential location on the review at [1] https://review.openstack.org/#/c/133372/. The proposal currently is that the list's source would be in the openstack-manuals repository, and the process for adding to that repo is the same as all OpenStack contributions. I plan to finalize the plan in January, thanks all for the input, and keep it coming. Anne > Regards, > Neil > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Tue Dec 16 17:05:23 2014 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 16 Dec 2014 18:05:23 +0100 Subject: [openstack-dev] [stable] Organizational changes to support stable branches In-Reply-To: <54733885.5030300@openstack.org> References: <54649730.5070105@openstack.org> <54733885.5030300@openstack.org> Message-ID: <54906653.6090308@openstack.org> New status update: The switch to per-project stable review teams is now completed. People that originally were in the openstack-stable-maint have been split between stable-maint-core (for cross-project stable policy guardians) and $PROJECT-stable-maint (for those specialized in reviewing one project stable branch in particular). If you were a member of openstack-stable-maint and feel like you've been misplaced, don't hesitate to contact me or another stable-maint-core member. Regards, Thierry Carrez wrote: > OK, since there was no disagreement I pushed the changes to: > https://wiki.openstack.org/wiki/StableBranch > > We'll get started setting up project-specific stable-maint teams ASAP. > Cheers, > > Thierry Carrez wrote: >> TL;DR: >> Every project should designate a Stable branch liaison. >> >> Hi everyone, >> >> Last week at the summit we discussed evolving the governance around >> stable branches, in order to maintain them more efficiently (and >> hopefully for a longer time) in the future. >> >> The current situation is the following: there is a single >> stable-maint-core review team that reviews all backports for all >> projects, making sure the stable rules are followed. This does not scale >> that well, so we started adding project-specific people to the single >> group, but they (rightfully) only care about one project. Things had to >> change for Kilo. Here is what we came up with: >> >> 1. 
We propose that integrated projects with stable branches designate a >> formal "Stable Branch Liaison" (by default, that would be the PTL, but I >> strongly encourage someone specifically interested in stable branches to >> step up). The Stable Branch Liaison is responsible for making sure >> backports are proposed for critical issues in their project, and make >> sure proposed backports are reviewed. They are also the contact point >> for stable branch release managers around point release times. >> >> 2. We propose to set up project-specific review groups >> ($PROJECT-stable-core) which would be in charge of reviewing backports >> for a given project, following the stable rules. Originally that group >> should be the Stable Branch Liaison + stable-maint-core. The group is >> managed by stable-maint-core, so that we make sure any addition is well >> aware of the Stable Branch rules before they are added. The Stable >> Branch Liaison should suggest names for addition to the group as needed. >> >> 3. The current stable-maint-core group would be reduced to stable branch >> release managers and other active cross-project stable branch rules >> custodians. We'll remove project-specific people and PTLs that were >> added in the past. The new group would be responsible for granting >> exceptions for all questionable backports raised by $PROJECT-stable-core >> groups, providing backports reviews help everywhere, maintain the stable >> branch rules (and make sure they are respected), and educate proposed >> $PROJECT-stable-core members on the rules. >> >> 4. Each stable branch (stable/icehouse, stable/juno...) that we >> concurrently support should have a champion. Stable Branch Champions are >> tasked with championing a specific stable branch support, making sure >> the branch stays in good shape and remains usable at all times. They >> monitor periodic jobs failures and enlist the help of others in order to >> fix the branches in case of breakage. 
They should also raise flags if >> for some reason they are blocked and don't receive enough support, in >> which case early abandon of the branch will be considered. Adam >> Gandelman volunteered to be the stable/juno champion. Ihar Hrachyshka >> (was) volunteered to be the stable/icehouse champion. >> >> 5. To set expectations right and evolve the meaning of "stable" over >> time to gradually mean more "not changing", we propose to introduce >> support phases for stable branches. During the first 6 months of life of >> a stable branch (Phase I) any significant bug may be backported. During >> the next 6 months of life of a stable branch (Phase II), only critical >> issues and security fixes may be backported. After that and until end of >> life (Phase III), only security fixes may be backported. That way, at >> any given time, there is only one stable branch in "Phase I" support. >> >> 6. In order to raise awareness, all stable branch discussions will now >> happen on the -dev list (with prefix [stable]). The >> openstack-stable-maint list is now only used for periodic jobs reports, >> and is otherwise read-only. >> >> Let us know if you have any comment, otherwise we'll proceed to set >> those new policies up. >> > > -- Thierry Carrez (ttx) From dolph.mathews at gmail.com Tue Dec 16 17:31:35 2014 From: dolph.mathews at gmail.com (Dolph Mathews) Date: Tue, 16 Dec 2014 11:31:35 -0600 Subject: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets In-Reply-To: <54900903.6090008@vmware.com> References: <54900903.6090008@vmware.com> Message-ID: I've envisioned basically the same feature before, but I don't find the comments to be particularly useful without the complete context. What I really want from gerrit is a 3-way diff, wherein the first column is always the original state of the repo, the second column is a user-selectable patchset between (patchset 1) and (latest patchset - 1), and the third column is always the (latest patchset). 
And then make it easy for me to switch the middle column to a different patchset, without scrolling back to the top of the page. You'd be able to quickly skim through the history of comments and see the evolution of a patch, which I think is the same user experience that you're looking for? I agree with Jeremy though, this is ideally an upstream effort to improve gerrit itself. On Tue, Dec 16, 2014 at 4:27 AM, Radoslav Gerganov wrote: > I never liked how Gerrit is displaying inline comments and I find it hard > to follow discussions on changes with many patch sets and inline comments. > So I tried to hack together an html view which display all comments grouped > by patch set, file and commented line. You can see the result at > http://gerrit-mirror.appspot.com/. Some examples: > > http://gerrit-mirror.appspot.com/127283 > http://gerrit-mirror.appspot.com/128508 > http://gerrit-mirror.appspot.com/83207 > > There is room for many improvements (my css skills are very limited) but I > am just curious if someone else finds the idea useful. The frontend is > using the same APIs as the Gerrit UI and the backend running on > GoogleAppEngine is just proxying the requests to review.openstack.org. So > in theory if we serve the html page from our Gerrit it will work. You can > find all sources here: https://github.com/rgerganov/gerrit-hacks. Let me > know what you think. > > Thanks, > Rado > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
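The data for a view like Rado's is one REST call per change: Gerrit already exposes all published inline comments for every patch set through GET /changes/{id}/comments, where each CommentInfo dict carries a patch_set field, and every JSON response is prefixed with ")]}'" as an XSSI defense. A minimal sketch in standalone Python (the in-browser JavaScript prototype would do the same grouping client-side; the server URL is just an example):

```python
import json
import urllib.request

GERRIT_URL = "https://review.openstack.org"  # any Gerrit with anonymous read access

def parse_comments(body):
    """Parse the body of GET /changes/{id}/comments.

    The first line is the ")]}'" XSSI guard and must be stripped before
    JSON parsing.  The payload maps file path -> list of CommentInfo
    dicts.  Returns {patch_set: [(path, line, message), ...]}.
    """
    data = json.loads(body.split("\n", 1)[1])
    grouped = {}
    for path, comments in data.items():
        for c in comments:
            grouped.setdefault(c.get("patch_set"), []).append(
                (path, c.get("line"), c.get("message")))
    return grouped

def fetch_comments(change_id):
    # One round trip per change; no GAE proxy is needed when the page
    # is served from the same origin as the Gerrit UI.
    url = "%s/changes/%s/comments" % (GERRIT_URL, change_id)
    with urllib.request.urlopen(url) as resp:
        return parse_comments(resp.read().decode("utf-8"))
```

fetch_comments("127283") would return a dict keyed by patch set number, ready to render grouped by patch set, file, and line.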
URL: From thomas.maddox at RACKSPACE.COM Tue Dec 16 17:32:13 2014 From: thomas.maddox at RACKSPACE.COM (Thomas Maddox) Date: Tue, 16 Dec 2014 17:32:13 +0000 Subject: [openstack-dev] [Neutron] Looking for feedback: spec for allowing additional IPs to be shared Message-ID: Hey all, It seems I missed the Kilo proposal deadline for Neutron, unfortunately, but I still wanted to propose this spec for Neutron and get feedback/approval, sooner rather than later, so I can begin working on an implementation, even if it can't land in Kilo. I opted to put this in an etherpad for now for collaboration due to missing the Kilo proposal deadline. Spec markdown in etherpad: https://etherpad.openstack.org/p/allow-sharing-additional-ips Blueprint: https://blueprints.launchpad.net/neutron/+spec/allow-sharing-additional-ips I also want to add this to the meeting agenda for Monday and hopefully we can get to chatting about it. :) Cheers! -Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From kurt.r.taylor at gmail.com Tue Dec 16 17:38:45 2014 From: kurt.r.taylor at gmail.com (Kurt Taylor) Date: Tue, 16 Dec 2014 11:38:45 -0600 Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: <548F85C8.6020400@openstack.org> References: <547CCA69.4000909@anteaya.info> <837B116B6E5B934DA06D9AD0FD79C6A3018E9C60@SHSMSX104.ccr.corp.intel.com> <547EB5C9.8020008@rackspace.com> <547F8DC1.5000202@anteaya.info> <548F85C8.6020400@openstack.org> Message-ID: On Mon, Dec 15, 2014 at 7:07 PM, Stefano Maffulli wrote: > > On 12/05/2014 07:08 AM, Kurt Taylor wrote: > > 1. Meeting content: Having 2 meetings per week is more than is needed at > > this stage of the working group. There just isn't enough meeting content > > to justify having two meetings every week. > > I'd like to discuss this further: the stated objectives of the meetings > are very wide and may allow for more than one slot per week. 
In > particular I'm seeing the two below as good candidates for 'meet as many > times as possible': > > * to provide a forum for the curious and for OpenStack programs who > are not yet in this space but may be in the future > * to encourage questions from third party folks and support the > sourcing of answers > > > > As I mentioned above, probably one way to do this is to make some slots > more focused on engaging newcomers and answering questions, more like > serendipitous mentoring sessions with the less involved, while another > slot could be dedicated to more focused and long term efforts, with more > committed people? > This is an excellent idea, let's split the meetings into: 1) Mentoring - mentoring new CI team members and operators, help them understand infra tools and processes. Anita can continue her fantastic work here. 2) Working Group - working meeting for documentation, reviewing patches for relevant work, and improving the consumability of infra CI components. I will be happy to chair these meetings initially. I am sure I can get help with these meetings for the other time zones also. With this approach we can also continue to use the new meeting times voted on by the group, and each is focused on targeting a specific group with very different needs. Thanks Stefano! Kurt Taylor (krtaylor) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dguryanov at parallels.com Tue Dec 16 18:01:50 2014 From: dguryanov at parallels.com (Dmitry Guryanov) Date: Tue, 16 Dec 2014 21:01:50 +0300 Subject: [openstack-dev] [Nova] question about "Get Guest Info" row in HypervisorSupportMatrix In-Reply-To: <20141209153935.GM29167@redhat.com> References: <7997383.8LhO9nnzxZ@dblinov.sw.ru> <20141209153935.GM29167@redhat.com> Message-ID: <4564977.o3RP0TIijQ@dblinov.sw.ru> On Tuesday 09 December 2014 15:39:35 Daniel P. Berrange wrote: > On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote: > > Hello! 
> > > > There is a feature in HypervisorSupportMatrix > > (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called "Get > > Guest > > Info". Does anybody know, what does it mean? I haven't found anything like > > this neither in nova api nor in horizon and nova command line. > > I've pretty much no idea what the intention was for that field. I've > been working on formally documenting all those things, but draw a blank > for that > > FYI: > > https://review.openstack.org/#/c/136380/1/doc/hypervisor-support.ini > > Thanks, looks much better than the previous one. I think "Auto configure disk" refers to resizing filesystems on root disk according to value given in flavor. > Regards, > Daniel -- Dmitry Guryanov From lsurette at redhat.com Tue Dec 16 18:13:53 2014 From: lsurette at redhat.com (Liz Blanchard) Date: Tue, 16 Dec 2014 13:13:53 -0500 Subject: [openstack-dev] [Horizon] [UX] Curvature interactive virtual network design In-Reply-To: References: Message-ID: On Nov 7, 2014, at 11:16 AM, John Davidge (jodavidg) wrote: > As discussed in the Horizon contributor meet up, here at Cisco we're interested in upstreaming our work on the Curvature dashboard into Horizon. We think that it can solve a lot of issues around guidance for new users and generally improving the experience of interacting with Neutron. Possibly an alternative persona for novice users? > > For reference, see: > http://youtu.be/oFTmHHCn2-g - Video Demo > https://www.openstack.org/summit/portland-2013/session-videos/presentation/interactive-visual-orchestration-with-curvature-and-donabe - Portland presentation > https://github.com/CiscoSystems/curvature - original (Rails based) code > We'd like to gauge interest from the community on whether this is something people want. > > Thanks, > > John, Brad & Sam Hey guys, Sorry for my delayed response here - just coming back from maternity leave. 
I've been waiting and hoping since the Portland summit that the curvature work you have done would be brought in to Horizon. A definite +1 from me from a user experience point of view. It would be great to have a solid plan on how this could work with or be additional to the Orchestration and Network Topology pieces that currently exist in Horizon. Let me know if I can help out with any design review, wireframe, or usability testing aspects. Best, Liz -------------- next part -------------- An HTML attachment was scrubbed... URL: From thinrichs at vmware.com Tue Dec 16 18:24:37 2014 From: thinrichs at vmware.com (Tim Hinrichs) Date: Tue, 16 Dec 2014 18:24:37 +0000 Subject: [openstack-dev] [Congress] Re: Placement and Scheduling via Policy In-Reply-To: References: <6841EB3A-4304-4FA7-B5A2-3AB92AFAF89D@vmware.com> <2B1037E6-105A-47FE-A7CE-6E6EE9CEAB90@vmware.com> <018209AD-5009-4873-905D-8A7AE9F4A2C8@vmware.com> <98E78314-3F6D-4F06-A6EF-6BDA94E54B4B@vmware.com> <2c1d29c9ed6c41aabc8ae2efa964b9aa@BRMWP-EXMB12.corp.brocade.com> Message-ID: [Adding openstack-dev to this thread. For those of you just joining: We started kicking around ideas for how we might integrate a special-purpose VM placement engine into Congress.] Kudva: responses inline. On Dec 16, 2014, at 6:25 AM, Prabhakar Kudva > wrote: Hi, I am very interested in this. So, it looks like there are two parts to this: 1. Policy analysis when there are a significant mix of logical and builtin predicates (i.e., runtime should identify a solution space when there are arithmetic operators). This will require linear programming/ILP type solvers. There might be a need to have a function in runtime.py that specifically deals with this (Tim?) I think it's right that we expect there to be a mix of builtins and standard predicates. 
But what we're considering here is having the linear solver be treated as if it were a domain-specific policy engine. So that solver wouldn't be embedded into the runtime.py necessarily. Rather, we'd delegate part of the policy to that domain-specific policy engine. 2. Enforcement. That is with a large number of constraints in place for placement and scheduling, how does the policy engine communicate and enforce the placement constraints to nova scheduler. I would imagine that we could delegate either enforcement or monitoring or both. Eventually we want enforcement here, but monitoring could be useful too. And yes you're asking the right questions. I was trying to break the problem down into pieces in my bullet (1) below. But I think there is significant overlap in the questions we need to answer whether we're delegating monitoring or enforcement. Both of these require some form of mathematical analysis. Would be happy and interested to discuss more on these lines. Maybe take a look at how I tried to breakdown the problem into separate questions in bullet (1) below and see if that makes sense. Tim Prabhakar From: Tim Hinrichs > To: "ruby.krishnaswamy at orange.com" > Cc: "Ramki Krishnan (ramk at Brocade.com)" >, Gokul B Kandiraju/Watson/IBM at IBMUS, Prabhakar Kudva/Watson/IBM at IBMUS Date: 12/15/2014 12:09 PM Subject: Re: Placement and Scheduling via Policy ________________________________ [Adding Prabhakar and Gokul, in case they are interested.] 1) Ruby, thinking about the solver as taking 1 matrix of [vm, server] and returning another matrix helps me understand what we're talking about - thanks. I think you're right that once we move from placement to optimization problems in general we'll need to figure out how to deal with actions. But if it's a placement-specific policy engine, then we can build VM-migration into it. 
It seems to me that the only part left is figuring out how to take an arbitrary policy, carve off the placement-relevant portion, and create the inputs the solver needs to generate that new matrix. Some thoughts... - My gut tells me that the placement-solver should basically say "I enforce policies having to do with the schema nova:location." This way the Congress policy engine knows to give it policies relevant to nova:location (placement). If we do that, I believe we can carve off the right sub theory. - That leaves taking a Datalog policy where we know nova:location is important and converting it to the input language required by a linear solver. We need to remember that the Datalog rules may reference tables from other services like Neutron, Ceilometer, etc. I think the key will be figuring out what class of policies we can actually do that for reliably. Cool - a concrete question. 2) We can definitely wait until January on this. I'll be out of touch starting Friday too; it seems we all get back early January, which seems like the right time to resume our discussions. We have some concrete questions to answer, which was what I was hoping to accomplish before we all went on holiday. Happy Holidays! Tim On Dec 15, 2014, at 5:53 AM, > > wrote: Hi Tim "Questions: 1) Is there any more data the solver needs? Seems like it needs something about CPU-load for each VM. 2) Which solver should we be using? What does the linear program that we feed it look like? How do we translate the results of the linear solver into a collection of "migrate_VM" API calls?" Question (2) seems to me the first to address, in particular: - how to prepare the input (variables, constraints, goal) and invoke the solver? => We need rules that represent constraints to give the solver (e.g. a technical constraint that a VM should not be assigned to more than one server or that more than maximum resource (cpu / mem ...) of a server cannot be assigned. 
- how to translate the results of the linear solver into a collection of API calls? => The output from the "solver" will give the new placement plan (respecting the constraints in input): o E.g. a table of [vm, server, true/false] => Then this depends on how "action" is going to be implemented in Congress (whether an external solver is used or not) o Is the action presented as the "final" DB rows that the system must produce as a result of the actions? o E.g. if current vm table is [vm3, host4] and the recomputed row says [vm3, host6], then the action is to move vm3 to host6? - how will the solver be invoked? => When will the optimization call be invoked? => Is it "batched", e.g. periodically invoke Congress to compute new assignments? Which solver to use: http://www.coin-or.org/projects/ and http://www.coin-or.org/projects/PuLP.xml I think it may be useful to pass through an interface (e.g. LP modeler to generate LP files in standard formats accepted by prevalent solvers) The mathematical program: We can (Orange) contribute to writing down in an informal way the program for this precise use case, if this can wait until January. Perhaps the objective may be to "minimize the number of servers whose usage is less than 50%", since the original policy "Not more than 1 server of type1 to have a load under 50%" need not necessarily have a solution. This may help to derive the "mappings" from Congress (rules to program equations, intermediary tables to program variables)... For the "migration" use case: it may be useful to add some constraint representing cost of migration, such that the solver computes the new assignment plan such that the maximum migration cost is not exceeded. To start with, perhaps number of migrations? I will be away from the end of the week until 5th January. I will also discuss with colleagues to see how we can formalize contribution (congress+nfv poc). Rgds Ruby De : Tim Hinrichs [mailto:thinrichs at vmware.com] Envoyé 
: vendredi 12 décembre 2014 19:41 À : KRISHNASWAMY Ruby IMT/OLPS Cc : Ramki Krishnan (ramk at Brocade.com) Objet : Re: Placement and Scheduling via Policy There's a ton of good stuff here! So if we took Ramki's initial use case and combined it with Ruby's HA constraint, we'd have something like the following policy. // anti-affinity error (server, VM1, VM2) :- same_ha_group(VM1, VM2), nova:location(VM1, server), nova:location(VM2, server) // server-utilization error(server) :- type1_server(server), ceilometer:average_utilization(server, "cpu-util", avg), avg < 50 As a start, this seems plenty complex to me. anti-affinity is great b/c it DOES NOT require a sophisticated solver; server-utilization is great because it DOES require a linear solver. Data the solver needs: - Ceilometer: cpu-utilization for all the servers - Nova: data as to where each VM is located - Policy: high-availability groups Questions: 1) Is there any more data the solver needs? Seems like it needs something about CPU-load for each VM. 2) Which solver should we be using? What does the linear program that we feed it look like? How do we translate the results of the linear solver into a collection of "migrate_VM" API calls? Maybe another few emails and then we set up a phone call. Tim On Dec 11, 2014, at 1:33 AM, > > wrote: Hello A) First a small extension to the use case that Ramki proposes - Add high availability constraint. - Assuming server-a and server-b are of same size and same failure model. [Later: Assumption of identical failure rates can be loosened. Instead of considering only servers as failure domains, can introduce other failure domains ==> not just an anti-affinity policy but a calculation from 99,99.. requirement to VM placements, e.g. 
] - For an exemplary maximum usage scenario, 53 physical servers could be under peak utilization (100%), 1 server (server-a) could be under partial utilization (50%) with 2 instances of type large.3 and 1 instance of type large.2, and 1 server (server-b) could be under partial utilization (37.5%) with 3 instances of large.2. Call VM.one.large2 as the large2 VM in server-a Call VM.two.large2 as one of the large2 VM in server-b - VM.one.large2 and VM.two.large2 - When one of the large.3 instances mapped to server-a is deleted from physical server type 1, Policy 1 will be violated, since the overall utilization of server-a falls to 37.5%. - Various new placement(s) are described below VM.two.large2 must not be moved. Moving VM.two.large2 breaks the anti-affinity constraint. error (server, VM1, VM2) :- node (VM1, server1), node (VM2, server2), same_ha_group(VM1, VM2), equal(server1, server2); 1) New placement 1: Move 2 instances of large.2 to server-a. Overall utilization of server-a - 50%. Overall utilization of server-b - 12.5%. 2) New placement 2: Move 1 instance of large.3 to server-b. Overall utilization of server-a - 0%. Overall utilization of server-b - 62.5%. 3) New placement 3: Move 3 instances of large.2 to server-a. Overall utilization of server-a - 62.5%. Overall utilization of server-b - 0%. New placements 2 and 3 could be considered optimal, since they achieve maximal bin packing and open up the door for turning off server-a or server-b and maximizing energy efficiency. But new placement 3 breaks client policy. BTW: what happens if a given situation does not allow the policy violation to be removed? B) Ramki's original use case can itself be extended: Adding additional constraints to the previous use case due to cases such as: - Server heterogeneity - CPU "pinning" - "VM groups" 
(and allocation - Application interference - Refining on the statement ?instantaneous energy consumption can be approximately measured using an overall utilization metric, which is a combination of CPU utilization, memory usage, I/O usage, and network usage? Let me know if this will interest you. Some (e.g. application interference) will need some time. E.G; benchmarking / profiling to class VMs etc. C) New placement plan execution - In Ramki?s original use case, violation is detected at events such as VM delete. While certainly this by itself is sufficiently complex, we may need to consider other triggering cases (periodic or when multiple VMs are deleted/added) - In this case, it may not be sufficient to compute the new placement plan that brings the system to a configuration that does not break policy, but also add other goals D) Let me know if a use case such as placing ?video conferencing servers? (geographically distributed clients) would suit you (multi site scenario) => Or is it too premature? Ruby De : Tim Hinrichs [mailto:thinrichs at vmware.com] Envoy? : mercredi 10 d?cembre 2014 19:44 ? : KRISHNASWAMY Ruby IMT/OLPS Cc : Ramki Krishnan (ramk at Brocade.com) Objet : Re: Placement and Scheduling via Policy Hi Ruby, Whatever information you think is important for the use case is good. Section 3 from one of the docs Ramki sent you covers his use case. https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1 >From my point of view, the keys things for the use case are? - The placement policy (i.e. the conditions under which VMs require migration). - A description of how we want to compute what specific migrations should be performed (a sketch of (i) the information that we need about current placements, policy violations, etc., (2) what systems/algorithms/etc. can utilize that input to figure out what migrations to perform. 
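To make the second bullet concrete: once a recomputed placement comes back from whatever solver is chosen, the migrations to perform fall out of a simple diff between the current and new assignment tables. A minimal illustrative sketch in Python; the table shapes and the migrate call here are invented placeholders, not an actual Congress or Nova API:

```python
# Hypothetical sketch: diff the current VM -> server assignment table
# against the solver's recomputed assignment and list the moves.
# plan_migrations() and the table format are illustrative only.

def plan_migrations(current, recomputed):
    """Return [(vm, src, dst)] for every VM whose server changed."""
    moves = []
    for vm, src in current.items():
        dst = recomputed.get(vm, src)
        if dst != src:
            moves.append((vm, src, dst))
    return moves

current = {"vm1": "host4", "vm2": "host4", "vm3": "host4"}
recomputed = {"vm1": "host4", "vm2": "host4", "vm3": "host6"}

for vm, src, dst in plan_migrations(current, recomputed):
    # In a real deployment each tuple would become a nova
    # live-migration request; here we only print the plan.
    print("migrate %s: %s -> %s" % (vm, src, dst))
```

Ruby's example later in the thread (current row [vm3, host4], recomputed row [vm3, host6], therefore move vm3 to host6) is exactly this diff.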
I think we want to focus on the end-user/customer experience (write a policy, and watch the VMs move around to obey that policy in response to environment changes) and then work out the details of how to implement that experience. That's why I didn't include things like delays, asynchronous/synchronous, architecture, applications, etc. in my 2 bullets above. Tim On Dec 10, 2014, at 8:55 AM, > > wrote: Hi Ramki, Tim By a "format" for describing use cases, I meant to ask what sets of information to provide, for example, - what granularity in description of use case? - a specific placement policy (and perhaps citing reasons for needing such policy)? - Specific applications - Requirements on the placement manager itself (delay, ...)? o Architecture as well - Specific services from the placement manager (using Congress), such as, o Violation detection (load, security, ...) - Adapting (e.g. context-aware) of policies used In any case I will read the documents that Ramki has sent to not resend similar things. Regards Ruby From: Ramki Krishnan [mailto:ramk at Brocade.com] Sent: Wednesday, December 10, 2014 16:59 To: Tim Hinrichs; KRISHNASWAMY Ruby IMT/OLPS Cc: Norival Figueira; Pierre Ettori; Alex Yip; dilikris at in.ibm.com Subject: RE: Placement and Scheduling via Policy Hi Tim, This sounds like a plan. It would be great if you could add the links below to the Congress wiki. I am all for discussing this in the openstack-dev mailing list and at this point this discussion is completely open.
IRTF NFVRG Research Group: https://trac.tools.ietf.org/group/irtf/trac/wiki/nfvrg IRTF NFVRG draft on NFVIaaS placement/scheduling (includes system analysis for the PoC we are thinking): https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1 IRTF NFVRG draft on Policy Architecture and Framework (looking forward to your comments and thoughts): https://datatracker.ietf.org/doc/draft-norival-nfvrg-nfv-policy-arch/?include_text=1 Hi Ruby, Looking forward to your use cases. Thanks, Ramki -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaufer at us.ibm.com Tue Dec 16 18:26:49 2014 From: kaufer at us.ibm.com (Steven Kaufer) Date: Tue, 16 Dec 2014 12:26:49 -0600 Subject: [openstack-dev] [api] Counting resources In-Reply-To: <4D27423EB14FEE4E9E5F88A15764C51A9DBE1938@ORD1EXD02.RACKSPACE.CORP> References: <4C649DDC-99D0-4842-80B6-668CF3528E93@rackspace.com> <4D27423EB14FEE4E9E5F88A15764C51A9DBE1938@ORD1EXD02.RACKSPACE.CORP> Message-ID: This is a follow up to this thread from a few weeks ago: https://www.mail-archive.com/openstack-dev at lists.openstack.org/msg40287.html I've updated the nova spec in this area to include the total server count in the server_links based on the existence of an "include_count" query parameter (eg: GET /servers?include_count=1). The spec no longer references a GET /servers/count API. Nova spec: https://review.openstack.org/#/c/134279/ Thanks, Steven Kaufer -------------- next part -------------- An HTML attachment was scrubbed... URL: From dannchoi at cisco.com Tue Dec 16 18:26:55 2014 From: dannchoi at cisco.com (Danny Choi (dannchoi)) Date: Tue, 16 Dec 2014 18:26:55 +0000 Subject: [openstack-dev] [qa] Very first VM launched won't response to ARP request Message-ID: Hi, I have seen this issue consistently. I freshly install Ubuntu 14.04 onto Cisco UCS and use devstack to deploy OpenStack (stable Juno) to make it a Compute node. 
For the very first VM launched at this node, it won't respond to ARP requests (I ping from the router namespace). The Linux bridge tap interface shows it's sending packets to the VM, and tcpdump confirms it. qbr8a29c673-4f Link encap:Ethernet HWaddr b2:76:d7:47:c2:fe inet6 addr: fe80::98ac:73ff:fea8:8be1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1137 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:49528 (49.5 KB) TX bytes:648 (648.0 B) qvb8a29c673-4f Link encap:Ethernet HWaddr b2:76:d7:47:c2:fe inet6 addr: fe80::b076:d7ff:fe47:c2fe/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:1132 errors:0 dropped:0 overruns:0 frame:0 TX packets:22 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:63592 (63.5 KB) TX bytes:3228 (3.2 KB) qvo8a29c673-4f Link encap:Ethernet HWaddr 9a:2b:5e:e4:22:f9 inet6 addr: fe80::982b:5eff:fee4:22f9/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:22 errors:0 dropped:0 overruns:0 frame:0 TX packets:1132 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3228 (3.2 KB) TX bytes:63592 (63.5 KB) tap8a29c673-4f Link encap:Ethernet HWaddr fe:16:3e:12:49:10 inet6 addr: fe80::fc16:3eff:fe12:4910/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:7 errors:0 dropped:0 overruns:0 frame:0 TX packets:1143 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:2022 (2.0 KB) TX bytes:64490 (64.4 KB) localadmin at qa6:~/devstack$ ifconfig tap8a29c673-4f tap8a29c673-4f Link encap:Ethernet HWaddr fe:16:3e:12:49:10 inet6 addr: fe80::fc16:3eff:fe12:4910/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:7 errors:0 dropped:0 overruns:0 frame:0 TX packets:1236 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:2022 (2.0 KB) TX bytes:69698 (69.6 KB)
localadmin at qa6:~/devstack$ ifconfig tap8a29c673-4f tap8a29c673-4f Link encap:Ethernet HWaddr fe:16:3e:12:49:10 inet6 addr: fe80::fc16:3eff:fe12:4910/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:7 errors:0 dropped:0 overruns:0 frame:0 TX packets:1239 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:2022 (2.0 KB) TX bytes:69866 (69.8 KB) localadmin at qa6:~/devstack$ sudo tcpdump -i tap8a29c673-4f tcpdump: WARNING: tap8a29c673-4f: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on tap8a29c673-4f, link-type EN10MB (Ethernet), capture size 65535 bytes 13:07:31.678751 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42 13:07:32.678813 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42 13:07:32.678838 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42 13:07:33.678778 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42 13:07:34.678840 ARP, Request who-has 10.0.0.14 tell 10.0.0.1, length 42 Usually I would reboot the VM and the ping works fine afterwards. localadmin at qa6:~/devstack$ sudo tcpdump -i tap8a29c673-4f tcpdump: WARNING: tap8a29c673-4f: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on tap8a29c673-4f, link-type EN10MB (Ethernet), capture size 65535 bytes 13:13:18.154711 IP 10.0.0.1 > 10.0.0.14: ICMP echo request, id 25711, seq 32, length 64 13:13:18.154996 IP 10.0.0.14 > 10.0.0.1: ICMP echo reply, id 25711, seq 32, length 64 13:13:19.156244 IP 10.0.0.1 > 10.0.0.14: ICMP echo request, id 25711, seq 33, length 64 13:13:19.156502 IP 10.0.0.14 > 10.0.0.1: ICMP echo reply, id 25711, seq 33, length 64 Looking for suggestions on how to debug this issue? Thanks, Danny -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dpyzhov at mirantis.com Tue Dec 16 18:54:05 2014 From: dpyzhov at mirantis.com (Dmitry Pyzhov) Date: Tue, 16 Dec 2014 22:54:05 +0400 Subject: [openstack-dev] [Fuel] Image based provisioning Message-ID: Guys, we are about to enable image based provisioning in our master by default. I'm trying to figure out the requirements for this change. As far as I know, it was not tested in the scale lab. Is it true? Have we ever run a full system test cycle with this option? Do we have any other prerequisites? -------------- next part -------------- An HTML attachment was scrubbed... URL: From yudupi at cisco.com Tue Dec 16 19:14:38 2014 From: yudupi at cisco.com (Yathiraj Udupi (yudupi)) Date: Tue, 16 Dec 2014 19:14:38 +0000 Subject: [openstack-dev] [Congress] Re: Placement and Scheduling via Policy Message-ID: Tim, I read the conversation thread below and this got me interested as it relates to the discussion we had at the Policy Summit (mid-cycle meet up) held in Palo Alto a few months ago. This relates to our project, Nova Solver Scheduler, which I had talked about at the Policy Summit. Please see this - https://github.com/stackforge/nova-solver-scheduler We already have a working constraints-based solver framework/engine that handles Nova placement, and we are currently active in Stackforge, and aim to get this integrated into the Gantt project (https://blueprints.launchpad.net/nova/+spec/solver-scheduler), based on our discussions in the Nova scheduler sub group. When I saw discussions around using Linear programming (LP) solvers, PuLP, etc., I thought of pitching in here to say we have already demonstrated integrating an LP-based solver for Nova compute placements. Please see: https://www.youtube.com/watch?v=7QzDbhkk-BI#t=942 for a demo of this (from our talk at the Atlanta OpenStack summit). Based on this email thread, I believe Ramki, one of our early collaborators, is driving a similar solution in the NFV ETSI research group.
Glad to know our Solver Scheduler project is getting interest now. As part of Congress integration, at the Policy Summit, I had suggested we can try to translate a Congress policy into our Solver Scheduler's constraints, and use this to enforce Nova placement policies. We can already demonstrate policy-driven nova placements using our pluggable constraints model. So it should be easy to integrate with Congress. The Nova solver scheduler team would be glad to help with any efforts wrt trying out a Congress integration for Nova placements. Thanks, Yathi. On 12/16/14, 10:24 AM, "Tim Hinrichs" > wrote: [Adding openstack-dev to this thread. For those of you just joining: We started kicking around ideas for how we might integrate a special-purpose VM placement engine into Congress.] Kudva: responses inline. On Dec 16, 2014, at 6:25 AM, Prabhakar Kudva > wrote: Hi, I am very interested in this. So, it looks like there are two parts to this: 1. Policy analysis when there is a significant mix of logical and builtin predicates (i.e., runtime should identify a solution space when there are arithmetic operators). This will require linear programming/ILP type solvers. There might be a need to have a function in runtime.py that specifically deals with this (Tim?) I think it's right that we expect there to be a mix of builtins and standard predicates. But what we're considering here is having the linear solver be treated as if it were a domain-specific policy engine. So that solver wouldn't be embedded into runtime.py necessarily. Rather, we'd delegate part of the policy to that domain-specific policy engine. 2. Enforcement. That is, with a large number of constraints in place for placement and scheduling, how does the policy engine communicate and enforce the placement constraints to the nova scheduler? I would imagine that we could delegate either enforcement or monitoring or both. Eventually we want enforcement here, but monitoring could be useful too.
And yes you're asking the right questions. I was trying to break the problem down into pieces in my bullet (1) below. But I think there is significant overlap in the questions we need to answer whether we're delegating monitoring or enforcement. Both of these require some form of mathematical analysis. Would be happy and interested to discuss more on these lines. Maybe take a look at how I tried to break down the problem into separate questions in bullet (1) below and see if that makes sense. Tim Prabhakar From: Tim Hinrichs > To: "ruby.krishnaswamy at orange.com" > Cc: "Ramki Krishnan (ramk at Brocade.com)" >, Gokul B Kandiraju/Watson/IBM at IBMUS, Prabhakar Kudva/Watson/IBM at IBMUS Date: 12/15/2014 12:09 PM Subject: Re: Placement and Scheduling via Policy ________________________________ [Adding Prabhakar and Gokul, in case they are interested.] 1) Ruby, thinking about the solver as taking 1 matrix of [vm, server] and returning another matrix helps me understand what we're talking about; thanks. I think you're right that once we move from placement to optimization problems in general we'll need to figure out how to deal with actions. But if it's a placement-specific policy engine, then we can build VM-migration into it. It seems to me that the only part left is figuring out how to take an arbitrary policy, carve off the placement-relevant portion, and create the inputs the solver needs to generate that new matrix. Some thoughts... - My gut tells me that the placement-solver should basically say "I enforce policies having to do with the schema nova:location." This way the Congress policy engine knows to give it policies relevant to nova:location (placement). If we do that, I believe we can carve off the right sub theory. - That leaves taking a Datalog policy where we know nova:location is important and converting it to the input language required by a linear solver.
We need to remember that the Datalog rules may reference tables from other services like Neutron, Ceilometer, etc. I think the key will be figuring out what class of policies we can actually do that for reliably. Cool: a concrete question. 2) We can definitely wait until January on this. I'll be out of touch starting Friday too; it seems we all get back early January, which seems like the right time to resume our discussions. We have some concrete questions to answer, which was what I was hoping to accomplish before we all went on holiday. Happy Holidays! Tim On Dec 15, 2014, at 5:53 AM, > > wrote: Hi Tim "Questions: 1) Is there any more data the solver needs? Seems like it needs something about CPU-load for each VM. 2) Which solver should we be using? What does the linear program that we feed it look like? How do we translate the results of the linear solver into a collection of 'migrate_VM' API calls?" Question (2) seems to me the first to address, in particular: "how to prepare the input (variables, constraints, goal) and invoke the solver" => We need rules that represent constraints to give the solver (e.g. a technical constraint that a VM should not be assigned to more than one server, or that more than the maximum resource (cpu / mem ...) of a server cannot be assigned). "how to translate the results of the linear solver into a collection of API calls": => The output from the "solver" will give the new placement plan (respecting the constraints in input)... o E.g. a table of [vm, server, true/false] => Then this depends on how "action" is going to be implemented in Congress (whether an external solver is used or not) o Is the action presented as the "final" DB rows that the system must produce as a result of the actions? o E.g. if the current vm table is [vm3, host4] and the recomputed row says [vm3, host6], then the action is to move vm3 to host6? "how will the solver be invoked?" => When will the optimization call be invoked? => Is it "batched", e.g.
periodically invoke Congress to compute new assignments? Which solver to use: http://www.coin-or.org/projects/ and http://www.coin-or.org/projects/PuLP.xml I think it may be useful to pass through an interface (e.g. an LP modeler to generate LP files in standard formats accepted by prevalent solvers) The mathematical program: We can (Orange) contribute to writing down in an informal way the program for this precise use case, if this can wait until January. Perhaps the objective may be to "minimize the number of servers whose usage is less than 50%", since the original policy "Not more than 1 server of type1 to have a load under 50%" need not necessarily have a solution. This may help to derive the "mappings" from Congress (rules to program equations, intermediary tables to program variables). For the "migration" use case: it may be useful to add some constraint representing the cost of migration, such that the solver computes the new assignment plan such that the maximum migration cost is not exceeded. To start with, perhaps the number of migrations? I will be away from the end of the week until 5th January. I will also discuss with colleagues to see how we can formalize contribution (congress+nfv poc). Rgds Ruby From: Tim Hinrichs [mailto:thinrichs at vmware.com] Sent: Friday, December 12, 2014 19:41 To: KRISHNASWAMY Ruby IMT/OLPS Cc: Ramki Krishnan (ramk at Brocade.com) Subject: Re: Placement and Scheduling via Policy There's a ton of good stuff here! So if we took Ramki's initial use case and combined it with Ruby's HA constraint, we'd have something like the following policy. // anti-affinity error (server, VM1, VM2) :- same_ha_group(VM1, VM2), nova:location(VM1, server), nova:location(VM2, server) // server-utilization error(server) :- type1_server(server), ceilometer:average_utilization(server, "cpu-util", avg), avg < 50 As a start, this seems plenty complex to me.
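The anti-affinity plus utilization policy in this exchange can be prototyped before committing to a solver. The toy sketch below, stdlib-only and with invented data, stands in for the LP/ILP formulation by brute-forcing all VM-to-server assignments; a real implementation would hand the same constraints to an LP modeler such as PuLP, as suggested above:

```python
# Toy stand-in for the linear program discussed in this thread:
# enumerate every VM -> server assignment, keep only those that respect
# capacity and anti-affinity, and pick the one minimizing the number of
# under-utilized (<50%) servers (Ruby's suggested objective).
# All server/VM sizes and HA groups below are invented for illustration.
from itertools import product

servers = {"server-a": 8, "server-b": 8}   # capacity, e.g. in cores
vms = {"vm1": 4, "vm2": 2, "vm3": 2}       # cores requested per VM
ha_groups = [("vm1", "vm2")]               # anti-affinity pairs

def violations(placement):
    # anti-affinity: VMs in the same HA group must not share a server
    return sum(1 for a, b in ha_groups if placement[a] == placement[b])

def underutilized(placement):
    # count non-empty servers loaded below 50% of capacity
    count = 0
    for srv, cap in servers.items():
        load = sum(c for vm, c in vms.items() if placement[vm] == srv)
        if 0 < load < 0.5 * cap:
            count += 1
    return count

def solve():
    best = None
    for combo in product(servers, repeat=len(vms)):
        placement = dict(zip(vms, combo))
        load_ok = all(
            sum(c for vm, c in vms.items() if placement[vm] == srv) <= cap
            for srv, cap in servers.items())
        if load_ok and violations(placement) == 0:
            score = underutilized(placement)
            if best is None or score < best[0]:
                best = (score, placement)
    return best[1] if best else None

print(solve())
```

In a genuine LP formulation the same data becomes binary variables x[vm][server], the capacity and anti-affinity checks become linear constraints, and the objective is expressed over indicator variables for under-utilized servers; the enumeration here is only to make the shape of the problem concrete.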
anti-affinity is great b/c it DOES NOT require a sophisticated solver; server-utilization is great because it DOES require a linear solver. Data the solver needs: - Ceilometer: cpu-utilization for all the servers - Nova: data as to where each VM is located - Policy: high-availability groups Questions: 1) Is there any more data the solver needs? Seems like it needs something about CPU-load for each VM. 2) Which solver should we be using? What does the linear program that we feed it look like? How do we translate the results of the linear solver into a collection of "migrate_VM" API calls? Maybe another few emails and then we set up a phone call. Tim On Dec 11, 2014, at 1:33 AM, > > wrote: Hello A) First a small extension to the use case that Ramki proposes - Add high availability constraint. - Assuming server-a and server-b are of same size and same failure model. [Later: Assumption of identical failure rates can be loosened. Instead of considering only servers as failure domains, can introduce other failure domains ==> not just an anti-affinity policy but a calculation from a 99.99... requirement to VM placements, e.g.] - For an exemplary maximum usage scenario, 53 physical servers could be under peak utilization (100%), 1 server (server-a) could be under partial utilization (50%) with 2 instances of type large.3 and 1 instance of type large.2, and 1 server (server-b) could be under partial utilization (37.5%) with 3 instances of type large.2. Call VM.one.large2 as the large2 VM in server-a Call VM.two.large2 as one of the large2 VMs in server-b - VM.one.large2 and VM.two.large2 - When one of the large.3 instances mapped to server-a is deleted from physical server type 1, Policy 1 will be violated, since the overall utilization of server-a falls to 37.5%. - Various new placement(s) are described below VM.two.large2 must not be moved. Moving VM.two.large2 breaks the anti-affinity constraint.
error (server, VM1, VM2) :- node (VM1, server1), node (VM2, server2), same_ha_group(VM1, VM2), equal(server1, server2); 1) New placement 1: Move 2 instances of large.2 to server-a. Overall utilization of server-a - 50%. Overall utilization of server-b - 12.5%. 2) New placement 2: Move 1 instance of large.3 to server-b. Overall utilization of server-a - 0%. Overall utilization of server-b - 62.5%. 3) New placement 3: Move 3 instances of large.2 to server-a. Overall utilization of server-a - 62.5%. Overall utilization of server-b - 0%. New placements 2 and 3 could be considered optimal, since they achieve maximal bin packing and open up the door for turning off server-a or server-b and maximizing energy efficiency. But new placement 3 breaks client policy. BTW: what happens if a given situation does not allow the policy violation to be removed? B) Ramki's original use case can itself be extended: Adding additional constraints to the previous use case due to cases such as: - Server heterogeneity - CPU "pinning" - "VM groups"
(and allocation) - Application interference - Refining on the statement "instantaneous energy consumption can be approximately measured using an overall utilization metric, which is a combination of CPU utilization, memory usage, I/O usage, and network usage" Let me know if this will interest you. Some (e.g. application interference) will need some time, e.g., benchmarking / profiling to classify VMs etc. C) New placement plan execution - In Ramki's original use case, violation is detected at events such as VM delete. While certainly this by itself is sufficiently complex, we may need to consider other triggering cases (periodic or when multiple VMs are deleted/added) - In this case, it may not be sufficient to compute the new placement plan that brings the system to a configuration that does not break policy, but also add other goals D) Let me know if a use case such as placing "video conferencing servers"
(geographically distributed clients) would suit you (multi site scenario) => Or is it too premature? Ruby From: Tim Hinrichs [mailto:thinrichs at vmware.com] Sent: Wednesday, December 10, 2014 19:44 To: KRISHNASWAMY Ruby IMT/OLPS Cc: Ramki Krishnan (ramk at Brocade.com) Subject: Re: Placement and Scheduling via Policy Hi Ruby, Whatever information you think is important for the use case is good. Section 3 from one of the docs Ramki sent you covers his use case. https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1 From my point of view, the key things for the use case are: - The placement policy (i.e. the conditions under which VMs require migration). - A description of how we want to compute what specific migrations should be performed (a sketch of (i) the information that we need about current placements, policy violations, etc., (ii) what systems/algorithms/etc. can utilize that input to figure out what migrations to perform). I think we want to focus on the end-user/customer experience (write a policy, and watch the VMs move around to obey that policy in response to environment changes) and then work out the details of how to implement that experience. That's why I didn't include things like delays, asynchronous/synchronous, architecture, applications, etc. in my 2 bullets above. Tim On Dec 10, 2014, at 8:55 AM, > > wrote: Hi Ramki, Tim By a "format" for describing use cases, I meant to ask what sets of information to provide, for example, - what granularity in description of use case? - a specific placement policy (and perhaps citing reasons for needing such policy)? - Specific applications - Requirements on the placement manager itself (delay, ...)? o Architecture as well - Specific services from the placement manager (using Congress), such as, o Violation detection (load, security, ...) - Adapting (e.g. context-aware) of policies used In any case I will read the documents that Ramki has sent to not resend similar things.
Regards Ruby From: Ramki Krishnan [mailto:ramk at Brocade.com] Sent: Wednesday, December 10, 2014 16:59 To: Tim Hinrichs; KRISHNASWAMY Ruby IMT/OLPS Cc: Norival Figueira; Pierre Ettori; Alex Yip; dilikris at in.ibm.com Subject: RE: Placement and Scheduling via Policy Hi Tim, This sounds like a plan. It would be great if you could add the links below to the Congress wiki. I am all for discussing this in the openstack-dev mailing list and at this point this discussion is completely open. IRTF NFVRG Research Group: https://trac.tools.ietf.org/group/irtf/trac/wiki/nfvrg IRTF NFVRG draft on NFVIaaS placement/scheduling (includes system analysis for the PoC we are thinking): https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1 IRTF NFVRG draft on Policy Architecture and Framework (looking forward to your comments and thoughts): https://datatracker.ietf.org/doc/draft-norival-nfvrg-nfv-policy-arch/?include_text=1 Hi Ruby, Looking forward to your use cases. Thanks, Ramki -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean at dague.net Tue Dec 16 19:15:41 2014 From: sean at dague.net (Sean Dague) Date: Tue, 16 Dec 2014 14:15:41 -0500 Subject: [openstack-dev] Questions regarding Functional Testing (Paris Summit) In-Reply-To: <2270469.sZVWXDMDUh@workdev-stoner.usersys.redhat.com> References: <2270469.sZVWXDMDUh@workdev-stoner.usersys.redhat.com> Message-ID: <549084DD.7010004@dague.net> On 12/12/2014 01:04 PM, Sean Toner wrote: > Hi everyone, > > I have been reading the etherpad from the Paris summit wrt moving the > functional tests into their respective projects > (https://etherpad.openstack.org/p/kilo-crossproject-move-func-tests-to-projects). I am mostly interested in this from the nova project > perspective. However, I still have a lot of questions. > > For example, is it permissible (or a good idea) to use the > python-*clients as a library for the tasks?
I know these were not allowed in > Tempest, but I don't see why they couldn't be used here (especially > since, AFAIK, there is no testing done on the SDK clients themselves). Sure, though realistically I'd actually expect the clients to have their own tests. > Another question is also about a difference between Tempest and these > new functional tests. In nova's case, it would be very useful to > actually utilize the libvirt library in order to touch the hypervisor > itself. In Tempest, it's not allowed to do that. Would it make sense > to be able to make calls to libvirt within a nova functional test? Examples would be handy here. > Basically, since Tempest was a "public" only library, there needs to be > a different set of rules as to what can and can't be done. Even the > definition of what exactly a functional test is should be more clearly > stated. > > For example, I have been working on a project for some nova tests that > also use the glance and keystone clients (since I am using the python > SDK clients). I saw this quote from the etherpad: > > " Many "api" tests in Tempest require more than one service (eg, nova > api tests require glance) > > Is this an API test or an integration test or a functional test? > sounds to me like cross project integration tests +1+1" > > I would disagree that a functional test should belong to only one > project. IMHO, a functional test is essentially a black box test that > might span one or more projects, though the projects should be related. > For example, I have worked on one of the new features where the config > drive image property is set in the glance image itself, rather than > specified during the nova boot call. > > I believe that's how a functional test can be defined. A black box test > which may require looking "under the hood" that Tempest does not allow. A black box test by definition doesn't look under the hood, and I think that's where there has been a lot of disconnect. 
> Has there been any other work or thoughts on how functional testing > should be done? I've been working through early stages of this on the Nova side. There are very pragmatic reasons for the tests to be owned by a single project. When they are not, we get wedges, where due to factors beyond our control you have 2 source trees in a bind over tests, and the project doesn't have the ability to make an executive decision that those tests aren't actually correct and should be changed. I think the functional correctness of a project needs to be owned by that project. As a community that's been largely pushed at the QA team up until this point, and that both not scalable, as well as not very debuggable (note how many folks just randomly type "recheck" because they have no idea how to debug a failure). -Sean -- Sean Dague http://dague.net From yudupi at cisco.com Tue Dec 16 19:28:50 2014 From: yudupi at cisco.com (Yathiraj Udupi (yudupi)) Date: Tue, 16 Dec 2014 19:28:50 +0000 Subject: [openstack-dev] [Congress] Re: Placement and Scheduling via Policy In-Reply-To: References: Message-ID: To add to what I mentioned below? We from the Solver Scheduler team are a small team here at Cisco, trying to drive this project and slowly adding more complex use cases for scheduling and policy?driven placements. We would really love to have some real contributions from everyone in the community and build this the right way. If it may interest ? some interesting scheduler use cases are here based on one of our community meetings in IRC - https://etherpad.openstack.org/p/SchedulerUseCases This could apply to Congress driving some of this too. I am leading the effort for the Solver Scheduler project ( https://github.com/stackforge/nova-solver-scheduler ) , and if any of you are willing to contribute code, API, benchmarks, and also work on integration, my team and I can help you guide through this. We would be following the same processes under Stackforge at the moment. Thanks, Yathi. 
On 12/16/14, 11:14 AM, "Yathiraj Udupi (yudupi)" > wrote: Tim, I read the conversation thread below and this got me interested as it relates to our discussion we had in the Policy Summit (mid cycle meet up) held in Palo Alto a few months ago. This relates to our project ? Nova Solver Scheduler, which I had talked about at the Policy summit. Please see this - https://github.com/stackforge/nova-solver-scheduler We already have a working constraints-based solver framework/engine that handles Nova placement, and we are currently active in Stackforge, and aim to get this integrated into the Gantt project (https://blueprints.launchpad.net/nova/+spec/solver-scheduler), based on our discussions in the Nova scheduler sub group. When I saw discussions around using Linear programming (LP) solvers, PULP, etc, I thought of pitching in here to say, we already have demonstrated integrating a LP based solver for Nova compute placements. Please see: https://www.youtube.com/watch?v=7QzDbhkk-BI#t=942 for a demo of this (from our talk at the Atlanta Openstack summit). Based on this email thread, I believe Ramki, one of our early collaborators is driving a similar solution in the NFV ETSI research group. Glad to know our Solver scheduler project is getting interest now. As part of Congress integration, at the policy summit, I had suggested, we can try to translate a Congress policy into our Solver Scheduler?s constraints, and use this to enforce Nova placement policies. We can already demonstrate policy-driven nova placements using our pluggable constraints model. So it should be easy to integrate with Congress. The Nova solver scheduler team would be glad to help with any efforts wrt to trying out a Congress integration for Nova placements. Thanks, Yathi. On 12/16/14, 10:24 AM, "Tim Hinrichs" > wrote: [Adding openstack-dev to this thread. For those of you just joining? We started kicking around ideas for how we might integrate a special-purpose VM placement engine into Congress.] 
Kudva: responses inline. On Dec 16, 2014, at 6:25 AM, Prabhakar Kudva > wrote: Hi, I am very interested in this. So, it looks like there are two parts to this: 1. Policy analysis when there is a significant mix of logical and builtin predicates (i.e., the runtime should identify a solution space when there are arithmetic operators). This will require linear programming/ILP type solvers. There might be a need to have a function in runtime.py that specifically deals with this (Tim?) I think it's right that we expect there to be a mix of builtins and standard predicates. But what we're considering here is having the linear solver be treated as if it were a domain-specific policy engine. So that solver wouldn't be embedded into runtime.py necessarily. Rather, we'd delegate part of the policy to that domain-specific policy engine. 2. Enforcement. That is, with a large number of constraints in place for placement and scheduling, how does the policy engine communicate and enforce the placement constraints to the nova scheduler? I would imagine that we could delegate either enforcement or monitoring or both. Eventually we want enforcement here, but monitoring could be useful too. And yes, you're asking the right questions. I was trying to break the problem down into pieces in my bullet (1) below. But I think there is significant overlap in the questions we need to answer whether we're delegating monitoring or enforcement. Both of these require some form of mathematical analysis. Would be happy and interested to discuss more on these lines. Maybe take a look at how I tried to break down the problem into separate questions in bullet (1) below and see if that makes sense. 
Tim Prabhakar From: Tim Hinrichs > To: "ruby.krishnaswamy at orange.com" > Cc: "Ramki Krishnan (ramk at Brocade.com)" >, Gokul B Kandiraju/Watson/IBM at IBMUS, Prabhakar Kudva/Watson/IBM at IBMUS Date: 12/15/2014 12:09 PM Subject: Re: Placement and Scheduling via Policy ________________________________ [Adding Prabhakar and Gokul, in case they are interested.] 1) Ruby, thinking about the solver as taking 1 matrix of [vm, server] and returning another matrix helps me understand what we're talking about -- thanks. I think you're right that once we move from placement to optimization problems in general we'll need to figure out how to deal with actions. But if it's a placement-specific policy engine, then we can build VM-migration into it. It seems to me that the only part left is figuring out how to take an arbitrary policy, carve off the placement-relevant portion, and create the inputs the solver needs to generate that new matrix. Some thoughts... - My gut tells me that the placement-solver should basically say "I enforce policies having to do with the schema nova:location." This way the Congress policy engine knows to give it policies relevant to nova:location (placement). If we do that, I believe we can carve off the right sub-theory. - That leaves taking a Datalog policy where we know nova:location is important and converting it to the input language required by a linear solver. We need to remember that the Datalog rules may reference tables from other services like Neutron, Ceilometer, etc. I think the key will be figuring out what class of policies we can actually do that for reliably. Cool -- a concrete question. 2) We can definitely wait until January on this. I'll be out of touch starting Friday too; it seems we all get back early January, which seems like the right time to resume our discussions. We have some concrete questions to answer, which was what I was hoping to accomplish before we all went on holiday. Happy Holidays! 
Tim On Dec 15, 2014, at 5:53 AM, > > wrote: Hi Tim "Questions: 1) Is there any more data the solver needs? Seems like it needs something about CPU-load for each VM. 2) Which solver should we be using? What does the linear program that we feed it look like? How do we translate the results of the linear solver into a collection of 'migrate_VM' API calls?" Question (2) seems to me the first to address, in particular: "how to prepare the input (variables, constraints, goal) and invoke the solver" => We need rules that represent constraints to give the solver (e.g. a technical constraint that a VM should not be assigned to more than one server, or that more than the maximum resource (cpu/mem/...) of a server cannot be assigned). "how to translate the results of the linear solver into a collection of API calls": => The output from the "solver" will give the new placement plan (respecting the constraints in input): o E.g. a table of [vm, server, true/false] => Then this depends on how "action" is going to be implemented in Congress (whether an external solver is used or not) o Is the action presented as the "final" DB rows that the system must produce as a result of the actions? o E.g. if the current vm table is [vm3, host4] and the recomputed row says [vm3, host6], then the action is to move vm3 to host6? "how will the solver be invoked?" => When will the optimization call be invoked? => Is it "batched", e.g. periodically invoke Congress to compute new assignments? Which solver to use: http://www.coin-or.org/projects/ and http://www.coin-or.org/projects/PuLP.xml I think it may be useful to pass through an interface (e.g. an LP modeler to generate LP files in standard formats accepted by prevalent solvers). The mathematical program: We can (Orange) contribute to writing down, in an informal way, the program for this precise use case, if this can wait until January. 
Perhaps the objective may be to "minimize the number of servers whose usage is less than 50%", since the original policy "Not more than 1 server of type1 to have a load under 50%" need not necessarily have a solution. This may help to derive the "mappings" from Congress (rules to program equations, intermediary tables to program variables). For the "migration" use case: it may be useful to add some constraint representing the cost of migration, such that the solver computes the new assignment plan without exceeding the maximum migration cost. To start with, perhaps the number of migrations? I will be away from the end of the week until 5th January. I will also discuss with colleagues to see how we can formalize contribution (congress+nfv poc). Rgds Ruby From: Tim Hinrichs [mailto:thinrichs at vmware.com] Sent: Friday, December 12, 2014 19:41 To: KRISHNASWAMY Ruby IMT/OLPS Cc: Ramki Krishnan (ramk at Brocade.com) Subject: Re: Placement and Scheduling via Policy There's a ton of good stuff here! So if we took Ramki's initial use case and combined it with Ruby's HA constraint, we'd have something like the following policy. // anti-affinity error (server, VM1, VM2) :- same_ha_group(VM1, VM2), nova:location(VM1, server), nova:location(VM2, server) // server-utilization error(server) :- type1_server(server), ceilometer:average_utilization(server, "cpu-util", avg), avg < 50 As a start, this seems plenty complex to me. anti-affinity is great b/c it DOES NOT require a sophisticated solver; server-utilization is great because it DOES require a linear solver. Data the solver needs: - Ceilometer: cpu-utilization for all the servers - Nova: data as to where each VM is located - Policy: high-availability groups Questions: 1) Is there any more data the solver needs? Seems like it needs something about CPU-load for each VM. 2) Which solver should we be using? What does the linear program that we feed it look like? 
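To make that question concrete, here is a toy sketch of what such a linear program could look like, written with PuLP (one of the COIN-OR tools Ruby linked above). Everything in it is illustrative: the VM sizes, server capacities, and the "minimize servers in use" bin-packing objective are invented stand-ins, not anything from Congress or the Solver Scheduler code.

```python
# Toy VM-placement model in PuLP (pip install pulp). All data is hypothetical.
import pulp

vms = {"vm1": 4, "vm2": 2, "vm3": 2}      # CPU units each VM needs (invented)
servers = {"srv1": 8, "srv2": 8}          # CPU units each server offers (invented)

# x[v][s] = 1 iff VM v is placed on server s; y[s] = 1 iff server s is used.
x = pulp.LpVariable.dicts("assign", (vms, servers), cat="Binary")
y = pulp.LpVariable.dicts("used", servers, cat="Binary")

prob = pulp.LpProblem("vm_placement", pulp.LpMinimize)
prob += pulp.lpSum(y[s] for s in servers)  # objective: fewest servers in use

for v in vms:                              # each VM lands on exactly one server
    prob += pulp.lpSum(x[v][s] for s in servers) == 1
for s in servers:                          # capacity; also forces y[s]=1 when occupied
    prob += pulp.lpSum(vms[v] * x[v][s] for v in vms) <= servers[s] * y[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
placement = {v: next(s for s in servers if pulp.value(x[v][s]) > 0.5) for v in vms}
print(placement)
```

The binary y[s] indicator is what lets a linear solver express the otherwise non-linear "is this server in use" condition; a per-server utilization threshold like the policy above would be modeled the same way, with an indicator variable per server.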
How do we translate the results of the linear solver into a collection of 'migrate_VM' API calls? Maybe another few emails and then we set up a phone call. Tim On Dec 11, 2014, at 1:33 AM, > > wrote: Hello A) First a small extension to the use case that Ramki proposes - Add a high-availability constraint. - Assuming server-a and server-b are of the same size and same failure model. [Later: The assumption of identical failure rates can be loosened. Instead of considering only servers as failure domains, we can introduce other failure domains ==> not just an anti-affinity policy but a calculation from a 99.99... availability requirement to VM placements, e.g. ] - For an exemplary maximum usage scenario, 53 physical servers could be under peak utilization (100%), 1 server (server-a) could be under partial utilization (50%) with 2 instances of type large.3 and 1 instance of type large.2, and 1 server (server-b) could be under partial utilization (37.5%) with 3 instances of type large.2. Call VM.one.large2 the large2 VM in server-a Call VM.two.large2 one of the large2 VMs in server-b - VM.one.large2 and VM.two.large2 - When one of the large.3 instances mapped to server-a is deleted from physical server type 1, Policy 1 will be violated, since the overall utilization of server-a falls to 37.5%. - Various new placement(s) are described below VM.two.large2 must not be moved. Moving VM.two.large2 breaks the anti-affinity constraint. error (server, VM1, VM2) :- node (VM1, server1), node (VM2, server2), same_ha_group(VM1, VM2), equal(server1, server2); 1) New placement 1: Move 2 instances of large.2 to server-a. Overall utilization of server-a - 50%. Overall utilization of server-b - 12.5%. 2) New placement 2: Move 1 instance of large.3 to server-b. Overall utilization of server-a - 0%. Overall utilization of server-b - 62.5%. 3) New placement 3: Move 3 instances of large.2 to server-a. Overall utilization of server-a - 62.5%. Overall utilization of server-b - 0%. 
New placements 2 and 3 could be considered optimal, since they achieve maximal bin packing and open up the door for turning off server-a or server-b and maximizing energy efficiency. But new placement 3 breaks client policy. BTW: what happens if a given situation does not allow the policy violation to be removed? B) Ramki's original use case can itself be extended: Adding additional constraints to the previous use case due to cases such as: - Server heterogeneity - CPU "pinning" - "VM groups" (and allocation) - Application interference - Refining on the statement "instantaneous energy consumption can be approximately measured using an overall utilization metric, which is a combination of CPU utilization, memory usage, I/O usage, and network usage" Let me know if this will interest you. Some (e.g. application interference) will need some time, e.g., benchmarking/profiling to classify VMs, etc. C) New placement plan execution - In Ramki's original use case, violation is detected at events such as VM delete. While certainly this by itself is sufficiently complex, we may need to consider other triggering cases (periodic, or when multiple VMs are deleted/added) - In this case, it may not be sufficient to compute the new placement plan that brings the system to a configuration that does not break policy; we may also need to add other goals D) Let me know if a use case such as placing "video conferencing servers" (geographically distributed clients) would suit you (multi-site scenario) => Or is it too premature? Ruby From: Tim Hinrichs [mailto:thinrichs at vmware.com] Sent: Wednesday, December 10, 2014 19:44 To: KRISHNASWAMY Ruby IMT/OLPS Cc: Ramki Krishnan (ramk at Brocade.com) Subject: Re: Placement and Scheduling via Policy Hi Ruby, Whatever information you think is important for the use case is good. Section 3 from one of the docs Ramki sent you covers his use case. 
https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1 From my point of view, the key things for the use case are: - The placement policy (i.e., the conditions under which VMs require migration). - A description of how we want to compute what specific migrations should be performed (a sketch of (1) the information that we need about current placements, policy violations, etc., and (2) what systems/algorithms/etc. can utilize that input to figure out what migrations to perform). I think we want to focus on the end-user/customer experience (write a policy, and watch the VMs move around to obey that policy in response to environment changes) and then work out the details of how to implement that experience. That's why I didn't include things like delays, asynchronous/synchronous, architecture, applications, etc. in my 2 bullets above. Tim On Dec 10, 2014, at 8:55 AM, > > wrote: Hi Ramki, Tim By a "format" for describing use cases, I meant to ask what sets of information to provide, for example: - what granularity in the description of the use case? - a specific placement policy (and perhaps citing reasons for needing such policy)? - Specific applications - Requirements on the placement manager itself (delay, ...)? o Architecture as well - Specific services from the placement manager (using Congress), such as: o Violation detection (load, security, ...) - Adapting (e.g. context-aware) of policies used In any case I will read the documents that Ramki has sent, to not resend similar things. Regards Ruby From: Ramki Krishnan [mailto:ramk at Brocade.com] Sent: Wednesday, December 10, 2014 16:59 To: Tim Hinrichs; KRISHNASWAMY Ruby IMT/OLPS Cc: Norival Figueira; Pierre Ettori; Alex Yip; dilikris at in.ibm.com Subject: RE: Placement and Scheduling via Policy Hi Tim, This sounds like a plan. It would be great if you could add the links below to the Congress wiki. 
I am all for discussing this in the openstack-dev mailing list and at this point this discussion is completely open. IRTF NFVRG Research Group: https://trac.tools.ietf.org/group/irtf/trac/wiki/nfvrg IRTF NFVRG draft on NFVIaaS placement/scheduling (includes system analysis for the PoC we are thinking): https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1 IRTF NFVRG draft on Policy Architecture and Framework (looking forward to your comments and thoughts): https://datatracker.ietf.org/doc/draft-norival-nfvrg-nfv-policy-arch/?include_text=1 Hi Ruby, Looking forward to your use cases. Thanks, Ramki -------------- next part -------------- An HTML attachment was scrubbed... URL: From engg.sanj at gmail.com Tue Dec 16 19:31:25 2014 From: engg.sanj at gmail.com (Satyasanjibani Rautaray) Date: Wed, 17 Dec 2014 01:01:25 +0530 Subject: [openstack-dev] [Fuel] Adding code to add node to fuel UI Message-ID: Hi, *i am in a process of creating an additional node by editing the code where the new node will be solving a different propose than installing openstack components just for testing currently the new node will install vim for me please help me what else i need to look into to create the complete setup and deploy with fuel i have edited openstack.yaml at /root/fuel-web/nailgun/nailgun/fixtures http://pastebin.com/P1MmDBzP * -- Thanks Satya Mob:9844101001 No one is the best by birth, Its his brain/ knowledge which make him the best. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From greg at greghaynes.net Tue Dec 16 19:47:54 2014 From: greg at greghaynes.net (Gregory Haynes) Date: Tue, 16 Dec 2014 19:47:54 +0000 Subject: [openstack-dev] [TripleO] Bug Squashing Day In-Reply-To: References: <1418247379.1887051.201395737.3992C87F@webmail.messagingengine.com> Message-ID: <1418759063-sup-5722@greghaynes0.opus.gah> > > On Wed, Dec 10, 2014 at 10:36 PM, Gregory Haynes > > wrote: > > > >> A couple weeks ago we discussed having a bug squash day. AFAICT we all > >> forgot, and we still have a huge bug backlog. I'd like to propose we > >> make next Wed. (12/17, in whatever 24-hour window is Wed. in your time zone) > >> a bug squashing day. Hopefully we can add this as an item to our weekly > >> meeting on Tues. to help remind everyone the day before. Friendly reminder that tomorrow (or today for some time zones) is our bug squash day! I hope to see you all in IRC squashing some of our (least) favorite bugs. Random Factoid: We currently have 299 open bugs. Cheers, Greg From adanin at mirantis.com Tue Dec 16 20:00:21 2014 From: adanin at mirantis.com (Andrey Danin) Date: Wed, 17 Dec 2014 00:00:21 +0400 Subject: [openstack-dev] [Fuel] Adding code to add node to fuel UI In-Reply-To: References: Message-ID: Hello. What version of Fuel do you use? Did you reupload openstack.yaml into Nailgun? Do you want just to deploy an operating system and configure a network on a new node? I would really appreciate if you use a period at the end of sentences. 
On Tuesday, December 16, 2014, Satyasanjibani Rautaray wrote: > Hi, > > *i am in a process of creating an additional node by editing the code > where the new node will be solving a different propose than installing > openstack components just for testing currently the new node will install > vim for me please help me what else i need to look into to create the > complete setup and deploy with fuel i have edited openstack.yaml at > /root/fuel-web/nailgun/nailgun/fixtures http://pastebin.com/P1MmDBzP > * > -- > Thanks > Satya > Mob:9844101001 > > No one is the best by birth, Its his brain/ knowledge which make him the > best. > -- Andrey Danin adanin at mirantis.com skype: gcon.monolake -------------- next part -------------- An HTML attachment was scrubbed... URL: From adanin at mirantis.com Tue Dec 16 20:03:34 2014 From: adanin at mirantis.com (Andrey Danin) Date: Wed, 17 Dec 2014 00:03:34 +0400 Subject: [openstack-dev] [Fuel] Image based provisioning In-Reply-To: References: Message-ID: On Tuesday, December 16, 2014, Dmitry Pyzhov wrote: > Guys, > > we are about to enable image based provisioning in our master by default. > I'm trying to figure out requirement for this change. As far as I know, it > was not tested on scale lab. Is it true? Have we ever run full system tests > cycle with this option? > > Do we have any other pre-requirements? > -- Andrey Danin adanin at mirantis.com skype: gcon.monolake -------------- next part -------------- An HTML attachment was scrubbed... URL: From adanin at mirantis.com Tue Dec 16 20:09:27 2014 From: adanin at mirantis.com (Andrey Danin) Date: Wed, 17 Dec 2014 00:09:27 +0400 Subject: [openstack-dev] [Fuel] Image based provisioning In-Reply-To: References: Message-ID: Adding Mellanox team explicitly. Gil, Nurit, Aviram, can you confirm that you tested that feature? It can be enabled on every fresh ISO. You just need to enable the Experimental mode (please, see the documentation for instructions). 
On Tuesday, December 16, 2014, Dmitry Pyzhov wrote: > Guys, > > we are about to enable image based provisioning in our master by default. > I'm trying to figure out requirement for this change. As far as I know, it > was not tested on scale lab. Is it true? Have we ever run full system tests > cycle with this option? > > Do we have any other pre-requirements? > -- Andrey Danin adanin at mirantis.com skype: gcon.monolake -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.a.st.pierre at gmail.com Tue Dec 16 20:09:38 2014 From: chris.a.st.pierre at gmail.com (Chris St. Pierre) Date: Tue, 16 Dec 2014 14:09:38 -0600 Subject: [openstack-dev] [glance] Option to skip deleting images in use? Message-ID: Currently, with delay_delete enabled, the Glance scrubber happily deletes whatever images you ask it to. That includes images that are currently in use by Nova guests, which can really hose things. It'd be nice to have an option to tell the scrubber to skip deletion of images that are currently in use, which is fairly trivial to check for and provides a nice measure of protection. Without delay_delete enabled, checking for images in use likely takes too much time, so this would be limited to just images that are scrubbed with delay_delete. I wanted to bring this up here before I go to the trouble of writing a spec for it, particularly since it doesn't appear that glance currently talks to Nova as a client at all. Is this something that folks would be interested in having? Thanks! -- Chris St. Pierre -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedem at linux.vnet.ibm.com Tue Dec 16 20:18:54 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Tue, 16 Dec 2014 14:18:54 -0600 Subject: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken In-Reply-To: <2709D857-FAB3-4C2D-A7FD-941501A15224@gmail.com> References: <2709D857-FAB3-4C2D-A7FD-941501A15224@gmail.com> Message-ID: <549093AE.3080202@linux.vnet.ibm.com> On 12/12/2014 7:54 PM, melanie witt wrote: > Hi everybody, > > At some point, our db archiving functionality got broken because there was a change to stop ever deleting instance system metadata [1]. For those unfamiliar, the 'nova-manage db archive_deleted_rows' is the thing that moves all soft-deleted (deleted=nonzero) rows to the shadow tables. This is a periodic cleaning that operators can do to maintain performance (as things can get sluggish when deleted=nonzero rows accumulate). > > The change was made because instance_type data still needed to be read even after instances had been deleted, because we allow admin to view deleted instances. I saw a bug [2] and two patches [3][4] which aimed to fix this by changing back to soft-deleting instance sysmeta when instances are deleted, and instead allow read_deleted="yes" for the things that need to read instance_type for deleted instances present in the db. > > My question is, is this approach okay? If so, I'd like to see these patches revive so we can have our db archiving working again. :) I think there's likely something I'm missing about the approach, so I'm hoping people who know more about instance sysmeta than I do, can chime in on how/if we can fix this for db archiving. Thanks. 
> > [1] https://bugs.launchpad.net/nova/+bug/1185190 > [2] https://bugs.launchpad.net/nova/+bug/1226049 > [3] https://review.openstack.org/#/c/110875/ > [4] https://review.openstack.org/#/c/109201/ > > melanie (melwitt) > > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > I changed this from In Progress to Confirmed, removed Alex as the owner (since I didn't see any patches from him) and marked it High: https://bugs.launchpad.net/nova/+bug/1226049 It looks like that could be a duplicate of bug https://bugs.launchpad.net/nova/+bug/1183523 which sounds like a lot of the same problems. dripton had looked at it at one point and said it was Won't Fix at that time, but I don't think that's the case. Note comment 7 in there: https://bugs.launchpad.net/nova/+bug/1183523/comments/7 "comstud thinks we can fix this but we need to do instance_type data differently. Maybe embedded JSON blobs so we have all the information we need without a reference to the instances row. (My opinion: yuck.) So this bug is staying open for now, but it requires some significant redesign to fix." I'm not sure if that's related to comstud's instance_type design summit topic in Atlanta or not, it sounds the same: http://junodesignsummit.sched.org/event/e3f1d51c53fc484d070f02ea36d08601#.VJCS6yvF-KU I can't find the etherpad for that. I'm wondering if Dan Smith's blueprint for flavor-from-sysmeta-to-blob handles that? [1] I've never been sure how those two items are related. Anyway, I think the fix is for the taking assuming someone has a good fix. As noted in one of the bugs, the foreign key constraints in nova don't have cascading deletes, so if it's a foreign key issue, we should find the one that's not being cleaned up before the delete. 
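For anyone following the thread without the code in front of them, the operation melwitt describes reduces to "move rows with deleted != 0 into a parallel shadow table, then purge them from the live table". A toy sqlite illustration follows; the table layout and data are invented, and nova's real archive code additionally has to order the moves around the foreign keys discussed above, which is exactly where it currently breaks:

```python
# Toy illustration of "archive deleted rows" -- NOT the nova implementation.
# Schema/data are hypothetical; nova also has to handle FK ordering.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instances (id INTEGER, uuid TEXT, deleted INTEGER);
    CREATE TABLE shadow_instances (id INTEGER, uuid TEXT, deleted INTEGER);
    INSERT INTO instances VALUES (1, 'a', 0), (2, 'b', 2), (3, 'c', 3);
""")
# Copy soft-deleted rows (deleted != 0) to the shadow table, then purge them.
conn.execute("INSERT INTO shadow_instances SELECT * FROM instances WHERE deleted != 0")
conn.execute("DELETE FROM instances WHERE deleted != 0")

live = conn.execute("SELECT uuid FROM instances").fetchall()
archived = conn.execute("SELECT uuid FROM shadow_instances ORDER BY uuid").fetchall()
print(live, archived)  # [('a',)] [('b',), ('c',)]
```

The foreign-key problem in the bug reports is why the DELETE step can fail in practice: a child row (e.g. in fixed_ips) may still reference the instances row being purged, and nova's schema has no cascading deletes.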
It looked like dripton thought it was fixed_ips at one point: https://review.openstack.org/#/c/32742/ [1] http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/flavor-from-sysmeta-to-blob.html -- Thanks, Matt Riedemann From jaypipes at gmail.com Tue Dec 16 20:30:52 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 16 Dec 2014 15:30:52 -0500 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: References: Message-ID: <5490967C.8020801@gmail.com> Just set the images to is_public=False as an admin and they'll disappear from everyone except the admin. -jay On 12/16/2014 03:09 PM, Chris St. Pierre wrote: > Currently, with delay_delete enabled, the Glance scrubber happily > deletes whatever images you ask it to. That includes images that are > currently in use by Nova guests, which can really hose things. It'd be > nice to have an option to tell the scrubber to skip deletion of images > that are currently in use, which is fairly trivial to check for and > provides a nice measure of protection. > > Without delay_delete enabled, checking for images in use likely takes > too much time, so this would be limited to just images that are scrubbed > with delay_delete. > > I wanted to bring this up here before I go to the trouble of writing a > spec for it, particularly since it doesn't appear that glance currently > talks to Nova as a client at all. Is this something that folks would be > interested in having? Thanks! > > -- > Chris St. Pierre > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From engg.sanj at gmail.com Tue Dec 16 20:42:10 2014 From: engg.sanj at gmail.com (Satyasanjibani Rautaray) Date: Wed, 17 Dec 2014 02:12:10 +0530 Subject: [openstack-dev] [Fuel] Adding code to add node to fuel UI In-Reply-To: References: Message-ID: I am using community version 6. 
Basically, I am trying to create an ISO file after the code change, so I want to understand the complete way to add a new node and class to Fuel. On 17-Dec-2014 1:31 am, "Andrey Danin" wrote: > Hello. > > What version of Fuel do you use? Did you reupload openstack.yaml into > Nailgun? Do you want just to deploy an operating system and configure a > network on a new node? > > I would really appreciate if you use a period at the end of sentences. > > On Tuesday, December 16, 2014, Satyasanjibani Rautaray < > engg.sanj at gmail.com> wrote: > >> Hi, >> >> *i am in a process of creating an additional node by editing the code >> where the new node will be solving a different propose than installing >> openstack components just for testing currently the new node will install >> vim for me please help me what else i need to look into to create the >> complete setup and deploy with fuel i have edited openstack.yaml at >> /root/fuel-web/nailgun/nailgun/fixtures http://pastebin.com/P1MmDBzP >> * >> -- >> Thanks >> Satya >> Mob:9844101001 >> >> No one is the best by birth, Its his brain/ knowledge which make him the >> best. >> > > > -- > Andrey Danin > adanin at mirantis.com > skype: gcon.monolake > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From engg.sanj at gmail.com Tue Dec 16 20:43:05 2014 From: engg.sanj at gmail.com (Satyasanjibani Rautaray) Date: Wed, 17 Dec 2014 02:13:05 +0530 Subject: [openstack-dev] [Fuel] Adding code to add node to fuel UI In-Reply-To: References: Message-ID: I just need to deploy the node and install my required packages. On 17-Dec-2014 1:31 am, "Andrey Danin" wrote: > Hello. > > What version of Fuel do you use? Did you reupload openstack.yaml into > Nailgun? 
Do you want just to deploy an operating system and configure a > network on a new node? > > I would really appreciate if you use a period at the end of sentences. > > On Tuesday, December 16, 2014, Satyasanjibani Rautaray < > engg.sanj at gmail.com> wrote: > >> Hi, >> >> *i am in a process of creating an additional node by editing the code >> where the new node will be solving a different propose than installing >> openstack components just for testing currently the new node will install >> vim for me please help me what else i need to look into to create the >> complete setup and deploy with fuel i have edited openstack.yaml at >> /root/fuel-web/nailgun/nailgun/fixtures http://pastebin.com/P1MmDBzP >> * >> -- >> Thanks >> Satya >> Mob:9844101001 >> >> No one is the best by birth, Its his brain/ knowledge which make him the >> best. >> > > > -- > Andrey Danin > adanin at mirantis.com > skype: gcon.monolake > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.laski at rackspace.com Tue Dec 16 20:43:07 2014 From: andrew.laski at rackspace.com (Andrew Laski) Date: Tue, 16 Dec 2014 15:43:07 -0500 Subject: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken In-Reply-To: <2709D857-FAB3-4C2D-A7FD-941501A15224@gmail.com> References: <2709D857-FAB3-4C2D-A7FD-941501A15224@gmail.com> Message-ID: <5490995B.7000305@rackspace.com> On 12/12/2014 08:54 PM, melanie witt wrote: > Hi everybody, > > At some point, our db archiving functionality got broken because there was a change to stop ever deleting instance system metadata [1]. For those unfamiliar, the 'nova-manage db archive_deleted_rows' is the thing that moves all soft-deleted (deleted=nonzero) rows to the shadow tables. 
This is a periodic cleaning that operators can do to maintain performance (as things can get sluggish when deleted=nonzero rows accumulate). > > The change was made because instance_type data still needed to be read even after instances had been deleted, because we allow admin to view deleted instances. I saw a bug [2] and two patches [3][4] which aimed to fix this by changing back to soft-deleting instance sysmeta when instances are deleted, and instead allow read_deleted="yes" for the things that need to read instance_type for deleted instances present in the db. > > My question is, is this approach okay? If so, I'd like to see these patches revive so we can have our db archiving working again. :) I think there's likely something I'm missing about the approach, so I'm hoping people who know more about instance sysmeta than I do, can chime in on how/if we can fix this for db archiving. Thanks. I looked briefly into tackling this as well a while back. The tricky piece that I hit is what system_metadata should be available when read_deleted='yes'. Is it okay for it to be all deleted system_metadata or should it only be the system_metadata that was deleted at the same time as the instance? I didn't get to dig in enough to answer that. Also there are periodic tasks that query for deleted instances so those might need to pull system_metadata in addition to the API. > > [1] https://bugs.launchpad.net/nova/+bug/1185190 > [2] https://bugs.launchpad.net/nova/+bug/1226049 > [3] https://review.openstack.org/#/c/110875/ > [4] https://review.openstack.org/#/c/109201/ > > melanie (melwitt) > > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedem at linux.vnet.ibm.com Tue Dec 16 20:48:47 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Tue, 16 Dec 2014 14:48:47 -0600 Subject: [openstack-dev] [nova][cinder][infra] Ceph CI status update In-Reply-To: <20141211163643.GA10911@helmut> References: <20141211163643.GA10911@helmut> Message-ID: <54909AAF.5050505@linux.vnet.ibm.com> On 12/11/2014 10:36 AM, Jon Bernard wrote: > Heya, quick Ceph CI status update. Once the test_volume_boot_pattern > was marked as skipped, only the revert_resize test was failing. I have > submitted a patch to nova for this [1], and that yields an all green > ceph ci run [2]. So at the moment, and with my revert patch, we're in > good shape. > > I will fix up that patch today so that it can be properly reviewed and > hopefully merged. From there I'll submit a patch to infra to move the > job to the check queue as non-voting, and we can go from there. > > [1] https://review.openstack.org/#/c/139693/ > [2] http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html > > Cheers, > Jon, Thanks, this is something I'm supposed to be tracking actually given the Kilo priorities for the project, it's nice to know that someone is already fixing this stuff. :) I've reviewed https://review.openstack.org/#/c/139693/ so it's close, just needs a small fix. -- Thanks, Matt Riedemann From jaypipes at gmail.com Tue Dec 16 21:21:17 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 16 Dec 2014 16:21:17 -0500 Subject: [openstack-dev] [api] Counting resources In-Reply-To: References: <4C649DDC-99D0-4842-80B6-668CF3528E93@rackspace.com> <4D27423EB14FEE4E9E5F88A15764C51A9DBE1938@ORD1EXD02.RACKSPACE.CORP> Message-ID: <5490A24D.2030002@gmail.com> Thanks, Steven, much appreciated! 
:) On 12/16/2014 01:26 PM, Steven Kaufer wrote: > This is a follow up to this thread from a few weeks ago: > https://www.mail-archive.com/openstack-dev at lists.openstack.org/msg40287.html > > I've updated the nova spec in this area to include the total server > count in the server_links based on the existence of an "include_count" > query parameter (eg: GET /servers?include_count=1). The spec no longer > references a GET /servers/count API. > > Nova spec: https://review.openstack.org/#/c/134279/ > > Thanks, > Steven Kaufer > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From chris.a.st.pierre at gmail.com Tue Dec 16 21:23:52 2014 From: chris.a.st.pierre at gmail.com (Chris St. Pierre) Date: Tue, 16 Dec 2014 15:23:52 -0600 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: <5490967C.8020801@gmail.com> References: <5490967C.8020801@gmail.com> Message-ID: The goal here is protection against deletion of in-use images, not a workaround that can be executed by an admin. For instance, someone without admin still can't do that, and someone with a fat finger can still delete images in use. "Don't lose your data" is a fine workaround for taking backups, but most of us take backups anyway. Same deal. On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes wrote: > Just set the images to is_public=False as an admin and they'll disappear > from everyone except the admin. > > -jay > > > On 12/16/2014 03:09 PM, Chris St. Pierre wrote: > >> Currently, with delay_delete enabled, the Glance scrubber happily >> deletes whatever images you ask it to. That includes images that are >> currently in use by Nova guests, which can really hose things. 
It'd be >> nice to have an option to tell the scrubber to skip deletion of images >> that are currently in use, which is fairly trivial to check for and >> provides a nice measure of protection. >> >> Without delay_delete enabled, checking for images in use likely takes >> too much time, so this would be limited to just images that are scrubbed >> with delay_delete. >> >> I wanted to bring this up here before I go to the trouble of writing a >> spec for it, particularly since it doesn't appear that glance currently >> talks to Nova as a client at all. Is this something that folks would be >> interested in having? Thanks! >> >> -- >> Chris St. Pierre >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Chris St. Pierre -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Dec 16 21:33:02 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 16 Dec 2014 16:33:02 -0500 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: References: <5490967C.8020801@gmail.com> Message-ID: <5490A50E.4000605@gmail.com> On 12/16/2014 04:23 PM, Chris St. Pierre wrote: > The goal here is protection against deletion of in-use images, not a > workaround that can be executed by an admin. For instance, someone > without admin still can't do that, and someone with a fat finger can > still delete images in use. Then set the protected property on the image, which prevents it from being deleted. From the glance CLI image-update help output: --is-protected [True|False] Prevent image from being deleted. 
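[For readers of the archive: using the flag from that help output looks roughly like the following. The image ID is a placeholder, and the exact error text varies by release.]

```console
# Mark an existing image as protected (IMAGE_ID is a placeholder):
$ glance image-update --is-protected True <IMAGE_ID>

# A subsequent delete is refused while the flag is set:
$ glance image-delete <IMAGE_ID>

# Clear the flag to make the image deletable again:
$ glance image-update --is-protected False <IMAGE_ID>
```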
> "Don't lose your data" is a fine workaround for taking backups, but most > of us take backups anyway. Same deal. > > On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes > wrote: > > Just set the images to is_public=False as an admin and they'll > disappear from everyone except the admin. > > -jay > > > On 12/16/2014 03:09 PM, Chris St. Pierre wrote: > > Currently, with delay_delete enabled, the Glance scrubber happily > deletes whatever images you ask it to. That includes images that are > currently in use by Nova guests, which can really hose things. > It'd be > nice to have an option to tell the scrubber to skip deletion of > images > that are currently in use, which is fairly trivial to check for and > provides a nice measure of protection. > > Without delay_delete enabled, checking for images in use likely > takes > too much time, so this would be limited to just images that are > scrubbed > with delay_delete. > > I wanted to bring this up here before I go to the trouble of > writing a > spec for it, particularly since it doesn't appear that glance > currently > talks to Nova as a client at all. Is this something that folks > would be > interested in having? Thanks! > > -- > Chris St. Pierre > > > _________________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.__org > > http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev > > > > _________________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.__org > > http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev > > > > > -- > Chris St. 
Pierre > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From nikhil.komawar at RACKSPACE.COM Tue Dec 16 21:35:15 2014 From: nikhil.komawar at RACKSPACE.COM (Nikhil Komawar) Date: Tue, 16 Dec 2014 21:35:15 +0000 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: <5490A50E.4000605@gmail.com> References: <5490967C.8020801@gmail.com> , <5490A50E.4000605@gmail.com> Message-ID: <0FBF5631AB7B504D89C7E6929695B6249302EBD4@ORD1EXD02.RACKSPACE.CORP> +1 Thanks, -Nikhil ________________________________________ From: Jay Pipes [jaypipes at gmail.com] Sent: Tuesday, December 16, 2014 4:33 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use? On 12/16/2014 04:23 PM, Chris St. Pierre wrote: > The goal here is protection against deletion of in-use images, not a > workaround that can be executed by an admin. For instance, someone > without admin still can't do that, and someone with a fat finger can > still delete images in use. Then set the protected property on the image, which prevents it from being deleted. From the glance CLI image-update help output: --is-protected [True|False] Prevent image from being deleted. > "Don't lose your data" is a fine workaround for taking backups, but most > of us take backups anyway. Same deal. > > On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes > wrote: > > Just set the images to is_public=False as an admin and they'll > disappear from everyone except the admin. > > -jay > > > On 12/16/2014 03:09 PM, Chris St. Pierre wrote: > > Currently, with delay_delete enabled, the Glance scrubber happily > deletes whatever images you ask it to. That includes images that are > currently in use by Nova guests, which can really hose things. 
> It'd be > nice to have an option to tell the scrubber to skip deletion of > images > that are currently in use, which is fairly trivial to check for and > provides a nice measure of protection. > > Without delay_delete enabled, checking for images in use likely > takes > too much time, so this would be limited to just images that are > scrubbed > with delay_delete. > > I wanted to bring this up here before I go to the trouble of > writing a > spec for it, particularly since it doesn't appear that glance > currently > talks to Nova as a client at all. Is this something that folks > would be > interested in having? Thanks! > > -- > Chris St. Pierre > > > _________________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.__org > > http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev > > > > _________________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.__org > > http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev > > > > > -- > Chris St. Pierre > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From feilong at catalyst.net.nz Tue Dec 16 21:36:24 2014 From: feilong at catalyst.net.nz (Fei Long Wang) Date: Wed, 17 Dec 2014 10:36:24 +1300 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: References: <5490967C.8020801@gmail.com> Message-ID: <5490A5D8.2090804@catalyst.net.nz> Hi Chris, Are you looking for the 'protected' attribute? You can mark an image with 'protected'=True, then the image can't be deleted by accidentally. On 17/12/14 10:23, Chris St. 
Pierre wrote: > The goal here is protection against deletion of in-use images, not a > workaround that can be executed by an admin. For instance, someone > without admin still can't do that, and someone with a fat finger can > still delete images in use. > > "Don't lose your data" is a fine workaround for taking backups, but > most of us take backups anyway. Same deal. > > On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes > wrote: > > Just set the images to is_public=False as an admin and they'll > disappear from everyone except the admin. > > -jay > > > On 12/16/2014 03:09 PM, Chris St. Pierre wrote: > > Currently, with delay_delete enabled, the Glance scrubber happily > deletes whatever images you ask it to. That includes images > that are > currently in use by Nova guests, which can really hose things. > It'd be > nice to have an option to tell the scrubber to skip deletion > of images > that are currently in use, which is fairly trivial to check > for and > provides a nice measure of protection. > > Without delay_delete enabled, checking for images in use > likely takes > too much time, so this would be limited to just images that > are scrubbed > with delay_delete. > > I wanted to bring this up here before I go to the trouble of > writing a > spec for it, particularly since it doesn't appear that glance > currently > talks to Nova as a client at all. Is this something that folks > would be > interested in having? Thanks! > > -- > Chris St. Pierre > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Chris St. 
Pierre > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers & Best regards, Fei Long Wang (???) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsbryant at electronicjungle.net Tue Dec 16 22:01:57 2014 From: jsbryant at electronicjungle.net (Jay S. Bryant) Date: Tue, 16 Dec 2014 16:01:57 -0600 Subject: [openstack-dev] [cinder] Anyone knows whether there is freezing day of spec approval? In-Reply-To: References: Message-ID: <5490ABD5.207@electronicjungle.net> Dave, My apologies. We have not yet set a day that we are freezing BP/Spec approval for Cinder. We had a deadline in November for new drivers being proposed but haven't frozen other proposals yet. I mixed things up with Nova's 12/18 cutoff. Not sure when we will be cutting off BPs for Cinder. The goal is to spend as much of K-2 and K-3 on Cinder clean-up. So, I wouldn't let anything you want considered linger too long. Thanks, Jay On 12/15/2014 09:16 PM, Chen, Wei D wrote: > Hi, > > I know nova has such day around Dec. 18, is there a similar day in Cinder project? thanks! > > Best Regards, > Dave Chen > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Tue Dec 16 22:07:04 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 16 Dec 2014 17:07:04 -0500 Subject: [openstack-dev] oslo.db 1.2.1 release coming to fix stable/juno In-Reply-To: <31C68845-C96B-4744-B106-11B1CDDCED36@doughellmann.com> References: <27A76931-16EF-400B-8EDD-6C3E18F52053@doughellmann.com> <193BFB76-D89B-4B3A-94F8-DB6FEFEC5138@doughellmann.com> <31C68845-C96B-4744-B106-11B1CDDCED36@doughellmann.com> Message-ID: On Dec 16, 2014, at 8:15 AM, Doug Hellmann wrote: > > On Dec 15, 2014, at 5:58 PM, Doug Hellmann wrote: > >> >> On Dec 15, 2014, at 3:21 PM, Doug Hellmann wrote: >> >>> The issue with stable/juno jobs failing because of the difference in the SQLAlchemy requirements between the older applications and the newer oslo.db is being addressed with a new release of the 1.2.x series. We will then cap the requirements for stable/juno to 1.2.1. We decided we did not need to raise the minimum version of oslo.db allowed in kilo, because the old versions of the library do work, if they are installed from packages and not through setuptools. >>> >>> Jeremy created a feature/1.2 branch for us, and I have 2 patches up [1][2] to apply the requirements fix. The change to the oslo.db version in stable/juno is [3]. >>> >>> After the changes in oslo.db merge, I will tag 1.2.1. >> >> After spending several hours exploring a bunch of options to make this actually work, some of which require making changes to test job definitions, grenade, or other long-term changes, I?m proposing a new approach: >> >> 1. Undo the change in master that broke the compatibility with versions of SQLAlchemy by making master match juno: https://review.openstack.org/141927 >> 2. Update oslo.db after ^^ lands. >> 3. Tag oslo.db 1.4.0 with a set of requirements compatible with Juno. >> 4. Change the requirements in stable/juno to skip oslo.db 1.1, 1.2, and 1.3. 
>> >> I?ll proceed with that plan tomorrow morning (~15 hours from now) unless someone points out why that won?t work in the mean time. > > I just reset a few approved patches that were not going to land because of this issue to kick them out of the gate to expedite landing part of the fix. I did this by modifying their commit messages. I tried to limit the changes to simple cosmetic tweaks, so if you see a weird change to one of your patches that?s probably why. That solution evolved into a third approach, which has taken most of the day to land. We now have an oslo.db 1.0.3 with SQLAlchemy requirements that work with setuptools 8. stable/juno is currently capped to oslo.db>1.0.0,<1.3 but another change to move the cap down to <1.1 is in the queue right now [1]. This is a lower cap than the last tests we were running, but it has the benefit of providing a version of oslo.db that does not introduce any other requirements changes in stable/juno as 1.2.1 would have. More details are available in the etherpad we used for notes today [2], and of course please post here if you have questions. 
Doug [1] https://review.openstack.org/#/c/142180/2 [2] https://etherpad.openstack.org/p/cloL2FzTRd > >> >> Doug >> >>> >>> Doug >>> >>> [1] https://review.openstack.org/#/c/141893/ >>> [2] https://review.openstack.org/#/c/141894/ >>> [3] https://review.openstack.org/#/c/141896/ >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From chris.a.st.pierre at gmail.com Tue Dec 16 22:12:31 2014 From: chris.a.st.pierre at gmail.com (Chris St. Pierre) Date: Tue, 16 Dec 2014 16:12:31 -0600 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: <5490A5D8.2090804@catalyst.net.nz> References: <5490967C.8020801@gmail.com> <5490A5D8.2090804@catalyst.net.nz> Message-ID: No, I'm looking to prevent images that are in use from being deleted. "In use" and "protected" are disjoint sets. On Tue, Dec 16, 2014 at 3:36 PM, Fei Long Wang wrote: > Hi Chris, > > Are you looking for the 'protected' attribute? You can mark an image with > 'protected'=True, then the image can't be deleted by accidentally. > > On 17/12/14 10:23, Chris St. Pierre wrote: > > The goal here is protection against deletion of in-use images, not a > workaround that can be executed by an admin. For instance, someone without > admin still can't do that, and someone with a fat finger can still delete > images in use. > > "Don't lose your data" is a fine workaround for taking backups, but most > of us take backups anyway. Same deal. 
> > On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes wrote: > >> Just set the images to is_public=False as an admin and they'll disappear >> from everyone except the admin. >> >> -jay >> >> >> On 12/16/2014 03:09 PM, Chris St. Pierre wrote: >> >>> Currently, with delay_delete enabled, the Glance scrubber happily >>> deletes whatever images you ask it to. That includes images that are >>> currently in use by Nova guests, which can really hose things. It'd be >>> nice to have an option to tell the scrubber to skip deletion of images >>> that are currently in use, which is fairly trivial to check for and >>> provides a nice measure of protection. >>> >>> Without delay_delete enabled, checking for images in use likely takes >>> too much time, so this would be limited to just images that are scrubbed >>> with delay_delete. >>> >>> I wanted to bring this up here before I go to the trouble of writing a >>> spec for it, particularly since it doesn't appear that glance currently >>> talks to Nova as a client at all. Is this something that folks would be >>> interested in having? Thanks! >>> >>> -- >>> Chris St. Pierre >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Chris St. Pierre > > > _______________________________________________ > OpenStack-dev mailing listOpenStack-dev at lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- > Cheers & Best regards, > Fei Long Wang (???) 
> -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Chris St. Pierre -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedem at linux.vnet.ibm.com Tue Dec 16 22:18:45 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Tue, 16 Dec 2014 16:18:45 -0600 Subject: [openstack-dev] [nova] - Revert change of default ephemeral fs to ext4 In-Reply-To: <1388453333-sup-5447@clint-HP> References: <52C0C0DB.1080708@draigBrady.com> <52C16E45.1050601@draigBrady.com> <1388453333-sup-5447@clint-HP> Message-ID: <5490AFC5.1030109@linux.vnet.ibm.com> On 12/30/2013 7:30 PM, Clint Byrum wrote: > Excerpts from Day, Phil's message of 2013-12-30 11:05:17 -0800: >> Hi, so it seems we were saying the same thing - new vms get a shared "blank" (empty) file system, not blank disc. How big a problem it is that in many cases this will be the already created ext3 disk and not ext4 depends I guess on how important consistency is to you (to me its pretty important). Either way the change as it stands wont give all new vms an ext4 fs as intended, so its flawed in that regard. >> >> Like you I was thinking that we may have to move away from "default" being in the file name to fix this. >> > > Indeed, "default"'s meaning is mutable and thus it is flawed as a > cache key. 
> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > jogo brought this bug up in IRC today [1]. The bug report says that we should put in the Icehouse release notes that ext3 is going to be changed to ext4 in Juno but that never happened. So question is, now that we're well into Kilo, what can be done about this now? The thread here talks about doing more than just changing the default value like in the original change [2], but is someone willing to work on that? [1] https://bugs.launchpad.net/nova/+bug/1266262 [2] https://review.openstack.org/#/c/63209/ -- Thanks, Matt Riedemann From davanum at gmail.com Tue Dec 16 22:32:16 2014 From: davanum at gmail.com (Davanum Srinivas) Date: Tue, 16 Dec 2014 17:32:16 -0500 Subject: [openstack-dev] [nova] - Revert change of default ephemeral fs to ext4 In-Reply-To: <5490AFC5.1030109@linux.vnet.ibm.com> References: <52C0C0DB.1080708@draigBrady.com> <52C16E45.1050601@draigBrady.com> <1388453333-sup-5447@clint-HP> <5490AFC5.1030109@linux.vnet.ibm.com> Message-ID: Matt, i'll take a stab at it thanks, dims On Tue, Dec 16, 2014 at 5:18 PM, Matt Riedemann wrote: > > > On 12/30/2013 7:30 PM, Clint Byrum wrote: >> >> Excerpts from Day, Phil's message of 2013-12-30 11:05:17 -0800: >>> >>> Hi, so it seems we were saying the same thing - new vms get a shared >>> "blank" (empty) file system, not blank disc. How big a problem it is that >>> in many cases this will be the already created ext3 disk and not ext4 >>> depends I guess on how important consistency is to you (to me its pretty >>> important). Either way the change as it stands wont give all new vms an >>> ext4 fs as intended, so its flawed in that regard. >>> >>> Like you I was thinking that we may have to move away from "default" >>> being in the file name to fix this. 
>>> >> >> Indeed, "default"'s meaning is mutable and thus it is flawed as a >> cache key. >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > jogo brought this bug up in IRC today [1]. The bug report says that we > should put in the Icehouse release notes that ext3 is going to be changed to > ext4 in Juno but that never happened. So question is, now that we're well > into Kilo, what can be done about this now? The thread here talks about > doing more than just changing the default value like in the original change > [2], but is someone willing to work on that? > > [1] https://bugs.launchpad.net/nova/+bug/1266262 > [2] https://review.openstack.org/#/c/63209/ > > -- > > Thanks, > > Matt Riedemann > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From nikhil.komawar at RACKSPACE.COM Tue Dec 16 22:56:22 2014 From: nikhil.komawar at RACKSPACE.COM (Nikhil Komawar) Date: Tue, 16 Dec 2014 22:56:22 +0000 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: References: <5490967C.8020801@gmail.com> <5490A5D8.2090804@catalyst.net.nz>, Message-ID: <0FBF5631AB7B504D89C7E6929695B6249302EC45@ORD1EXD02.RACKSPACE.CORP> Hi Chris, Apologies for not having heard your use case completely. From the description as well as the information you've provided; it is my recommendation to use a protected property in Glance for the Image entity that is in use. You can then use it in the service of your choice (Nova, Cinder) for not deleting the same. It is that service which shall have more accurate information as well as be source of truth for the in-use state of the Image entity. 
Making a call out to different service (except backend stores) is out of the scope of Glance. (Nova is the client of Glance and we would like to avoid the circular dependency mess there!) Hope it helps. Please let me know if you need more information. Thanks and Regards, -Nikhil ________________________________ From: Chris St. Pierre [chris.a.st.pierre at gmail.com] Sent: Tuesday, December 16, 2014 5:12 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use? No, I'm looking to prevent images that are in use from being deleted. "In use" and "protected" are disjoint sets. On Tue, Dec 16, 2014 at 3:36 PM, Fei Long Wang > wrote: Hi Chris, Are you looking for the 'protected' attribute? You can mark an image with 'protected'=True, then the image can't be deleted by accidentally. On 17/12/14 10:23, Chris St. Pierre wrote: The goal here is protection against deletion of in-use images, not a workaround that can be executed by an admin. For instance, someone without admin still can't do that, and someone with a fat finger can still delete images in use. "Don't lose your data" is a fine workaround for taking backups, but most of us take backups anyway. Same deal. On Tue, Dec 16, 2014 at 2:30 PM, Jay Pipes > wrote: Just set the images to is_public=False as an admin and they'll disappear from everyone except the admin. -jay On 12/16/2014 03:09 PM, Chris St. Pierre wrote: Currently, with delay_delete enabled, the Glance scrubber happily deletes whatever images you ask it to. That includes images that are currently in use by Nova guests, which can really hose things. It'd be nice to have an option to tell the scrubber to skip deletion of images that are currently in use, which is fairly trivial to check for and provides a nice measure of protection. 
Without delay_delete enabled, checking for images in use likely takes too much time, so this would be limited to just images that are scrubbed with delay_delete. I wanted to bring this up here before I go to the trouble of writing a spec for it, particularly since it doesn't appear that glance currently talks to Nova as a client at all. Is this something that folks would be interested in having? Thanks! -- Chris St. Pierre _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Chris St. Pierre _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers & Best regards, Fei Long Wang (???) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Chris St. Pierre -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sean_Collins2 at cable.comcast.com Tue Dec 16 23:19:51 2014 From: Sean_Collins2 at cable.comcast.com (Collins, Sean) Date: Tue, 16 Dec 2014 23:19:51 +0000 Subject: [openstack-dev] [glance] Option to skip deleting images in use? 
In-Reply-To: References: <5490967C.8020801@gmail.com> <5490A5D8.2090804@catalyst.net.nz> Message-ID: <20141216231940.GA61726@HQSML-1081034.cable.comcast.com> On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote: > No, I'm looking to prevent images that are in use from being deleted. "In > use" and "protected" are disjoint sets. I have seen multiple cases of images (and snapshots) being deleted while still in use in Nova, which leads to some very, shall we say, interesting bugs and support problems. I do think that we should try and determine a way forward on this, they are indeed disjoint sets. Setting an image as protected is a proactive measure, we should try and figure out a way to keep tenants from shooting themselves in the foot if possible. -- Sean M. Collins From openstack at nemebean.com Tue Dec 16 23:22:43 2014 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 16 Dec 2014 17:22:43 -0600 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <54870FCC.3010006@dague.net> References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> Message-ID: <5490BEC3.40805@nemebean.com> Some thoughts inline. I'll go ahead and push a change to remove the things everyone seems to agree on. On 12/09/2014 09:05 AM, Sean Dague wrote: > On 12/09/2014 09:11 AM, Doug Hellmann wrote: >> >> On Dec 9, 2014, at 6:39 AM, Sean Dague wrote: >> >>> I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. >>> >>> 1 - the entire H8* group. This doesn't function on python code, it >>> functions on git commit message, which makes it tough to run locally. It >>> also would be a reason to prevent us from not rerunning tests on commit >>> message changes (something we could do after the next gerrit update). 
>>> >>> 2 - the entire H3* group - because of this - >>> https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm >>> >>> A look at the H3* code shows that it's terribly complicated, and is >>> often full of bugs (a few bit us last week). I'd rather just delete it >>> and move on. >> >> I don?t have the hacking rules memorized. Could you describe them briefly? > > Sure, the H8* group is git commit messages. It's checking for line > length in the commit message. > > - [H802] First, provide a brief summary of 50 characters or less. Summaries > of greater then 72 characters will be rejected by the gate. > > - [H801] The first line of the commit message should provide an accurate > description of the change, not just a reference to a bug or > blueprint. > > > H802 is mechanically enforced (though not the 50 characters part, so the > code isn't the same as the rule). > > H801 is enforced by a regex that looks to see if the first line is a > launchpad bug and fails on it. You can't mechanically enforce that > english provides an accurate description. +1. It would be nice to provide automatic notification to people if they submit something with an absurdly long commit message, but I agree that hacking isn't the place to do that. > > > H3* are all the module import rules: > > Imports > ------- > - [H302] Do not import objects, only modules (*) > - [H301] Do not import more than one module per line (*) > - [H303] Do not use wildcard ``*`` import (*) > - [H304] Do not make relative imports > - Order your imports by the full module path > - [H305 H306 H307] Organize your imports according to the `Import order > template`_ and `Real-world Import Order Examples`_ below. > > I think these remain reasonable guidelines, but H302 is exceptionally > tricky to get right, and we keep not getting it right. > > H305-307 are actually impossible to get right. Things come in and out of > stdlib in python all the time. 
tl;dr: I'd like to remove H302, H305, and H307 and leave the rest. Reasons below. +1 to H305 and H307. I'm going to have to admit defeat and accept that I can't make them work in a sane fashion. H306 is different though - that one is only checking alphabetical order and only works on the text of the import so it doesn't have the issues around having modules installed or mis-categorizing. AFAIK it has never actually caused any problems either (the H306 failure in https://review.openstack.org/#/c/140168/2/nova/tests/unit/test_fixtures.py is correct - nova.tests.fixtures should come before nova.tests.unit.conf_fixture). As far as 301-304, only 302 actually depends on the is_module stuff. The others are all text-based too so I think we should leave them. H302 I'm kind of indifferent on - we hit an edge case with the oslo namespace thing which is now fixed, but if removing that allows us to not install requirements.txt to run pep8 I think I'm onboard with removing it too. > > > I think it's time to just decide to be reasonable Humans and that these > are guidelines. > > The H3* set of rules is also why you have to install *all* of > requirements.txt and test-requirements.txt in your pep8 tox target, > because H302 actually inspects the sys.modules to attempt to figure out > if things are correct. > > -Sean > >> >> Doug 
> >> >>> -Sean >>> >>> -- >>> Sean Dague >>> http://dague.net >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > From vishvananda at gmail.com Wed Dec 17 00:11:33 2014 From: vishvananda at gmail.com (Vishvananda Ishaya) Date: Tue, 16 Dec 2014 16:11:33 -0800 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: <20141216231940.GA61726@HQSML-1081034.cable.comcast.com> References: <5490967C.8020801@gmail.com> <5490A5D8.2090804@catalyst.net.nz> <20141216231940.GA61726@HQSML-1081034.cable.comcast.com> Message-ID: A simple solution that wouldn't require modification of glance would be a cron job that lists images and snapshots and marks them protected while they are in use. Vish On Dec 16, 2014, at 3:19 PM, Collins, Sean wrote: > On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote: >> No, I'm looking to prevent images that are in use from being deleted. "In >> use" and "protected" are disjoint sets. > > I have seen multiple cases of images (and snapshots) being deleted while > still in use in Nova, which leads to some very, shall we say, > interesting bugs and support problems. > > I do think that we should try and determine a way forward on this, they > are indeed disjoint sets. Setting an image as protected is a proactive > measure, we should try and figure out a way to keep tenants from > shooting themselves in the foot if possible. > > -- > Sean M. 
Collins > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sbalukoff at bluebox.net Wed Dec 17 00:54:46 2014 From: sbalukoff at bluebox.net (Stephen Balukoff) Date: Tue, 16 Dec 2014 16:54:46 -0800 Subject: [openstack-dev] [Octavia] Meetings canceled until Jan 7 Message-ID: Since we're in the middle of the Octavia hack-a-thon (and have been meeting in person and online all week), it doesn't make sense for us to have an Octavia meeting next week. I also suggested holding Octavia meetings on Christmas Eve and New Year's Eve (when the following two meetings would be held), but I was assured by the usual participants in these meetings that I would probably be the only one attending them. As such, the next Octavia meeting we'll be holding will happen on January 7th. In the meantime, let's bang out some code and get it reviewed! Thanks, Stephen -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lakshmi at lakshmikannan.me Wed Dec 17 01:28:00 2014 From: lakshmi at lakshmikannan.me (Lakshmi Kannan) Date: Tue, 16 Dec 2014 17:28:00 -0800 Subject: [openstack-dev] [Mistral] RFC - Action spec CLI Message-ID: Apologies for the long email. If this fancy email doesn't render correctly for you, please read it here: https://gist.github.com/lakshmi-kannan/cf953f66a397b153254a I was looking into fixing bug: https://bugs.launchpad.net/mistral/+bug/1401039. My idea was to use shlex to parse the string. This actually would work for anything that is supplied in the Linux shell syntax. Problem is this craps out when we want to support complex data structures such as arrays and dicts as arguments. I did not think we supported a syntax to take in complex data structures in a one-line format. 
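(To make the failure mode concrete — this is just an illustrative sketch against Python's stdlib shlex, not Mistral code — the list literal gets chopped into flat whitespace-separated tokens:)

```python
import shlex

# Hypothetical one-line action input of the kind discussed here.
line = 'wf2 is_true=true object_list=[1, null, "str"]'

tokens = shlex.split(line)
print(tokens)
# -> ['wf2', 'is_true=true', 'object_list=[1,', 'null,', 'str]']
# shlex honors shell quoting rules (the quotes around "str" are
# consumed) but knows nothing about brackets, so the JSON-ish list
# comes back as three broken string tokens.
```

So every token is a plain string and the array boundary is lost before any type conversion could even start.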
Consider for example:

task7:
  for-each:
    vm_info: $.vms
  workflow: wf2 is_true=true object_list=[1, null, "str"]
  on-complete:
    - task9
    - task10

Specifically: wf2 is_true=true object_list=[1, null, "str"]

shlex will not handle this correctly because object_list is an array. Same problem with dict. There are 4 potential options here:

Option 1
1) Provide a spec for specifying lists and dicts like so: list_arg=1,2,3 and dict_arg=h1:h2,h3:h4,h5:h6. shlex will handle this fine, but there needs to be code that converts the argument values to appropriate data types based on a schema. (ActionSpec should have a parameter schema, probably in jsonschema.) This is doable. wf2 is_true=true object_list="1,null,"str""

Option 2
2) Allow JSON strings to be used as arguments so we can json.loads them (if that fails, use them as simple strings). For example, with this approach, the line becomes wf2 is_true=true object_list="[1, null, "str"]" This would pretty much resemble http://stackoverflow.com/questions/7625786/type-dict-in-argparse-add-argument

Option 3
3) Keep the spec as such and try to parse it. I have no idea how we can do this reliably. We need a more rigorous lexer. This syntax doesn't translate well when we want to build a CLI. Linux shells cannot support this syntax natively. This means people would have to use shlex syntax and a translation would need to happen in the CLI layer. This will lead to inconsistency: the CLI would use one syntax and the action input line in the workflow definition another. We should try and avoid this.

Option 4
4) Completely drop support for this fancy one-line syntax in workflow. This is probably the least desired option.

My preference
Looking at the options, I like option 2/option 1/option 4/option 3 in order of preference. With some documentation, we can tell people why this is hard. People will also grok it because they are already familiar with CLI limitations in Linux.

Thoughts? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ramy.asselin at hp.com Wed Dec 17 01:37:36 2014 From: ramy.asselin at hp.com (Asselin, Ramy) Date: Wed, 17 Dec 2014 01:37:36 +0000 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: References: <5486D947.4090209@hp.com> Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> Manually running the script requires a few environment settings. Take a look at the README here: https://github.com/openstack-infra/devstack-gate Regarding cinder, I'm using this repo to run our cinder jobs (fork from jaypipes). https://github.com/rasselin/os-ext-testing Note that this solution doesn't use the Jenkins gerrit trigger plugin, but zuul. There's a sample job for cinder here. It's in Jenkins Job Builder format. https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample You can ask more questions in IRC freenode #openstack-cinder. (irc# asselin) Ramy From: Eduard Matei [mailto:eduard.matei at cloudfounders.com] Sent: Tuesday, December 16, 2014 12:41 AM To: Bailey, Darragh Cc: OpenStack Development Mailing List (not for usage questions); OpenStack Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi, Can someone point me to some working documentation on how to set up third-party CI? (joinfu's instructions don't seem to work, and manually running devstack-gate scripts fails: Running gate_hook Job timeout set to: 163 minutes timeout: failed to run command '/opt/stack/new/devstack-gate/devstack-vm-gate.sh': No such file or directory ERROR: the main setup script run by this job failed - exit code: 127 please look at the relevant log files to determine the root cause Cleaning up host ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz) Build step 'Execute shell' marked build as failure. 
I have a working Jenkins slave with devstack and our internal libraries, i have Gerrit Trigger Plugin working and triggering on patches created, i just need the actual job contents so that it can get to comment with the test results. Thanks, Eduard On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei > wrote: Hi Darragh, thanks for your input I double checked the job settings and fixed it: - build triggers is set to Gerrit event - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin and tested separately) - Trigger on: Patchset Created - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: Type: Path, Pattern: ** (was Type Plain on both) Now the job is triggered by commit on openstack-dev/sandbox :) Regarding the Query and Trigger Gerrit Patches, i found my patch using query: status:open project:openstack-dev/sandbox change:139585 and i can trigger it manually and it executes the job. But i still have the problem: what should the job do? It doesn't actually do anything, it doesn't run tests or comment on the patch. Do you have an example of job? Thanks, Eduard On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh > wrote: Hi Eduard, I would check the trigger settings in the job, particularly which "type" of pattern matching is being used for the branches. Found it tends to be the spot that catches most people out when configuring jobs with the Gerrit Trigger plugin. If you're looking to trigger against all branches then you would want "Type: Path" and "Pattern: **" appearing in the UI. If you have sufficient access using the 'Query and Trigger Gerrit Patches' page accessible from the main view will make it easier to confirm that your Jenkins instance can actually see changes in gerrit for the given project (which should mean that it can see the corresponding events as well). Can also use the same page to re-trigger for PatchsetCreated events to see if you've set the patterns on the job correctly. 
Regards, Darragh Bailey "Nothing is foolproof to a sufficiently talented fool" - Unknown On 08/12/14 14:33, Eduard Matei wrote: > Resending this to dev ML as it seems i get quicker response :) > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > Patchset Created", chose as server the configured Gerrit server that > was previously tested, then added the project openstack-dev/sandbox > and saved. > I made a change on dev sandbox repo but couldn't trigger my job. > > Any ideas? > > Thanks, > Eduard > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > >> wrote: > > Hello everyone, > > Thanks to the latest changes to the creation of service accounts > process we're one step closer to setting up our own CI platform > for Cinder. > > So far we've got: > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > our storage solution) > - Service account configured and tested (can manually connect to > review.openstack.org and get events > and publish comments) > > Next step would be to set up a job to do the actual testing, this > is where we're stuck. > Can someone please point us to a clear example on how a job should > look like (preferably for testing Cinder on Kilo)? Most links > we've found are broken, or tools/scripts are no longer working. > Also, we cannot change the Jenkins master too much (it's owned by > Ops team and they need a list of tools/scripts to review before > installing/running so we're not allowed to experiment). 
> > Thanks, > Eduard > > -- > > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > > > > > *CloudFounders, The Private Cloud Software Company* > > Disclaimer: > This email and any files transmitted with it are confidential and > intended solely for the use of the individual or entity to whom > they are addressed.If you are not the named addressee or an > employee or agent responsible for delivering this message to the > named addressee, you are hereby notified that you are not > authorized to read, print, retain, copy or disseminate this > message or any part of it. If you have received this email in > error we request you to notify us by reply e-mail and to delete > all electronic files of the message. If you are not the intended > recipient you are notified that disclosing, copying, distributing > or taking any action in reliance on the contents of this > information is strictly prohibited. E-mail transmission cannot be > guaranteed to be secure or error free as information could be > intercepted, corrupted, lost, destroyed, arrive late or > incomplete, or contain viruses. The sender therefore does not > accept liability for any errors or omissions in the content of > this message, and shall have no liability for any loss or damage > suffered by the user, which arise as a result of e-mail transmission. 
> > > > > -- > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > > > > > *CloudFounders, The Private Cloud Software Company* > > Disclaimer: > This email and any files transmitted with it are confidential and > intended solely for the use of the individual or entity to whom they > are addressed.If you are not the named addressee or an employee or > agent responsible for delivering this message to the named addressee, > you are hereby notified that you are not authorized to read, print, > retain, copy or disseminate this message or any part of it. If you > have received this email in error we request you to notify us by reply > e-mail and to delete all electronic files of the message. If you are > not the intended recipient you are notified that disclosing, copying, > distributing or taking any action in reliance on the contents of this > information is strictly prohibited. E-mail transmission cannot be > guaranteed to be secure or error free as information could be > intercepted, corrupted, lost, destroyed, arrive late or incomplete, or > contain viruses. The sender therefore does not accept liability for > any errors or omissions in the content of this message, and shall have > no liability for any loss or damage suffered by the user, which arise > as a result of e-mail transmission. > > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.matei at cloudfounders.com CloudFounders, The Private Cloud Software Company Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. 
If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. -- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.matei at cloudfounders.com CloudFounders, The Private Cloud Software Company Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. 
E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mscherbakov at mirantis.com Wed Dec 17 01:57:25 2014 From: mscherbakov at mirantis.com (Mike Scherbakov) Date: Wed, 17 Dec 2014 04:57:25 +0300 Subject: [openstack-dev] [Fuel] Adding code to add node to fuel UI In-Reply-To: References: Message-ID: Hi, did you come across http://docs.mirantis.com/fuel-dev/develop/addition_examples.html ? I believe it should cover your use case. Thanks, On Tue, Dec 16, 2014 at 11:43 PM, Satyasanjibani Rautaray < engg.sanj at gmail.com> wrote: > > I just need to deploy the node and install my required packages. > On 17-Dec-2014 1:31 am, "Andrey Danin" wrote: > >> Hello. >> >> What version of Fuel do you use? Did you reupload openstack.yaml into >> Nailgun? Do you want just to deploy an operating system and configure a >> network on a new node? >> >> I would really appreciate if you use a period at the end of sentences. 
>> >> On Tuesday, December 16, 2014, Satyasanjibani Rautaray < >> engg.sanj at gmail.com> wrote: >> >>> Hi, >>> >>> *i am in a process of creating an additional node by editing the code >>> where the new node will be solving a different propose than installing >>> openstack components just for testing currently the new node will install >>> vim for me please help me what else i need to look into to create the >>> complete setup and deploy with fuel i have edited openstack.yaml at >>> /root/fuel-web/nailgun/nailgun/fixtures http://pastebin.com/P1MmDBzP >>> * >>> -- >>> Thanks >>> Satya >>> Mob:9844101001 >>> >>> No one is the best by birth, Its his brain/ knowledge which make him the >>> best. >>> >> >> >> -- >> Andrey Danin >> adanin at mirantis.com >> skype: gcon.monolake >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Mike Scherbakov #mihgen -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe.gordon0 at gmail.com Wed Dec 17 02:01:34 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Tue, 16 Dec 2014 18:01:34 -0800 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5490BEC3.40805@nemebean.com> References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> <5490BEC3.40805@nemebean.com> Message-ID: On Tue, Dec 16, 2014 at 3:22 PM, Ben Nemec wrote: > Some thoughts inline. I'll go ahead and push a change to remove the > things everyone seems to agree on. 
> > On 12/09/2014 09:05 AM, Sean Dague wrote: > > On 12/09/2014 09:11 AM, Doug Hellmann wrote: > >> > >> On Dec 9, 2014, at 6:39 AM, Sean Dague wrote: > >> > >>> I'd like to propose that for hacking 1.0 we drop 2 groups of rules > entirely. > >>> > >>> 1 - the entire H8* group. This doesn't function on python code, it > >>> functions on git commit message, which makes it tough to run locally. > It > >>> also would be a reason to prevent us from not rerunning tests on commit > >>> message changes (something we could do after the next gerrit update). > >>> > >>> 2 - the entire H3* group - because of this - > >>> https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm > >>> > >>> A look at the H3* code shows that it's terribly complicated, and is > >>> often full of bugs (a few bit us last week). I'd rather just delete it > >>> and move on. > >> > >> I don?t have the hacking rules memorized. Could you describe them > briefly? > > > > Sure, the H8* group is git commit messages. It's checking for line > > length in the commit message. > > > > - [H802] First, provide a brief summary of 50 characters or less. > Summaries > > of greater then 72 characters will be rejected by the gate. > > > > - [H801] The first line of the commit message should provide an accurate > > description of the change, not just a reference to a bug or > > blueprint. > > > > > > H802 is mechanically enforced (though not the 50 characters part, so the > > code isn't the same as the rule). > > > > H801 is enforced by a regex that looks to see if the first line is a > > launchpad bug and fails on it. You can't mechanically enforce that > > english provides an accurate description. > > +1. It would be nice to provide automatic notification to people if > they submit something with an absurdly long commit message, but I agree > that hacking isn't the place to do that. 
> > > > > > > H3* are all the module import rules: > > > > Imports > > ------- > > - [H302] Do not import objects, only modules (*) > > - [H301] Do not import more than one module per line (*) > > - [H303] Do not use wildcard ``*`` import (*) > > - [H304] Do not make relative imports > > - Order your imports by the full module path > > - [H305 H306 H307] Organize your imports according to the `Import order > > template`_ and `Real-world Import Order Examples`_ below. > > > > I think these remain reasonable guidelines, but H302 is exceptionally > > tricky to get right, and we keep not getting it right. > > > > H305-307 are actually impossible to get right. Things come in and out of > > stdlib in python all the time. > > tdlr; I'd like to remove H302, H305 and, H307 and leave the rest. > Reasons below. > > +1 to H305 and H307. I'm going to have to admit defeat and accept that > I can't make them work in a sane fashion. > ++, these have been nothing but trouble. > > H306 is different though - that one is only checking alphabetical order > and only works on the text of the import so it doesn't have the issues > around having modules installed or mis-categorizing. AFAIK it has never > actually caused any problems either (the H306 failure in > https://review.openstack.org/#/c/140168/2/nova/tests/unit/test_fixtures.py > is correct - nova.tests.fixtures should come before > nova.tests.unit.conf_fixture). > Agreed H306 is mechanically enforceable and is there in part to reduce the risk of merge conflicts in the imports section > > As far as 301-304, only 302 actually depends on the is_module stuff. > The others are all text-based too so I think we should leave them. H302 > I'm kind of indifferent on - we hit an edge case with the olso namespace > thing which is now fixed, but if removing that allows us to not install > requirements.txt to run pep8 I think I'm onboard with removing it too. 
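(For what it's worth, the point about H306 being purely textual is easy to see in code. The following is a simplified illustration, not the actual hacking implementation: an alphabetical-order check over raw import lines needs no modules installed at all.)

```python
# Simplified sketch of a text-only import-order check (H306-style).
# NOT the real hacking code -- just showing that alphabetical ordering
# can be verified from the source text alone.

def import_order_violations(lines):
    violations = []
    prev = None
    for num, line in enumerate(lines, start=1):
        stripped = line.strip()
        if not stripped.startswith(('import ', 'from ')):
            prev = None  # a non-import line resets the ordering window
            continue
        # "import x.y" and "from x.y import z" both expose the module
        # path as the second whitespace-separated token.
        module = stripped.split()[1]
        if prev is not None and module.lower() < prev.lower():
            violations.append((num, module))
        prev = module
    return violations

print(import_order_violations([
    'import nova.tests.unit.conf_fixture',
    'import nova.tests.fixtures',  # out of order, gets flagged
]))
# -> [(2, 'nova.tests.fixtures')]
```

Nothing here touches sys.modules or requirements.txt, which is exactly why this family of rules stays cheap while the H302 module-introspection logic keeps breaking.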
> As for H302, it comes from https://google-styleguide.googlecode.com/svn/trunk/pyguide.html?showone=Imports#Imports We still don't have this one working right, running flake8 outside a venv is still causing oslo packaging related issues for me ./nova/i18n.py:21:1: H302 import only modules.'from oslo import i18n' does not import a module So +1 to just removing it. > > > > > > > I think it's time to just decide to be reasonable Humans and that these > > are guidelines. > > > > The H3* set of rules is also why you have to install *all* of > > requirements.txt and test-requirements.txt in your pep8 tox target, > > because H302 actually inspects the sys.modules to attempt to figure out > > if things are correct. > > > > -Sean > > > >> > >> Doug > >> - [H802] First, provide a brief summary of 50 characters or less. > Summaries > > of greater then 72 characters will be rejected by the gate. > > > > - [H801] The first line of the commit message should provide an accurate > > description of the change, not just a reference to a bug or > > blueprint. > > > >> > >>> > >>> -Sean > >>> > >>> -- > >>> Sean Dague > >>> http://dague.net > >>> > >>> _______________________________________________ > >>> OpenStack-dev mailing list > >>> OpenStack-dev at lists.openstack.org > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mscherbakov at mirantis.com Wed Dec 17 02:03:27 2014 From: mscherbakov at mirantis.com (Mike Scherbakov) Date: Wed, 17 Dec 2014 05:03:27 +0300 Subject: [openstack-dev] [Fuel] Image based provisioning In-Reply-To: References: Message-ID: Dmitry, as part of 6.1 roadmap, we are going to work on patching feature. There are two types of workflow to consider: - patch existing environment (already deployed nodes, aka "target" nodes) - ensure that new nodes, added to the existing and already patched envs, will install updated packages too. In case of anaconda/preseed install, we can simply update repo on master node and run createrepo/etc. What do we do in case of image? Will we need a separate repo alongside the main one, an "updates" repo - and do post-provisioning "yum update" to fetch all patched packages? On Tue, Dec 16, 2014 at 11:09 PM, Andrey Danin wrote: > > Adding Mellanox team explicitly. > > Gil, Nurit, Aviram, can you confirm that you tested that feature? It can > be enabled on every fresh ISO. You just need to enable the Experimental > mode (please, see the documentation for instructions). > > On Tuesday, December 16, 2014, Dmitry Pyzhov wrote: > >> Guys, >> >> we are about to enable image based provisioning in our master by default. >> I'm trying to figure out requirement for this change. As far as I know, it >> was not tested on scale lab. Is it true? Have we ever run full system tests >> cycle with this option? >> >> Do we have any other pre-requirements? >> > > > -- > Andrey Danin > adanin at mirantis.com > skype: gcon.monolake > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Mike Scherbakov #mihgen -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From djhon9813 at gmail.com Wed Dec 17 02:31:30 2014 From: djhon9813 at gmail.com (david jhon) Date: Wed, 17 Dec 2014 07:31:30 +0500 Subject: [openstack-dev] SRIOV-error In-Reply-To: References: Message-ID: Hi Irena, Thanks a lot for your help, it has helped me a lot to fix bugs and get issues to be resolved. Thanks again! On Tue, Dec 16, 2014 at 4:40 PM, Irena Berezovsky wrote: > > Hi David, > > As I mentioned before, you do not need to run sriov agent in your setup, > just set agent_required=False in your neutron-server configuration. I think > that initially this may be easier to make things work this way. > > I also cannot understand why you have two neutron config files that > contain same sections with different settings. > > > > You can find me on #openstack-neuron IRC channel, I can try to help. > > > > BR, > > Irena > > > > > > *From:* david jhon [mailto:djhon9813 at gmail.com] > *Sent:* Tuesday, December 16, 2014 9:44 AM > *To:* Irena Berezovsky > *Cc:* OpenStack Development Mailing List (not for usage questions); > Murali B > *Subject:* Re: [openstack-dev] SRIOV-error > > > > Hi Irena and Murali, > > Thanks a lot for your reply! 
> > Here is the output from pci_devices table of nova db: > > select * from pci_devices; > > +---------------------+------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+-----------------------------------+---------------+------------+ > | created_at | updated_at | deleted_at | deleted | id | > compute_node_id | address | product_id | vendor_id | dev_type | > dev_id | label | status | > extra_info | instance_uuid | request_id | > > +---------------------+------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+-----------------------------------+---------------+------------+ > | 2014-12-15 12:10:52 | NULL | NULL | 0 | 1 > | 1 | 0000:03:10.0 | 10ed | 8086 | type-VF | > pci_0000_03_10_0 | label_8086_10ed | available | {"phys_function": > "0000:03:00.0"} | NULL | NULL | > | 2014-12-15 12:10:52 | NULL | NULL | 0 | 2 > | 1 | 0000:03:10.2 | 10ed | 8086 | type-VF | > pci_0000_03_10_2 | label_8086_10ed | available | {"phys_function": > "0000:03:00.0"} | NULL | NULL | > | 2014-12-15 12:10:52 | NULL | NULL | 0 | 3 > | 1 | 0000:03:10.4 | 10ed | 8086 | type-VF | > pci_0000_03_10_4 | label_8086_10ed | available | {"phys_function": > "0000:03:00.0"} | NULL | NULL | > | 2014-12-15 12:10:52 | NULL | NULL | 0 | 4 > | 1 | 0000:03:10.6 | 10ed | 8086 | type-VF | > pci_0000_03_10_6 | label_8086_10ed | available | {"phys_function": > "0000:03:00.0"} | NULL | NULL | > | 2014-12-15 12:10:53 | NULL | NULL | 0 | 5 > | 1 | 0000:03:10.1 | 10ed | 8086 | type-VF | > pci_0000_03_10_1 | label_8086_10ed | available | {"phys_function": > "0000:03:00.1"} | NULL | NULL | > | 2014-12-15 12:10:53 | NULL | NULL | 0 | 6 > | 1 | 0000:03:10.3 | 10ed | 8086 | type-VF | > pci_0000_03_10_3 | label_8086_10ed | available | {"phys_function": > "0000:03:00.1"} | NULL | NULL | > | 2014-12-15 12:10:53 | NULL | 
NULL | 0 | 7 > | 1 | 0000:03:10.5 | 10ed | 8086 | type-VF | > pci_0000_03_10_5 | label_8086_10ed | available | {"phys_function": > "0000:03:00.1"} | NULL | NULL | > | 2014-12-15 12:10:53 | NULL | NULL | 0 | 8 > | 1 | 0000:03:10.7 | 10ed | 8086 | type-VF | > pci_0000_03_10_7 | label_8086_10ed | available | {"phys_function": > "0000:03:00.1"} | NULL | NULL | > > +---------------------+------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+-----------------------------------+---------------+------------+ > > output from select hypervisor_hostname,pci_stats from compute_nodes; is: > > +---------------------+-------------------------------------------------------------------------------------------+ > | hypervisor_hostname | > pci_stats > | > > +---------------------+-------------------------------------------------------------------------------------------+ > | blade08 | [{"count": 8, "vendor_id": "8086", > "physical_network": "ext-net", "product_id": "10ed"}] | > > +---------------------+-------------------------------------------------------------------------------------------+ > > Moreover, I have set agent_required = True in /etc/neutron/plugins/ml2/ml2_conf_sriov.ini. > but still found no sriov agent running. 
> # Defines configuration options for SRIOV NIC Switch MechanismDriver
> # and Agent
>
> [ml2_sriov]
> # (ListOpt) Comma-separated list of
> # supported Vendor PCI Devices, in format vendor_id:product_id
> #
> #supported_pci_vendor_devs = 8086:10ca, 8086:10ed
> supported_pci_vendor_devs = 8086:10ed
> # Example: supported_pci_vendor_devs = 15b3:1004
> #
> # (BoolOpt) Requires running SRIOV neutron agent for port binding
> agent_required = True
>
> [sriov_nic]
> # (ListOpt) Comma-separated list of <physical_network>:<network_device>
> # tuples mapping physical network names to the agent's node-specific
> # physical network device interfaces of SR-IOV physical function to be used
> # for VLAN networks. All physical networks listed in network_vlan_ranges on
> # the server should have mappings to appropriate interfaces on each agent.
> #
> physical_device_mappings = ext-net:br-ex
> # Example: physical_device_mappings = physnet1:eth1
> #
> # (ListOpt) Comma-separated list of <network_device>:<vfs_to_exclude>
> # tuples, mapping network_device to the agent's node-specific list of virtual
> # functions that should not be used for virtual networking.
> # vfs_to_exclude is a semicolon-separated list of virtual
> # functions to exclude from network_device. The network_device in the
> # mapping should appear in the physical_device_mappings list.
> # exclude_devices =
> # Example: exclude_devices = eth1:0000:07:00.2; 0000:07:00.3
>
> ========================================================================================
> pci_passthrough_whitelist from /etc/nova/nova.conf:
> pci_passthrough_whitelist = {"address":"*:03:10.*","physical_network":"ext-net"}
>
> ====================================================
> /etc/neutron/plugins/ml2/ml2_conf.ini:
>
> [ml2]
> # (ListOpt) List of network type driver entrypoints to be loaded from
> # the neutron.ml2.type_drivers namespace.
> #
> #type_drivers = local,flat,vlan,gre,vxlan
> #Example: type_drivers = flat,vlan,gre,vxlan
> #type_drivers = flat,gre, vlan
> type_drivers = flat,vlan
>
> # (ListOpt) Ordered list of network_types to allocate as tenant
> # networks. The default value 'local' is useful for single-box testing
> # but provides no connectivity between hosts.
> #
> # tenant_network_types = local
> # Example: tenant_network_types = vlan,gre,vxlan
> #tenant_network_types = gre, vlan
> tenant_network_types = vlan
>
> # (ListOpt) Ordered list of networking mechanism driver entrypoints
> # to be loaded from the neutron.ml2.mechanism_drivers namespace.
> mechanism_drivers = openvswitch, sriovnicswitch
> # Example: mechanism_drivers = openvswitch,mlnx
> # Example: mechanism_drivers = arista
> # Example: mechanism_drivers = cisco,logger
> # Example: mechanism_drivers = openvswitch,brocade
> # Example: mechanism_drivers = linuxbridge,brocade
>
> # (ListOpt) Ordered list of extension driver entrypoints
> # to be loaded from the neutron.ml2.extension_drivers namespace.
> # extension_drivers =
> # Example: extension_drivers = anewextensiondriver
>
> [ml2_type_flat]
> # (ListOpt) List of physical_network names with which flat networks
> # can be created. Use * to allow flat networks with arbitrary
> # physical_network names.
> #
> flat_networks = ext-net
> # Example:flat_networks = physnet1,physnet2
> # Example:flat_networks = *
>
> [ml2_type_vlan]
> # (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
> # specifying physical_network names usable for VLAN provider and
> # tenant networks, as well as ranges of VLAN tags on each
> # physical_network available for allocation as tenant networks.
> network_vlan_ranges = ext-net:2:100
> # Example: network_vlan_ranges = physnet1:1000:2999,physnet2
>
> [ml2_type_gre]
> # (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating
> # ranges of GRE tunnel IDs that are available for tenant network allocation
> #tunnel_id_ranges = 1:1000
>
> [ml2_type_vxlan]
> # (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
> # ranges of VXLAN VNI IDs that are available for tenant network allocation.
> #
> # vni_ranges =
>
> # (StrOpt) Multicast group for the VXLAN interface. When configured, will
> # enable sending all broadcast traffic to this multicast group. When left
> # unconfigured, will disable multicast VXLAN mode.
> #
> # vxlan_group =
> # Example: vxlan_group = 239.1.1.1
>
> [securitygroup]
> # Controls if neutron security group is enabled or not.
> # It should be false when you use nova security group.
> enable_security_group = True
>
> # Use ipset to speed-up the iptables security groups. Enabling ipset support
> # requires that ipset is installed on L2 agent node.
> enable_ipset = True
> firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>
> [ovs]
> local_ip = controller
> enable_tunneling = True
> bridge_mappings = external:br-ex
>
> [agent]
> tunnel_types = vlan
>
> [ml2_sriov]
> agent_required = True
>
> Please tell me what is wrong there, plus what exactly "physnet1" should
> be? Thanks again for all your help and suggestions.
>
> Regards,
>
> On Tue, Dec 16, 2014 at 10:42 AM, Irena Berezovsky
> wrote:
> > Hi David,
> > Your error is not related to the agent.
> > I would suggest checking:
> > 1. nova.conf at your compute node for the pci whitelist configuration
> > 2. Neutron server configuration for a correct physical_network label
> > matching the label in the pci whitelist
> > 3. Nova DB tables containing PCI device entries:
> > "#echo 'use nova;select hypervisor_hostname,pci_stats from compute_nodes;'
> > | mysql -u root"
> > You should not run the SR-IOV agent in your setup.
The SR-IOV agent is optional and currently does not add value if you use an Intel NIC.
>
> Regards,
> Irena
>
> *From:* david jhon [mailto:djhon9813 at gmail.com]
> *Sent:* Tuesday, December 16, 2014 5:54 AM
> *To:* Murali B
> *Cc:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] SRIOV-error
>
> Just to be more clear, the command $lspci | grep -i Ethernet gives the following
> output:
>
> 01:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
> 01:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
> 03:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
> 03:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
> 03:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 03:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 03:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 03:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 03:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 03:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 03:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 03:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 04:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
> 04:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
> 04:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual
Function (rev 01)
> 04:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 04:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 04:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 04:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 04:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 04:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> 04:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
>
> How can I make the SR-IOV agent run and fix this bug?
>
> On Tue, Dec 16, 2014 at 8:36 AM, david jhon wrote:
>
> Hi Murali,
>
> Thanks for your response. I did the same, and it has apparently resolved the
> errors, but 1) neutron agent-list shows no agent for sriov, and 2) a neutron
> port is created successfully but creating a VM fails in scheduling as
> follows:
>
> result from neutron agent-list:
>
> +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
> | id                                   | agent_type         | host    | alive | admin_state_up | binary                    |
> +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
> | 2acc7044-e552-4601-b00b-00ba591b453f | Open vSwitch agent | blade08 | xxx   | True           | neutron-openvswitch-agent |
> | 595d07c6-120e-42ea-a950-6c77a6455f10 | Metadata agent     | blade08 | :-)   | True           | neutron-metadata-agent    |
> | a1f253a8-e02e-4498-8609-4e265285534b | DHCP agent         | blade08 | :-)   | True           | neutron-dhcp-agent        |
> | d46b29d8-4b5f-4838-bf25-b7925cb3e3a7 | L3 agent           | blade08 | :-)   | True           | neutron-l3-agent          |
> 
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+ > > 2014-12-15 19:30:44.546 40249 ERROR oslo.messaging.rpc.dispatcher > [req-c7741cff-a7d8-422f-b605-6a1d976aeb09 ] Exception during message > handling: PCI $ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > Traceback (most recent call last): > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line > 13$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > incoming.message)) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line > 17$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > return self._do_dispatch(endpoint, method, ctxt, args) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line > 12$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > result = getattr(endpoint, method)(ctxt, **new_args) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139, > i$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > return func(*args, **kwargs) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 175, in > s$ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > filter_properties) > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File > "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line > $ > 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher > filter_properties) > 2014-12-15 19:30:44.546 40249 TRACE 
oslo.messaging.rpc.dispatcher File
> "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line $
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> chosen_host.obj.consume_from_instance(instance_properties)
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File
> "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 246,$
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> self.pci_stats.apply_requests(pci_requests.requests)
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher File
> "/usr/lib/python2.7/dist-packages/nova/pci/pci_stats.py", line 209, in apply$
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> raise exception.PciDeviceRequestFailed(requests=requests)
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> PciDeviceRequestFailed: PCI device request ({'requests':
> [InstancePCIRequest(alias_$
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher.
>
> Moreover, no /var/log/sriov-agent.log file exists. Please help me to fix
> this issue. Thanks, everyone!
>
> On Mon, Dec 15, 2014 at 5:18 PM, Murali B wrote:
>
> Hi David,
>
> Please add it as per Irena's suggestion.
>
> FYI: refer to the configuration below:
>
> http://pastebin.com/DGmW7ZEg
>
> Thanks
> -Murali

-------------- next part -------------- An HTML attachment was scrubbed... URL: From mscherbakov at mirantis.com Wed Dec 17 04:33:06 2014 From: mscherbakov at mirantis.com (Mike Scherbakov) Date: Wed, 17 Dec 2014 07:33:06 +0300 Subject: [openstack-dev] [Fuel] Feature delivery rules and automated tests In-Reply-To: References: Message-ID:

I fully support the idea. A feature lead has to know that his feature is under threat if it's not yet covered by system tests (unit/integration tests are not enough!!!), and should proactively work with QA engineers to get tests implemented and passing before SCF.
On Fri, Dec 12, 2014 at 5:55 PM, Dmitry Pyzhov wrote:
>
> Guys,
>
> we've done a good job in 6.0. Most of the features were merged before
> feature freeze. Our QA engineers were involved in testing even earlier. It was much
> better than before.
>
> We had a discussion with Anastasia. There were several bug reports for
> features yesterday, far beyond HCF. So we still have a long way to go to be
> perfect. We should add one rule: we need to have automated tests before HCF.
>
> Actually, we should have results of these tests just after FF. It is quite
> challenging because we have a short development cycle. So my proposal is
> to require full deployment and a run of the automated tests for each feature
> before soft code freeze. And it needs to be tracked in checklists and on
> feature syncups.
>
> Your opinion?
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Mike Scherbakov
#mihgen

-------------- next part -------------- An HTML attachment was scrubbed... URL: From swati.shukla1 at tcs.com Wed Dec 17 04:53:11 2014 From: swati.shukla1 at tcs.com (Swati Shukla1) Date: Wed, 17 Dec 2014 10:23:11 +0530 Subject: [openstack-dev] #PERSONAL# : Git checkout command for Blueprints submission Message-ID:

Hi All,

Generally, for bug submissions, we use "git checkout -b bug/<bug-id>".

What is the similar 'git checkout' command for blueprints submission?

Swati Shukla
Tata Consultancy Services
Mailto: swati.shukla1 at tcs.com
Website: http://www.tcs.com
____________________________________________
Experience certainty. IT Services Business Solutions Consulting
____________________________________________

=====-----=====-----===== Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information.
If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you

-------------- next part -------------- An HTML attachment was scrubbed... URL: From ramy.asselin at hp.com Wed Dec 17 05:04:15 2014 From: ramy.asselin at hp.com (Asselin, Ramy) Date: Wed, 17 Dec 2014 05:04:15 +0000 Subject: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party In-Reply-To: References: <547CCA69.4000909@anteaya.info> <837B116B6E5B934DA06D9AD0FD79C6A3018E9C60@SHSMSX104.ccr.corp.intel.com> <547EB5C9.8020008@rackspace.com> <547F8DC1.5000202@anteaya.info> <548F85C8.6020400@openstack.org> Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422FB5@G4W3223.americas.hpqcorp.net>

In this case, I see the mentoring as more like "office hours", which can be done by CI operators & other volunteers spread across time zones. I think this is a great idea.

Ramy

From: Kurt Taylor [mailto:kurt.r.taylor at gmail.com] Sent: Tuesday, December 16, 2014 9:39 AM To: Stefano Maffulli Cc: OpenStack Development Mailing List (not for usage questions); openstack-infra at lists.openstack.org Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party

On Mon, Dec 15, 2014 at 7:07 PM, Stefano Maffulli wrote:

On 12/05/2014 07:08 AM, Kurt Taylor wrote:
> 1. Meeting content: Having 2 meetings per week is more than is needed at
> this stage of the working group. There just isn't enough meeting content
> to justify having two meetings every week.

I'd like to discuss this further: the stated objectives of the meetings are very wide and may allow for more than one slot per week.
In particular I'm seeing the two below as good candidates for 'meet as many times as possible':

* to provide a forum for the curious and for OpenStack programs who are not yet in this space but may be in the future
* to encourage questions from third-party folks and support the sourcing of answers

As I mentioned above, probably one way to do this is to make some slots more focused on engaging newcomers and answering questions, more like serendipitous mentoring sessions with the less involved, while another slot could be dedicated to more focused and long-term efforts, with more committed people?

This is an excellent idea, let's split the meetings into:

1) Mentoring - mentoring new CI team members and operators, helping them understand infra tools and processes. Anita can continue her fantastic work here.
2) Working Group - a working meeting for documentation, reviewing patches for relevant work, and improving the consumability of infra CI components. I will be happy to chair these meetings initially. I am sure I can get help with these meetings for the other time zones also.

With this approach we can also continue to use the new meeting times voted on by the group, and each meeting is focused on targeting a specific group with very different needs.

Thanks Stefano!

Kurt Taylor (krtaylor)

-------------- next part -------------- An HTML attachment was scrubbed... URL: From abhishek.talwar at tcs.com Wed Dec 17 05:17:09 2014 From: abhishek.talwar at tcs.com (Abhishek Talwar/HYD/TCS) Date: Wed, 17 Dec 2014 10:47:09 +0530 (IST) Subject: [openstack-dev] Not able to locate tests for glanceclient Message-ID:

Hi All,

I am currently working on the stable Juno release for a fix on glanceclient, but I am not able to locate the tests in glanceclient. Could you help me locate them, as I need to add a unit test? The current path for glanceclient is /usr/local/lib/python2.7/dist-packages/glanceclient.
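For what it's worth, an installed copy under dist-packages typically does not bundle the unit tests; they live in the python-glanceclient git repository. A small, illustrative stdlib-only check for whether an installed package ships test modules (demonstrated against a stdlib package, since the dist-packages path above is machine-specific):

```python
import importlib
import os

def bundled_tests(package_name):
    """Return any test_*.py files shipped inside an installed package."""
    pkg = importlib.import_module(package_name)
    root = os.path.dirname(pkg.__file__)
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        found.extend(os.path.join(dirpath, name)
                     for name in filenames
                     if name.startswith("test_") and name.endswith(".py"))
    return sorted(found)

# The stdlib json package keeps its tests elsewhere, so this comes back
# empty; an installed glanceclient that prints [] here is in the same boat.
print(bundled_tests("json"))
```

If the list comes back empty, clone the source repository and add the unit test against a git checkout rather than the installed package.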
Thanks and Regards
Abhishek Talwar

=====-----=====-----===== Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you

-------------- next part -------------- An HTML attachment was scrubbed... URL: From vikram.choudhary at huawei.com Wed Dec 17 05:18:07 2014 From: vikram.choudhary at huawei.com (Vikram Choudhary) Date: Wed, 17 Dec 2014 05:18:07 +0000 Subject: [openstack-dev] Query regarding BluePrint submission for Review Message-ID: <99F160A7D70E22438C8ECB52BDB54B70B1D153@blreml503-mbx>

Dear All,

We want to submit a new blueprint for review. Can you please provide the steps for doing it?

Thanks
Vikram

-------------- next part -------------- An HTML attachment was scrubbed... URL: From amit.das at cloudbyte.com Wed Dec 17 05:21:50 2014 From: amit.das at cloudbyte.com (Amit Das) Date: Wed, 17 Dec 2014 10:51:50 +0530 Subject: [openstack-dev] #PERSONAL# : Git checkout command for Blueprints submission In-Reply-To: References: Message-ID:

Hi,

It is "git checkout -b bp/<blueprint-name>"

Regards,
Amit
*CloudByte Inc.*

On Wed, Dec 17, 2014 at 10:23 AM, Swati Shukla1 wrote:
>
> Hi All,
>
> Generally, for bug submissions, we use "git checkout -b bug/<bug-id>"
>
> What is the similar 'git checkout' command for blueprints submission?
>
> Swati Shukla
> Tata Consultancy Services
> Mailto: swati.shukla1 at tcs.com
> Website: http://www.tcs.com
> ____________________________________________
> Experience certainty.
IT Services > Business Solutions > Consulting > ____________________________________________ > > =====-----=====-----===== > Notice: The information contained in this e-mail > message and/or attachments to it may contain > confidential or privileged information. If you are > not the intended recipient, any dissemination, use, > review, distribution, printing or copying of the > information contained in this e-mail message > and/or attachments to it are strictly prohibited. If > you have received this communication in error, > please notify us by reply e-mail or telephone and > immediately and permanently delete the message > and any attachments. Thank you > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsbryant at electronicjungle.net Wed Dec 17 05:22:04 2014 From: jsbryant at electronicjungle.net (Jay Bryant) Date: Tue, 16 Dec 2014 23:22:04 -0600 Subject: [openstack-dev] Query regarding BluePrint submission for Review In-Reply-To: <99F160A7D70E22438C8ECB52BDB54B70B1D153@blreml503-mbx> References: <99F160A7D70E22438C8ECB52BDB54B70B1D153@blreml503-mbx> Message-ID: Vikram, The process is documented here: https://wiki.openstack.org/wiki/Blueprints Let me know if you have questions. Jay On Dec 16, 2014 11:18 PM, "Vikram Choudhary" wrote: > Dear All, > > > > We want to submit a new blueprint for review. > > Can you please provide the steps for doing it. > > > > Thanks > > Vikram > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shreshtha.joshi at tcs.com Wed Dec 17 05:36:21 2014 From: shreshtha.joshi at tcs.com (Shreshtha Joshi) Date: Wed, 17 Dec 2014 11:06:21 +0530 Subject: [openstack-dev] Querries Regarding Blueprint for LBaaS API and Object Model improvement Message-ID:

Hi All,

I wanted to know the approach that has been followed for the LBaaS implementation in Openstack (Juno), out of the following in the link (https://etherpad.openstack.org/p/neutron-lbaas-api-proposals). Is it:

1. Existing Core Resource Model
2. LoadBalancer Instance Model
3. Vip-Centric Model

Currently I find Pool as the root object that has a VIP associated with it, rather than Listeners and LoadBalancers, in various openstack documents. But while investigating the same, I came across a blueprint (https://blueprints.launchpad.net/neutron/+spec/lbaas-api-and-objmodel-improvement) for LBaaS API and object model improvement. It talks about moving the current VIP object to Listener, and Listener will be linked to a LoadBalancer in the upcoming releases. I wanted to know the current approach followed for openstack-juno, and whether in the upcoming releases (Kilo):

* Are we planning to have new APIs for /Listener and /Loader, with no VIP object and its corresponding API?
* Or will we have a VIP object and its corresponding API, creation of which will result in creation of Loadbalancer and /Listener at the backend itself?

If you find the above investigation incorrect, please feel free to point me in the right direction.

Thanks & Regards
Shreshtha Joshi

=====-----=====-----===== Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information.
If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougw at a10networks.com Wed Dec 17 05:42:35 2014 From: dougw at a10networks.com (Doug Wiegley) Date: Wed, 17 Dec 2014 05:42:35 +0000 Subject: [openstack-dev] [neutron][lbaas] Querries Regarding Blueprint for LBaaS API and Object Model improvement Message-ID: Adding tags for [neutron][lbaas] Juno lbaas (v1) has pool as the root model, with VIP. Kilo lbaas (v2), you are correct, vip is splitting into loadbalancer and listener, and loadbalancer is the root object. And yes, the new objects get new URIs. Both v1 and v2 plugins will be available in Kilo. Thanks, doug From: Shreshtha Joshi > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Tuesday, December 16, 2014 at 9:36 PM To: "openstack-dev at lists.openstack.org" > Cc: Partha Datta >, Deepankar Gupta >, "johnbrandonlogan at gmail.com" > Subject: [openstack-dev] Querries Regarding Blueprint for LBaaS API and Object Model improvement Hi All, I wanted to know the approach has been followed for LBaaS implementation in Openstack (juno) out of the following in the link(https://etherpad.openstack.org/p/neutron-lbaas-api-proposals). Is it- 1. Existing Core Resource Model 2. LoadBalancer Instance Model 3. Vip-Centric Model Currently I find Pool as the root object that has a VIP associated with it rather than Listeners and LoadBalancers in various openstack documents. But while investigating the same, I came across a blueprint (https://blueprints.launchpad.net/neutron/+spec/lbaas-api-and-objmodel-improvement) for LBaaS Api and object model improvement. 
It talks about moving the current VIP object to Listener and Listener will be linked to a LoadBalancer in the upcoming releases, I wanted to know the current approach followed for openstack-juno and if in the upcoming releases(Kilo)- * Are we planning to have new APIs for /Listener and /Loader and there will be no VIP object and its corresponding API. * Or we will be having VIP object and its corresponding API, creation of which will result in creation of Loadbalancer and /Listener at the backend itself. If you find the above investigation incorrect, please feel free to point to the right direction. Thanks & Regards Shreshtha Joshi =====-----=====-----===== Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From amit.das at cloudbyte.com Wed Dec 17 05:46:39 2014 From: amit.das at cloudbyte.com (Amit Das) Date: Wed, 17 Dec 2014 11:16:39 +0530 Subject: [openstack-dev] [Horizon] [UX] Curvature interactive virtual network design In-Reply-To: References: Message-ID: +1 after looking at the videos. Regards, Amit *CloudByte Inc.* On Tue, Dec 16, 2014 at 11:43 PM, Liz Blanchard wrote: > > > On Nov 7, 2014, at 11:16 AM, John Davidge (jodavidg) > wrote: > > As discussed in the Horizon contributor meet up, here at Cisco we?re > interested in upstreaming our work on the Curvature dashboard into Horizon. 
> We think that it can solve a lot of issues around guidance for new users
> and generally improving the experience of interacting with Neutron.
> Possibly an alternative persona for novice users?
>
> For reference, see:
>
> 1. http://youtu.be/oFTmHHCn2-g - Video Demo
> 2. https://www.openstack.org/summit/portland-2013/session-videos/presentation/interactive-visual-orchestration-with-curvature-and-donabe - Portland presentation
> 3. https://github.com/CiscoSystems/curvature - original (Rails-based) code
>
> We'd like to gauge interest from the community on whether this is
> something people want.
>
> Thanks,
>
> John, Brad & Sam
>
> Hey guys,
>
> Sorry for my delayed response here; I'm just coming back from maternity leave.
>
> I've been waiting and hoping since the Portland summit that the curvature
> work you have done would be brought into Horizon. A definite +1 from me
> from a user experience point of view. It would be great to have a solid
> plan for how this could work with, or in addition to, the Orchestration and
> Network Topology pieces that currently exist in Horizon.
>
> Let me know if I can help out with any design review, wireframe, or
> usability testing aspects.
>
> Best,
> Liz
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From rakhmerov at mirantis.com Wed Dec 17 05:48:11 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Wed, 17 Dec 2014 11:48:11 +0600 Subject: [openstack-dev] [Mistral] RFC - Action spec CLI In-Reply-To: References: Message-ID:

Ok, I would prefer to spend some time and think about how to improve the existing regexp that we use to parse key-value pairs. We definitely can't just drop support of this syntax, and we can't even change it significantly, since people already use it.

Renat Akhmerov @ Mirantis Inc.

> On 17 Dec 2014, at 07:28, Lakshmi Kannan wrote:
>
> Apologies for the long email. If this fancy email doesn't render correctly for you, please read it here: https://gist.github.com/lakshmi-kannan/cf953f66a397b153254a
>
> I was looking into fixing bug: https://bugs.launchpad.net/mistral/+bug/1401039 . My idea was to use shlex to parse the string. This would actually work for anything that is supplied in Linux shell syntax. The problem is that this craps out when we want to support complex data structures such as arrays and dicts as arguments. I did not think we supported a syntax to take in complex data structures in a one-line format. Consider for example:
>
> task7:
>   for-each:
>     vm_info: $.vms
>   workflow: wf2 is_true=true object_list=[1, null, "str"]
>   on-complete:
>     - task9
>     - task10
>
> Specifically
>
> wf2 is_true=true object_list=[1, null, "str"]
>
> shlex will not handle this correctly because object_list is an array. Same problem with a dict.
>
> There are 4 potential options here:
>
> Option 1
>
> 1) Provide a spec for specifying lists and dicts like so:
> list_arg=1,2,3 and dict_arg=h1:h2,h3:h4,h5:h6
>
> shlex will handle this fine, but there needs to be code that converts the argument values to appropriate data types based on a schema. (ActionSpec should probably have a parameter schema in jsonschema.) This is doable.
> wf2 is_true=true object_list="1,null,"str""
>
> Option 2
>
> 2) Allow JSON strings to be used as arguments so we can json.loads them (if that fails, use them as a simple string). For example, with this approach, the line becomes
>
> wf2 is_true=true object_list="[1, null, "str"]"
>
> This would pretty much resemble http://stackoverflow.com/questions/7625786/type-dict-in-argparse-add-argument
>
> Option 3
>
> 3) Keep the spec as such and try to parse it. I have no idea how we can do this reliably. We need a more rigorous lexer. This syntax doesn't translate well when we want to build a CLI. Linux shells cannot support this syntax natively. This means people would have to use shlex syntax, and a translation would need to happen in the CLI layer. This will lead to inconsistency: the CLI would use one syntax and the action input line in the workflow definition another. We should try and avoid this.
>
> Option 4
>
> 4) Completely drop support for this fancy one-line syntax in workflows. This is probably the least desired option.
>
> My preference
>
> Looking at the options, I like option 2 / option 1 / option 4 / option 3, in that order of preference.
>
> With some documentation, we can tell people why this is hard. People will also grok it because they are already familiar with CLI limitations in Linux.
>
> Thoughts?
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part -------------- An HTML attachment was scrubbed... URL: From shreshtha.joshi at tcs.com Wed Dec 17 06:02:44 2014 From: shreshtha.joshi at tcs.com (Shreshtha Joshi) Date: Wed, 17 Dec 2014 11:32:44 +0530 Subject: [openstack-dev] [neutron][lbaas] Querries Regarding Blueprint for LBaaS API and Object Model improvement In-Reply-To: References: Message-ID:

Thanks, Doug.
Regards Shreshtha Joshi -----Doug Wiegley wrote: ----- To: "OpenStack Development Mailing List (not for usage questions)" From: Doug Wiegley Date: 12/17/2014 11:15AM Cc: "johnbrandonlogan at gmail.com" , Deepankar Gupta , Partha Datta Subject: Re: [openstack-dev] [neutron][lbaas] Querries Regarding Blueprint for LBaaS API and Object Model improvement Adding tags for [neutron][lbaas] Juno lbaas (v1) has pool as the root model, with VIP. Kilo lbaas (v2), you are correct, vip is splitting into loadbalancer and listener, and loadbalancer is the root object. And yes, the new objects get new URIs. Both v1 and v2 plugins will be available in Kilo. Thanks, doug From: Shreshtha Joshi Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Tuesday, December 16, 2014 at 9:36 PM To: "openstack-dev at lists.openstack.org" Cc: Partha Datta , Deepankar Gupta , "johnbrandonlogan at gmail.com" Subject: [openstack-dev] Querries Regarding Blueprint for LBaaS API and Object Model improvement Hi All, I wanted to know which approach has been followed for the LBaaS implementation in Openstack (juno) out of the following in the link (https://etherpad.openstack.org/p/neutron-lbaas-api-proposals). Is it the Existing Core Resource Model, the LoadBalancer Instance Model, or the Vip-Centric Model? Currently I find Pool as the root object that has a VIP associated with it, rather than Listeners and LoadBalancers, in various openstack documents. But while investigating the same, I came across a blueprint (https://blueprints.launchpad.net/neutron/+spec/lbaas-api-and-objmodel-improvement) for LBaaS API and object model improvement. It talks about moving the current VIP object to Listener, and Listener will be linked to a LoadBalancer in the upcoming releases. I wanted to know the current approach followed for openstack-juno and if, in the upcoming releases (Kilo): - Are we planning to have new APIs for /Listener and /Loader and there will be no VIP object and its corresponding API.
- Or we will be having a VIP object and its corresponding API, creation of which will result in creation of Loadbalancer and /Listener at the backend itself. If you find the above investigation incorrect, please feel free to point me in the right direction. Thanks & Regards Shreshtha Joshi =====-----=====-----===== Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp at jamezpolley.com Wed Dec 17 06:58:53 2014 From: jp at jamezpolley.com (James Polley) Date: Wed, 17 Dec 2014 07:58:53 +0100 Subject: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets In-Reply-To: <20141216154501.GU2497 at yuggoth.org> References: <54900903.6090008 at vmware.com> <20141216135906.GR2497 at yuggoth.org> <54904D9B.2000204 at vmware.com> <20141216154501.GU2497 at yuggoth.org> Message-ID: I was looking at the new change screen on https://review.openstack.org today[1] and it seems to do something vaguely similar. Rather than saying "James polley made 4 inline comments", the contents of the comments are shown, along with a link to the file so you can see the context. Have you seen this? It seems fairly similar to what you're wanting.
Have [1] To activate it, go to https://review.openstack.org/#/settings/preferences and set "Change view" to "New Screen", then look at a change screen (such as https://review.openstack.org/#/c/127283/) On Tue, Dec 16, 2014 at 4:45 PM, Jeremy Stanley wrote: > > On 2014-12-16 17:19:55 +0200 (+0200), Radoslav Gerganov wrote: > > We don't need GoogleAppEngine if we decide that this is useful. We > > simply need to put the html page which renders the view on > > https://review.openstack.org. It is all javascript which talks > > asynchronously to the Gerrit backend. > > > > I am using GAE to simply illustrate the idea without having to > > spin up an entire Gerrit server. > > That makes a lot more sense--thanks for the clarification! > > > I guess I can also submit a patch to the infra project and see how > > this works on https://review-dev.openstack.org if you want. > > If there's a general desire from the developer community for it, > then that's probably the next step. However, ultimately this seems > like something better suited as an upstream feature request for > Gerrit (there may even already be thread-oriented improvements in > the works for the new change screen--I haven't kept up with their > progress lately). > -- > Jeremy Stanley > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m4d.coder at gmail.com Wed Dec 17 07:22:16 2014 From: m4d.coder at gmail.com (W Chan) Date: Tue, 16 Dec 2014 23:22:16 -0800 Subject: [openstack-dev] [Mistral] ActionProvider Message-ID: Renat, We want to introduce the concept of an ActionProvider to Mistral. We are thinking that with an ActionProvider, a third party system can extend Mistral with it's own action catalog and set of dedicated and specialized action executors. 
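As an editor's aside, a bare-bones sketch of what such a provider contract could look like — every name here (ActionProvider, list_actions, get_executor) is hypothetical, meant only to anchor the discussion, not a proposed Mistral API:

```python
import abc

class ActionProvider(abc.ABC):
    """Hypothetical contract a third-party system implements to plug
    its own action catalog and executors into the engine."""

    @abc.abstractmethod
    def list_actions(self):
        """Return a descriptor for every action this provider offers."""

    @abc.abstractmethod
    def get_executor(self, action_name):
        """Return the dedicated executor callable for the named action."""

class EchoProvider(ActionProvider):
    """Toy provider exposing a single 'echo' action."""

    def list_actions(self):
        return [{"name": "echo.say", "input": ["message"]}]

    def get_executor(self, action_name):
        assert action_name == "echo.say"
        return lambda message: message
```

An engine could then query registered providers for their catalogs on demand instead of mirroring every action into its own table, which is the sync'ing cost the proposal wants to avoid.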
The ActionProvider will return it's own list of actions via an abstract interface. This minimizes the complexity and latency in managing and sync'ing the Action table. In the DSL, we can define provider specific context/configuration separately and apply to all provider specific actions without explicitly passing as inputs. WDYT? Winson -------------- next part -------------- An HTML attachment was scrubbed... URL: From rgerganov at vmware.com Wed Dec 17 07:38:35 2014 From: rgerganov at vmware.com (Radoslav Gerganov) Date: Wed, 17 Dec 2014 09:38:35 +0200 Subject: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets In-Reply-To: References: <54900903.6090008@vmware.com> <20141216135906.GR2497@yuggoth.org> <54904D9B.2000204@vmware.com> <20141216154501.GU2497@yuggoth.org> Message-ID: <549132FB.8090308@vmware.com> I am aware of this "New Screen" but it is not useful to me. I'd like to see comments grouped by patchset, file and commented line rather than a flat view mixed with everything else. Anyway, I guess there is no one-size-fits-all solution for this and everyone has different preferences which is cool. -Rado On 12/17/14, 8:58 AM, James Polley wrote: > I was looking at the new change screen on https://review.openstack.org > today[1] and it seems to do something vaguely similar. > > Rather than saying "James polley made 4 inline comments", the contents > of the comments are shown, along with a link to the file so you can see > the context. > > Have you seen this? It seems fairly similar to what you're wanting. > > Have > [1] To activate it, go to > https://review.openstack.org/#/settings/preferences and set "Change > view" to "New Screen", then look at a change screen (such as > https://review.openstack.org/#/c/127283/) > > On Tue, Dec 16, 2014 at 4:45 PM, Jeremy Stanley > wrote: > > On 2014-12-16 17:19:55 +0200 (+0200), Radoslav Gerganov wrote: > > We don't need GoogleAppEngine if we decide that this is useful. 
We > > simply need to put the html page which renders the view on > >https://review.openstack.org. It is all javascript which talks > > asynchronously to the Gerrit backend. > > > > I am using GAE to simply illustrate the idea without having to > > spin up an entire Gerrit server. > > That makes a lot more sense--thanks for the clarification! > > > I guess I can also submit a patch to the infra project and see how > > this works onhttps://review-dev.openstack.org if you want. > > If there's a general desire from the developer community for it, > then that's probably the next step. However, ultimately this seems > like something better suited as an upstream feature request for > Gerrit (there may even already be thread-oriented improvements in > the works for the new change screen--I haven't kept up with their > progress lately). > -- > Jeremy Stanley > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From eduard.matei at cloudfounders.com Wed Dec 17 08:01:33 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Wed, 17 Dec 2014 10:01:33 +0200 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> Message-ID: Thanks, i'll have a look. Eduard On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy wrote: > Manually running the script requires a few environment settings. 
Take a > look at the README here: > > https://github.com/openstack-infra/devstack-gate > > > > Regarding cinder, I?m using this repo to run our cinder jobs (fork from > jaypipes). > > https://github.com/rasselin/os-ext-testing > > > > Note that this solution doesn?t use the Jenkins gerrit trigger pluggin, > but zuul. > > > > There?s a sample job for cinder here. It?s in Jenkins Job Builder format. > > > https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample > > > > You can ask more questions in IRC freenode #openstack-cinder. (irc# > asselin) > > > > Ramy > > > > *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com] > *Sent:* Tuesday, December 16, 2014 12:41 AM > *To:* Bailey, Darragh > *Cc:* OpenStack Development Mailing List (not for usage questions); > OpenStack > *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help > setting up CI > > > > Hi, > > > > Can someone point me to some working documentation on how to setup third > party CI? (joinfu's instructions don't seem to work, and manually running > devstack-gate scripts fails: > > Running gate_hook > > Job timeout set to: 163 minutes > > timeout: failed to run command ?/opt/stack/new/devstack-gate/devstack-vm-gate.sh?: No such file or directory > > ERROR: the main setup script run by this job failed - exit code: 127 > > please look at the relevant log files to determine the root cause > > Cleaning up host > > ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz) > > Build step 'Execute shell' marked build as failure. > > > > I have a working Jenkins slave with devstack and our internal libraries, i > have Gerrit Trigger Plugin working and triggering on patches created, i > just need the actual job contents so that it can get to comment with the > test results. 
> > > > Thanks, > > > > Eduard > > > > On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei < > eduard.matei at cloudfounders.com> wrote: > > Hi Darragh, thanks for your input > > > > I double checked the job settings and fixed it: > > - build triggers is set to Gerrit event > > - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin > and tested separately) > > - Trigger on: Patchset Created > > - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: > Type: Path, Pattern: ** (was Type Plain on both) > > Now the job is triggered by commit on openstack-dev/sandbox :) > > > > Regarding the Query and Trigger Gerrit Patches, i found my patch using > query: status:open project:openstack-dev/sandbox change:139585 and i can > trigger it manually and it executes the job. > > > > But i still have the problem: what should the job do? It doesn't actually > do anything, it doesn't run tests or comment on the patch. > > Do you have an example of job? > > > > Thanks, > > Eduard > > > > On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh wrote: > > Hi Eduard, > > > I would check the trigger settings in the job, particularly which "type" > of pattern matching is being used for the branches. Found it tends to be > the spot that catches most people out when configuring jobs with the > Gerrit Trigger plugin. If you're looking to trigger against all branches > then you would want "Type: Path" and "Pattern: **" appearing in the UI. > > If you have sufficient access using the 'Query and Trigger Gerrit > Patches' page accessible from the main view will make it easier to > confirm that your Jenkins instance can actually see changes in gerrit > for the given project (which should mean that it can see the > corresponding events as well). Can also use the same page to re-trigger > for PatchsetCreated events to see if you've set the patterns on the job > correctly. 
> > Regards, > Darragh Bailey > > "Nothing is foolproof to a sufficiently talented fool" - Unknown > > On 08/12/14 14:33, Eduard Matei wrote: > > Resending this to dev ML as it seems i get quicker response :) > > > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > > Patchset Created", chose as server the configured Gerrit server that > > was previously tested, then added the project openstack-dev/sandbox > > and saved. > > I made a change on dev sandbox repo but couldn't trigger my job. > > > > Any ideas? > > > > Thanks, > > Eduard > > > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > > > wrote: > > > > Hello everyone, > > > > Thanks to the latest changes to the creation of service accounts > > process we're one step closer to setting up our own CI platform > > for Cinder. > > > > So far we've got: > > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > > our storage solution) > > - Service account configured and tested (can manually connect to > > review.openstack.org and get events > > and publish comments) > > > > Next step would be to set up a job to do the actual testing, this > > is where we're stuck. > > Can someone please point us to a clear example on how a job should > > look like (preferably for testing Cinder on Kilo)? Most links > > we've found are broken, or tools/scripts are no longer working. > > Also, we cannot change the Jenkins master too much (it's owned by > > Ops team and they need a list of tools/scripts to review before > > installing/running so we're not allowed to experiment). 
> > > > Thanks, > > Eduard > > > > -- > > > > *Eduard Biceri Matei, Senior Software Developer* > > www.cloudfounders.com > > | eduard.matei at cloudfounders.com > > > > > > > > > > *CloudFounders, The Private Cloud Software Company* > > > > Disclaimer: > > This email and any files transmitted with it are confidential and > > intended solely for the use of the individual or entity to whom > > they are addressed.If you are not the named addressee or an > > employee or agent responsible for delivering this message to the > > named addressee, you are hereby notified that you are not > > authorized to read, print, retain, copy or disseminate this > > message or any part of it. If you have received this email in > > error we request you to notify us by reply e-mail and to delete > > all electronic files of the message. If you are not the intended > > recipient you are notified that disclosing, copying, distributing > > or taking any action in reliance on the contents of this > > information is strictly prohibited. E-mail transmission cannot be > > guaranteed to be secure or error free as information could be > > intercepted, corrupted, lost, destroyed, arrive late or > > incomplete, or contain viruses. The sender therefore does not > > accept liability for any errors or omissions in the content of > > this message, and shall have no liability for any loss or damage > > suffered by the user, which arise as a result of e-mail transmission. 
> > > > > > > > > > -- > > *Eduard Biceri Matei, Senior Software Developer* > > www.cloudfounders.com > > | eduard.matei at cloudfounders.com > > > > > > > > > > *CloudFounders, The Private Cloud Software Company* > > > > Disclaimer: > > This email and any files transmitted with it are confidential and > > intended solely for the use of the individual or entity to whom they > > are addressed.If you are not the named addressee or an employee or > > agent responsible for delivering this message to the named addressee, > > you are hereby notified that you are not authorized to read, print, > > retain, copy or disseminate this message or any part of it. If you > > have received this email in error we request you to notify us by reply > > e-mail and to delete all electronic files of the message. If you are > > not the intended recipient you are notified that disclosing, copying, > > distributing or taking any action in reliance on the contents of this > > information is strictly prohibited. E-mail transmission cannot be > > guaranteed to be secure or error free as information could be > > intercepted, corrupted, lost, destroyed, arrive late or incomplete, or > > contain viruses. The sender therefore does not accept liability for > > any errors or omissions in the content of this message, and shall have > > no liability for any loss or damage suffered by the user, which arise > > as a result of e-mail transmission. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From kprad1 at yahoo.com Wed Dec 17 08:15:34 2014 From: kprad1 at yahoo.com (Padmanabhan Krishnan) Date: Wed, 17 Dec 2014 08:15:34 +0000 (UTC) Subject: [openstack-dev] [Neutron][Nova][DHCP] Use case of enable_dhcp option when creating a network Message-ID: <1517147686.131964.1418804134592.JavaMail.yahoo at jws106138.mail.bf1.yahoo.com> Hello, I have a question regarding the enable_dhcp option when creating a network. When a VM is attached to a network where enable_dhcp is False, I understand that the DHCP namespace is not created for the network and the VM does not get any IP address after it boots up and sends a DHCP Discover. But I also see that the Neutron port is filled with the fixed IP value from the network pool even though there's no DHCP associated with the subnet. So, for such VM's, does one need to statically configure the IP address with whatever Neutron has allocated from the pool? What exactly is the use case of the above? I do understand that for providing public network access to VM's, an external network is generally created with the enable-dhcp option set to False. Is it only for this purpose? I was thinking of a case of external/provider DHCP servers from where VM's can get their IP addresses and when one does not want to use L3 agent/DVR.
In such cases, one may want to disable DHCP when creating networks. Isn't this a use-case? Appreciate any response or corrections with my above understanding. Thanks, Paddu -------------- next part -------------- An HTML attachment was scrubbed... URL: From pasquale.porreca at dektech.com.au Wed Dec 17 08:37:38 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Wed, 17 Dec 2014 09:37:38 +0100 Subject: [openstack-dev] [Neutron][Nova][DHCP] Use case of enable_dhcp option when creating a network In-Reply-To: <1517147686.131964.1418804134592.JavaMail.yahoo at jws106138.mail.bf1.yahoo.com> References: <1517147686.131964.1418804134592.JavaMail.yahoo at jws106138.mail.bf1.yahoo.com> Message-ID: <549140D2.2010503 at dektech.com.au> Just yesterday I asked a similar question on the ML; this is the answer I got: In Neutron, IP address management and distribution are separated concepts. IP addresses are assigned to ports even when DHCP is disabled. That IP address is indeed used to configure anti-spoofing rules and security groups. http://lists.openstack.org/pipermail/openstack-dev/2014-December/053069.html On 12/17/14 09:15, Padmanabhan Krishnan wrote: > Hello, > I have a question regarding the enable_dhcp option when creating a > network. > > When a VM is attached to a network where enable_dhcp is False, I > understand that the DHCP namespace is not created for the network and > the VM does not get any IP address after it boots up and sends a DHCP > Discover. > But, I also see that the Neutron port is filled with the fixed IP > value from the network pool even though there's no DHCP associated > with the subnet. > So, for such VM's, does one need to statically configure the IP > address with whatever Neutron has allocated from the pool? > > What exactly is the use case of the above? > > I do understand that for providing public network access to VM's, an > external network is generally created with enable-dhcp option set to > False. Is it only for this purpose?
> > I was thinking of a case of external/provider DHCP servers from where > VM's can get their IP addresses and when one does not want to use L3 > agent/DVR. In such cases, one may want to disable DHCP when creating > networks. Isn't this a use-case? > > Appreciate any response or corrections with my above understanding. > > Thanks, > Paddu > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack-dev at storpool.com Wed Dec 17 08:44:06 2014 From: openstack-dev at storpool.com (Peter Penchev) Date: Wed, 17 Dec 2014 10:44:06 +0200 Subject: [openstack-dev] [nova] How should libvirt pools work with distributed storage drivers? In-Reply-To: References: <2146525823.5845403.1417033570759.JavaMail.zimbra@redhat.com> <202973990.8208444.1417463547361.JavaMail.zimbra@redhat.com> Message-ID: On Mon, Dec 1, 2014 at 9:52 PM, Solly Ross wrote: > Hi Peter, > >> Right. So just one more question now - seeing as the plan is to >> deprecate non-libvirt-pool drivers in Kilo and then drop them entirely >> in L, would it still make sense for me to submit a spec today for a >> driver that would keep the images in our own proprietary distributed >> storage format? It would certainly seem to make sense for us and for >> our customers right now and in the upcoming months - a bird in the >> hand and so on; and we would certainly prefer it to be upstreamed in >> OpenStack, since subclassing imagebackend.Backend is a bit difficult >> right now without modifying the installed imagebackend.py (and of >> course I meant Backend when I spoke about subclassing DiskImage in my >> previous message). 
So is there any chance that such a spec would be >> accepted for Kilo? > > It doesn't hurt to try submitting a spec. On the one hand, the driver > would "come into life" (so to speak) as deprecated, which seems kind > of silly (if there's no libvirt support at all for your driver, you > couldn't just subclass the libvirt storage pool backend). On the > other hand, it's preferable to have code be upstream, and since you > don't have a libvirt storage driver yet, the only way to have support > is to use a legacy-style driver. Thanks for the understanding! > Personally, I wouldn't mind having a new legacy driver as long as > you're committed to getting your storage driver into libvirt, so that > we don't have to do extra work when the time comes to remove the legacy > drivers. Yes, that's very reasonable, and we are indeed committed to getting our work into libvirt proper. > If you do end up submitting a spec, keep in mind is that, for ease of > migration to the libvirt storage pool driver, you should have volume names of > '{instance_uuid}_{disk_name}' (similarly to the way that LVM does it). > > If you have a spec or some code, I'd be happy to give some feedback, > if you'd like (post it on Gerrit as WIP, or something like that). Well, I might have mentioned this earlier, seeing as the Kilo-1 spec deadline is almost upon us, but the spec itself is at https://review.openstack.org/137830/ - it would be great if you could spare a minute to look at it. Thanks in advance! G'luck, Peter From flavio at redhat.com Wed Dec 17 08:49:08 2014 From: flavio at redhat.com (Flavio Percoco) Date: Wed, 17 Dec 2014 09:49:08 +0100 Subject: [openstack-dev] Not able to locate tests for glanceclient In-Reply-To: References: Message-ID: <20141217084908.GN28705@redhat.com> On 17/12/14 10:47 +0530, Abhishek Talwar/HYD/TCS wrote: >Hi All, > >I am currently working on stable Juno release for a fix on glanceclient, but I >am not able to locate tests in glanceclient. 
So if you can help me locate it as >I need to add a unit test. >The current path for glanceclient is /usr/local/lib/python2.7/dist-packages/ >glanceclient. This is because glanceclient tests live outside the glanceclient package. https://github.com/openstack/python-glanceclient/tree/0cdc947bf998c7f00a23c11bf1be4bc5929b7803/tests Cheers, Flavio > > >Thanks and Regards >Abhishek Talwar > >=====-----=====-----===== >Notice: The information contained in this e-mail >message and/or attachments to it may contain >confidential or privileged information. If you are >not the intended recipient, any dissemination, use, >review, distribution, printing or copying of the >information contained in this e-mail message >and/or attachments to it are strictly prohibited. If >you have received this communication in error, >please notify us by reply e-mail or telephone and >immediately and permanently delete the message >and any attachments. Thank you > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From dzimine at stackstorm.com Wed Dec 17 09:56:01 2014 From: dzimine at stackstorm.com (Dmitri Zimine) Date: Wed, 17 Dec 2014 01:56:01 -0800 Subject: [openstack-dev] [Mistral] RFC - Action spec CLI In-Reply-To: References: Message-ID: The problem with the existing syntax is that it is not defined: there are no docs on inlining complex variables [*], and we haven't tested it for anything more than the simplest cases: https://github.com/stackforge/mistral/blob/master/mistral/tests/unit/workbook/v2/test_dsl_specs_v2.py#L114. I will be surprised if anyone figured out how to provide a complex object as an inline parameter.
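As an editor's aside, that fragility is easy to demonstrate: a flat key=value regex has no notion of bracket nesting, so the spaces inside a JSON array shred the value, while even a toy depth-tracking splitter keeps it whole. A throwaway illustration only, not proposed parser code:

```python
import re

line = 'wf2 is_true=true object_list=[1, null, "str"]'

# Naive approach: key=value pairs delimited by whitespace.
naive = re.findall(r'(\w+)=(\S+)', line)
# The array value is cut at its first internal space:
assert naive == [('is_true', 'true'), ('object_list', '[1,')]

def split_top_level(text):
    """Split on spaces, but only at bracket-nesting depth 0.
    (A real lexer would also have to track quoted strings.)"""
    parts, buf, depth = [], [], 0
    for ch in text:
        if ch in '[{(':
            depth += 1
        elif ch in ']})':
            depth -= 1
        if ch == ' ' and depth == 0:
            if buf:
                parts.append(''.join(buf))
                buf = []
        else:
            buf.append(ch)
    if buf:
        parts.append(''.join(buf))
    return parts

assert split_top_level(line) == \
    ['wf2', 'is_true=true', 'object_list=[1, null, "str"]']
```

Even this toy version shows why "a more rigorous lexer" keeps coming up in the thread: the grammar is context-dependent, which is exactly what a single regular expression cannot express.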
Do you think regex is the right approach for parsing arbitrary key-values where the values are arbitrary JSON structures? Will it work with something like workflow: wf2 object_list=[{"url": "http://{$hostname}.example.com:8080?x=a&y={$.b}"}, 33, null, {{$.key}, [{$.value1}, {$.value2}]}] How many tests should we write to be confident we covered all cases? I share Lakshmi's concern that it is fragile and maintaining it reliably is difficult. But back to the original question, it's about requirements, not implementation. My preference is "option 3", "make it work as is now". But if it's too hard I am ok to compromise. Then option 2, as it resembles option 3 and the YAML/JSON conversion makes complete sense. At the expense of quoting the objects. Slight change, not significant. Option 1 introduces a new syntax; although familiar to CLI users, I think it's a bit out of place in a YAML definition. Option 4 is no go :) DZ. [*] "there are no docs on this" - I subscribe to fixing this. On Dec 16, 2014, at 9:48 PM, Renat Akhmerov wrote: > Ok, I would prefer to spend some time and think how to improve the existing reg exp that we use to parse key-value pairs. We definitely can't just drop support of this syntax and can't even change it significantly since people already use it. > > Renat Akhmerov > @ Mirantis Inc. > > > >> On 17 Dec 2014, at 07:28, Lakshmi Kannan wrote: >> >> Apologies for the long email. If this fancy email doesn't render correctly for you, please read it here: https://gist.github.com/lakshmi-kannan/cf953f66a397b153254a >> >> I was looking into fixing bug: https://bugs.launchpad.net/mistral/+bug/1401039. My idea was to use shlex to parse the string. This actually would work for anything that is supplied in the linux shell syntax. Problem is this craps out when we want to support complex data structures such as arrays and dicts as arguments. I did not think we supported a syntax to take in complex data structures in a one line format.
Consider for example: >> >> task7: >> for-each: >> vm_info: $.vms >> workflow: wf2 is_true=true object_list=[1, null, "str"] >> on-complete: >> - task9 >> - task10 >> Specifically >> >> wf2 is_true=true object_list=[1, null, "str"] >> shlex will not handle this correctly because object_list is an array. Same problem with dicts. >> >> There are 4 potential options here: >> >> Option 1 >> >> 1) Provide a spec for specifying lists and dicts like so: >> list_arg=1,2,3 and dict_arg=h1:h2,h3:h4,h5:h6 >> >> shlex will handle this fine but there needs to be code that converts the argument values to the appropriate data types based on a schema. (ActionSpec should have a parameter schema, probably in jsonschema). This is doable. >> >> wf2 is_true=true object_list="1,null,"str"" >> Option 2 >> >> 2) Allow JSON strings to be used as arguments so we can json.loads them (if that fails, use them as a simple string). For example, with this approach, the line becomes >> >> wf2 is_true=true object_list="[1, null, "str"]" >> This would pretty much resemble http://stackoverflow.com/questions/7625786/type-dict-in-argparse-add-argument >> >> Option 3 >> >> 3) Keep the spec as such and try to parse it. I have no idea how we can do this reliably. We need a more rigorous lexer. This syntax doesn't translate well when we want to build a CLI. Linux shells cannot support this syntax natively. This means people would have to use shlex syntax and a translation would need to happen in the CLI layer. This will lead to inconsistency: the CLI would use one syntax and the action input line in the workflow definition another. We should try and avoid this. >> >> Option 4 >> >> 4) Completely drop support for this fancy one-line syntax in workflows. This is probably the least desired option. >> >> My preference >> >> Looking at the options, I prefer option 2 / option 1 / option 4 / option 3, in that order. >> >> With some documentation, we can tell people why this is hard.
People will also grok because they are already familiar with CLI limitations in linux. >> >> Thoughts? >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From bigzhao at cisco.com Wed Dec 17 10:34:18 2014 From: bigzhao at cisco.com (Accela Zhao (bigzhao)) Date: Wed, 17 Dec 2014 10:34:18 +0000 Subject: [openstack-dev] [Nova] RemoteError: Remote error: OperationalError (OperationalError) (1048, "Column 'instance_uuid' cannot be null") In-Reply-To: Message-ID: I have formatted the messy clutter in the middle of your trace log. Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 400, in _object_dispatch return getattr(target, method)(context, *args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 204, in wrapper return fn(self, ctxt, *args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 500, in save columns_to_join=_expected_cols(expected_attrs)) File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 746, in instance_update_and_get_original columns_to_join=columns_to_join) File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 143, in wrapper return f(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2289, in instance_update_and_get_original columns_to_join=columns_to_join) File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2380, in _instance_update session.add(instance_ref) File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 470, in __exit__ 
self.rollback() File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__ compat.reraise(exc_type, exc_value, exc_tb) File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 467, in __exit__ self.commit() File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 377, in commit self._prepare_impl() File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 357, in _prepare_impl self.session.flush() File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1919, in flush self._flush(objects) File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2037, in _flush transaction.rollback(_capture_exception=True) File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__ compat.reraise(exc_type, exc_value, exc_tb) File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2001, in _flush flush_context.execute() File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 372, in execute rec.execute(self) File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 526, in execute uow File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 60, in save_obj mapper, table, update) File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 518, in _emit_update_statements execute(statement, params) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute return meth(self, multiparams, params) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 321, in _execute_on_connection return connection._execute_clauseelement(self, multiparams, params) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement compiled_sql, distilled_params File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context context) File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1156, in _handle_dbapi_exception util.raise_from_cause(newraise, exc_info) File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause reraise(type(exception), exception, tb=exc_tb) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context context) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute cursor.execute(statement, parameters) File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute self.errorhandler(self, exc, value) File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler raise errorclass, errorvalue OperationalError: (OperationalError) (1048, "Column 'instance_uuid' cannot be null") 'UPDATE instance_extra SET updated_at=%s, instance_uuid=%s WHERE instance_extra.id = %s' (datetime.datetime(2014, 12, 12, 9, 16, 52, 434376), None, 5L) Looks like your new instance doesn't have a uuid, which causes _allocate_network to fail. The instance uuid should have been allocated in nova/compute/api.py::_provision_instances by default. Thanks & Regards, -- Accela Zhao From: joejiang Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Friday, December 12, 2014 at 5:36 PM To: "openstack-dev at lists.openstack.org" , "" Subject: [openstack-dev] [Nova] RemoteError: Remote error: OperationalError (OperationalError) (1048, "Column 'instance_uuid' cannot be null") Hi folks, when I launch an instance using the cirros image in a new OpenStack environment (Juno release, CentOS 7 base OS), I get the following error log on the compute node. Has anybody met the same error?
---------------------------- 2014-12-12 17:16:52.481 12966 ERROR nova.compute.manager [-] [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Failed to allocate network(s) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Traceback (most recent call last): 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2190, in _build_resources 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] requested_networks, security_groups) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1683, in _build_networks_for_instance 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] requested_networks, macs, security_groups, dhcp_options) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1717, in _allocate_network 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] instance.save(expected_task_state=[None]) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 189, in wrapper 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] ctxt, self, fn.__name__, args, kwargs) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 351, in object_action 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] objmethod=objmethod, args=args, kwargs=kwargs) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 152, in call 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] retry=self.retry) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] timeout=timeout, retry=retry) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 408, in send 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] retry=retry) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 399, in _send 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] raise result 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] RemoteError: Remote error: OperationalError (OperationalError) (1048, "Column 'instance_uuid' cannot be null") 'UPDATE instance_extra SET updated_at=%s, instance_uuid=%s WHERE instance_extra.id = %s' (datetime.datetime(2014, 12, 12, 9, 16, 52, 434376), None, 5L) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 400, in _object_dispatch\n 
return getattr(target, method)(context, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 204, in wrapper\n return fn(self, ctxt, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 500, in save\n columns_to_join=_expected_cols(expected_attrs))\n', u' File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 746, in instance_update_and_get_original\n columns_to_join=columns_to_join)\n', u' File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 143, in wrapper\n return f(*args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2289, in instance_update_and_get_original\n columns_to_join=columns_to_join)\n', u' File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2380, in _instance_update\n session.add(instance_ref)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 470, in __exit__\n self.rollback()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__\n compat.reraise(exc_type, exc_value, exc_tb)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 467, in __exit__\n self.commit()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 377, in commit\n self._prepare_impl()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 357, in _prepare_impl\n self.session.flush()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1919, in flush\n self._flush(objects)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2037, in _flush\n transaction.rollback(_capture_exception=True)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__\n compat.reraise(exc_type, exc_value, exc_tb)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", 
line 2001, in _flush\n flush_context.execute()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 372, in execute\n rec.execute(self)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 526, in execute\n uow\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 60, in save_obj\n mapper, table, update)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 518, in _emit_update_statements\n execute(statement, params)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute\n return meth(self, multiparams, params)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 321, in _execute_on_connection\n return connection._execute_clauseelement(self, multiparams, params)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement\n compiled_sql, distilled_params\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context\n context)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1156, in _handle_dbapi_exception\n util.raise_from_cause(newraise, exc_info)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause\n reraise(type(exception), exception, tb=exc_tb)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context\n context)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute\n cursor.execute(statement, parameters)\n', u' File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute\n self.errorhandler(self, exc, value)\n', u' File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler\n raise errorclass, errorvalue\n', u'OperationalError: 
(OperationalError) (1048, "Column \'instance_uuid\' cannot be null") \'UPDATE instance_extra SET updated_at=%s, instance_uuid=%s WHERE instance_extra.id = %s\' (datetime.datetime(2014, 12, 12, 9, 16, 52, 434376), None, 5L)\n']. 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] 2014-12-12 17:16:52.515 12966 INFO nova.scheduler.client.report [-] Compute_service record updated for ('computenode.domain.com') 2014-12-12 17:16:52.517 12966 ERROR nova.compute.manager [-] [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Build of instance 67e215e0-2193-439d-89c4-be8c378df78d aborted: Failed to allocate the network(s), not rescheduling. 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Traceback (most recent call last): 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _do_build_and_run_instance 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] filter_properties) 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2129, in _build_and_run_instance 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] 'create.error', fault=e) 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__ 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] six.reraise(self.type_, self.value, self.tb) 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2102, in _build_and_run_instance 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] block_device_mapping) as resources: 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__ 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] return self.gen.next() 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2205, in _build_resources 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] reason=msg) 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] BuildAbortException: Build of instance 67e215e0-2193-439d-89c4-be8c378df78d aborted: Failed to allocate the network(s), not rescheduling. 
2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] 2014-12-12 17:16:52.566 12966 INFO nova.network.neutronv2.api [-] [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Unable to reset device ID for port None 2014-12-12 17:17:04.977 12966 WARNING nova.compute.manager [req-f9b96041-ff4c-4b3c-8a0e-bdedf79193d6 None] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor From vkozhukalov at mirantis.com Wed Dec 17 11:08:41 2014 From: vkozhukalov at mirantis.com (Vladimir Kozhukalov) Date: Wed, 17 Dec 2014 15:08:41 +0400 Subject: [openstack-dev] [Fuel] Image based provisioning In-Reply-To: References: Message-ID: In the case of image-based provisioning we need either to update the image or to run "yum update/apt-get upgrade" right after first boot (the second option partly devalues the advantages of the image-based scheme). Besides, we are planning to re-implement the image build script so as to be able to build images on the master node (but unfortunately 6.1 is not a realistic estimate for that). Vladimir Kozhukalov On Wed, Dec 17, 2014 at 5:03 AM, Mike Scherbakov wrote: > > Dmitry, > as part of the 6.1 roadmap, we are going to work on the patching feature. > There are two types of workflow to consider: > - patch existing environment (already deployed nodes, aka "target" nodes) > - ensure that new nodes, added to the existing and already patched envs, > will install updated packages too. > > In case of anaconda/preseed install, we can simply update the repo on the master > node and run createrepo/etc. What do we do in the case of an image? Will we need a > separate repo alongside the main one, an "updates" repo - and do > post-provisioning "yum update" to fetch all patched packages? > > On Tue, Dec 16, 2014 at 11:09 PM, Andrey Danin > wrote: > >> Adding Mellanox team explicitly. >> >> Gil, Nurit, Aviram, can you confirm that you tested that feature? It can
You just need to enable the Experimental >> mode (please, see the documentation for instructions). >> >> On Tuesday, December 16, 2014, Dmitry Pyzhov >> wrote: >> >>> Guys, >>> >>> we are about to enable image based provisioning in our master by >>> default. I'm trying to figure out requirement for this change. As far as I >>> know, it was not tested on scale lab. Is it true? Have we ever run full >>> system tests cycle with this option? >>> >>> Do we have any other pre-requirements? >>> >> >> >> -- >> Andrey Danin >> adanin at mirantis.com >> skype: gcon.monolake >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > -- > Mike Scherbakov > #mihgen > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean at dague.net Wed Dec 17 11:14:51 2014 From: sean at dague.net (Sean Dague) Date: Wed, 17 Dec 2014 06:14:51 -0500 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5490BEC3.40805@nemebean.com> References: <5486DF7F.7080706@dague.net> <51327974-0351-48A4-B1F5-A0185505BF7B@doughellmann.com> <54870FCC.3010006@dague.net> <5490BEC3.40805@nemebean.com> Message-ID: <549165AB.6080008@dague.net> On 12/16/2014 06:22 PM, Ben Nemec wrote: > Some thoughts inline. I'll go ahead and push a change to remove the > things everyone seems to agree on. > > On 12/09/2014 09:05 AM, Sean Dague wrote: >> On 12/09/2014 09:11 AM, Doug Hellmann wrote: >>> >>> On Dec 9, 2014, at 6:39 AM, Sean Dague wrote: >>> >>>> I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. >>>> >>>> 1 - the entire H8* group. 
This doesn't function on python code, it >>>> functions on the git commit message, which makes it tough to run locally. It >>>> would also prevent us from skipping test reruns on commit >>>> message changes (something we could do after the next gerrit update). >>>> >>>> 2 - the entire H3* group - because of this - >>>> https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm >>>> >>>> A look at the H3* code shows that it's terribly complicated, and is >>>> often full of bugs (a few bit us last week). I'd rather just delete it >>>> and move on. >>> >>> I don't have the hacking rules memorized. Could you describe them briefly? >> >> Sure, the H8* group is git commit messages. It's checking for line >> length in the commit message. >> >> - [H802] First, provide a brief summary of 50 characters or less. Summaries >> of greater than 72 characters will be rejected by the gate. >> >> - [H801] The first line of the commit message should provide an accurate >> description of the change, not just a reference to a bug or >> blueprint. >> >> >> H802 is mechanically enforced (though not the 50 characters part, so the >> code isn't the same as the rule). >> >> H801 is enforced by a regex that looks to see if the first line is a >> launchpad bug and fails on it. You can't mechanically enforce that >> English provides an accurate description. > > +1. It would be nice to provide automatic notification to people if > they submit something with an absurdly long commit message, but I agree > that hacking isn't the place to do that.
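[Editorial aside] The mechanical part of a check like H802 is just a length test on the first line of plain text. A minimal sketch (hypothetical function name and return values, not hacking's actual implementation, using the 50/72 limits quoted above):

```python
def check_commit_summary(commit_message, soft_limit=50, hard_limit=72):
    """H802-style sketch: inspect only the first line of the message."""
    lines = commit_message.splitlines()
    summary = lines[0] if lines else ''
    if len(summary) > hard_limit:
        return 'error: summary longer than %d characters' % hard_limit
    if len(summary) > soft_limit:
        return 'warning: summary longer than %d characters' % soft_limit
    return 'ok'

print(check_commit_summary('Drop the H8* commit message checks\n\nDetails...'))
# ok
```

The awkward part discussed above is not this logic but that the input is the git commit message rather than the Python source flake8 normally sees.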
> >> >> H3* are all the module import rules: >> >> Imports >> ------- >> - [H302] Do not import objects, only modules (*) >> - [H301] Do not import more than one module per line (*) >> - [H303] Do not use wildcard ``*`` import (*) >> - [H304] Do not make relative imports >> - Order your imports by the full module path >> - [H305 H306 H307] Organize your imports according to the `Import order >> template`_ and `Real-world Import Order Examples`_ below. >> >> I think these remain reasonable guidelines, but H302 is exceptionally >> tricky to get right, and we keep not getting it right. >> >> H305-307 are actually impossible to get right. Things come in and out of >> the stdlib in Python all the time. > > tl;dr: I'd like to remove H302, H305, and H307 and leave the rest. > Reasons below. > > +1 to H305 and H307. I'm going to have to admit defeat and accept that > I can't make them work in a sane fashion. > > H306 is different though - that one is only checking alphabetical order > and only works on the text of the import so it doesn't have the issues > around having modules installed or mis-categorizing. AFAIK it has never > actually caused any problems either (the H306 failure in > https://review.openstack.org/#/c/140168/2/nova/tests/unit/test_fixtures.py > is correct - nova.tests.fixtures should come before > nova.tests.unit.conf_fixture). The issue I originally had was in nova.tests.fixtures, where it resolved fixtures as a relative import instead of an absolute one, and exploded. It's not reproducing now though. > As far as H301-304 go, only H302 actually depends on the is_module stuff. > The others are all text-based too so I think we should leave them. H302 > I'm kind of indifferent on - we hit an edge case with the oslo namespace > thing which is now fixed, but if removing that allows us to not install > requirements.txt to run pep8 I think I'm onboard with removing it too. H304 needs is_import_exception.
is_module and is_import_exception means we have to import all the code, which means the dependencies for pep8 are *all* of requirements.txt, all of test-requirements.txt, and any optional requirements (not listed in those files). If the content isn't in the venv, the check passes. So adding / removing an optional requirement can change the flake8 test results. Evaluating the code is something that we should avoid. -Sean -- Sean Dague http://dague.net From eli at mirantis.com Wed Dec 17 11:33:40 2014 From: eli at mirantis.com (Evgeniy L) Date: Wed, 17 Dec 2014 15:33:40 +0400 Subject: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format In-Reply-To: References: Message-ID: Vitaly, what do you think about that? On Fri, Dec 12, 2014 at 5:58 PM, Evgeniy L wrote: > > Hi, > > I don't agree with many of your statements, but I would like to > continue the discussion about a really important topic, i.e. the UI flow. My > suggestion was to add groups: for a plugin, in metadata.yaml the plugin > developer can have a description of the groups which it belongs to: > > groups: > - id: storage > subgroup: > - id: cinder > > With this information we can show a new option on the UI (wizard); > if the option is selected, it means that the plugin is enabled, and if the plugin belongs > to several groups, we can use an OR statement. > > The main point is that for environment creation we must specify the > ids of the plugins. Yet another reason for that is plugin multiversioning: > we must know exactly which plugin with which version > is used for an environment, and I don't see how "conditions" can help > us with it. > > Thanks, > >> >>> >>> > On Wed, Dec 10, 2014 at 8:23 PM, Vitaly Kramskikh > wrote: >> >> >> >> 2014-12-10 19:31 GMT+03:00 Evgeniy L : >> >>> >>> >>> On Wed, Dec 10, 2014 at 6:50 PM, Vitaly Kramskikh < >>> vkramskikh at mirantis.com> wrote: >>> >>>> >>>> >>>> 2014-12-10 16:57 GMT+03:00 Evgeniy L : >>>>> Hi, >>>>> >>>>> First let me describe our plans for the nearest release.
We want >>>>> to deliver >>>>> role as a simple plugin, meaning that a plugin developer can define his >>>>> own role >>>>> with yaml, and it should also work fine with our current approach where >>>>> the user can >>>>> define several fields on the settings tab. >>>>> >>>>> Also I would like to mention another thing which we should probably >>>>> discuss >>>>> in a separate thread: how plugins should be implemented. We have two >>>>> types >>>>> of plugins, simple and complicated. The definition of simple - I can >>>>> do everything >>>>> I need with yaml; the definition of complicated - I probably have to >>>>> write some >>>>> python code. It doesn't mean that this python code should do absolutely >>>>> everything it wants, but it means we should implement a stable, >>>>> documented >>>>> interface where the plugin is connected to the core. >>>>> >>>>> Now let's talk about the UI flow. Our current problem is how to get the >>>>> information >>>>> about whether a plugin is used in the environment or not; this information is >>>>> required for >>>>> the backend which generates the appropriate tasks for the task executor, and this >>>>> information can also be used in the future if we decide to implement a >>>>> plugin deletion >>>>> mechanism. >>>>> >>>>> I didn't come up with any new solution; as before, we have two >>>>> options to >>>>> solve the problem: >>>>> >>>>> # 1 >>>>> >>>>> Use the conditional language which is currently used on the UI; it will look >>>>> like >>>>> Vitaly described in the example [1]. >>>>> The plugin developer should: >>>>> >>>>> 1. describe at least one element for the UI, which he will be able to use >>>>> in a task >>>>> >>>>> 2. add a condition which is written in our own programming language >>>>> >>>>> Example of the condition for the LBaaS plugin: >>>>> >>>>> condition: settings:lbaas.metadata.enabled == true >>>>> >>>>> 3.
add condition to metadata.yaml a condition which defines if plugin >>>>> is enabled >>>>> >>>>> is_enabled: settings:lbaas.metadata.enabled == true >>>>> >>>>> This approach has good flexibility, but also it has problems: >>>>> >>>>> a. It's complicated and not intuitive for plugin developer. >>>>> >>>> It is less complicated than python code >>>> >>> >>> I'm not sure why are you talking about python code here, my point >>> is we should not force developer to use this conditions in any language. >>> >>> But that's how current plugin-like stuff works. There are various tasks >> which are run only if some checkboxes are set, so stuff like Ceph and >> vCenter will need conditions to describe tasks. >> >>> Anyway I don't agree with the statement there are more people who know >>> python than "fuel ui conditional language". >>> >>> >>>> b. It doesn't cover case when the user installs 3rd party plugin >>>>> which doesn't have any conditions (because of # a) and >>>>> user doesn't have a way to disable it for environment if it >>>>> breaks his configuration. >>>>> >>>> If plugin doesn't have conditions for tasks, then it has invalid >>>> metadata. >>>> >>> >>> Yep, and it's a problem of the platform, which provides a bad interface. >>> >> Why is it bad? It plugin writer doesn't provide plugin name or version, >> then metadata is invalid also. It is plugin writer's fault that he didn't >> write metadata properly. >> >>> >>> >>>> >>>>> # 2 >>>>> >>>>> As we discussed from the very beginning after user selects a release >>>>> he can >>>>> choose a set of plugins which he wants to be enabled for environment. >>>>> After that we can say that plugin is enabled for the environment and >>>>> we send >>>>> tasks related to this plugin to task executor. >>>>> >>>>> >> My approach also allows to eliminate "enableness" of plugins which >>>>> will cause UX issues and issues like you described above. vCenter and Ceph >>>>> also don't have "enabled" state. 
vCenter has hypervisor and storage, Ceph >>>>> provides backends for Cinder and Glance which can be used simultaneously or >>>>> only one of them can be used. >>>>> >>>>> Both of described plugins have enabled/disabled state, vCenter is >>>>> enabled >>>>> when vCenter is selected as hypervisor. Ceph is enabled when it's >>>>> selected >>>>> as a backend for Cinder or Glance. >>>>> >>>> Nope, Ceph for Volumes can be used without Ceph for Images. Both of >>>> these plugins can also have some granular tasks which are enabled by >>>> various checkboxes (like VMware vCenter for volumes). How would you >>>> determine whether tasks which installs VMware vCenter for volumes should >>>> run? >>>> >>> >>> Why "nope"? I have "Cinder OR Glance". >>> >> Oh, I missed it. So there are 2 checkboxes, how would you determine >> "enableness"? >> >>> It can be easily handled in deployment script. >>> >> I don't know much about the status of granular deployment blueprint, but >> AFAIK that's what we are going to get rid of. >> >>> >>> >>>>> If you don't like the idea of having Ceph/vCenter checkboxes on the >>>>> first page, >>>>> I can suggest as an idea (research is required) to define groups like >>>>> Storage Backend, >>>>> Network Manager and we will allow plugin developer to embed his option >>>>> in radiobutton >>>>> field on wizard pages. But plugin developer should not describe >>>>> conditions, he should >>>>> just write that his plugin is a Storage Backend, Hypervisor or new >>>>> Network Manager. >>>>> And the plugins e.g. Zabbix, Nagios, which don't belong to any of this >>>>> groups >>>>> should be shown as checkboxes on the first page of the wizard. >>>>> >>>> Why don't you just ditch "enableness" of plugins and get rid of this >>>> complex stuff? Can you explain why do you need to know if plugin is >>>> "enabled"? Let me summarize my opinion on this: >>>> >>> >>> I described why we need it many times. 
Also it looks like you skipped another option, and I would like to see some more information on why you don't like it and why it's bad from a UX standpoint. >>> >> Yes, I skipped it. You said "research is required", so please do it, write a proposal, and then we will compare it with the condition approach. You still don't have your proposal, so there is nothing to compare and discuss. At first glance it seems complex and restrictive. >> >>>> - You don't need to know whether a plugin is enabled or not. You need to know what tasks should be run and whether the plugin is removable (anything else?). These conditions can be described by the DSL. >>>> >>> I do need to know if a plugin is enabled to figure out if it's removable; in fact those are the same things. >>> >> So there is nothing else you need "enableness" for, right? If you "described why we need it many times", I think you need to do it one more time (in the form of a list). If we need "enableness" just to determine whether the plugin is removable, then that is not a reason to ruin our UX. >> >>>> - Explicitly asking the user to enable plugins for a new environment should be considered a last-resort solution because it significantly impairs our UX for inexperienced users. Just imagine: a new user who barely knows about OpenStack chooses a name for the environment, an OS release, and then he needs to choose plugins. Really? >>>> >>> I really think that it's absolutely ok to show an LBaaS checkbox to the user who found the plugin, downloaded it on the master and installed it with the CLI. >>> >>> And right now this user has to go to the settings tab and try to find this checkbox; also he may not find it, for example because of an incompatible release version, and it's clearly a bad UX.
>>> >> I like how it is done in modern browsers - after upgrade of master node >> there is notification about incompatible plugins, and in list of plugins >> there is a message that plugin is incompatible. We need to design how we >> will handle it. Currently it is definitely a bad UX because nothing is done >> for this. >> >>> My proposal for "complex" plugin interface: there should be python >>>> classes with exactly the same fields from yaml files: plugin name, version, >>>> etc. But condition for cluster deletion and for tasks which are written in >>>> DSL in case of "simple" yaml config should become methods which plugin >>>> writer can make as complex as he wants. >>>> >>> Why do you want to use python to define plugin name, version etc? It's a >>> static data which are >>> used for installation, I don't think that in fuel client (or some other >>> installation tool) we want >>> to unpack the plugin and import this module to get the information which >>> is required for installation. >>> >> It is just a proposal in which I try to solve problems which you see in >> my approach. If you want a "complex" interface with arbitrary python code, >> that's how I see it. All fields are the same here, the approach is the >> same, just conditions are in python. And YAML config can be converted to >> this class, and all other code won't need to handle 2 different interfaces >> for plugins. >> >>> >>>>> [1] >>>>> https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf >>>>> >>>>> On Sun, Nov 30, 2014 at 3:12 PM, Vitaly Kramskikh < >>>>> vkramskikh at mirantis.com> wrote: >>>>> >>>>>> >>>>>> >>>>>> 2014-11-28 23:20 GMT+04:00 Dmitriy Shulyak : >>>>>> >>>>>>> >>>>>>>> - environment_config.yaml should contain exact config which >>>>>>>> will be mixed into cluster_attributes. No need to implicitly generate any >>>>>>>> controls like it is done now. 
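The "complex interface" proposal above (plain data fields mirroring the YAML, with the removability condition turned into a method) might look roughly like this; every class, field, and method name here is invented for illustration and is not actual Fuel code:

```python
# Sketch of the proposed "complex" plugin interface: the same fields as the
# YAML metadata, but the removability condition becomes a Python method the
# plugin writer can make as complex as needed. All names are illustrative.

class PluginBase:
    name = None
    version = None

    def is_removable(self, settings):
        """Override with arbitrary logic; default: always removable."""
        return True

class CephPlugin(PluginBase):
    name = "ceph"
    version = "1.0.0"

    def is_removable(self, settings):
        # Mirrors the DSL example from the thread: removable only when
        # Ceph backs neither volumes nor images.
        storage = settings["storage"]
        return (not storage["volumes_ceph"]["value"]
                and not storage["images_ceph"]["value"])

plugin = CephPlugin()
settings = {"storage": {"volumes_ceph": {"value": True},
                        "images_ceph": {"value": False}}}
print(plugin.is_removable(settings))  # False: Ceph still backs volumes
```

A simple YAML-only plugin could then be converted mechanically into such a class, so the rest of the code handles a single interface, which matches the "YAML config can be converted to this class" point above.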
>>>>>>>> >>>>>>>> Initially I had the same thoughts and wanted to use it the way it is, but now I completely agree with Evgeniy that an additional DSL will cause a lot of problems with compatibility between versions and developer experience. >>>>>>> >>>>>> As far as I understand, you want to introduce another approach to describe the UI part of plugins? >>>>>> >>>>>>> We need to search for alternatives.. >>>>>>> 1. for the UI I would prefer a separate tab for plugins, where the user will be able to enable/disable a plugin explicitly. >>>>>>> >>>>>> Of course, we need a separate page for plugin management. >>>>>> >>>>>>> Currently the settings tab is overloaded. >>>>>>> 2. on the backend we need to validate plugins against a certain env before enabling them, and for the simple case we may expose some basic entities like network_mode. For the case where you need complex logic, python code is far more flexible than a new DSL. >>>>>>> >>>>>>>> - metadata.yaml should also contain an "is_removable" field. This field is needed to determine whether it is possible to remove an installed plugin. It is impossible to remove plugins in the current implementation. This field should contain an expression written in our DSL which we already use in a few places. The LBaaS plugin also uses it to hide the checkbox if Neutron is not used, so even simple plugins like this need to utilize it. This field can also be autogenerated; for more complex plugins the plugin writer needs to fix it manually. For example, for Ceph it could look like "settings:storage.volumes_ceph.value == false and settings:storage.images_ceph.value == false". >>>>>>>> >>>>>>> How will a checkbox help? There are several cases of plugin removal.. >>>>>>> >>>>>> It is not a checkbox, this is a condition that determines whether the plugin is removable.
It allows the plugin developer to specify when the plugin can be safely removed from Fuel if there are some environments which were created after the plugin had been installed. >>>>>> >>>>>>> 1. Plugin is installed, but not enabled for any env - just remove the plugin >>>>>>> 2. Plugin is installed, enabled and cluster deployed - forget about it for now.. >>>>>>> 3. Plugin is installed and only enabled - we need to keep the state of the db consistent after the plugin is removed; it is problematic, but possible >>>>>>> >>>>>> My approach also allows us to eliminate the "enableness" of plugins, which will cause UX issues and issues like you described above. vCenter and Ceph also don't have an "enabled" state. vCenter has hypervisor and storage, Ceph provides backends for Cinder and Glance which can be used simultaneously, or only one of them can be used. >>>>>> >>>>>>> My main point is that a plugin is enabled/disabled explicitly by the user; after that we can decide ourselves whether it can be removed or not. >>>>>>> >>>>>>>> - For every task in tasks.yaml a new "condition" field should be added, with an expression which determines whether the task should be run. In the current implementation tasks are always run for the specified roles. For example, the vCenter plugin can have a few tasks with conditions like "settings:common.libvirt_type.value == 'vcenter'" or "settings:storage.volumes_vmdk.value == true". Also, AFAIU, a similar approach will be used in the implementation of the Granular Deployment feature. >>>>>>>> >>>>>>> I had some thoughts about using a DSL; it seemed to me especially helpful when you need to disable part of the functionality embedded into the core, like deploying with another hypervisor or network driver (contrail for example).
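The per-task condition proposal reads more clearly as an actual tasks.yaml fragment. The two condition expressions are quoted verbatim from the message above; the task keys (`role`, `stage`, `type`) and their values are hypothetical placeholders, not a confirmed Fuel schema:

```yaml
# Hypothetical tasks.yaml sketch for the proposed per-task "condition" field.
# The condition strings are verbatim from the thread; all other keys and
# values are illustrative placeholders.
- role: ['compute']
  stage: post_deployment
  type: puppet
  condition: "settings:common.libvirt_type.value == 'vcenter'"
- role: ['cinder']
  stage: post_deployment
  type: puppet
  condition: "settings:storage.volumes_vmdk.value == true"
```

A task would then run only for its roles and only when its condition evaluates to true against the environment's settings.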
And DSL won't cover all cases here, this is quite similar to metadata.yaml: simple cases can be covered by some variables in tasks (like group, unique, etc), but complex cases are easier to test and describe in python. >>>>>>> >>>>>> Could you please provide an example of such conditions? vCenter and Ceph can be turned into plugins using this approach. >>>>>> >>>>>> Also, I'm not against a python version of plugins. It could look like a python class with exactly the same fields from YAML files, but conditions will be written in python. >>>>>> >>>>>>> _______________________________________________ >>>>>>> OpenStack-dev mailing list >>>>>>> OpenStack-dev at lists.openstack.org >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>>> -- >>>>>> Vitaly Kramskikh, >>>>>> Software Engineer, >>>>>> Mirantis, Inc. >> -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jsbryant at electronicjungle.net Wed Dec 17 13:30:57 2014 From: jsbryant at electronicjungle.net (Jay Bryant) Date: Wed, 17 Dec 2014 07:30:57 -0600 Subject: [openstack-dev] [cinder] Anyone knows whether there is freezing day of spec approval? In-Reply-To: References: Message-ID: Dave, My apologies. We have not yet set a day that we are freezing BP/Spec approval for Cinder. We had a deadline in November for new drivers being proposed but haven't frozen other proposals yet. I mixed things up with Nova's 12/18 cutoff. Not sure when we will be cutting off BPs for Cinder. The goal is to spend as much of K-2 and K-3 as possible on Cinder clean-up. So, I wouldn't let anything you want considered linger too long. Thanks, Jay On 12/15/2014 09:16 PM, Chen, Wei D wrote: Hi, I know nova has such day around Dec. 18, is there a similar day in Cinder project? thanks! Best Regards, Dave Chen _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- jsbryant at electronicjungle.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Wed Dec 17 14:02:26 2014 From: zigo at debian.org (Thomas Goirand) Date: Wed, 17 Dec 2014 22:02:26 +0800 Subject: [openstack-dev] oslo.db 1.2.1 release coming to fix stable/juno In-Reply-To: <27A76931-16EF-400B-8EDD-6C3E18F52053@doughellmann.com> References: <27A76931-16EF-400B-8EDD-6C3E18F52053@doughellmann.com> Message-ID: <54918CF2.6010208@debian.org> On 12/16/2014 04:21 AM, Doug Hellmann wrote: > The issue with stable/juno jobs failing because of the difference in the > SQLAlchemy requirements between the older applications and the newer > oslo.db is being addressed with a new release of the 1.2.x series. We > will then cap the requirements for stable/juno to 1.2.1.
We decided we > did not need to raise the minimum version of oslo.db allowed in kilo, > because the old versions of the library do work, if they are installed > from packages and not through setuptools. > > Jeremy created a feature/1.2 branch for us, and I have 2 patches up > [1][2] to apply the requirements fix. The change to the oslo.db version > in stable/juno is [3]. > > After the changes in oslo.db merge, I will tag 1.2.1. > > Doug > > [1] https://review.openstack.org/#/c/141893/ > [2] https://review.openstack.org/#/c/141894/ > [3] https://review.openstack.org/#/c/141896/ Doug, I'm not sure I get it. Is this related to newer versions of SQLAlchemy? If so, then from my package maintainer point of view, keeping an older version of SQLA (eg: 0.9.8) and oslo.db 1.0.2 for Juno is ok, right? Will Kilo require a newer version of SQLA? Cheers, Thomas From mtreinish at kortar.org Wed Dec 17 14:45:59 2014 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 17 Dec 2014 09:45:59 -0500 Subject: [openstack-dev] [QA] Meeting Thursday December 18th at 17:00 UTC Message-ID: <20141217144559.GA2075@sazabi.treinish> Hi everyone, Just a quick reminder that the weekly OpenStack QA team IRC meeting will be tomorrow Thursday, December 18th at 17:00 UTC in the #openstack-meeting channel. The agenda for tomorrow's meeting can be found here: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting Anyone is welcome to add an item to the agenda. It's also worth noting that several weeks ago we started having a regular dedicated Devstack topic during the meetings. So if anyone is interested in Devstack development please join the meetings to be a part of the discussion. To help people figure out what time 17:00 UTC is in other timezones tomorrow's meeting will be at: 12:00 EST 02:00 JST 03:30 ACDT 18:00 CET 11:00 CST 9:00 PST -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From moreira.belmiro.email.lists at gmail.com Wed Dec 17 15:03:19 2014 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Wed, 17 Dec 2014 16:03:19 +0100 Subject: [openstack-dev] [qa] How to delete a VM which is in ERROR state? In-Reply-To: <2075A451-DE64-4AEE-9B86-6201A8612C5B@gmail.com> References: <548B6844.4020804@gmail.com> <2075A451-DE64-4AEE-9B86-6201A8612C5B@gmail.com> Message-ID: Hi Vish, do you have more info about the libvirt deadlocks that you observed? Maybe I'm observing the same on SLC6 where I can't even "kill" libvirtd process. Belmiro On Tue, Dec 16, 2014 at 12:01 AM, Vishvananda Ishaya wrote: > > I have seen deadlocks in libvirt that could cause this. When you are in > this state, check to see if you can do a virsh list on the node. If not, > libvirt is deadlocked, and ubuntu may need to pull in a fix/newer version. > > Vish > > On Dec 12, 2014, at 2:12 PM, pcrews wrote: > > > On 12/09/2014 03:54 PM, Ken'ichi Ohmichi wrote: > >> Hi, > >> > >> This case is always tested by Tempest on the gate. > >> > >> > https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_delete_server.py#L152 > >> > >> So I guess this problem wouldn't happen on the latest version at least. > >> > >> Thanks > >> Ken'ichi Ohmichi > >> > >> --- > >> > >> 2014-12-10 6:32 GMT+09:00 Joe Gordon : > >>> > >>> > >>> On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) < > dannchoi at cisco.com> > >>> wrote: > >>>> > >>>> Hi, > >>>> > >>>> I have a VM which is in ERROR state. 
> >>>> > >>>> > >>>> > +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ > >>>> > >>>> | ID | Name > >>>> | Status | Task State | Power State | Networks | > >>>> > >>>> > >>>> > +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+ > >>>> > >>>> | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | > >>>> cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR | - | > NOSTATE > >>>> | | > >>>> > >>>> > >>>> I tried in both CLI ?nova delete? and Horizon ?terminate instance?. > >>>> Both accepted the delete command without any error. > >>>> However, the VM never got deleted. > >>>> > >>>> Is there a way to remove the VM? > >>> > >>> > >>> What version of nova are you using? This is definitely a serious bug, > you > >>> should be able to delete an instance in error state. Can you file a > bug that > >>> includes steps on how to reproduce the bug along with all relevant > logs. > >>> > >>> bugs.launchpad.net/nova > >>> > >>>> > >>>> > >>>> Thanks, > >>>> Danny > >>>> > >>>> _______________________________________________ > >>>> OpenStack-dev mailing list > >>>> OpenStack-dev at lists.openstack.org > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > >>> > >>> > >>> _______________________________________________ > >>> OpenStack-dev mailing list > >>> OpenStack-dev at lists.openstack.org > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > Hi, > > > > I've encountered this in my own testing and have found that it appears > to be tied to libvirt. 
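The "check whether `virsh list` works" advice earlier in the thread can be automated by bounding how long the command may run; if it never returns, libvirtd is likely deadlocked. A small sketch (a stand-in command replaces `virsh list` so the snippet is self-contained; the actual check and any restart of libvirt-bin are left to the operator):

```python
import subprocess

def responds_within(cmd, seconds):
    """Run cmd and return True if it finishes within `seconds`.

    A False result suggests the daemon behind the command is hung,
    which is when restarting it (as described in this thread) helps.
    """
    try:
        subprocess.run(cmd, timeout=seconds,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return True
    except subprocess.TimeoutExpired:
        return False

# Stand-in for ["virsh", "list"]; sleeping past the deadline simulates a hang.
print(responds_within(["sleep", "3"], 1))  # False -> treat libvirtd as hung
```

On a real node one would call `responds_within(["virsh", "list"], 10)` from a cron job or monitoring check and alert (or restart the service) on False.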
> > > > When I hit this, reset-state as the admin user reports success (and > state is set), *but* things aren't really working as advertised and > subsequent attempts to do anything with the errant vm's will send them > right back into 'FLAIL' / can't delete / endless DELETING mode. > > > > restarting libvirt-bin on my machine fixes this - after restart, the > deleting vm's are properly wiped without any further user input to > nova/horizon and all seems right in the world. > > > > using: > > devstack > > ubuntu 14.04 > > libvirtd (libvirt) 1.2.2 > > > > triggered via: > > lots of random create/reboot/resize/delete requests of varying validity > and sanity. > > > > Am in the process of cleaning up my test code so as not to hurt anyone's > brain with the ugly and will file a bug once done, but thought this worth > sharing. > > > > Thanks, > > Patrick > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.a.st.pierre at gmail.com Wed Dec 17 15:23:00 2014 From: chris.a.st.pierre at gmail.com (Chris St. Pierre) Date: Wed, 17 Dec 2014 09:23:00 -0600 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: References: <5490967C.8020801@gmail.com> <5490A5D8.2090804@catalyst.net.nz> <20141216231940.GA61726@HQSML-1081034.cable.comcast.com> Message-ID: That's unfortunately too simple. You run into one of two cases: 1. If the job automatically removes the protected attribute when an image is no longer in use, then you lose the ability to use "protected" on images that are not in use. 
I.e., there's no way to say, "nothing is currently using this image, but please keep it around." (This seems particularly useful for snapshots, for instance.) 2. If the job does not automatically remove the protected attribute, then an image would be protected if it had ever been in use; to delete an image, you'd have to manually un-protect it, which is a workflow that quite explicitly defeats the whole purpose of flagging images as protected when they're in use. It seems like flagging an image as *not* in use is actually a fairly difficult problem, since it requires consensus among all components that might be using images. The only solution that readily occurs to me would be to add something like a filesystem link count to images in Glance. Then when Nova spawns an instance, it increments the usage count; when the instance is destroyed, the usage count is decremented. And similarly with other components that use images. An image could only be deleted when its usage count was zero. There are ample opportunities to get out of sync there, but it's at least a sketch of something that might work, and isn't *too* horribly hackish. Thoughts? On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya wrote: > A simple solution that wouldn?t require modification of glance would be a > cron job > that lists images and snapshots and marks them protected while they are in > use. > > Vish > > On Dec 16, 2014, at 3:19 PM, Collins, Sean < > Sean_Collins2 at cable.comcast.com> wrote: > > > On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote: > >> No, I'm looking to prevent images that are in use from being deleted. > "In > >> use" and "protected" are disjoint sets. > > > > I have seen multiple cases of images (and snapshots) being deleted while > > still in use in Nova, which leads to some very, shall we say, > > interesting bugs and support problems. > > > > I do think that we should try and determine a way forward on this, they > > are indeed disjoint sets. 
Setting an image as protected is a proactive > > measure, we should try and figure out a way to keep tenants from > > shooting themselves in the foot if possible. > > > > -- > > Sean M. Collins > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Chris St. Pierre -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Dec 17 15:23:48 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 17 Dec 2014 10:23:48 -0500 Subject: [openstack-dev] [all][stable] fixing SQLAlchemy and oslo.db requirements in stable/juno Message-ID: <210158B6-487F-4D3B-946F-DA8BA961F394@doughellmann.com> Now that yesterday?s patch to cap the version of oslo.db used in stable/juno <1.1 merged, we have a bunch of updates pending in projects that use oslo.db or SQLAlchemy to fix the in-project requirement specifications [1]. Having the global requirements list updated takes care of our CI environment, but we should prioritize those reviews so we can get our stable branches into a good state for sites doing CD from stable branches. Thanks, Doug [1] https://review.openstack.org/#/q/branch:stable/juno++is:open+owner:%22openstack+proposal+bot%22,n,z From vkramskikh at mirantis.com Wed Dec 17 15:29:58 2014 From: vkramskikh at mirantis.com (Vitaly Kramskikh) Date: Wed, 17 Dec 2014 16:29:58 +0100 Subject: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format In-Reply-To: References: Message-ID: As I said, it is not flexible and restrictive. What if there are some other "backends" for anything appear? What to do if I want to write a plugin that just adds some extra styles to the UI? 
Invent a new structures/flags on demand? That's not viable. I still think "enableness" of plugin is the root of all issues with your approach. With your approach we lose single source of truth (cluster attributes/settings tab) we'll need to search for strange solutions like these groups/flags. 2014-12-17 12:33 GMT+01:00 Evgeniy L : > > Vitaly, what do you think about that? > > On Fri, Dec 12, 2014 at 5:58 PM, Evgeniy L wrote: >> >> Hi, >> >> I don't agree with many of your statements but, I would like to >> continue discussion about really important topic i.e. UI flow, my >> suggestion was to add groups, for plugin in metadata.yaml plugin >> developer can have description of the groups which it belongs to: >> >> groups: >> - id: storage >> subgroup: >> - id: cinder >> >> With this information we can show a new option on UI (wizard), >> if option is selected, it means that plugin is enabled, if plugin belongs >> to several groups, we can use OR statement. >> >> The main point is, for environment creation we must specify >> ids of plugins. Yet another reason for that is plugins multiversioning, >> we must know exactly which plugin with which version >> is used for environment, and I don't see how "conditions" can help >> us with it. >> >> Thanks, >> >>> >>>> >>>> >> On Wed, Dec 10, 2014 at 8:23 PM, Vitaly Kramskikh < >> vkramskikh at mirantis.com> wrote: >>> >>> >>> >>> 2014-12-10 19:31 GMT+03:00 Evgeniy L : >>> >>>> >>>> >>>> On Wed, Dec 10, 2014 at 6:50 PM, Vitaly Kramskikh < >>>> vkramskikh at mirantis.com> wrote: >>>> >>>>> >>>>> >>>>> 2014-12-10 16:57 GMT+03:00 Evgeniy L : >>>>> >>>>>> Hi, >>>>>> >>>>>> First let me describe what our plans for the nearest release. We want >>>>>> to deliver >>>>>> role as a simple plugin, it means that plugin developer can define >>>>>> his own role >>>>>> with yaml and also it should work fine with our current approach when >>>>>> user can >>>>>> define several fields on the settings tab. 
>>>>>> >>>>>> Also I would like to mention another thing which we should probably discuss in a separate thread: how plugins should be implemented. We have two types of plugins, simple and complicated. The definition of simple: I can do everything I need with yaml. The definition of complicated: I probably have to write some python code. It doesn't mean that this python code should do absolutely everything it wants, but it means we should implement a stable, documented interface where the plugin is connected to the core. >>>>>> >>>>>> Now let's talk about the UI flow. Our current problem is how to get the information on whether a plugin is used in the environment or not; this information is required by the backend which generates appropriate tasks for the task executor, and it can also be used in the future if we decide to implement a plugin deletion mechanism. >>>>>> >>>>>> I didn't come up with any new solution; as before, we have two options to solve the problem: >>>>>> >>>>>> # 1 >>>>>> >>>>>> Use the conditional language which is currently used in the UI; it will look like Vitaly described in the example [1]. The plugin developer should: >>>>>> >>>>>> 1. describe at least one element for the UI, which he will be able to use in a task >>>>>> >>>>>> 2. add a condition which is written in our own programming language >>>>>> >>>>>> Example of the condition for the LBaaS plugin: >>>>>> >>>>>> condition: settings:lbaas.metadata.enabled == true >>>>>> >>>>>> 3. add to metadata.yaml a condition which defines whether the plugin is enabled >>>>>> >>>>>> is_enabled: settings:lbaas.metadata.enabled == true >>>>>> >>>>>> This approach has good flexibility, but it also has problems: >>>>>> >>>>>> a. It's complicated and not intuitive for the plugin developer.
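As a rough illustration of what evaluating such a condition involves, here is a toy evaluator for the single expression shape shown above (`settings:<path> == <literal>`); the real Fuel condition language is richer, so treat every name here purely as a sketch:

```python
# Toy evaluator for conditions of the single form used above:
#   "settings:lbaas.metadata.enabled == true"
# This is a sketch for illustration, not the real Fuel condition language.

def evaluate(condition, settings):
    left, right = [part.strip() for part in condition.split("==")]
    assert left.startswith("settings:"), "only settings:<path> is handled here"
    value = settings
    for key in left[len("settings:"):].split("."):
        value = value[key]            # walk the nested settings dict
    expected = {"true": True, "false": False}[right]
    return value == expected

settings = {"lbaas": {"metadata": {"enabled": True}}}
print(evaluate("settings:lbaas.metadata.enabled == true", settings))  # True
```

The UI/backend would run such an evaluation against the cluster attributes whenever it needs to decide whether the plugin (or one of its tasks) is active.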
>>>>>> >>>>> It is less complicated than python code >>>>> >>>> >>>> I'm not sure why you are talking about python code here; my point >>>> is we should not force the developer to use these conditions in any language. >>>> >>>> But that's how current plugin-like stuff works. There are various tasks >>> which are run only if some checkboxes are set, so stuff like Ceph and >>> vCenter will need conditions to describe tasks. >>> >>>> Anyway I don't agree with the statement that there are more people who know >>>> python than the "fuel ui conditional language". >>>> >>>>> b. It doesn't cover the case when the user installs a 3rd party plugin >>>>>> which doesn't have any conditions (because of # a) and the >>>>>> user doesn't have a way to disable it for the environment if it >>>>>> breaks his configuration. >>>>>> >>>>> If a plugin doesn't have conditions for tasks, then it has invalid >>>>> metadata. >>>>> >>>> >>>> Yep, and it's a problem of the platform, which provides a bad interface. >>>> >>> Why is it bad? If the plugin writer doesn't provide the plugin name or version, >>> then the metadata is invalid too. It is the plugin writer's fault that he didn't >>> write the metadata properly. >>>> >>>> >>>>> >>>>>> # 2 >>>>>> >>>>>> As we discussed from the very beginning, after the user selects a release >>>>>> he can >>>>>> choose a set of plugins which he wants to be enabled for the environment. >>>>>> After that we can say that the plugin is enabled for the environment and >>>>>> we send >>>>>> the tasks related to this plugin to the task executor. >>>>>> >>>>>> >> My approach also allows to eliminate "enableness" of plugins >>>>>> which will cause UX issues and issues like you described above. vCenter and >>>>>> Ceph also don't have an "enabled" state. vCenter has hypervisor and storage, >>>>>> Ceph provides backends for Cinder and Glance which can be used >>>>>> simultaneously or only one of them can be used.
>>>>>> >>>>>> Both of the described plugins have an enabled/disabled state: vCenter is >>>>>> enabled >>>>>> when vCenter is selected as the hypervisor; Ceph is enabled when it's >>>>>> selected >>>>>> as a backend for Cinder or Glance. >>>>>> >>>>> Nope, Ceph for Volumes can be used without Ceph for Images. Both of >>>>> these plugins can also have some granular tasks which are enabled by >>>>> various checkboxes (like VMware vCenter for volumes). How would you >>>>> determine whether tasks which install VMware vCenter for volumes should >>>>> run? >>>>> >>>> >>>> Why "nope"? I have "Cinder OR Glance". >>>> >>> Oh, I missed it. So there are 2 checkboxes; how would you determine >>> "enableness"? >>> >>>> It can be easily handled in the deployment script. >>>> >>> I don't know much about the status of the granular deployment blueprint, but >>> AFAIK that's what we are going to get rid of. >>>> >>>> >>>>>> If you don't like the idea of having Ceph/vCenter checkboxes on the >>>>>> first page, >>>>>> I can suggest as an idea (research is required) to define groups like >>>>>> Storage Backend or >>>>>> Network Manager, and we will allow the plugin developer to embed his >>>>>> option in a radiobutton >>>>>> field on the wizard pages. But the plugin developer should not describe >>>>>> conditions; he should >>>>>> just write that his plugin is a Storage Backend, Hypervisor or a new >>>>>> Network Manager. >>>>>> And the plugins, e.g. Zabbix and Nagios, which don't belong to any of >>>>>> these groups >>>>>> should be shown as checkboxes on the first page of the wizard. >>>>>> >>>>> Why don't you just ditch "enableness" of plugins and get rid of this >>>>> complex stuff? Can you explain why you need to know if a plugin is >>>>> "enabled"? Let me summarize my opinion on this: >>>>> >>>> >>>> I described why we need it many times. Also it looks like you skipped >>>> another option, >>>> and I would like to see some more information on why you don't like it and >>>> why it's >>>> bad from a UX standpoint.
>>>> >>> Yes, I skipped it. You said "research is required", so please do it, >>> write a proposal, and then we will compare it with the condition approach. You >>> still don't have your proposal, so there is nothing to compare and discuss. >>> At first glance it seems complex and restrictive. >>> >>>> >>>>> - You don't need to know whether a plugin is enabled or not. You >>>>> need to know what tasks should be run and whether the plugin is removable >>>>> (anything else?). These conditions can be described by the DSL. >>>>> >>>>> I do need to know if a plugin is enabled to figure out if it's >>>> removable; in fact those are the same things. >>>> >>> So there is nothing else you need "enableness" for, right? If you "described >>> why we need it many times", I think you need to do it one more time (in the >>> form of a list). If we need "enableness" just to determine whether the >>> plugin is removable, then it is not a reason to ruin our UX. >>> >>>> >>>>> - >>>>> - Explicitly asking the user to enable a plugin for a new environment >>>>> should be considered a last resort solution because it significantly >>>>> impairs our UX for inexperienced users. Just imagine: a new user who barely >>>>> knows about OpenStack chooses a name for the environment, an OS release and >>>>> then he needs to choose plugins. Really? >>>>> >>>>> I really think that it's absolutely ok to show a checkbox with LBaaS for >>>> the user who found the >>>> plugin, downloaded it on the master and installed it with the CLI. >>>> >>>> And right now this user has to go to the settings tab and try to find this checkbox; >>>> he may also not find it, for example because of an incompatible release >>>> version, and that's clearly >>>> bad UX. >>>> >>> I like how it is done in modern browsers - after an upgrade of the master node >>> there is a notification about incompatible plugins, and in the list of plugins >>> there is a message that a plugin is incompatible. We need to design how we >>> will handle it.
Currently it is definitely bad UX because nothing is done >>> about this. >>> >>>> My proposal for the "complex" plugin interface: there should be python >>>>> classes with exactly the same fields from the yaml files: plugin name, version, >>>>> etc. But the conditions for cluster deletion and for tasks, which are written in the >>>>> DSL in the case of a "simple" yaml config, should become methods which the plugin >>>>> writer can make as complex as he wants. >>>>> >>>> Why do you want to use python to define the plugin name, version etc? It's >>>> static data which is >>>> used for installation; I don't think that in fuel client (or some other >>>> installation tool) we want >>>> to unpack the plugin and import this module to get the information >>>> which is required for installation. >>>> >>> It is just a proposal in which I try to solve the problems which you see in >>> my approach. If you want a "complex" interface with arbitrary python code, >>> that's how I see it. All fields are the same here, the approach is the >>> same, just the conditions are in python. And the YAML config can be converted to >>> this class, so all other code won't need to handle 2 different interfaces >>> for plugins. >>> >>>> >>>>>> [1] >>>>>> https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf >>>>>> >>>>>> On Sun, Nov 30, 2014 at 3:12 PM, Vitaly Kramskikh < >>>>>> vkramskikh at mirantis.com> wrote: >>>>>>> >>>>>>> >>>>>>> 2014-11-28 23:20 GMT+04:00 Dmitriy Shulyak : >>>>>>> >>>>>>>> >>>>>>>>> - environment_config.yaml should contain the exact config which >>>>>>>>> will be mixed into cluster_attributes. No need to implicitly generate any >>>>>>>>> controls like it is done now. >>>>>>>>> >>>>>>>>> Initially I had the same thoughts and wanted to use it the way it >>>>>>>> is, but now I completely agree with Evgeniy that an additional DSL will cause >>>>>>>> a lot >>>>>>>> of problems with compatibility between versions and developer >>>>>>>> experience.
>>>>>>>> >>>>>>> As far as I understand, you want to introduce another approach to >>>>>>> describe the UI part of plugins? >>>>>>> >>>>>>>> We need to search for alternatives.. >>>>>>>> 1. for the UI I would prefer a separate tab for plugins, where the user will >>>>>>>> be able to enable/disable a plugin explicitly. >>>>>>> >>>>>>> Of course, we need a separate page for plugin management. >>>>>>> >>>>>>>> Currently the settings tab is overloaded. >>>>>>>> 2. on the backend we need to validate plugins against a certain env >>>>>>>> before enabling them, >>>>>>>> and for the simple case we may expose some basic entities like >>>>>>>> network_mode. >>>>>>>> For cases where you need complex logic - python code is far more >>>>>>>> flexible than a new DSL. >>>>>>>> >>>>>>>>> >>>>>>>>> - metadata.yaml should also contain an "is_removable" field. This >>>>>>>>> field is needed to determine whether it is possible to remove an installed >>>>>>>>> plugin. It is impossible to remove plugins in the current implementation. >>>>>>>>> This field should contain an expression written in our DSL which we already >>>>>>>>> use in a few places. The LBaaS plugin also uses it to hide the checkbox if >>>>>>>>> Neutron is not used, so even simple plugins like this need to utilize it. >>>>>>>>> This field can also be autogenerated; for more complex plugins the plugin >>>>>>>>> writer needs to fix it manually. For example, for Ceph it could look like >>>>>>>>> "settings:storage.volumes_ceph.value == false and >>>>>>>>> settings:storage.images_ceph.value == false". >>>>>>>>> >>>>>>>>> How will a checkbox help? There are several cases of plugin removal.. >>>>>>>> >>>>>>> It is not a checkbox, this is a condition that determines whether the >>>>>>> plugin is removable. It allows the plugin developer to specify when a plugin can be >>>>>>> safely removed from Fuel if there are some environments which were created >>>>>>> after the plugin had been installed. >>>>>>>> 1.
Plugin is installed, but not enabled for any env - just remove >>>>>>>> the plugin >>>>>>>> 2. Plugin is installed, enabled and the cluster deployed - forget about >>>>>>>> it for now.. >>>>>>>> 3. Plugin is installed and only enabled - we need to keep the state >>>>>>>> of the db consistent after the plugin is removed; it is problematic, but possible >>>>>>>> >>>>>>> My approach also allows to eliminate "enableness" of plugins which >>>>>>> will cause UX issues and issues like you described above. vCenter and Ceph >>>>>>> also don't have an "enabled" state. vCenter has hypervisor and storage, Ceph >>>>>>> provides backends for Cinder and Glance which can be used simultaneously or >>>>>>> only one of them can be used. >>>>>>> >>>>>>>> My main point is that a plugin is enabled/disabled explicitly by the user; >>>>>>>> after that we can decide ourselves whether it can be removed or not. >>>>>>>> >>>>>>>>> >>>>>>>>> - For every task in tasks.yaml there should be a new >>>>>>>>> "condition" field added, with an expression which determines whether the task >>>>>>>>> should be run. In the current implementation tasks are always run for the >>>>>>>>> specified roles. For example, the vCenter plugin can have a few tasks with >>>>>>>>> conditions like "settings:common.libvirt_type.value == 'vcenter'" or >>>>>>>>> "settings:storage.volumes_vmdk.value == true". Also, AFAIU, a similar >>>>>>>>> approach will be used in the implementation of the Granular Deployment feature. >>>>>>>>> >>>>>>>>> I had some thoughts about using a DSL; it seemed to me especially >>>>>>>> helpful when you need to disable part of the functionality embedded into the core, >>>>>>>> like deploying with another hypervisor, or a network driver (contrail >>>>>>>> for example). And a DSL won't cover all cases here; this is quite similar to >>>>>>>> metadata.yaml: simple cases can be covered by some variables in tasks (like >>>>>>>> group, unique, etc), but complex ones are easier to test and describe in python. >>>>>>> Could you please provide an example of such conditions?
vCenter and >>>>>>> Ceph can be turned into plugins using this approach. >>>>>>> >>>>>>> Also, I'm not against a python version of plugins. It could look like >>>>>>> a python class with exactly the same fields from the YAML files, but the conditions >>>>>>> will be written in python. >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> OpenStack-dev mailing list >>>>>>>> OpenStack-dev at lists.openstack.org >>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Vitaly Kramskikh, >>>>>>> Software Engineer, >>>>>>> Mirantis, Inc. >>>>>>> >>>>>> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc. -------------- next part -------------- An HTML attachment was scrubbed...
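Pulling together the pieces debated in this thread, a plugin's metadata.yaml under the condition-based approach (# 1) might look like the sketch below. Only the DSL condition expressions come from the messages above; the remaining field names and values are illustrative, not a final schema:

```yaml
# Illustrative sketch only -- combines the fields discussed in this thread.
name: lbaas
version: 1.0.0
# groups the plugin belongs to (the wizard-groups proposal)
groups:
  - id: network
# DSL condition defining whether the plugin is enabled (approach # 1)
is_enabled: "settings:lbaas.metadata.enabled == true"
# DSL condition defining whether the installed plugin can be removed
is_removable: "settings:lbaas.metadata.enabled == false"
```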
URL: From fungi at yuggoth.org Wed Dec 17 15:35:34 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 17 Dec 2014 15:35:34 +0000 Subject: [openstack-dev] oslo.db 1.2.1 release coming to fix stable/juno In-Reply-To: <54918CF2.6010208@debian.org> References: <27A76931-16EF-400B-8EDD-6C3E18F52053@doughellmann.com> <54918CF2.6010208@debian.org> Message-ID: <20141217153533.GC2497@yuggoth.org> On 2014-12-17 22:02:26 +0800 (+0800), Thomas Goirand wrote: > I'm not sure I get it. Is this related to newer versions of SQLAlchemy? It's related to how Setuptools 8 failed to parse our requirements line for SQLAlchemy because it contained multiple version ranges. That was fixed by converting it to a single range with a list of excluded versions instead, but we still needed to "backport" that requirements.txt entry to a version of oslo.db for stable/juno. > If so, then from my package maintainer point of view, keeping an older > version of SQLA (eg: 0.9.8) and oslo.db 1.0.2 for Juno is ok, right? In the end we wound up with a 1.0.3 release of oslo.db and pinned stable/juno requirements to oslo.db<1.1 rather than the open-ended maximum it previously had (which was including 1.1.x and 1.2.x versions). > Will Kilo require a newer version of SQLA? In this case "older" SQLAlchemy is 0.8.x (we're listing supported 0.8.x and 0.9.x versions for stable/juno still), while Kilo may release with only a supported range of SQLAlchemy 0.9.x versions. 
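To make the requirements change concrete: the stable/juno problem was a single requirement line carrying two disjoint version ranges, which Setuptools 8's stricter parsing rejected; the fix collapsed it into one range plus a list of excluded versions. The lines below are illustrative -- the exact juno entries may have differed:

```text
# Before: two version ranges on one line (broke under Setuptools 8)
SQLAlchemy>=0.8.4,<=0.8.99,>=0.9.7,<=0.9.99
# After: a single range with explicit exclusions
SQLAlchemy>=0.8.4,<=0.9.99,!=0.9.0,!=0.9.1,!=0.9.2,!=0.9.3,!=0.9.4,!=0.9.5,!=0.9.6
```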
-- Jeremy Stanley From jckasper at linux.vnet.ibm.com Wed Dec 17 15:53:45 2014 From: jckasper at linux.vnet.ibm.com (John Kasperski) Date: Wed, 17 Dec 2014 09:53:45 -0600 Subject: [openstack-dev] [Neutron][Nova][DHCP] Use case of enable_dhcp option when creating a network In-Reply-To: <1517147686.131964.1418804134592.JavaMail.yahoo@jws106138.mail.bf1.yahoo.com> References: <1517147686.131964.1418804134592.JavaMail.yahoo@jws106138.mail.bf1.yahoo.com> Message-ID: <5491A709.1000604@linux.vnet.ibm.com> When enable_dhcp is False, Config drive or metadata service can be used to assign static IP addresses to the deployed VM if the image has cloud_init or something equivalent. On 12/17/2014 2:15 AM, Padmanabhan Krishnan wrote: > Hello, > I have a question regarding the enable_dhcp option when creating a > network. > > When a VM is attached to a network where enable_dhcp is False, I > understand that the DHCP namespace is not created for the network and > the VM does not get any IP address after it boots up and sends a DHCP > Discover. > But, I also see that the Neutron port is filled with the fixed IP > value from the network pool even though there's no DHCP associated > with the subnet. > So, for such VM's, does one need to statically configure the IP > address with whatever Neutron has allocated from the pool? > > What exactly is the use case of the above? > > I do understand that for providing public network access to VM's, an > external network is generally created with enable-dhcp option set to > False. Is it only for this purpose? > > I was thinking of a case of external/provider DHCP servers from where > VM's can get their IP addresses and when one does not want to use L3 > agent/DVR. In such cases, one may want to disable DHCP when creating > networks. Isn't this a use-case? > > Appreciate any response or corrections with my above understanding. 
> > Thanks, > Paddu > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- John Kasperski -------------- next part -------------- An HTML attachment was scrubbed... URL: From sombrafam at gmail.com Wed Dec 17 15:54:23 2014 From: sombrafam at gmail.com (Erlon Cruz) Date: Wed, 17 Dec 2014 13:54:23 -0200 Subject: [openstack-dev] [cinder driver] A question about Kilo merge point In-Reply-To: References: Message-ID: Hi, Yes, I think that the chances of being merged after K1 are few. Check this doc with the priority list that the core team is working on: https://etherpad.openstack.org/p/cinder-kilo-priorities Erlon On Tue, Dec 16, 2014 at 9:40 AM, liuxinguo wrote: > > If a cinder driver cannot be merged into Kilo before Kilo-1, does it > mean that this driver will have very little chance to be merged into Kilo? > > And what percentage of the drivers that will eventually be merged into Kilo > will be merged before Kilo-1? > > > > Thanks! > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Wed Dec 17 16:09:39 2014 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 17 Dec 2014 08:09:39 -0800 Subject: [openstack-dev] [Ironic] Weekly subteam status report Message-ID: <20141217160939.GC5494@jimrollenhagen.com> Hi all, Following is the subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. Testing (adam_g) Bugs (dtantsur) (as of Mon, 8 Dec 17:00 UTC) Open: 108 (+6).
5 new (+1), 26 in progress (+1), 0 critical, 12 high (+2) and 3 incomplete Drivers: IPA (jroll/JayF/JoshNang) (update by JayF) check-tempest-dsvm-ironic-agent_ssh-src is now voting on all ironic-python-agent changes check-tempest-dsvm-ironic-agent_ssh-nv is still expected to pass on all Ironic changes, even though it doesn't vote, barring bugs #1398128 and #139770 (existing gate bugs) DRAC (ifarkas/lucas) nothing new // lucasagomes iLO (wanyen) As of 12/15/14 Submitted full spec of "partition image support for agent driver" https://review.openstack.org/#/c/137363/. This spec needs input regarding what's the best way to refactor partition code from Ironic to a common library for IPA and Ironic code. Please review the spec and provide input. Submitted code for two Nova specs https://review.openstack.org/#/c/141010/1 https://review.openstack.org/#/c/141012/ The 141012 changes Nova ironic driver code so please review. Setting up 3rd-party CI iRMC (naohirot) [power driver] merged the spec and started implementation towards kilo-2 https://review.openstack.org/#/c/134487/ [virtual media deploy driver] updated the spec to patch set 7 for review https://review.openstack.org/#/c/134865/ [management driver] updated the spec to patch set 5 for review https://review.openstack.org/#/c/136020/ AMT (lintan) Proposed a patch to support the workflow of deploy on AMT/vPro PC https://review.openstack.org/#/c/135184/ AMT driver proposal now to use wsman instead of amttools Oslo (GheRivero) oslo.config https://review.openstack.org/#/c/137447/ More intrusive All config options in the same file -> less error prone https://review.openstack.org/#/c/128005/ Less intrusive More error prone Same approach as other projects oslo.policy - WIP - https://review.openstack.org/#/c/126265/ Need to update to new oslo namespace and sync with oslo.incubator Waiting for oslo.config and oslo.policy patches to land oslo.context to be graduated
https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context context and logging to be removed from incubator soon New setuptools causing problems with sqlalchemy and oslo.db New oslo.db release soon to fix this Future libraries: memcached, tooz? [0] https://etherpad.openstack.org/p/IronicWhiteBoard // jim From stevemar at ca.ibm.com Wed Dec 17 16:09:59 2014 From: stevemar at ca.ibm.com (Steve Martinelli) Date: Wed, 17 Dec 2014 11:09:59 -0500 Subject: [openstack-dev] [python-*client] py33 jobs seem to be failing Message-ID: Wondering if anyone can shed some light on this, it seems like a few of the clients have been unable to build py33 environments lately: http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTmFtZUVycm9yOiBuYW1lICdTdGFuZGFyZEVycm9yJyBpcyBub3QgZGVmaW5lZFwiIGJ1aWxkX3N0YXR1czonRkFJTFVSRSciLCJmaWVsZHMiOlsibWVzc2FnZSIsImJ1aWxkX25hbWUiLCJidWlsZF9zdGF0dXMiXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTg4MzIzNzM5ODl9 If you want to see additional logs I went ahead and opened a bug against python-openstackclient since that's where I saw it first: https://bugs.launchpad.net/python-openstackclient/+bug/1403557 Though it seems at least glanceclient/neutronclient/keystoneclient are affected as well. The stack trace leads me to believe that docutils or sphinx is the culprit, but neither has released a new version in the time the bug has been around, so I'm not sure what the root cause of the problem is. Steve -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pasquale.porreca at dektech.com.au Wed Dec 17 16:18:10 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Wed, 17 Dec 2014 17:18:10 +0100 Subject: [openstack-dev] [nova] Complexity check and v2 API Message-ID: <5491ACC2.6070207@dektech.com.au> Hello I am working on an API extension that adds a parameter on create server call; to implement the v2 API I added few lines of code to nova/api/openstack/compute/servers.py In particular just adding something like |new_param = None|| ||if self.ext_mgr.is_loaded('os-new-param'):|| || new_param = server_dict.get('new_param')| leads to a pep8 fail with message 'Controller.create' is too complex (47) (Note that in tox.ini the max complexity is fixed to 47 and there is a note specifying 46 is the max complexity present at the moment). It is quite easy to make this test pass creating a new method just to execute these lines of code, anyway all other extensions are handled in that way and one of most important stylish rule states to be consistent with surrounding code, so I don't think a separate function is the way to go (unless it implies a change in how all other extensions are handled too). My thoughts on this situation: 1) New extensions should not consider v2 but only v2.1, so that file should not be touched 2) Ignore this error and go on: if and when the extension will be merged the complexity in tox.ini will be changed too 3) The complexity in tox.ini should be raised to allow new v2 extensions 4) The code of that module should be refactored to lower the complexity (i.e. move the load of each extension in a separate function) I would like to know if any of my point is close to the correct solution. -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nkinder at redhat.com Wed Dec 17 16:18:27 2014 From: nkinder at redhat.com (Nathan Kinder) Date: Wed, 17 Dec 2014 08:18:27 -0800 Subject: [openstack-dev] [OSSN 0042] Keystone token scoping provides no security benefit Message-ID: <5491ACD3.5090504@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Keystone token scoping provides no security benefit - --- ### Summary ### Keystone provides "scoped" tokens that are constrained to use by a single project. A user may expect that their scoped token can only be used to perform operations for the project it is scoped to, which is not the case. A service or other party who obtains the scoped token can use it to obtain a token for a different authorized scope, which may be considered a privilege escalation. ### Affected Services / Software ### Keystone, Diablo, Essex, Folsom, Grizzly, Havana, Icehouse, Juno, Kilo ### Discussion ### This is not a bug in keystone, it's a design feature that some users may expect to bring security enhancement when it does not. The OSSG is issuing this security note to highlight the issue. Many operations in OpenStack will take a token from the user and pass it to another service to perform some portion of the intended operation. This token is very powerful and can be used to perform many actions for the user. Scoped tokens appear to limit their use to the project and roles they were granted for but can also be used to request tokens with other scopes. It's important to note that this only works with currently valid tokens. Once a token expires it cannot be used to gain a new token. Token scoping helps avoid accidental leakage of tokens because using tokens with other services requires the extra step of requesting a new re-scoped token from keystone. Scoping can help with audit trails and promote good code practices. There's currently no way to create a tightly scoped token that cannot be used to request a re-scoped token. 
A scoped token cannot be relied upon to restrict actions to only that scope. ### Recommended Action ### Users and deployers of OpenStack must not rely on the scope of tokens to limit what actions can be performed using them. Concerned users are encouraged to read (OSSG member) Nathan Kinder's blog post on this issue and some of the potential future solutions. ### Contacts / References ### Nathan Kinder on Token Scoping : https://blog-nkinder.rhcloud.com/?p=101 This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0042 Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1341816 OpenStack Security ML : openstack-security at lists.openstack.org OpenStack Security Group : https://launchpad.net/~openstack-ossg -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJUkazTAAoJEJa+6E7Ri+EVnj0H/jQWtbkVN+na2GzI3VbNSLsF MPnGqO6tMcblKvI0m8okbyzhtpSDVAjPTCeoGY4PB5/AE31j1CDrlMT+bnm/Zk+O rAXeYgBvyjw9FbP9/UeNZPjQPByWaxGr8L90kuSGiL7rBvgf8KoxFJ2Kb9zNDWLJ bBAJ0A7QjOAri4RnyXoSINzKKawEJzM8va6R3iFtn6yF8Q/1ta3NBB5uWbgkS26M jtIvTNU/wGxX4b2mQ6gOno/4TwTZIqX+qTdDRXE811NZqSwdNfGRTD1hUQPYYtRq ioNBDrH/gXsmI4Lr/gXxki1zjPiSzWcbWOVu1PsnJTmFpYrI0FafguKwya4+YhI= =w/r8 -----END PGP SIGNATURE----- From carl at ecbaldwin.net Wed Dec 17 16:41:48 2014 From: carl at ecbaldwin.net (Carl Baldwin) Date: Wed, 17 Dec 2014 09:41:48 -0700 Subject: [openstack-dev] [Neutron] Looking for feedback: spec for allowing additional IPs to be shared In-Reply-To: References: Message-ID: On Tue, Dec 16, 2014 at 10:32 AM, Thomas Maddox wrote: > Hey all, > > It seems I missed the Kilo proposal deadline for Neutron, unfortunately, but > I still wanted to propose this spec for Neutron and get feedback/approval, > sooner rather than later, so I can begin working on an implementation, even > if it can't land in Kilo. I opted to put this in an etherpad for now for > collaboration due to missing the Kilo proposal deadline. 
> > Spec markdown in etherpad: > https://etherpad.openstack.org/p/allow-sharing-additional-ips Thomas, I did a quick look over and made a few comments because this looked similar to other stuff that I've looked at recently. I'd rather read and comment on this proposal in gerrit where all other specs are proposed. Carl From jp at jamezpolley.com Wed Dec 17 16:51:20 2014 From: jp at jamezpolley.com (James Polley) Date: Wed, 17 Dec 2014 17:51:20 +0100 Subject: [openstack-dev] [all] py33 jobs seem to be failing Message-ID: Tweaking subject as this seems to be broader than just the clients It's been seen on os-apply-config as well; I've marked 1403557 as a dupe 1403510. It's also been reported on stackforge/yaql as well as python-*client There's been some discussion of this in #openstack-infra and it seems dstufft has identified the cause; I'll update 1403510 once we have confirmation that we know what the problem is. On Wed, Dec 17, 2014 at 5:09 PM, Steve Martinelli wrote: > > Wondering if anyone can shed some light on this, it seems like a few of > the clients have been unable to build py33 environments lately: > > > http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTmFtZUVycm9yOiBuYW1lICdTdGFuZGFyZEVycm9yJyBpcyBub3QgZGVmaW5lZFwiIGJ1aWxkX3N0YXR1czonRkFJTFVSRSciLCJmaWVsZHMiOlsibWVzc2FnZSIsImJ1aWxkX25hbWUiLCJidWlsZF9zdGF0dXMiXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTg4MzIzNzM5ODl9 > > If you want to see additional logs I went ahead and opened a bug against > python-openstackclient since that's where I saw it first: > https://bugs.launchpad.net/python-openstackclient/+bug/1403557 > > Though it seems at least glanceclient/neutronclient/keystoneclient are > affected as well. > > The stack trace leads me to believe that docutils or sphinx is the > culprit, but neither has released a new version in the time the bug has > been around, so I'm not sure what the root cause of the problem is. 
> > Steve > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kprad1 at yahoo.com Wed Dec 17 17:04:24 2014 From: kprad1 at yahoo.com (Padmanabhan Krishnan) Date: Wed, 17 Dec 2014 17:04:24 +0000 (UTC) Subject: [openstack-dev] [Neutron][Nova][DHCP] Use case of enable_dhcp option when creating a network In-Reply-To: <954850604.342520.1418835750054.JavaMail.yahoo@jws10604.mail.bf1.yahoo.com> References: <954850604.342520.1418835750054.JavaMail.yahoo@jws10604.mail.bf1.yahoo.com> Message-ID: <1719769447.335867.1418835864745.JavaMail.yahoo@jws106134.mail.bf1.yahoo.com> Thanks for the response, I saw the other thread in the morning. Will use that thread if I have further questions. -Paddu From: Pasquale Porreca Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, December 17, 2014 12:37 AM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [Neutron][Nova][DHCP] Use case of enable_dhcp option when creating a network Just yesterday I asked a similar question on the ML; this is the answer I got: In Neutron, IP address management and distribution are separate concepts. IP addresses are assigned to ports even when DHCP is disabled. That IP address is indeed used to configure anti-spoofing rules and security groups. http://lists.openstack.org/pipermail/openstack-dev/2014-December/053069.html On 12/17/14 09:15, Padmanabhan Krishnan wrote: Hello, I have a question regarding the enable_dhcp option when creating a network. When a VM is attached to
a network where enable_dhcp is False, I understand that the DHCP namespace is not created for the network and the VM does not get any IP address after it boots up and sends a DHCP Discover. But I also see that the Neutron port is filled with the fixed IP value from the network pool even though there's no DHCP associated with the subnet. So, for such VMs, does one need to statically configure the IP address with whatever Neutron has allocated from the pool? What exactly is the use case of the above? I do understand that for providing public network access to VMs, an external network is generally created with the enable_dhcp option set to False. Is it only for this purpose? I was thinking of a case of external/provider DHCP servers from where VMs can get their IP addresses and when one does not want to use the L3 agent/DVR. In such cases, one may want to disable DHCP when creating networks. Isn't this a use case? Appreciate any response or corrections with my above understanding. Thanks, Paddu _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.gilliard at gmail.com Wed Dec 17 17:05:23 2014 From: matthew.gilliard at gmail.com (Matthew Gilliard) Date: Wed, 17 Dec 2014 17:05:23 +0000 Subject: [openstack-dev] [nova] Complexity check and v2 API In-Reply-To: <5491ACC2.6070207@dektech.com.au> References: <5491ACC2.6070207@dektech.com.au> Message-ID: Hello Pasquale The problem is that you are trying to add a new if/else branch into a method which is already ~250 lines long, and has the highest complexity of any function in the nova codebase.
I assume that you didn't contribute much to that complexity, but we've recently added a limit to stop it getting any worse. So, regarding your 4 suggestions: 1/ As I understand it, v2.1 should be the same as v2 at the moment, so they need to be kept the same 2/ You can't ignore it - it will fail CI 3/ No thank you. This limit should only ever be lowered :-) 4/ This is 'the right way'. Your suggestion for the refactor does sound good. I suggest a single patch that refactors and lowers the limit in tox.ini. Once you've done that then you can add the new parameter in a following patch. Please feel free to add me to any patches you create. Matthew On Wed, Dec 17, 2014 at 4:18 PM, Pasquale Porreca wrote: > Hello > > I am working on an API extension that adds a parameter on create server > call; to implement the v2 API I added few lines of code to > nova/api/openstack/compute/servers.py > > In particular just adding something like > > new_param = None > if self.ext_mgr.is_loaded('os-new-param'): > new_param = server_dict.get('new_param') > > leads to a pep8 fail with message 'Controller.create' is too complex (47) > (Note that in tox.ini the max complexity is fixed to 47 and there is a note > specifying 46 is the max complexity present at the moment). > > It is quite easy to make this test pass creating a new method just to > execute these lines of code, anyway all other extensions are handled in that > way and one of most important stylish rule states to be consistent with > surrounding code, so I don't think a separate function is the way to go > (unless it implies a change in how all other extensions are handled too). 
> > My thoughts on this situation: > > 1) New extensions should not consider v2 but only v2.1, so that file should > not be touched > 2) Ignore this error and go on: if and when the extension will be merged the > complexity in tox.ini will be changed too > 3) The complexity in tox.ini should be raised to allow new v2 extensions > 4) The code of that module should be refactored to lower the complexity > (i.e. move the load of each extension in a separate function) > > I would like to know if any of my point is close to the correct solution. > > -- > Pasquale Porreca > > DEK Technologies > Via dei Castelli Romani, 22 > 00040 Pomezia (Roma) > > Mobile +39 3394823805 > Skype paskporr > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From pasquale.porreca at dektech.com.au Wed Dec 17 17:18:06 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Wed, 17 Dec 2014 18:18:06 +0100 Subject: [openstack-dev] [nova] Complexity check and v2 API In-Reply-To: References: <5491ACC2.6070207@dektech.com.au> Message-ID: <5491BACE.2010900@dektech.com.au> Thank you for the answer. My API proposal won't be merged in the Kilo release, since the deadline for approval is tomorrow, so I may propose the fix to lower the complexity another way. What do you think about a bug fix? On 12/17/14 18:05, Matthew Gilliard wrote: > Hello Pasquale > > The problem is that you are trying to add a new if/else branch into > a method which is already ~250 lines long, and has the highest > complexity of any function in the nova codebase. I assume that you > didn't contribute much to that complexity, but we've recently added a > limit to stop it getting any worse.
So, regarding your 4 suggestions: > > 1/ As I understand it, v2.1 should be the same as v2 at the > moment, so they need to be kept the same > 2/ You can't ignore it - it will fail CI > 3/ No thank you. This limit should only ever be lowered :-) > 4/ This is 'the right way'. Your suggestion for the refactor does > sound good. > > I suggest a single patch that refactors and lowers the limit in > tox.ini. Once you've done that then you can add the new parameter in > a following patch. Please feel free to add me to any patches you > create. > > Matthew > > > > On Wed, Dec 17, 2014 at 4:18 PM, Pasquale Porreca > wrote: >> Hello >> >> I am working on an API extension that adds a parameter on create server >> call; to implement the v2 API I added few lines of code to >> nova/api/openstack/compute/servers.py >> >> In particular just adding something like >> >> new_param = None >> if self.ext_mgr.is_loaded('os-new-param'): >> new_param = server_dict.get('new_param') >> >> leads to a pep8 fail with message 'Controller.create' is too complex (47) >> (Note that in tox.ini the max complexity is fixed to 47 and there is a note >> specifying 46 is the max complexity present at the moment). >> >> It is quite easy to make this test pass creating a new method just to >> execute these lines of code, anyway all other extensions are handled in that >> way and one of most important stylish rule states to be consistent with >> surrounding code, so I don't think a separate function is the way to go >> (unless it implies a change in how all other extensions are handled too). >> >> My thoughts on this situation: >> >> 1) New extensions should not consider v2 but only v2.1, so that file should >> not be touched >> 2) Ignore this error and go on: if and when the extension will be merged the >> complexity in tox.ini will be changed too >> 3) The complexity in tox.ini should be raised to allow new v2 extensions >> 4) The code of that module should be refactored to lower the complexity >> (i.e. 
move the load of each extension in a separate function) >> >> I would like to know if any of my point is close to the correct solution. >> >> -- >> Pasquale Porreca >> >> DEK Technologies >> Via dei Castelli Romani, 22 >> 00040 Pomezia (Roma) >> >> Mobile +39 3394823805 >> Skype paskporr >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr From swaminathan.vasudevan at hp.com Wed Dec 17 17:56:41 2014 From: swaminathan.vasudevan at hp.com (Vasudevan, Swaminathan (PNB Roseville)) Date: Wed, 17 Dec 2014 17:56:41 +0000 Subject: [openstack-dev] Topic: Reschedule Router to a different agent with multiple external networks. Message-ID: <4094DC7712AF5D488899847517A3C5B064E717A0@G4W3299.americas.hpqcorp.net> Hi Folks, "Reschedule router if new external gateway is on other network": an L3 agent may be associated with just one external network, and if a router's new external gateway is on another network then the router needs to be rescheduled to the proper L3 agent. This patch was introduced when there was no support for an L3 agent to handle multiple external networks. Do we think we should still retain this original behavior even if we have support for multiple external networks handled by a single L3 agent? Can anyone comment on this? Thanks Swaminathan Vasudevan Systems Software Engineer (TC) HP Networking Hewlett-Packard 8000 Foothills Blvd M/S 5541 Roseville, CA - 95747 tel: 916.785.0937 fax: 916.785.1815 email: swaminathan.vasudevan at hp.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nikhil.komawar at RACKSPACE.COM Wed Dec 17 17:59:07 2014 From: nikhil.komawar at RACKSPACE.COM (Nikhil Komawar) Date: Wed, 17 Dec 2014 17:59:07 +0000 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: References: <5490967C.8020801@gmail.com> <5490A5D8.2090804@catalyst.net.nz> <20141216231940.GA61726@HQSML-1081034.cable.comcast.com> , Message-ID: <0FBF5631AB7B504D89C7E6929695B6249302F16F@ORD1EXD02.RACKSPACE.CORP> That looks like a decent alternative if it works. However, it would be too racy unless we implement a test-and-set for such properties, or there is a different job which queues up these requests and performs them sequentially for each tenant. Thanks, -Nikhil ________________________________ From: Chris St. Pierre [chris.a.st.pierre at gmail.com] Sent: Wednesday, December 17, 2014 10:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use? That's unfortunately too simple. You run into one of two cases: 1. If the job automatically removes the protected attribute when an image is no longer in use, then you lose the ability to use "protected" on images that are not in use. I.e., there's no way to say, "nothing is currently using this image, but please keep it around." (This seems particularly useful for snapshots, for instance.) 2. If the job does not automatically remove the protected attribute, then an image would be protected if it had ever been in use; to delete an image, you'd have to manually un-protect it, which is a workflow that quite explicitly defeats the whole purpose of flagging images as protected when they're in use. It seems like flagging an image as *not* in use is actually a fairly difficult problem, since it requires consensus among all components that might be using images. The only solution that readily occurs to me would be to add something like a filesystem link count to images in Glance.
Then when Nova spawns an instance, it increments the usage count; when the instance is destroyed, the usage count is decremented. And similarly with other components that use images. An image could only be deleted when its usage count was zero. There are ample opportunities to get out of sync there, but it's at least a sketch of something that might work, and isn't *too* horribly hackish. Thoughts? On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya > wrote: A simple solution that wouldn?t require modification of glance would be a cron job that lists images and snapshots and marks them protected while they are in use. Vish On Dec 16, 2014, at 3:19 PM, Collins, Sean > wrote: > On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote: >> No, I'm looking to prevent images that are in use from being deleted. "In >> use" and "protected" are disjoint sets. > > I have seen multiple cases of images (and snapshots) being deleted while > still in use in Nova, which leads to some very, shall we say, > interesting bugs and support problems. > > I do think that we should try and determine a way forward on this, they > are indeed disjoint sets. Setting an image as protected is a proactive > measure, we should try and figure out a way to keep tenants from > shooting themselves in the foot if possible. > > -- > Sean M. Collins > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Chris St. Pierre -------------- next part -------------- An HTML attachment was scrubbed... 
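The usage-count idea sketched above might look like the following. The class and method names are invented for illustration, and a real version would need the count updates to be atomic against the database (the test-and-set Nikhil mentions), not a simple in-process lock:

```python
# Hypothetical sketch of the "link count" proposal from this thread: an
# image may only be deleted once its usage count drops to zero. A lock
# stands in for the atomic test-and-set a real deployment would need;
# none of these names correspond to actual Glance or Nova APIs.
import threading


class ImageUsage:
    def __init__(self):
        self._counts = {}
        self._lock = threading.Lock()

    def acquire(self, image_id):
        """Called when e.g. Nova spawns an instance from the image."""
        with self._lock:
            self._counts[image_id] = self._counts.get(image_id, 0) + 1

    def release(self, image_id):
        """Called when the instance is destroyed."""
        with self._lock:
            self._counts[image_id] -= 1

    def try_delete(self, image_id):
        """Only allow deletion when nothing references the image."""
        with self._lock:
            return self._counts.get(image_id, 0) == 0


usage = ImageUsage()
usage.acquire("img-1")
print(usage.try_delete("img-1"))  # False: still in use
usage.release("img-1")
print(usage.try_delete("img-1"))  # True: no users left
```

The hard part, as the thread notes, is getting every image-consuming component to participate in the counting and keeping the counts from drifting out of sync.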
URL: From unmesh.gurjar at hp.com Wed Dec 17 18:05:46 2014 From: unmesh.gurjar at hp.com (Gurjar, Unmesh) Date: Wed, 17 Dec 2014 18:05:46 +0000 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <548FB135.30209@redhat.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> <548B8480.9010506@redhat.com> <4641310AFBEE10419D0A020273367C140CA3A305@G1W3645.americas.hpqcorp.net> <548FB135.30209@redhat.com> Message-ID: > -----Original Message----- > From: Zane Bitter [mailto:zbitter at redhat.com] > Sent: Tuesday, December 16, 2014 9:43 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > showdown > > On 15/12/14 07:47, Murugan, Visnusaran wrote: > > Hi Zane, > > > > We have been going through this chain for quite some time now and we > still feel a disconnect in our understanding. > > Yes, I thought last week that we were on the same page, but now it looks like > we're drifting off again :( > > > Can you put up a etherpad where we can understand your approach. > > Maybe you could put up an etherpad with your questions. Practically all of > the relevant code is in Stack._create_or_update, Stack._dependencies and > Converger.check_resource. That's 134 lines of code by my count. > There's not a lot more I can usefully say about it without knowing which parts > exactly you're stuck on, but I can definitely answer specific questions. 
> > > For example: for storing resource dependencies, Are you storing its > > name, version tuple or just its ID. > > I'm storing a tuple of its name and database ID. The data structure is > resource.GraphKey. I was originally using the name for something, but I > suspect I could probably drop it now and just store the database ID, but I > haven't tried it yet. (Having the name in there definitely makes debugging > more pleasant though ;) > I agree, having name might come in handy while debugging! > When I build the traversal graph each node is a tuple of the GraphKey and a > boolean to indicate whether it corresponds to an update or a cleanup > operation (both can appear for a single resource in the same graph). Just to confirm my understanding, cleanup operation takes care of both: 1. resources which are deleted as a part of update and 2. previous versioned resource which was updated by replacing with a new resource (UpdateReplace scenario) Also, the cleanup operation is performed after the update completes successfully. > > > If I am correct, you are updating all resources on update regardless > > of their change which will be inefficient if stack contains a million resource. > > I'm calling update() on all resources regardless of change, but update() will > only call handle_update() if something has changed (unless the plugin has > overridden Resource._needs_update()). > > There's no way to know whether a resource needs to be updated before > you're ready to update it, so I don't think of this as 'inefficient', just 'correct'. > > > We have similar questions regarding other areas in your > > implementation, which we believe if we understand the outline of your > > implementation. It is difficult to get a hold on your approach just by looking > at code. Docs strings / Etherpad will help. > > > > > > About streams, Yes in a million resource stack, the data will be huge, but > less than template. > > No way, it's O(n^3) (cubed!) 
in the worst case to store streams for each > resource. > > > Also this stream is stored > > only In IN_PROGRESS resources. > > Now I'm really confused. Where does it come from if the resource doesn't > get it until it's already in progress? And how will that information help it? > When an operation on stack is initiated, the stream will be identified. To begin the operation, the action is initiated on the leaf (or root) resource(s) and the stream is stored (only) in this/these IN_PROGRESS resource(s). The stream should then keep getting passed to the next/previous level of resource(s) as and when the dependencies for the next/previous level of resource(s) are met. > > The reason to have entire dependency list to reduce DB queries while a > stack update. > > But we never need to know that. We only need to know what just happened > and what to do next. > As mentioned earlier, each level of resources in a graph passes on the dependency list/stream to the next/previous level of resources. This information should further be used to determine what is to be done next and drive the operation to completion. > > When you have a singular dependency on each resources similar to your > > implantation, then we will end up loading Dependencies one at a time and > altering almost all resource's dependency regardless of their change. > > > > Regarding a 2 template approach for delete, it is not actually 2 > > different templates. Its just that we have a delete stream To be taken up > post update. > > That would be a regression from Heat's current behaviour, where we start > cleaning up resources as soon as they have nothing depending on them. > There's not even a reason to make it worse than what we already have, > because it's actually a lot _easier_ to treat update and clean up as the same > kind of operation and throw both into the same big graph.
The dual > implementations and all of the edge cases go away and you can just trust in > the graph traversal to do the Right Thing in the most parallel way possible. > > > (Any post operation will be handled as an update) This approach is > > True when Rollback==True We can always fall back to regular stream > > (non-delete stream) if Rollback=False > > I don't understand what you're saying here. Just to elaborate, in case of update with rollback, there will be 2 streams of operations: 1. first is the create and update resource stream 2. second is the stream for deleting resources (which will be taken if the first stream completes successfully). However, in case of an update without rollback, there will be a single stream of operation (no delete/cleanup stream required). > > > In our view we would like to have only one basic operation and that is > UPDATE. > > > > 1. CREATE will be an update where a realized graph == Empty 2. UPDATE > > will be an update where realized graph == Full/Partial realized > > (possibly with a delete stream as post operation if Rollback==True) 3. > DELETE will be just another update with an empty to_be_realized_graph. > > Yes, that goes without saying. In my implementation > Stack._create_or_update() handles all three operations. > > > It would be great if we can freeze a stable approach by mid-week as > > Christmas vacations are round the corner. 
:) :) > > > >> -----Original Message----- > >> From: Zane Bitter [mailto:zbitter at redhat.com] > >> Sent: Saturday, December 13, 2014 5:43 AM > >> To: openstack-dev at lists.openstack.org > >> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > >> showdown > >> > >> On 12/12/14 05:29, Murugan, Visnusaran wrote: > >>> > >>> > >>>> -----Original Message----- > >>>> From: Zane Bitter [mailto:zbitter at redhat.com] > >>>> Sent: Friday, December 12, 2014 6:37 AM > >>>> To: openstack-dev at lists.openstack.org > >>>> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > >>>> showdown > >>>> > >>>> On 11/12/14 08:26, Murugan, Visnusaran wrote: > >>>>>>> [Murugan, Visnusaran] > >>>>>>> In case of rollback where we have to cleanup earlier version of > >>>>>>> resources, > >>>>>> we could get the order from old template. We'd prefer not to have > >>>>>> a graph table. > >>>>>> > >>>>>> In theory you could get it by keeping old templates around. But > >>>>>> that means keeping a lot of templates, and it will be hard to > >>>>>> keep track of when you want to delete them. It also means that > >>>>>> when starting an update you'll need to load every existing > >>>>>> previous version of the template in order to calculate the > >>>>>> dependencies. It also leaves the dependencies in an ambiguous > >>>>>> state when a resource fails, and although that can be worked > >>>>>> around it will be a giant pain to > >> implement. > >>>>>> > >>>>> > >>>>> Agree that looking to all templates for a delete is not good. But > >>>>> baring Complexity, we feel we could achieve it by way of having an > >>>>> update and a delete stream for a stack update operation. I will > >>>>> elaborate in detail in the etherpad sometime tomorrow :) > >>>>> > >>>>>> I agree that I'd prefer not to have a graph table. 
After trying a > >>>>>> couple of different things I decided to store the dependencies in > >>>>>> the Resource table, where we can read or write them virtually for > >>>>>> free because it turns out that we are always reading or updating > >>>>>> the Resource itself at exactly the same time anyway. > >>>>>> > >>>>> > >>>>> Not sure how this will work in an update scenario when a resource > >>>>> does not change and its dependencies do. > >>>> > >>>> We'll always update the requirements, even when the properties > >>>> don't change. > >>>> > >>> > >>> Can you elaborate a bit on rollback. > >> > >> I didn't do anything special to handle rollback. It's possible that > >> we need to - obviously the difference in the UpdateReplace + rollback > >> case is that the replaced resource is now the one we want to keep, > >> and yet the replaced_by/replaces dependency will force the newer > >> (replacement) resource to be checked for deletion first, which is an > >> inversion of the usual order. > >> > >> However, I tried to think of a scenario where that would cause > >> problems and I couldn't come up with one. Provided we know the > >> actual, real-world dependencies of each resource I don't think the > >> ordering of those two checks matters. > >> > >> In fact, I currently can't think of a case where the dependency order > >> between replacement and replaced resources matters at all. It matters > >> in the current Heat implementation because resources are artificially > >> segmented into the current and backup stacks, but with a holistic > >> view of dependencies that may well not be required. I tried taking > >> that line out of the simulator code and all the tests still passed. > >> If anybody can think of a scenario in which it would make a difference, I > would be very interested to hear it. > >> > >> In any event though, it should be no problem to reverse the direction > >> of that one edge in these particular circumstances if it does turn > >> out to be a problem. 
> >> > >>> We had an approach with depends_on > >>> and needed_by columns in ResourceTable. But dropped it when we > >> figured > >>> out we had too many DB operations for Update. > >> > >> Yeah, I initially ran into this problem too - you have a bunch of > >> nodes that are waiting on the current node, and now you have to go > >> look them all up in the database to see what else they're waiting on > >> in order to tell if they're ready to be triggered. > >> > >> It turns out the answer is to distribute the writes but centralise > >> the reads. So at the start of the update, we read all of the > >> Resources, obtain their dependencies and build one central graph[1]. > >> We than make that graph available to each resource (either by passing > >> it as a notification parameter, or storing it somewhere central in > >> the DB that they will all have to read anyway, i.e. the Stack). But > >> when we update a dependency we don't update the central graph, we > >> update the individual Resource so there's no global lock required. > >> > >> [1] > >> https://github.com/zaneb/heat-convergence- > prototype/blob/distributed- > >> graph/converge/stack.py#L166-L168 > >> > >>>>> Also taking care of deleting resources in order will be an issue. > >>>> > >>>> It works fine. > >>>> > >>>>> This implies that there will be different versions of a resource > >>>>> which will even complicate further. > >>>> > >>>> No it doesn't, other than the different versions we already have > >>>> due to UpdateReplace. > >>>> > >>>>>>>> This approach reduces DB queries by waiting for completion > >>>>>>>> notification > >>>>>> on a topic. The drawback I see is that delete stack stream will > >>>>>> be huge as it will have the entire graph. We can always dump such > >>>>>> data in ResourceLock.data Json and pass a simple flag > >>>>>> "load_stream_from_db" to converge RPC call as a workaround for > >>>>>> delete > >>>> operation. 
> >>>>>>> > >>>>>>> This seems to be essentially equivalent to my 'SyncPoint' > >>>>>>> proposal[1], with > >>>>>> the key difference that the data is stored in-memory in a Heat > >>>>>> engine rather than the database. > >>>>>>> > >>>>>>> I suspect it's probably a mistake to move it in-memory for > >>>>>>> similar reasons to the argument Clint made against synchronising > >>>>>>> the marking off > >>>>>> of dependencies in-memory. The database can handle that and the > >>>>>> problem of making the DB robust against failures of a single > >>>>>> machine has already been solved by someone else. If we do it > >>>>>> in-memory we are just creating a single point of failure for not > >>>>>> much gain. (I guess you could argue it doesn't matter, since if > >>>>>> any Heat engine dies during the traversal then we'll have to kick > >>>>>> off another one anyway, but it does limit our options if that > >>>>>> changes in the > >>>>>> future.) [Murugan, Visnusaran] Resource completes, removes itself > >>>>>> from resource_lock and notifies engine. Engine will acquire > >>>>>> parent lock and initiate parent only if all its children are > >>>>>> satisfied (no child entry in > >>>> resource_lock). > >>>>>> This will come in place of Aggregator. > >>>>>> > >>>>>> Yep, if you s/resource_lock/SyncPoint/ that's more or less > >>>>>> exactly what I > >>>> did. > >>>>>> The three differences I can see are: > >>>>>> > >>>>>> 1) I think you are proposing to create all of the sync points at > >>>>>> the start of the traversal, rather than on an as-needed basis. > >>>>>> This is probably a good idea. I didn't consider it because of the > >>>>>> way my prototype evolved, but there's now no reason I can see not > to do this. > >>>>>> If we could move the data to the Resource table itself then we > >>>>>> could even get it for free from an efficiency point of view. > >>>>> > >>>>> +1. But we will need engine_id to be stored somewhere for recovery > >>>> purpose (easy to be queried format). 
> >>>> > >>>> Yeah, so I'm starting to think you're right, maybe the/a Lock table > >>>> is the right thing to use there. We could probably do it within the > >>>> resource table using the same select-for-update to set the > >>>> engine_id, but I agree that we might be starting to jam too much into > that one table. > >>>> > >>> > >>> yeah. Unrelated values in resource table. Upon resource completion > >>> we have to unset engine_id as well, as compared to dropping a row > >>> from > >> resource lock. > >>> Both are good. Having engine_id in resource_table will reduce db > >>> operations in half. We should go with just resource table along with > >> engine_id. > >> > >> OK > >> > >>>>> Sync points are created as-needed. Single resource is enough to > >>>>> restart > >>>> that entire stream. > >>>>> I think there is a disconnect in our understanding. I will detail > >>>>> it as well in > >>>> the etherpad. > >>>> > >>>> OK, that would be good. > >>>> > >>>>>> 2) You're using a single list from which items are removed, > >>>>>> rather than two lists (one static, and one to which items are > >>>>>> added) that get > >>>> compared. > >>>>>> Assuming (1) then this is probably a good idea too. > >>>>> > >>>>> Yeah. We have a single list per active stream which works by > >>>>> removing complete/satisfied resources from it. > >>>> > >>>> I went to change this and then remembered why I did it this way: > >>>> the sync point is also storing data about the resources that are > >>>> triggering it. Part of this is the RefID and attributes, and we > >>>> could replace that by storing that data in the Resource itself and > >>>> querying it rather than having it passed in via the notification. > >>>> But the other part is the ID/key of those resources, which we > >>>> _need_ to know in order to update the requirements in case one of > >>>> them has been replaced and thus the graph doesn't reflect it yet.
> >>>> (Or, for that matter, we need it to know where to go looking for > >>>> the RefId and/or attributes if they're in the > >>>> DB.) So we have to store some data, we can't just remove items from > >>>> the required list (although we could do that as well). > >>>> > >>>>>> 3) You're suggesting to notify the engine unconditionally and let > >>>>>> the engine decide if the list is empty. That's probably not a > >>>>>> good idea - not only does it require extra reads, it introduces a > >>>>>> race condition that you then have to solve (it can be solved, > >>>>>> it's just more > >> work). > >>>>>> Since the update to remove a child from the list is atomic, it's > >>>>>> best to just trigger the engine only if the list is now empty. > >>>>>> > >>>>> > >>>>> No. Notify only if stream has something to be processed. The newer > >>>>> Approach based on db lock will be that the last resource will > >>>>> initiate its > >>>> parent. > >>>>> This is opposite to what our Aggregator model had suggested. > >>>> > >>>> OK, I think we're on the same page on this one then. > >>>> > >>> > >>> > >>> Yeah. > >>> > >>>>>>> It's not clear to me how the 'streams' differ in practical terms > >>>>>>> from just passing a serialisation of the Dependencies object, > >>>>>>> other than being incomprehensible to me ;). The current > >>>>>>> Dependencies implementation > >>>>>>> (1) is a very generic implementation of a DAG, (2) works and has > >>>>>>> plenty of > >>>>>> unit tests, (3) has, with I think one exception, a pretty > >>>>>> straightforward API, > >>>>>> (4) has a very simple serialisation, returned by the edges() > >>>>>> method, which can be passed back into the constructor to recreate > >>>>>> it, and (5) has an API that is to some extent relied upon by > >>>>>> resources, and so won't likely be removed outright in any event. > >>>>>>> Whatever code we need to handle dependencies ought to just > build > >>>>>>> on > >>>>>> this existing implementation. 
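Point (4) above - edges() returning a serialisation that the constructor can consume to recreate the graph - is easy to see with a toy graph. This is an illustration only, not the real Dependencies class:

```python
class Graph(object):
    """Toy stand-in for a dependency graph serialised as edge pairs."""

    def __init__(self, edges=()):
        self._edges = set(edges)

    def edges(self):
        # the entire serialisable state: (requirer, required) pairs
        return sorted(self._edges)

    def required_by(self, node):
        # everything that depends on the given node
        return sorted(r for r, req in self._edges if req == node)

g = Graph([("server", "port"), ("server", "volume"), ("port", "net")])
copy = Graph(g.edges())           # round-trips through the serialisation
print(copy.edges() == g.edges())  # True
print(copy.required_by("port"))   # ['server']
```

Nothing beyond the edge list is needed to rebuild the structure, which is the sense in which a list of edges is the smallest complete representation.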
> >>>>>>> [Murugan, Visnusaran] Our thought was to reduce payload size > >>>>>> (template/graph). Just planning for worst case scenario (million > >>>>>> resource > >>>>>> stack) We could always dump them in ResourceLock.data to be > >>>>>> loaded by Worker. > >> > >> With the latest updates to the Etherpad, I'm even more confused by > >> streams than I was before. > >> > >> One thing I never understood is why do you need to store the whole > >> path to reach each node in the graph? Surely you only need to know > >> the nodes this one is waiting on, the nodes waiting on this one and > >> the ones those are waiting on, not the entire history up to this > >> point. The size of each stream is theoretically up to O(n^2) and > >> you're storing n of them - that's going to get painful in this million-resource stack. > >> > >>>>>> If there's a smaller representation of a graph than a list of > >>>>>> edges then I don't know what it is. The proposed stream structure > >>>>>> certainly isn't it, unless you mean as an alternative to storing > >>>>>> the entire graph once for each resource. A better alternative is > >>>>>> to store it once centrally - in my current implementation it is > >>>>>> passed down through the trigger messages, but since only one > >>>>>> traversal can be in progress at a time it could just as easily be > >>>>>> stored in the Stack table of the > >>>> database at the slight cost of an extra write. > >>>>>> > >>>>> > >>>>> Agree that edge is the smallest representation of a graph. But it > >>>>> does not give us a complete picture without doing a DB lookup. Our > >>>>> assumption was to store streams in IN_PROGRESS resource_lock.data > >>>>> column. This could be in resource table instead. > >>>> > >>>> That's true, but I think in practice at any point where we need to > >>>> look at this we will always have already loaded the Stack from the > >>>> DB for some other reason, so we actually can get it for free.
(See > >>>> detailed discussion in my reply to Anant.) > >>>> > >>> > >>> Aren't we planning to stop loading stack with all resource objects > >>> in future to address scalability concerns we currently have? > >> > >> We plan on not loading all of the Resource objects each time we load > >> the Stack object, but I think we will always need to have loaded the > >> Stack object (for example, we'll need to check the current traversal > >> ID, amongst other reasons). So if the serialised dependency graph is > >> stored in the Stack it will be no big deal. > >> > >>>>>> I'm not opposed to doing that, BTW. In fact, I'm really > >>>>>> interested in your input on how that might help make recovery > >>>>>> from failure more robust. I know Anant mentioned that not storing > >>>>>> enough data to recover when a node dies was his big concern with > >>>>>> my current > >> approach. > >>>>>> > >>>>> > >>>>> With streams, we feel recovery will be easier. All we need is a > >>>>> trigger :) > >>>>> > >>>>>> I can see that by both creating all the sync points at the start > >>>>>> of the traversal and storing the dependency graph in the database > >>>>>> instead of letting it flow through the RPC messages, we would be > >>>>>> able to resume a traversal where it left off, though I'm not sure > >>>>>> what that buys > >>>> us. > >>>>>> > >>>>>> And I guess what you're suggesting is that by having an explicit > >>>>>> lock with the engine ID specified, we can detect when a resource > >>>>>> is stuck in IN_PROGRESS due to an engine going down? That's > >>>>>> actually pretty > >>>> interesting. > >>>>>> > >>>>> > >>>>> Yeah :) > >>>>> > >>>>>>> Based on our call on Thursday, I think you're taking the idea of > >>>>>>> the Lock > >>>>>> table too literally.
The point of referring to locks is that we > >>>>>> can use the same concepts as the Lock table relies on to do > >>>>>> atomic updates on a particular row of the database, and we can > >>>>>> use those atomic updates to prevent race conditions when > >>>>>> implementing SyncPoints/Aggregators/whatever you want to call > >>>>>> them. It's not that we'd actually use the Lock table itself, > >>>>>> which implements a mutex and therefore offers only a much slower > >>>>>> and more stateful way of doing what we want (lock mutex, change > data, unlock mutex). > >>>>>>> [Murugan, Visnusaran] Are you suggesting something like a > >>>>>>> select-for- > >>>>>> update in resource table itself without having a lock table? > >>>>>> > >>>>>> Yes, that's exactly what I was suggesting. > >>>>> > >>>>> DB is always good for sync. But we need to be careful not to overdo it. > >>>> > >>>> Yeah, I see what you mean now, it's starting to _feel_ like there'd > >>>> be too many things mixed together in the Resource table. Are you > >>>> aware of some concrete harm that might cause though? What happens > >>>> if we overdo it? Is select-for-update on a huge row more expensive > >>>> than the whole overhead of manipulating the Lock? > >>>> > >>>> Just trying to figure out if intuition is leading me astray here. > >>>> > >>> > >>> You are right. There should be no difference apart from little bump > >>> In memory usage. But I think it should be fine. > >>> > >>>>> Will update etherpad by tomorrow. > >>>> > >>>> OK, thanks. > >>>> > >>>> cheers, > >>>> Zane. > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Thanks, Unmesh G. irc: unmeshg From vishvananda at gmail.com Wed Dec 17 18:56:38 2014 From: vishvananda at gmail.com (Vishvananda Ishaya) Date: Wed, 17 Dec 2014 10:56:38 -0800 Subject: [openstack-dev] [qa] How to delete a VM which is in ERROR state? 
In-Reply-To: References: <548B6844.4020804@gmail.com> <2075A451-DE64-4AEE-9B86-6201A8612C5B@gmail.com> Message-ID: There have been a few, but we were specifically hitting this one: https://www.redhat.com/archives/libvir-list/2014-March/msg00501.html Vish On Dec 17, 2014, at 7:03 AM, Belmiro Moreira wrote: > Hi Vish, > do you have more info about the libvirt deadlocks that you observed? > Maybe I'm observing the same on SLC6 where I can't even "kill" libvirtd process. > > Belmiro > > On Tue, Dec 16, 2014 at 12:01 AM, Vishvananda Ishaya wrote: > I have seen deadlocks in libvirt that could cause this. When you are in this state, check to see if you can do a virsh list on the node. If not, libvirt is deadlocked, and ubuntu may need to pull in a fix/newer version. > > Vish > > On Dec 12, 2014, at 2:12 PM, pcrews wrote: > > > On 12/09/2014 03:54 PM, Ken'ichi Ohmichi wrote: > >> Hi, > >> > >> This case is always tested by Tempest on the gate. > >> > >> https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_delete_server.py#L152 > >> > >> So I guess this problem wouldn't happen on the latest version at least. > >> > >> Thanks > >> Ken'ichi Ohmichi > >> > >> --- > >> > >> 2014-12-10 6:32 GMT+09:00 Joe Gordon : > >>> > >>> > >>> On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) > >>> wrote: > >>>> > >>>> Hi, > >>>> > >>>> I have a VM which is in ERROR state. 
> >>>> > >>>>
> >>>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
> >>>> | ID                                   | Name                                         | Status | Task State | Power State | Networks           |
> >>>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
> >>>> | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR  | -          | NOSTATE     |                    |
> >>>> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------+
> >>>> I tried in both CLI 'nova delete' and Horizon 'terminate instance'. > >>>> Both accepted the delete command without any error. > >>>> However, the VM never got deleted. > >>>> > >>>> Is there a way to remove the VM? > >>> > >>> > >>> What version of nova are you using? This is definitely a serious bug, you > >>> should be able to delete an instance in error state. Can you file a bug that > >>> includes steps on how to reproduce the bug along with all relevant logs. > >>> > >>> bugs.launchpad.net/nova > >>> > >>>> > >>>> > >>>> Thanks, > >>>> Danny > >>>> > >>>> _______________________________________________ > >>>> OpenStack-dev mailing list > >>>> OpenStack-dev at lists.openstack.org > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > >>> > >>> > >>> _______________________________________________ > >>> OpenStack-dev mailing list > >>> OpenStack-dev at lists.openstack.org > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > Hi, > > > > I've encountered this in my own testing and have found that it appears to be tied to libvirt.
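The "check to see if you can do a virsh list on the node" test suggested earlier in this thread can be scripted; the sketch below uses an arbitrary timeout threshold, and the virsh command line appears only in a comment, to be run on the suspect compute node:

```python
import subprocess
import sys

def hangs(cmd, timeout=5.0):
    """Return True if cmd fails to finish within `timeout` seconds -
    for virsh, a decent hint that libvirtd is deadlocked."""
    try:
        subprocess.run(cmd, stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL, timeout=timeout)
        return False
    except subprocess.TimeoutExpired:
        return True
    except OSError:
        return False  # command not installed: can't tell, assume not hung

# On a suspect hypervisor one would call:  hangs(["virsh", "list"], timeout=10)
# Demonstrated here with stand-in commands:
print(hangs([sys.executable, "-c", "pass"]))  # False - returns promptly
print(hangs([sys.executable, "-c", "import time; time.sleep(30)"], 0.5))  # True
```

Note that subprocess.run and its timeout argument are Python 3; on the Python 2 installs of that era the same idea needs Popen plus a poll loop.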
> > > > When I hit this, reset-state as the admin user reports success (and state is set), *but* things aren't really working as advertised and subsequent attempts to do anything with the errant vm's will send them right back into 'FLAIL' / can't delete / endless DELETING mode. > > > > restarting libvirt-bin on my machine fixes this - after restart, the deleting vm's are properly wiped without any further user input to nova/horizon and all seems right in the world. > > > > using: > > devstack > > ubuntu 14.04 > > libvirtd (libvirt) 1.2.2 > > > > triggered via: > > lots of random create/reboot/resize/delete requests of varying validity and sanity. > > > > Am in the process of cleaning up my test code so as not to hurt anyone's brain with the ugly and will file a bug once done, but thought this worth sharing. > > > > Thanks, > > Patrick > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.a.st.pierre at gmail.com Wed Dec 17 19:07:26 2014 From: chris.a.st.pierre at gmail.com (Chris St. Pierre) Date: Wed, 17 Dec 2014 13:07:26 -0600 Subject: [openstack-dev] [glance] Option to skip deleting images in use? 
In-Reply-To: <0FBF5631AB7B504D89C7E6929695B6249302F16F@ORD1EXD02.RACKSPACE.CORP> References: <5490967C.8020801@gmail.com> <5490A5D8.2090804@catalyst.net.nz> <20141216231940.GA61726@HQSML-1081034.cable.comcast.com> <0FBF5631AB7B504D89C7E6929695B6249302F16F@ORD1EXD02.RACKSPACE.CORP> Message-ID: I was assuming atomic increment/decrement operations, in which case I'm not sure I see the race conditions. Or is atomism assuming too much? On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar < nikhil.komawar at rackspace.com> wrote: > That looks like a decent alternative if it works. However, it would be > too racy unless we implement a test-and-set for such properties or there > is a different job which queues up these requests and performs them sequentially > for each tenant. > > Thanks, > -Nikhil > ------------------------------ > *From:* Chris St. Pierre [chris.a.st.pierre at gmail.com] > *Sent:* Wednesday, December 17, 2014 10:23 AM > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [glance] Option to skip deleting images in > use? > > That's unfortunately too simple. You run into one of two cases: > > 1. If the job automatically removes the protected attribute when an > image is no longer in use, then you lose the ability to use "protected" on > images that are not in use. I.e., there's no way to say, "nothing is > currently using this image, but please keep it around." (This seems > particularly useful for snapshots, for instance.) > > 2. If the job does not automatically remove the protected attribute, > then an image would be protected if it had ever been in use; to delete an > image, you'd have to manually un-protect it, which is a workflow that quite > explicitly defeats the whole purpose of flagging images as protected when > they're in use. > > It seems like flagging an image as *not* in use is actually a fairly > difficult problem, since it requires consensus among all components that > might be using images.
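The atomic increment/decrement being debated here can indeed avoid a separate test-and-set if every mutation is a single guarded statement - the test rides along in the WHERE clause, so nothing can slip in between the check and the change. A minimal illustration (not Glance's schema; the table and function names are invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE image (id TEXT PRIMARY KEY, in_use INTEGER)")
db.execute("INSERT INTO image VALUES ('img-1', 0)")

def acquire(image_id):
    # e.g. Nova spawning an instance from the image
    db.execute("UPDATE image SET in_use = in_use + 1 WHERE id = ?",
               (image_id,))

def release(image_id):
    # instance destroyed; guard prevents the count going negative
    db.execute("UPDATE image SET in_use = in_use - 1 "
               "WHERE id = ? AND in_use > 0", (image_id,))

def try_delete(image_id):
    """Delete only if nothing references the image.  The WHERE clause
    is the test-and-set: a concurrent acquire() cannot interleave
    between the check and the delete."""
    cur = db.execute("DELETE FROM image WHERE id = ? AND in_use = 0",
                     (image_id,))
    return cur.rowcount == 1

acquire("img-1")
print(try_delete("img-1"))  # False - still referenced
release("img-1")
print(try_delete("img-1"))  # True
```

The hard part remains the one the message above ends on: getting every component that uses images to call the increment and decrement reliably.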
> > The only solution that readily occurs to me would be to add something > like a filesystem link count to images in Glance. Then when Nova spawns an > instance, it increments the usage count; when the instance is destroyed, > the usage count is decremented. And similarly with other components that > use images. An image could only be deleted when its usage count was zero. > > There are ample opportunities to get out of sync there, but it's at > least a sketch of something that might work, and isn't *too* horribly > hackish. Thoughts? > > On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya > wrote: > >> A simple solution that wouldn't require modification of glance would be a >> cron job >> that lists images and snapshots and marks them protected while they are >> in use. >> >> Vish >> >> On Dec 16, 2014, at 3:19 PM, Collins, Sean < >> Sean_Collins2 at cable.comcast.com> wrote: >> >> > On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote: >> >> No, I'm looking to prevent images that are in use from being deleted. >> "In >> >> use" and "protected" are disjoint sets. >> > >> > I have seen multiple cases of images (and snapshots) being deleted while >> > still in use in Nova, which leads to some very, shall we say, >> > interesting bugs and support problems. >> > >> > I do think that we should try and determine a way forward on this, they >> > are indeed disjoint sets. Setting an image as protected is a proactive >> > measure, we should try and figure out a way to keep tenants from >> > shooting themselves in the foot if possible. >> > >> > -- >> > Sean M.
Collins >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Chris St. Pierre > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Chris St. Pierre -------------- next part -------------- An HTML attachment was scrubbed... URL: From cbkyeoh at gmail.com Wed Dec 17 19:10:40 2014 From: cbkyeoh at gmail.com (Christopher Yeoh) Date: Wed, 17 Dec 2014 19:10:40 +0000 Subject: [openstack-dev] [nova] Complexity check and v2 API References: <5491ACC2.6070207@dektech.com.au> <5491BACE.2010900@dektech.com.au> Message-ID: Hi, Given the timing (no spec approved) it sounds like a v2.1 plus microversions (just merging) with no v2 changes at all. The v2.1 framework is more flexible and you should need no changes to servers.py at all as there are hooks for adding extra parameters in separate plugins. There are examples of this in the v3 directory which is really v2.1 now. Chris On Thu, 18 Dec 2014 at 3:49 am, Pasquale Porreca < pasquale.porreca at dektech.com.au> wrote: > Thank you for the answer. > > my API proposal won't be merged in kilo release since the deadline for > approval is tomorrow, so I may propose the fix to lower the complexity > in another way, what do you think about a bug fix? > > On 12/17/14 18:05, Matthew Gilliard wrote: > > Hello Pasquale > > > > The problem is that you are trying to add a new if/else branch into > > a method which is already ~250 lines long, and has the highest > > complexity of any function in the nova codebase. 
I assume that you > > didn't contribute much to that complexity, but we've recently added a > > limit to stop it getting any worse. So, regarding your 4 suggestions: > > > > 1/ As I understand it, v2.1 should be the same as v2 at the > > moment, so they need to be kept the same > > 2/ You can't ignore it - it will fail CI > > 3/ No thank you. This limit should only ever be lowered :-) > > 4/ This is 'the right way'. Your suggestion for the refactor does > > sound good. > > > > I suggest a single patch that refactors and lowers the limit in > > tox.ini. Once you've done that then you can add the new parameter in > > a following patch. Please feel free to add me to any patches you > > create. > > > > Matthew > > > > > > > > On Wed, Dec 17, 2014 at 4:18 PM, Pasquale Porreca > > wrote: > >> Hello > >> > >> I am working on an API extension that adds a parameter on create server > >> call; to implement the v2 API I added a few lines of code to > >> nova/api/openstack/compute/servers.py > >> > >> In particular just adding something like > >> > >> new_param = None > >> if self.ext_mgr.is_loaded('os-new-param'): > >> new_param = server_dict.get('new_param') > >> > >> leads to a pep8 fail with message 'Controller.create' is too complex (47) > >> (Note that in tox.ini the max complexity is fixed to 47 and there is a note > >> specifying 46 is the max complexity present at the moment). > >> > >> It is quite easy to make this test pass by creating a new method just to > >> execute these lines of code, anyway all other extensions are handled in that > >> way and one of the most important style rules is to be consistent with > >> surrounding code, so I don't think a separate function is the way to go > >> (unless it implies a change in how all other extensions are handled too).
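One possible shape for the refactor endorsed in 4/ - purely illustrative names, not the actual servers.py code - is to make the per-extension parameter handling data-driven, so each new extension adds a table entry instead of another branch in Controller.create:

```python
OPTIONAL_EXT_PARAMS = [
    # (extension alias, key in the request body) - hypothetical entries
    ('os-new-param', 'new_param'),
    ('os-other-ext', 'other_param'),
]

def extension_params(ext_mgr, server_dict):
    """Collect parameters for whichever optional extensions are loaded."""
    return {key: server_dict.get(key)
            for alias, key in OPTIONAL_EXT_PARAMS
            if ext_mgr.is_loaded(alias)}

class FakeExtMgr(object):
    """Stand-in for nova's extension manager, for demonstration only."""
    def __init__(self, loaded):
        self._loaded = set(loaded)

    def is_loaded(self, alias):
        return alias in self._loaded

mgr = FakeExtMgr(['os-new-param'])
print(extension_params(mgr, {'new_param': 42, 'other_param': 7}))
# -> {'new_param': 42}
```

With this shape, adding 'os-new-param' costs no additional cyclomatic complexity in the create method itself.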
> >> > >> My thoughts on this situation: > >> > >> 1) New extensions should not consider v2 but only v2.1, so that file > should > >> not be touched > >> 2) Ignore this error and go on: if and when the extension will be > merged the > >> complexity in tox.ini will be changed too > >> 3) The complexity in tox.ini should be raised to allow new v2 extensions > >> 4) The code of that module should be refactored to lower the complexity > >> (i.e. move the load of each extension in a separate function) > >> > >> I would like to know if any of my point is close to the correct > solution. > >> > >> -- > >> Pasquale Porreca > >> > >> DEK Technologies > >> Via dei Castelli Romani, 22 > >> 00040 Pomezia (Roma) > >> > >> Mobile +39 3394823805 > >> Skype paskporr > >> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Pasquale Porreca > > DEK Technologies > Via dei Castelli Romani, 22 > 00040 Pomezia (Roma) > > Mobile +39 3394823805 > Skype paskporr > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thingee at gmail.com Wed Dec 17 20:00:34 2014 From: thingee at gmail.com (Mike Perez) Date: Wed, 17 Dec 2014 12:00:34 -0800 Subject: [openstack-dev] [python-cinderclient] Return request ID to caller In-Reply-To: <588CEB80D6885A41804FDE568C7D81BE56F3CAEE@MAIL703.KDS.KEANE.COM> References: <588CEB80D6885A41804FDE568C7D81BE56F3CAEE@MAIL703.KDS.KEANE.COM> Message-ID: <20141217200034.GA5922@gmail.com> On 05:54 Fri 12 Dec , Malawade, Abhijeet wrote: > Hi, > > I want your thoughts on blueprint 'Log Request ID Mappings' for cross projects. > BP: https://blueprints.launchpad.net/nova/+spec/log-request-id-mappings > It will enable operators to get request id's mappings easily and will be useful in analysing logs effectively. I've weighed in on this question a couple of times now and recently in the Cinder meeting. Solution 1 please. -- Mike Perez From dkranz at redhat.com Wed Dec 17 20:04:03 2014 From: dkranz at redhat.com (David Kranz) Date: Wed, 17 Dec 2014 15:04:03 -0500 (EST) Subject: [openstack-dev] [qa] Please do not merge neutron test changes until "client returns one value" is merged In-Reply-To: <515691591.193375.1418846504121.JavaMail.zimbra@redhat.com> Message-ID: <82992735.195593.1418846643381.JavaMail.zimbra@redhat.com> This https://review.openstack.org/#/c/141152/ gets rid of the useless second return value from neutron client methods according to this spec: https://github.com/openstack/qa-specs/blob/master/specs/clients-return-one-value.rst. Because the client and test changes have to be in the same patch, this one is very large. So please let it merge before any other neutron stuff. Any neutron patches will require the simple change of removing the unused first return value from neutron client methods. Thanks!
-David From thomas.maddox at RACKSPACE.COM Wed Dec 17 20:05:31 2014 From: thomas.maddox at RACKSPACE.COM (Thomas Maddox) Date: Wed, 17 Dec 2014 20:05:31 +0000 Subject: [openstack-dev] [Neutron] Looking for feedback: spec for allowing additional IPs to be shared In-Reply-To: Message-ID: Sounds great. I went ahead and set up a Gerrit review here: https://review.openstack.org/#/c/142566/. Thanks for the feedback and your time! -Thomas On 12/17/14, 10:41 AM, "Carl Baldwin" wrote: >On Tue, Dec 16, 2014 at 10:32 AM, Thomas Maddox > wrote: >> Hey all, >> >> It seems I missed the Kilo proposal deadline for Neutron, >>unfortunately, but >> I still wanted to propose this spec for Neutron and get >>feedback/approval, >> sooner rather than later, so I can begin working on an implementation, >>even >> if it can't land in Kilo. I opted to put this in an etherpad for now for >> collaboration due to missing the Kilo proposal deadline. >> >> Spec markdown in etherpad: >> https://etherpad.openstack.org/p/allow-sharing-additional-ips > >Thomas, > >I did a quick look over and made a few comments because this looked >similar to other stuff that I've looked at recently. I'd rather read >and comment on this proposal in gerrit where all other specs are >proposed. > >Carl > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From thingee at gmail.com Wed Dec 17 20:06:17 2014 From: thingee at gmail.com (Mike Perez) Date: Wed, 17 Dec 2014 12:06:17 -0800 Subject: [openstack-dev] [cinder driver] A question about Kilo merge point In-Reply-To: References: Message-ID: <20141217200617.GB5922@gmail.com> On 11:40 Tue 16 Dec , liuxinguo wrote: > If a cinder driver cannot be merged into Kilo before Kilo-1, does it mean that this driver will have very little chance of being merged into Kilo?
> And what percentage of drivers will be merged before Kilo-1, compared to all the drivers that will eventually be merged into Kilo? > > Thanks! All the details for this are here: http://lists.openstack.org/pipermail/openstack-dev/2014-October/049512.html -- Mike Perez From jp at jamezpolley.com Wed Dec 17 20:21:23 2014 From: jp at jamezpolley.com (James Polley) Date: Wed, 17 Dec 2014 21:21:23 +0100 Subject: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets In-Reply-To: <549132FB.8090308@vmware.com> References: <54900903.6090008@vmware.com> <20141216135906.GR2497@yuggoth.org> <54904D9B.2000204@vmware.com> <20141216154501.GU2497@yuggoth.org> <549132FB.8090308@vmware.com> Message-ID: But equally I think finding out why the "New Screen" still doesn't do what you want is valuable - it's likely other people want something similar to what you want, so this kind of feedback can be used to decide on future features. On Wed, Dec 17, 2014 at 8:38 AM, Radoslav Gerganov wrote: > > I am aware of this "New Screen" but it is not useful to me. I'd like to > see comments grouped by patchset, file and commented line rather than a > flat view mixed with everything else. Anyway, I guess there is no > one-size-fits-all solution for this and everyone has different preferences > which is cool. > > -Rado > > On 12/17/14, 8:58 AM, James Polley wrote: >> I was looking at the new change screen on https://review.openstack.org >> today[1] and it seems to do something vaguely similar. >> >> Rather than saying "James polley made 4 inline comments", the contents >> of the comments are shown, along with a link to the file so you can see >> the context. >> >> Have you seen this? It seems fairly similar to what you're wanting.
>> >> Have >> [1] To activate it, go to >> https://review.openstack.org/#/settings/preferences and set "Change >> view" to "New Screen", then look at a change screen (such as >> https://review.openstack.org/#/c/127283/) >> >> On Tue, Dec 16, 2014 at 4:45 PM, Jeremy Stanley > > wrote: >> >> On 2014-12-16 17:19:55 +0200 (+0200), Radoslav Gerganov wrote: >> > We don't need GoogleAppEngine if we decide that this is useful. We >> > simply need to put the html page which renders the view on >> >https://review.openstack.org. It is all javascript which talks >> > asynchronously to the Gerrit backend. >> > >> > I am using GAE to simply illustrate the idea without having to >> > spin up an entire Gerrit server. >> >> That makes a lot more sense--thanks for the clarification! >> >> > I guess I can also submit a patch to the infra project and see how >> > this works onhttps://review-dev.openstack.org if you want. >> >> If there's a general desire from the developer community for it, >> then that's probably the next step. However, ultimately this seems >> like something better suited as an upstream feature request for >> Gerrit (there may even already be thread-oriented improvements in >> the works for the new change screen--I haven't kept up with their >> progress lately). >> -- >> Jeremy Stanley >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nikhil.komawar at RACKSPACE.COM Wed Dec 17 20:33:51 2014 From: nikhil.komawar at RACKSPACE.COM (Nikhil Komawar) Date: Wed, 17 Dec 2014 20:33:51 +0000 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: References: <5490967C.8020801@gmail.com> <5490A5D8.2090804@catalyst.net.nz> <20141216231940.GA61726@HQSML-1081034.cable.comcast.com> <0FBF5631AB7B504D89C7E6929695B6249302F16F@ORD1EXD02.RACKSPACE.CORP>, Message-ID: <0FBF5631AB7B504D89C7E6929695B6249302F2A2@ORD1EXD02.RACKSPACE.CORP> Guess that's a implementation detail. Depends on the way you go about using what's available now, I suppose. Thanks, -Nikhil ________________________________ From: Chris St. Pierre [chris.a.st.pierre at gmail.com] Sent: Wednesday, December 17, 2014 2:07 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use? I was assuming atomic increment/decrement operations, in which case I'm not sure I see the race conditions. Or is atomism assuming too much? On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar > wrote: That looks like a decent alternative if it works. However, it would be too racy unless we we implement a test-and-set for such properties or there is a different job which queues up these requests and perform sequentially for each tenant. Thanks, -Nikhil ________________________________ From: Chris St. Pierre [chris.a.st.pierre at gmail.com] Sent: Wednesday, December 17, 2014 10:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use? That's unfortunately too simple. You run into one of two cases: 1. If the job automatically removes the protected attribute when an image is no longer in use, then you lose the ability to use "protected" on images that are not in use. I.e., there's no way to say, "nothing is currently using this image, but please keep it around." 
(This seems particularly useful for snapshots, for instance.) 2. If the job does not automatically remove the protected attribute, then an image would be protected if it had ever been in use; to delete an image, you'd have to manually un-protect it, which is a workflow that quite explicitly defeats the whole purpose of flagging images as protected when they're in use. It seems like flagging an image as *not* in use is actually a fairly difficult problem, since it requires consensus among all components that might be using images. The only solution that readily occurs to me would be to add something like a filesystem link count to images in Glance. Then when Nova spawns an instance, it increments the usage count; when the instance is destroyed, the usage count is decremented. And similarly with other components that use images. An image could only be deleted when its usage count was zero. There are ample opportunities to get out of sync there, but it's at least a sketch of something that might work, and isn't *too* horribly hackish. Thoughts? On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya > wrote: A simple solution that wouldn?t require modification of glance would be a cron job that lists images and snapshots and marks them protected while they are in use. Vish On Dec 16, 2014, at 3:19 PM, Collins, Sean > wrote: > On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote: >> No, I'm looking to prevent images that are in use from being deleted. "In >> use" and "protected" are disjoint sets. > > I have seen multiple cases of images (and snapshots) being deleted while > still in use in Nova, which leads to some very, shall we say, > interesting bugs and support problems. > > I do think that we should try and determine a way forward on this, they > are indeed disjoint sets. Setting an image as protected is a proactive > measure, we should try and figure out a way to keep tenants from > shooting themselves in the foot if possible. > > -- > Sean M. 
Collins > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Chris St. Pierre _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Chris St. Pierre -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Dec 17 20:38:34 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 17 Dec 2014 20:38:34 +0000 Subject: [openstack-dev] [python-*client] py33 jobs seem to be failing In-Reply-To: References: Message-ID: <20141217203834.GD2497@yuggoth.org> On 2014-12-17 11:09:59 -0500 (-0500), Steve Martinelli wrote: [...] > The stack trace leads me to believe that docutils or sphinx is the > culprit, but neither has released a new version in the time the > bug has been around, so I'm not sure what the root cause of the > problem is. It's an unforeseen interaction between new PBR changes to support Setuptools 8 and the way docutils supports Py3K by running 2to3 during installation (entrypoint scanning causes pre-translated docutils to be loaded into the execution space through the egg-info writer PBR grew to be able to record Git SHA details outside of version strings). A solution is currently being developed. -- Jeremy Stanley From jesse.pretorius at gmail.com Wed Dec 17 20:40:53 2014 From: jesse.pretorius at gmail.com (Jesse Pretorius) Date: Wed, 17 Dec 2014 20:40:53 +0000 Subject: [openstack-dev] [Horizon] [UX] Curvature interactive virtual network design References: Message-ID: Yes please. 
:) On Fri, 7 Nov 2014 at 16:19 John Davidge (jodavidg) wrote: > As discussed in the Horizon contributor meet up, here at Cisco we?re > interested in upstreaming our work on the Curvature dashboard into Horizon. > We think that it can solve a lot of issues around guidance for new users > and generally improving the experience of interacting with Neutron. > Possibly an alternative persona for novice users? > > For reference, see: > > 1. http://youtu.be/oFTmHHCn2-g ? Video Demo > 2. > https://www.openstack.org/summit/portland-2013/session-videos/presentation/interactive-visual-orchestration-with-curvature-and-donabe ? > Portland presentation > 3. https://github.com/CiscoSystems/curvature ? original (Rails based) > code > > We?d like to gauge interest from the community on whether this is > something people want. > > Thanks, > > John, Brad & Sam > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaufer at us.ibm.com Wed Dec 17 21:23:35 2014 From: kaufer at us.ibm.com (Steven Kaufer) Date: Wed, 17 Dec 2014 15:23:35 -0600 Subject: [openstack-dev] [python-cinderclient] Supported client-side sort keys In-Reply-To: <20141217200034.GA5922@gmail.com> References: <588CEB80D6885A41804FDE568C7D81BE56F3CAEE@MAIL703.KDS.KEANE.COM> <20141217200034.GA5922@gmail.com> Message-ID: The cinder client supports passing a sort key via the --sort_key argument. The client restricts the sort keys that the user can supply to the following: https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/volumes.py#L28-L29 This list of sort keys is not complete. 
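The "graceful handling" alternative raised here — accepting any key, mapping known aliases such as 'name' to 'display_name', and rejecting unknown keys with a clear error — could be sketched roughly like this (hypothetical names and key set for illustration only, not the actual cinderclient code):

```python
# Hypothetical sketch: validate a user-supplied sort key instead of
# silently restricting to a hard-coded whitelist. The key set and alias
# map below are illustrative stand-ins, not cinderclient's real ones.

# Sortable columns assumed to exist on the backend model.
VALID_SORT_KEYS = frozenset([
    'id', 'status', 'size', 'availability_zone', 'created_at',
    'display_name', 'bootable',
])

# Keys the CLI historically accepted, mapped to real column names.
SORT_KEY_ALIASES = {'name': 'display_name'}


def normalize_sort_key(key):
    """Map aliases to column names and reject unknown keys gracefully."""
    key = SORT_KEY_ALIASES.get(key, key)
    if key not in VALID_SORT_KEYS:
        raise ValueError(
            "Invalid sort key '%s'; valid keys: %s"
            % (key, ', '.join(sorted(VALID_SORT_KEYS))))
    return key
```

With this shape, a user asking for `--sort_key name` would transparently get `display_name`, while a typo produces an error listing the valid keys rather than an unexplained restriction.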
As far as I know, all attributes on this class are valid: https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/models.py#L104 I noticed that the 'name' key is incorrect and it should instead be 'display_name'. Before I create a bug/fix to address this, I have the following questions: Does anyone know the rationale behind the client restricting the possible sort keys? Why not allow the user to supply any sort key (assuming that invalid keys are gracefully handled)? Note, if you try this out at home, you'll notice that the client table is not actually sorted; that is fixed under: https://review.openstack.org/#/c/141964/ Thanks, Steven Kaufer -------------- next part -------------- An HTML attachment was scrubbed... URL: From slukjanov at mirantis.com Wed Dec 17 21:29:11 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Thu, 18 Dec 2014 01:29:11 +0400 Subject: [openstack-dev] [sahara] team meeting Dec 18 1400 UTC Message-ID: Hi folks, We'll be having the Sahara team meeting in #openstack-meeting-3 channel. Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20141218T14 NOTE: It's a new alternate time slot. -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From openstack at nemebean.com Wed Dec 17 21:53:07 2014 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 17 Dec 2014 15:53:07 -0600 Subject: [openstack-dev] [hacking] proposed rules drop for 1.0 In-Reply-To: <5486DF7F.7080706@dague.net> References: <5486DF7F.7080706@dague.net> Message-ID: <5491FB43.3040307@nemebean.com> For anyone who's interested, the final removals are in a series starting here: https://review.openstack.org/#/c/142585/ On 12/09/2014 05:39 AM, Sean Dague wrote: > I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely. > > 1 - the entire H8* group. This doesn't function on python code, it > functions on git commit message, which makes it tough to run locally. It > also would be a reason to prevent us from not rerunning tests on commit > message changes (something we could do after the next gerrit update). > > 2 - the entire H3* group - because of this - > https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm > > A look at the H3* code shows that it's terribly complicated, and is > often full of bugs (a few bit us last week). I'd rather just delete it > and move on. > > -Sean > From akshik at outlook.com Wed Dec 17 22:29:04 2014 From: akshik at outlook.com (Akshik DBK) Date: Thu, 18 Dec 2014 03:59:04 +0530 Subject: [openstack-dev] HTTPS for spice console Message-ID: Are there any recommended approach to configure spice console proxy on a secure [https], could not find proper documentation for the same.can someone point me to the rigt direction -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kprad1 at yahoo.com Wed Dec 17 23:06:15 2014 From: kprad1 at yahoo.com (Padmanabhan Krishnan) Date: Wed, 17 Dec 2014 23:06:15 +0000 (UTC) Subject: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled In-Reply-To: References: Message-ID: <106528797.491287.1418857575930.JavaMail.yahoo@jws10687.mail.bf1.yahoo.com> This means whatever tools the operators are using, it need to make sure the IP address assigned inside the VM matches with Openstack has assigned to the port.Bringing the question that i had in another thread on the same topic: If one wants to use the provider DHCP server and not have Openstack's DHCP or L3 agent/DVR, it may not be possible to do so even with DHCP disabled in Openstack network. Even if the provider DHCP server is configured with the same start/end range in the same subnet, there's no guarantee that it will match with Openstack assigned IP address for bulk VM launches or? when there's a failure case.So, how does one deploy external DHCP with Openstack? If Openstack hasn't assigned a IP address when DHCP is disabled for a network, can't port_update be done with the provider DHCP specified IP address to put the anti-spoofing and security rules?With Openstack assigned IP address, port_update cannot be done since IP address aren't in sync and can overlap. Thanks,Paddu On 12/16/14 4:30 AM, "Pasquale Porreca" wrote: >I understood and I agree that assigning the ip address to the port is >not a bug, however showing it to the user, at least in Horizon dashboard >where it pops up in the main instance screen without a specific search, >can be very confusing. > >On 12/16/14 12:25, Salvatore Orlando wrote: >> In Neutron IP address management and distribution are separated >>concepts. >> IP addresses are assigned to ports even when DHCP is disabled. That IP >> address is indeed used to configure anti-spoofing rules and security >>groups. 
>> >> It is however understandable that one wonders why an IP address is >>assigned >> to a port if there is no DHCP server to communicate that address. >>Operators >> might decide to use different tools to ensure the IP address is then >> assigned to the instance's ports. On XenServer for instance one could >>use a >> guest agent reading network configuration from XenStore; as another >> example, older versions of Openstack used to inject network >>configuration >> into the instance file system; I reckon that today's configdrive might >>also >> be used to configure instance's networking. >> >> Summarising I don't think this is a bug. Nevertheless if you have any >>idea >> regarding improvements on the API UX feel free to file a bug report. >> >> Salvatore >> >> On 16 December 2014 at 10:41, Pasquale Porreca < >> pasquale.porreca at dektech.com.au> wrote: >>> >>> Is there a specific reason for which a fixed ip is bound to a port on a >>> subnet where dhcp is disabled? it is confusing to have this info shown >>> when the instance doesn't have actually an ip on that port. >>> Should I fill a bug report, or is this a wanted behavior? 
>>> >>> -- >>> Pasquale Porreca >>> >>> DEK Technologies >>> Via dei Castelli Romani, 22 >>> 00040 Pomezia (Roma) >>> >>> Mobile +39 3394823805 >>> Skype paskporr >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >-- >Pasquale Porreca > >DEK Technologies >Via dei Castelli Romani, 22 >00040 Pomezia (Roma) > >Mobile +39 3394823805 >Skype paskporr > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamielennox at redhat.com Wed Dec 17 23:33:53 2014 From: jamielennox at redhat.com (Jamie Lennox) Date: Wed, 17 Dec 2014 18:33:53 -0500 (EST) Subject: [openstack-dev] [python-cinderclient] Return request ID to caller In-Reply-To: <588CEB80D6885A41804FDE568C7D81BE56F3CAEE@MAIL703.KDS.KEANE.COM> References: <588CEB80D6885A41804FDE568C7D81BE56F3CAEE@MAIL703.KDS.KEANE.COM> Message-ID: <812892362.119043.1418859233339.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Abhijeet Malawade" > To: openstack-dev at lists.openstack.org > Sent: Friday, 12 December, 2014 3:54:04 PM > Subject: [openstack-dev] [python-cinderclient] Return request ID to caller > > > > HI, > > > > I want your thoughts on blueprint 'Log Request ID Mappings? for cross > projects. > > BP: https://blueprints.launchpad.net/nova/+spec/log-request-id-mappings > > It will enable operators to get request id's mappings easily and will be > useful in analysing logs effectively. 
> > > > For logging 'Request ID Mappings', client needs to return > 'x-openstack-request-id' to the caller. > > Currently python-cinderclient do not return 'x-openstack-request-id' back to > the caller. > > > > As of now, I could think of below two solutions to return 'request-id' back > from cinder-client to the caller. > > > > 1. Return tuple containing response header and response body from all > cinder-client methods. > > (response header contains 'x-openstack-request-id'). > > > > Advantages: > > A. In future, if the response headers are modified then it will be available > to the caller without making any changes to the python-cinderclient code. > > > > Disadvantages: > > A. Affects all services using python-cinderclient library as the return type > of each method is changed to tuple. > > B. Need to refactor all methods exposed by the python-cinderclient library. > Also requires changes in the cross projects wherever python-cinderclient > calls are being made. > > > > Ex. :- > > From Nova, you will need to call cinder-client 'get' method like below :- > > resp_header, volume = cinderclient(context).volumes.get(volume_id) > > > > x-openstack-request-id = resp_header.get('x-openstack-request-id', None) > > > > Here cinder-client will return both response header and volume. From response > header, you can get 'x-openstack-request-id'. > > > > 2. The optional parameter 'return_req_id' of type list will be passed to each > of the cinder-client method. If this parameter is passed then cinder-client > will append ?'x-openstack-request-id' received from cinder api to this list. > > > > This is already implemented in glance-client (for V1 api only) > > Blueprint : > https://blueprints.launchpad.net/python-glanceclient/+spec/return-req-id > > Review link : https://review.openstack.org/#/c/68524/7 > > > > Advantages: > > A. Requires changes in the cross projects only at places wherever > python-cinderclient calls are being made requiring 'x-openstack-request-id?. 
> > > > Dis-advantages: > > A. Need to refactor all methods exposed by the python-cinderclient library. > > > > Ex. :- > > From Nova, you will need to pass return_req_id parameter as a list. > > kwargs['return_req_id'] = [] > > item = cinderclient(context).volumes.get(volume_id, **kwargs) > > > > if kwargs.get('return_req_id'): > > x-openstack-request-id = kwargs['return_req_id'].pop() > > > > python-cinderclient will add 'x-openstack-request-id' to the 'return_req_id' list if it is provided in kwargs. > > > > IMO, solution #2 is better than #1 for the reasons quoted above. > > Takashi NATSUME has already proposed a patch for solution #2. Please review patch https://review.openstack.org/#/c/104482/. > > Would appreciate if you can think of any other better solution than #2. > > > > Thank you. > > Abhijeet So option 1 is a massive compatibility break. There's no way you can pull off a change in the return value like that without a new major version and everyone getting annoyed. My question is why does it need to be returned to the caller? What is the caller going to do with it other than send it to the debug log? It's an admin who is trying to figure out those logs later that wants the request-id included in that information, not the application at run time. Why not just have cinderclient log it as part of the standard request logging: https://github.com/openstack/python-cinderclient/blob/master/cinderclient/client.py#L170 Jamie
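The client-side logging idea suggested here — recording 'x-openstack-request-id' in the client's own debug logging rather than changing every method's return type — might look roughly like the following (a hypothetical sketch; the helper name and call site are invented, not the actual python-cinderclient implementation):

```python
# Sketch of logging the request ID as part of the client's standard
# HTTP request/response logging, so callers' return types are untouched.
import logging

LOG = logging.getLogger(__name__)


def log_request_id(resp_headers, method, url):
    """Log the x-openstack-request-id from a response, if present.

    `resp_headers` is assumed to be a dict-like mapping of HTTP
    response headers, as produced by the client's HTTP layer.
    """
    request_id = resp_headers.get('x-openstack-request-id')
    if request_id:
        LOG.debug('%(method)s %(url)s returned request id %(id)s',
                  {'method': method, 'url': url, 'id': request_id})
    return request_id
```

An admin correlating logs across services could then grep for the request ID without any caller-visible API change, which is the crux of Jamie's objection to options 1 and 2.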
> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From smelikyan at mirantis.com Thu Dec 18 00:55:31 2014 From: smelikyan at mirantis.com (Serg Melikyan) Date: Wed, 17 Dec 2014 16:55:31 -0800 Subject: [openstack-dev] [Murano] Question about Murano installation Message-ID: Hi, Raghavendra Given screenshots that you have send, you are using extremely outdated version of murano-dashboard (and probably outdated version of all other components). That is why you may experience issues with manual written for Juno version of Murano. I encourage you to use Juno version of Murano, you can obtain it by checking out sources, e.g. for murano-dashboard: git clone https://github.com/stackforge/murano-dashboard git checkout 2014.2 You also can download tarballs from http://tarballs.openstack.org/: - murano-2014.2.tar.gz (wheel ) - murano-dashboard-2014.2.tar.gz (wheel ) Please, find answers to your questions bellow: >should I use for mysql and the [murano] the localhost:8082, please clarify. Option url in murano section should point to address where murano-api is running. Option connection in database section, should point to address where MySQL is runnings. Unfortunately I don't know your OpenStack deployment scheme, so I can't answer more accurate. >How can I install Murano Agent please provide details? murano-agent is agent which runs on guest VMs and responsible for provisioning application on VM. It is not required, but many existing applications use murano-agent to do application provisioning/configuration. We use Disk Image Builder project to build images with murano-agent installed, please refer to murano-agent ReadMe for details about how to build image with murano-agent. 
We also have pre-built images with murano-agent for Fedora 17 & Ubuntu 14.04: - http://murano-files.mirantis.com/F17-x86_64-cfntools.qcow2 - http://murano-files.mirantis.com/ubuntu_14_04-murano-agent_stable_juno.qcow2 >How to install dashboard can I follow doc and install using tox? >Where to update the murano_metadata url details? To install murano-dashboard correctly you need just to follow manual and use Murano 2014.2 version. There is no option MURANO_METADATA_URL anymore in murano-dashboard. On Tue, Dec 16, 2014 at 9:13 PM, wrote: > > Hi Serg, > > > > Thank you for your response. > > > > I have the Openstack Ubuntu Juno version on 14.04 LTS. > > > > I am following the below link > > > > https://murano.readthedocs.org/en/latest/install/manual.html > > > > I have attached the error messages with this email. > > > > I am unable to see the Application menu on the Murano dashboard. > > I am unable to install Applications for the Murano. (Please let me know > how we can install the Application packages) > > > > I would like to know about Murano Agent. > > > > Can we spin up a VM from Openstack and install the Murano Agent > components for creating image ? > > If not please provide details. > > > > > > Incase of any doubts please let me know. > > > > Regards, > > Raghavendra Lad > > Mobile: +9198800 40919 > > > > *From:* Serg Melikyan [mailto:smelikyan at mirantis.com] > *Sent:* Wednesday, December 17, 2014 12:58 AM > *To:* Lad, Raghavendra > *Subject:* Murano Mailing-List > > > > Hi, Raghavendra > > > > I would like to mention that we don't use mailing-list on launchpad > anymore, there is no reason to duplicate messages sent to openstack-dev@ > to the murano-all at lists.launchpad.net. > > > > You can also reach team working on Murano using IRC on #murano at FreeNode > > > > -- > > Serg Melikyan, Senior Software Engineer at Mirantis, Inc. 
> > http://mirantis.com | smelikyan at mirantis.com > > > +7 (495) 640-4904, 0261 > > +7 (903) 156-0836 > > ------------------------------ > > This message is for the designated recipient only and may contain > privileged, proprietary, or otherwise confidential information. If you have > received it in error, please notify the sender immediately and delete the > original. Any other use of the e-mail by you is prohibited. Where allowed > by local law, electronic communications with Accenture and its affiliates, > including e-mail and instant messaging (including content), may be scanned > by our systems for the purposes of information security and assessment of > internal compliance with Accenture policy. > ______________________________________________________________________________________ > > www.accenture.com > -- Serg Melikyan, Senior Software Engineer at Mirantis, Inc. http://mirantis.com | smelikyan at mirantis.com +7 (495) 640-4904, 0261 +7 (903) 156-0836 -------------- next part -------------- An HTML attachment was scrubbed... URL: From qiaojian at awcloud.com Thu Dec 18 01:30:52 2014 From: qiaojian at awcloud.com (=?utf-8?B?5LmU5bu6?=) Date: Thu, 18 Dec 2014 09:30:52 +0800 Subject: [openstack-dev] [trove] confused about trove-guestagent need nova's auth info Message-ID: When using trove, we need to configure nova's user information in the configuration file of trove-guestagent, such as: - nova_proxy_admin_user - nova_proxy_admin_pass - nova_proxy_admin_tenant_name Is it necessary? In a public cloud environment, it will lead to serious security risks. I traced the code, and noticed that the auth data mentioned above is packaged in a context object, then passed to the trove-conductor via message queue. Is it more suitable for trove-conductor to get the corresponding information from its own conf file? Thanks! qiao -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zbitter at redhat.com Thu Dec 18 02:12:22 2014 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 17 Dec 2014 21:12:22 -0500 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> <548B8480.9010506@redhat.com> <4641310AFBEE10419D0A020273367C140CA3A305@G1W3645.americas.hpqcorp.net> <548FB135.30209@redhat.com> Message-ID: <54923806.6040405@redhat.com> On 17/12/14 13:05, Gurjar, Unmesh wrote: >> I'm storing a tuple of its name and database ID. The data structure is >> resource.GraphKey. I was originally using the name for something, but I >> suspect I could probably drop it now and just store the database ID, but I >> haven't tried it yet. (Having the name in there definitely makes debugging >> more pleasant though ;) >> > > I agree, having name might come in handy while debugging! > >> When I build the traversal graph each node is a tuple of the GraphKey and a >> boolean to indicate whether it corresponds to an update or a cleanup >> operation (both can appear for a single resource in the same graph). > > Just to confirm my understanding, cleanup operation takes care of both: > 1. resources which are deleted as a part of update and > 2. previous versioned resource which was updated by replacing with a new > resource (UpdateReplace scenario) Yes, correct. Also: 3. 
resource versions which failed to delete for whatever reason on a previous traversal > Also, the cleanup operation is performed after the update completes successfully. NO! They are not separate things! https://github.com/openstack/heat/blob/stable/juno/heat/engine/update.py#L177-L198 >>> If I am correct, you are updating all resources on update regardless >>> of their change which will be inefficient if stack contains a million resource. >> >> I'm calling update() on all resources regardless of change, but update() will >> only call handle_update() if something has changed (unless the plugin has >> overridden Resource._needs_update()). >> >> There's no way to know whether a resource needs to be updated before >> you're ready to update it, so I don't think of this as 'inefficient', just 'correct'. >> >>> We have similar questions regarding other areas in your >>> implementation, which we believe if we understand the outline of your >>> implementation. It is difficult to get a hold on your approach just by looking >> at code. Docs strings / Etherpad will help. >>> >>> >>> About streams, Yes in a million resource stack, the data will be huge, but >> less than template. >> >> No way, it's O(n^3) (cubed!) in the worst case to store streams for each >> resource. >> >>> Also this stream is stored >>> only In IN_PROGRESS resources. >> >> Now I'm really confused. Where does it come from if the resource doesn't >> get it until it's already in progress? And how will that information help it? >> > > When an operation on stack is initiated, the stream will be identified. OK, this may be one of the things I was getting confused about - I though a 'stream' belonged to one particular resource and just contained all of the paths to reaching that resource. But here it seems like you're saying that a 'stream' is a representation of the entire graph? So it's essentially just a gratuitously bloated NIH serialisation of the Dependencies graph? 
> To begin > the operation, the action is initiated on the leaf (or root) resource(s) and the > stream is stored (only) in this/these IN_PROGRESS resource(s). How does that work? Does it get deleted again when the resource moves to COMPLETE? > The stream should then keep getting passed to the next/previous level of resource(s) as > and when the dependencies for the next/previous level of resource(s) are met. That sounds... identical to the way it's implemented in my prototype (passing a serialisation of the graph down through the notification triggers), except for the part about storing it in the Resource table. Why would we persist to the database data that we only need for the duration that we already have it in memory anyway? If we're going to persist it we should do so once, in the Stack table, at the time that we're preparing to start the traversal. >>> The reason to have entire dependency list to reduce DB queries while a >> stack update. >> >> But we never need to know that. We only need to know what just happened >> and what to do next. >> > > As mentioned earlier, each level of resources in a graph pass on the dependency > list/stream to their next/previous level of resources. This is information should further > be used to determine what is to be done next and drive the operation to completion. Why would we store *and* forward? >>> When you have a singular dependency on each resources similar to your >>> implantation, then we will end up loading Dependencies one at a time and >> altering almost all resource's dependency regardless of their change. >>> >>> Regarding a 2 template approach for delete, it is not actually 2 >>> different templates. Its just that we have a delete stream To be taken up >> post update. >> >> That would be a regression from Heat's current behaviour, where we start >> cleaning up resources as soon as they have nothing depending on them. 
>> There's not even a reason to make it worse than what we already have, >> because it's actually a lot _easier_ to treat update and clean up as the same >> kind of operation and throw both into the same big graph. The dual >> implementations and all of the edge cases go away and you can just trust in >> the graph traversal to do the Right Thing in the most parallel way possible. >> >>> (Any post operation will be handled as an update) This approach is >>> True when Rollback==True We can always fall back to regular stream >>> (non-delete stream) if Rollback=False >> >> I don't understand what you're saying here. > > Just to elaborate, in case of update with rollback, there will be 2 streams of > operations: There really should not be. > 1. first is the create and update resource stream > 2. second is the stream for deleting resources (which will be taken if the first stream > completes successfully). We don't want to break it up into discrete steps. We want to treat it as one single graph traversal - provided we set up the dependencies correctly then the most optimal behaviour just falls out of our graph traversal algorithm for free. In the existing Heat code I linked above, we use actual heat.engine.resource.Resource objects as nodes in the dependency graph and rely on figuring out whether they came from the old or new stack to differentiate them. However, it's not possible (or desirable) to serialise a graph containing those objects and send it to another process, so in my convergence prototype I use (key, direction) tuples as the nodes so that the same key may appear twice in the graph with different 'directions' (forward=True for update, =False for cleanup - note that the direction is with respect to the template... as far as the graph is concerned it's a single traversal going in one direction). Artificially dividing things into separate update and cleanup phases is both more complicated code to maintain and a major step backwards for our users. 
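The single-graph traversal described above — nodes as (key, forward) tuples so a resource can appear once for update and once for cleanup, with each node visited as soon as its dependencies complete — can be sketched minimally as follows (an illustrative sketch only, with invented resource names, not Heat's actual prototype code):

```python
# Minimal sketch of a single dependency-graph traversal where update
# and cleanup nodes live in the same graph. Nodes are (key, forward)
# tuples: forward=True means create/update, forward=False means cleanup.
from collections import deque


def traverse(graph, visit):
    """Visit nodes in dependency order; `graph` maps node -> set of deps."""
    indegree = {node: len(deps) for node, deps in graph.items()}
    # Reverse edges: which nodes become unblocked when a node completes.
    unblocks = {node: set() for node in graph}
    for node, deps in graph.items():
        for dep in deps:
            unblocks[dep].add(node)
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        visit(node)
        order.append(node)
        for nxt in unblocks[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return order


# One graph covers both phases: the server is updated after its port,
# and the replaced old port is cleaned up only once nothing needs it.
graph = {
    ('port', True): set(),
    ('server', True): {('port', True)},
    ('port_old', False): {('server', True)},  # cleanup waits on update
}
```

Because cleanups are just additional nodes, they start as soon as their individual dependencies are met — there is no separate "cleanup phase" gating on the whole update completing, which is exactly the behaviour argued for above.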
I want to be very clear about this: treating the updates and cleanups as separate, serial tasks is a -2 show stopper for any convergence design. We *MUST* NOT do that to our users. > However, in case of an update without rollback, there will be a single stream of > operation (no delete/cleanup stream required). By 'update without rollback' I assume you mean when the user issues an update with disable_rollback=True? Actually it doesn't matter what you mean, because there is no way of interpreting this that could make it correct. We *always* need to check all of the pre-existing resources for clean up. The only exception is on create, and then only because the set of pre-existing resources is empty. If your plan for handling UpdateReplace when rollback is disabled is just to delete the old resource at the same time as creating the new one, then your plan won't work because the dependencies are backwards. And leaving the replaced resources around without even trying to clean them up would be outright unethical, given how much money it would cost our users. So -2 on 'no cleanup when rollback disabled' as well. cheers, Zane. From harlowja at outlook.com Thu Dec 18 03:03:37 2014 From: harlowja at outlook.com (Joshua Harlow) Date: Wed, 17 Dec 2014 19:03:37 -0800 Subject: [openstack-dev] [oslo] [taskflow] sprint review day In-Reply-To: <17283B0A-7AE9-4BC2-853C-7E3EBA4973D5@doughellmann.com> References: <17283B0A-7AE9-4BC2-853C-7E3EBA4973D5@doughellmann.com> Message-ID: Thanks for all those who showed up and helped in the sprint (even those in spirit; due to setuptools issues happening this week)! We knocked out a good number of reviews and hopefully can keep knocking them out as time goes on... Etherpad for those interested: https://etherpad.openstack.org/p/taskflow-kilo-sprint Feel free to keep on helping (it's always appreciated). Thanks agains! 
-Josh Doug Hellmann wrote: > On Dec 10, 2014, at 2:12 PM, Joshua Harlow wrote: > >> Hi everyone, >> >> The OpenStack oslo team will be hosting a virtual sprint in the >> Freenode IRC channel #openstack-oslo for the taskflow subproject on >> Wednesday 12-17-2014 starting at 16:00 UTC and going for ~8 hours. >> >> The goal of this sprint is to work on any open reviews, documentation >> or any other integration questions, development and so-on, so that we >> can help progress the taskflow subproject forward at a good rate. >> >> Live version of the current documentation is available here: >> >> http://docs.openstack.org/developer/taskflow/ >> >> The code itself lives in the openstack/taskflow repository. >> >> http://git.openstack.org/cgit/openstack/taskflow/tree >> >> Please feel free to join if interested, curious, or able. >> >> Much appreciated, >> >> Joshua Harlow >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > Thanks for setting this up, Josh! > > This day works for me. We need to make sure a couple of other Oslo cores can make it that day for the sprint to really be useful, so everyone please let us know if you can make it. > > Doug > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sachi.king at anchor.com.au Thu Dec 18 03:44:04 2014 From: sachi.king at anchor.com.au (Sachi King) Date: Thu, 18 Dec 2014 14:44:04 +1100 Subject: [openstack-dev] [oslo] oslo.service graduating - Primary Maintainers Message-ID: <2991059.oGsdvt8eN9@chiruno> Hi, Oslo service is graduating and is looking for a primary maintainer. The following are the listed maintainers for the submodules that are not orphans.
service - Michael Still periodic_task - Michael Still Requestutils - Sandy Walsh systemd - Alan Pevec Would any of you like to take up being the primary maintainer for oslo.service? Additionally, do you have any pending work that we should delay graduation for? Further details can be found in the in-progress spec. https://review.openstack.org/#/c/142659/ Cheers, Sachi -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part. URL: From ryu at midokura.com Thu Dec 18 06:29:58 2014 From: ryu at midokura.com (Ryu Ishimoto) Date: Thu, 18 Dec 2014 15:29:58 +0900 Subject: [openstack-dev] [nova] Setting MTU size for tap device Message-ID: Hi All, I noticed that in linux_net.py, the method to create a tap interface[1] does not let you set the MTU size. In other places, I see calls made to set the MTU of the device [2]. I'm wondering if there is any technical reason why we can't also set the MTU size when creating tap interfaces for general cases. In certain overlay solutions, this would come in handy. If there isn't any, I would love to submit a patch to accomplish this. Thanks in advance! Ryu [1] https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L1374 [2] https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L1309 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Thu Dec 18 06:33:49 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Thu, 18 Dec 2014 12:33:49 +0600 Subject: [openstack-dev] [Mistral] ActionProvider In-Reply-To: References: Message-ID: Winson, The idea itself makes a lot of sense to me because we've had a number of discussions about how we could make the action subsystem even more pluggable and flexible. One of the questions that we'd like to solve is to be able to add actions 'on the fly'
and at the same time stay safe. I think this whole thing is about specific technical details, so I would like to see more of them. Generally speaking, you're right about actions residing in a database; about 3 months ago we made this refactoring and put all actions into the db, but it may not be 100% necessary. Btw, we already have a concept of an action generator that we use to automatically build OpenStack actions, so you can take a look at how they work. Long story short: we've already made some steps towards being more flexible and have some facilities that could be further improved. Again, the idea is very interesting to me (and not only to me). Please share the details. Thanks Renat Akhmerov @ Mirantis Inc. > On 17 Dec 2014, at 13:22, W Chan wrote: > > Renat, > > We want to introduce the concept of an ActionProvider to Mistral. We are thinking that with an ActionProvider, a third party system can extend Mistral with its own action catalog and set of dedicated and specialized action executors. The ActionProvider will return its own list of actions via an abstract interface. This minimizes the complexity and latency in managing and sync'ing the Action table. In the DSL, we can define provider specific context/configuration separately and apply to all provider specific actions without explicitly passing as inputs. WDYT? > > Winson > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From nkinder at redhat.com Thu Dec 18 06:41:23 2014 From: nkinder at redhat.com (Nathan Kinder) Date: Wed, 17 Dec 2014 22:41:23 -0800 Subject: [openstack-dev] [OSSN 0038] Suds client subject to cache poisoning by local attacker Message-ID: <54927713.70902@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Suds client subject to cache poisoning by local attacker - --- ### Summary ### Suds is a Python SOAP client for consuming Web Services.
Its default cache implementation stores pickled objects to a predictable path in /tmp. This can be used by a local attacker to redirect SOAP requests via symlinks or run a privilege escalation or code execution attack. ### Affected Services / Software ### Cinder, Nova, Grizzly, Havana, Icehouse ### Discussion ### The Python 'suds' package is used by oslo.vmware to interface with SOAP service APIs and both Cinder and Nova have dependencies on oslo.vmware when using VMware drivers. By default suds uses an on-disk cache that places pickle files, serialised Python objects, into a known location '/tmp/suds'. A local attacker could use symlinks or place crafted files into this location that will later be deserialised by suds. By manipulating the content of the cached pickle files, an attacker can redirect or modify SOAP requests. Alternatively, pickle may be used to run injected Python code during the deserialisation process. This can allow the spawning of a shell to execute arbitrary OS level commands with the permissions of the service using suds, thus leading to possible privilege escalation. At the time of writing, the suds package appears largely unmaintained upstream. However, vendors have released patched versions that do not suffer from the predictable cache path problem. Ubuntu is known to offer one such patched version (python-suds_0.4.1-2ubuntu1.1). ### Recommended Actions ### The recommended solution to this issue is to disable cache usage in the configuration as shown: 'client.set_options(cache=None)' A fix has been released to oslo.vmware (0.6.0) that disables the use of the disk cache by default. Cinder and Nova have both adjusted their requirements to include this fixed version. Deployers wishing to re-enable the cache should ascertain whether or not their vendor shipped suds package is susceptible and consider the above advice. 
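As an editorial aside on why the fixed path matters: the only suds call taken from the advisory above is `client.set_options(cache=None)`. The snippet below is purely illustrative and uses the standard library alone to contrast a predictable shared path (the shape of the default suds cache location) with the private, unpredictably named directory that patched packages create instead.

```python
import os
import stat
import tempfile

# A fixed, predictable location, shared by every process and guessable
# by any local user -- this is the shape of the default suds cache path.
predictable = os.path.join(tempfile.gettempdir(), "suds")

# mkdtemp() creates a freshly named directory with owner-only permissions,
# so another local user can neither pre-create it nor plant symlinks or
# crafted pickle files inside it before the service writes there.
private = tempfile.mkdtemp(prefix="suds-")
mode = stat.S_IMODE(os.stat(private).st_mode)

print(predictable)  # the same path on every run
print(oct(mode))    # owner-only (0o700) on POSIX systems
os.rmdir(private)
```

The unpredictable name and 0700 mode are what close the symlink/pre-creation window; disabling the cache avoids the problem entirely.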
### Contacts / References ### This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0038 Original Launchpad Bug : https://bugs.launchpad.net/ossn/+bug/1341954 OpenStack Security ML : openstack-security at lists.openstack.org OpenStack Security Group : https://launchpad.net/~openstack-ossg Suds: https://pypi.python.org/pypi/suds CVE: CVE-2013-2217 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJUkncTAAoJEJa+6E7Ri+EV4sQH/RUgDVqGRs5tdBGApTd3ljq0 ThqY8+5/3dqOYJ767/tTQ7WghGcPouFV8hXeco2ZS7oYS41kcBwQnvTRCol6bRqH ayKjQIiNvaonHsSSwyhB1fMuUTjMzbTDg6w94xfy2Ibl+0XTskXkhQ2qqLB7yG4H 4sDWZNykE5sGcpn7zB2Xr+6IkODqNlPI5AAGmLBM9N1XB/Y98tQ+k8V7T3cvuF6+ 77/o6WiyD5Q5g5s2/yaOuvOhZu4W3bxAXwKskYBvVIoxA90SPu66hQ2BQHPIzSIX pZG0efK25s1slgY8yL8uNAG2GLIhhgvDk8aW5GkV9XJQ4jIh+15TILNmazSq9Q0= =hEO/ -----END PGP SIGNATURE----- From taget at linux.vnet.ibm.com Thu Dec 18 07:34:33 2014 From: taget at linux.vnet.ibm.com (Eli Qiao(Li Yong Qiao)) Date: Thu, 18 Dec 2014 15:34:33 +0800 Subject: [openstack-dev] ask for usage of quota reserve Message-ID: <54928389.1020903@linux.vnet.ibm.com> hi all, can anyone tell me what will happen if we call quotas.reserve() but never call quotas.commit() or quotas.rollback()? for example: 1. when doing a resize, we call quotas.reserve() to reserve a delta quota (new_flavor - old_flavor) 2. for some reason, nova-compute crashed, and there was no chance to call quotas.commit() or quotas.rollback() /(called by finish_resize in nova/compute/manager.py)/ 3. next time we restart the nova-compute server, is the delta quota still reserved, or do we need any other operation on the quotas? Thanks in advance -Eli. ps: this is related to the patch: Handle RESIZE_PREP status when nova compute does init_instance (https://review.openstack.org/#/c/132827/) -- Thanks Eli Qiao(qiaoly at cn.ibm.com) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rakhmerov at mirantis.com Thu Dec 18 07:45:05 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Thu, 18 Dec 2014 13:45:05 +0600 Subject: [openstack-dev] [Mistral] RFC - Action spec CLI In-Reply-To: References: Message-ID: Hi, > The problem with existing syntax is it is not defined: there are no docs on inlining complex variables [*], and we haven't tested it for anything more than the simplest cases: https://github.com/stackforge/mistral/blob/master/mistral/tests/unit/workbook/v2/test_dsl_specs_v2.py#L114 . I will be surprised if anyone figured out how to provide a complex object as an inline parameter. Documentation is really not complete. It's one of the major problems we're yet to fix. It's just a matter of resources and priorities, as always. Disagree on testing. We tested it for the cases we were interested in. The test you pointed to is not the only one. General comment: If we find that our tests are insufficient, let's just go ahead and improve them. > Do you think regex is the right approach for parsing arbitrary key-values where values are arbitrary JSON structures? Will it work with something like > workflow: wf2 object_list=[ {"url": "http://{$hostname}.example.com:8080?x=a&y={$.b}"}, 33, null, {{$.key}, [{$.value1}, {$.value2}]} With regular expressions it just works. As it turns out shlex doesn't. What else? The example you provided is a question of limitations that every convenient thing has. These limitations should be recognized and well documented. > How many tests should we write to be confident we covered all cases? I share Lakshmi's concern that it is fragile and maintaining it reliably is difficult. Again, proper documentation and recognition of limitations. > My preference is 'option 3', 'make it work as is now'. But if it's too hard I am ok to compromise. https://review.openstack.org/#/c/142452/ . Took a fairly reasonable amount of time for Nikolay to fix it.
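As an aside, the shlex limitation and the "try JSON, fall back to string" coercion debated in this thread are easy to demonstrate with the standard library alone (`coerce` below is a hypothetical helper for illustration, not Mistral code):

```python
import json
import shlex

# shlex tears the inline JSON list apart at its internal spaces:
tokens = shlex.split('wf2 is_true=true object_list=[1, null, "str"]')
# -> ['wf2', 'is_true=true', 'object_list=[1,', 'null,', 'str]']

def coerce(value):
    """Option-2 style coercion: parse as JSON if possible, else keep the raw string."""
    try:
        return json.loads(value)
    except ValueError:
        return value

# Quoting the value as a single shell word makes the same input round-trip:
line = 'wf2 is_true=true object_list=\'[1, null, "str"]\''
name, *pairs = shlex.split(line)
args = {k: coerce(v) for k, v in (p.split("=", 1) for p in pairs)}
# args == {'is_true': True, 'object_list': [1, None, 'str']}
```

This is exactly the trade-off the thread describes: the quoting requirement is the price Option 2 pays for unambiguous complex values.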
> Option 1 introduce a new syntax; although familiar to CLI users, I think it's a bit out of place in a YAML definition. Yes, agree. > Option 4 is no go :) Out of discussion. Renat Akhmerov @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Thu Dec 18 07:48:20 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Thu, 18 Dec 2014 13:48:20 +0600 Subject: [openstack-dev] [Mistral] RFC - Action spec CLI In-Reply-To: References: Message-ID: Dmitri, Yes, it would be really cool if you could help with the documentation. Btw, while doing it you could also think about recommendations for other tests that should be added to make sure they provide enough coverage for the needed cases. Thanks Renat Akhmerov @ Mirantis Inc. > On 17 Dec 2014, at 15:56, Dmitri Zimine wrote: > > The problem with existing syntax is it is not defined: there are no docs on inlining complex variables [*], and we haven't tested it for anything more than the simplest cases: https://github.com/stackforge/mistral/blob/master/mistral/tests/unit/workbook/v2/test_dsl_specs_v2.py#L114 . I will be surprised if anyone figured out how to provide a complex object as an inline parameter. > > Do you think regex is the right approach for parsing arbitrary key-values where values are arbitrary JSON structures? Will it work with something like > workflow: wf2 object_list=[ {"url": "http://{$hostname}.example.com:8080?x=a&y={$.b}"}, 33, null, {{$.key}, [{$.value1}, {$.value2}]} > How many tests should we write to be confident we covered all cases? I share Lakshmi's concern that it is fragile and maintaining it reliably is difficult. > > But back to the original question, it's about requirements, not implementation. > My preference is 'option 3', 'make it work as is now'. But if it's too hard I am ok to compromise. > Then option 2, as it resembles option 3 and YAML/JSON conversion makes complete sense. At the expense of quoting the objects.
Slight change, not significant. > Option 1 introduce a new syntax; although familiar to CLI users, I think it's a bit out of place in a YAML definition. > Option 4 is no go :) > > DZ. > > [*] 'there are no docs for this' - I sign up for fixing this. > > On Dec 16, 2014, at 9:48 PM, Renat Akhmerov > wrote: > >> Ok, I would prefer to spend some time and think how to improve the existing reg exp that we use to parse key-value pairs. We definitely can't just drop support of this syntax and can't even change it significantly since people already use it. >> >> Renat Akhmerov >> @ Mirantis Inc. >> >> >> >>> On 17 Dec 2014, at 07:28, Lakshmi Kannan > wrote: >>> >>> Apologies for the long email. If this fancy email doesn't render correctly for you, please read it here: https://gist.github.com/lakshmi-kannan/cf953f66a397b153254a >>> I was looking into fixing bug: https://bugs.launchpad.net/mistral/+bug/1401039 . My idea was to use shlex to parse the string. This actually would work for anything that is supplied in the linux shell syntax. Problem is this craps out when we want to support complex data structures such as arrays and dicts as arguments. I did not think we supported a syntax to take in complex data structures in a one line format. Consider for example: >>> >>> task7: >>> for-each: >>> vm_info: $.vms >>> workflow: wf2 is_true=true object_list=[1, null, "str"] >>> on-complete: >>> - task9 >>> - task10 >>> Specifically >>> >>> wf2 is_true=true object_list=[1, null, "str"] >>> shlex will not handle this correctly because object_list is an array. Same problem with dict. >>> >>> There are 3 potential options here: >>> >>> Option 1 >>> >>> 1) Provide a spec for specifying lists and dicts like so: >>> list_arg=1,2,3 and dict_arg=h1:h2,h3:h4,h5:h6 >>> >>> shlex will handle this fine but there needs to be code that converts the argument values to appropriate data types based on a schema. (ActionSpec should have a parameter schema, probably in jsonschema). This is doable.
>>> >>> wf2 is_true=true object_list="1,null,"str"" >>> Option 2 >>> >>> 2) Allow JSON strings to be used as arguments so we can json.loads them (if it fails, use them as simple strings). For example, with this approach, the line becomes >>> >>> wf2 is_true=true object_list="[1, null, "str"]" >>> This would pretty much resemble http://stackoverflow.com/questions/7625786/type-dict-in-argparse-add-argument >>> Option 3 >>> >>> 3) Keep the spec as such and try to parse it. I have no idea how we can do this reliably. We need a more rigorous lexer. This syntax doesn't translate well when we want to build a CLI. Linux shells cannot support this syntax natively. This means people would have to use shlex syntax and a translation needs to happen in the CLI layer. This will lead to inconsistency. The CLI uses one syntax and the action input line in the workflow definition will use another. We should try and avoid this. >>> >>> Option 4 >>> >>> 4) Completely drop support for this fancy one line syntax in workflow. This is probably the least desired option. >>> >>> My preference >>> >>> Looking at the options, I like option 2/option 1/option 4/option 3 in the order of preference. >>> >>> With some documentation, we can tell people why this is hard. People will also grok because they are already familiar with CLI limitations in linux. >>> >>> Thoughts? >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rakhmerov at mirantis.com Thu Dec 18 07:53:35 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Thu, 18 Dec 2014 13:53:35 +0600 Subject: [openstack-dev] [pecan] [WSME] Different content-type in request and response In-Reply-To: References: <93DF0931-7060-43B8-8B59-3F9C160B2075@mirantis.com> <9726CFD3-F9C0-4BDF-885B-543F473D1376@doughellmann.com> Message-ID: <1A4A612A-FEBF-40C4-94D6-CA8D2C4BD362@mirantis.com> Doug, Sorry for trying to resurrect this thread again. It seems to be pretty important for us. Do you have some comments on that? Or if you need more context please also let us know. Thanks Renat Akhmerov @ Mirantis Inc. > On 27 Nov 2014, at 17:43, Renat Akhmerov wrote: > > Doug, thanks for your answer! > > My explanations below.. > > >> On 26 Nov 2014, at 21:18, Doug Hellmann > wrote: >> >> >> On Nov 26, 2014, at 3:49 AM, Renat Akhmerov > wrote: >> >>> Hi, >>> >>> I traced the WSME code and found a place [0] where it tries to get arguments from the request body based on different mimetypes. So it looks like WSME supports only json, xml and 'application/x-www-form-urlencoded'. >>> >>> So my question is: Can we fix WSME to also support the 'text/plain' mimetype? I think the first snippet that Nikolay provided is valid from the WSME standpoint. >> >> WSME is intended for building APIs with structured arguments. It seems like the case of wanting to use text/plain for a single input string argument just hasn't come up before, so this may be a new feature. >> >> How many different API calls do you have that will look like this? Would this be the only one in the API? Would it make sense to consistently use JSON, even though you only need a single string argument in this case? > > We have 5-6 API calls where we need it. > > And let me briefly explain the context. In Mistral we have a language (we call it DSL) to describe different object types: workflows, workbooks, actions.
So currently when we upload, say, a workbook we run in a command line: > > mistral workbook-create my_wb.yaml > > where my_wb.yaml contains that DSL. The result is a table representation of the actually created server-side workbook. From a technical perspective we now have: > > Request: > > POST /mistral_url/workbooks > > { > "definition": "escaped content of my_wb.yaml" > } > > Response: > > { > "id": "1-2-3-4", > "name": "my_wb_name", > "description": "my workbook", > ... > } > > The point is that if we use, for example, something like 'curl' we every time have to obtain that escaped content of my_wb.yaml and create that, in fact, synthetic JSON to be able to send it to the server side. > > So for us it would be much more convenient if we could just send plain text but still be able to receive JSON as the response. I personally don't want to use some other technology because generally WSME does its job and I like this concept of rest resources defined as classes. If it supported text/plain it would be just the best fit for us. > >>> >>> Or if we don't understand something in the WSME philosophy then it'd be nice to hear some explanations from the WSME team. Will appreciate that. >>> >>> Another issue that previously came across is that if we use WSME then we can't pass an arbitrary set of parameters in a url query string; as I understand, they should always correspond to the WSME resource structure. So, in fact, we can't have any dynamic parameters. In our particular use case it's very inconvenient. Hoping you could also provide some info about that: how it can be achieved or if we can just fix it. >> >> Ceilometer uses an array of query arguments to allow an arbitrary number. >> >> On the other hand, it sounds like perhaps your desired API may be easier to implement using some of the other tools being used, such as JSONSchema. Are you extending an existing API or building something completely new? > > We want to improve our existing Mistral API.
Basically, the idea is to be able to apply dynamic filters when we're requesting a collection of objects using the url query string. Yes, we could use JSONSchema if you say it's absolutely impossible to do and doesn't follow WSME concepts; that's fine. But like I said, generally I like the approach that WSME takes and don't feel like jumping to another technology just because of this issue. > > Thanks for mentioning Ceilometer, we'll look at it and see if that works for us. > > Renat -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at greghaynes.net Thu Dec 18 08:04:48 2014 From: greg at greghaynes.net (Gregory Haynes) Date: Thu, 18 Dec 2014 08:04:48 +0000 Subject: [openstack-dev] [TripleO] Bug Squashing Day In-Reply-To: <1418759063-sup-5722@greghaynes0.opus.gah> References: <1418247379.1887051.201395737.3992C87F@webmail.messagingengine.com> <1418759063-sup-5722@greghaynes0.opus.gah> Message-ID: <1418889564-sup-2547@greghaynes0.opus.gah> Excerpts from Gregory Haynes's message of 2014-12-16 19:47:54 +0000: > > > On Wed, Dec 10, 2014 at 10:36 PM, Gregory Haynes > > > wrote: > > > > > >> A couple weeks ago we discussed having a bug squash day. AFAICT we all > > >> forgot, and we still have a huge bug backlog. I'd like to propose we > > >> make next Wed. (12/17, in whatever 24-hour window is Wed. in your time zone) > > >> a bug squashing day. Hopefully we can add this as an item to our weekly > > >> meeting on Tues. to help remind everyone the day before. > > Friendly Reminder that tomorrow (or today for some time zones) is our > bug squash day! I hope to see you all in IRC squashing some of our > (least) favorite bugs. > > Random Factoid: We currently have 299 open bugs. Thanks to everyone who participated in our bug squash day! We are now down to 264 open bugs (down from 299). There were also a fair number of bugs filed today as part of our (anti) bug squashing efforts, bringing our total bugs operated on today to >50. Thanks, again!
Cheers, Greg From obondarev at mirantis.com Thu Dec 18 08:19:44 2014 From: obondarev at mirantis.com (Oleg Bondarev) Date: Thu, 18 Dec 2014 11:19:44 +0300 Subject: [openstack-dev] Topic: Reschedule Router to a different agent with multiple external networks. In-Reply-To: <4094DC7712AF5D488899847517A3C5B064E717A0@G4W3299.americas.hpqcorp.net> References: <4094DC7712AF5D488899847517A3C5B064E717A0@G4W3299.americas.hpqcorp.net> Message-ID: Hi Swaminathan Vasudevan, please check the following docstring of L3_NAT_dbonly_mixin._check_router_needs_rescheduling: * def _check_router_needs_rescheduling(self, context, router_id, gw_info):* * """Checks whether router's l3 agent can handle the given network* * When external_network_bridge is set, each L3 agent can be associated* * with at most one external network. If router's new external gateway* * is on other network then the router needs to be rescheduled to the* * proper l3 agent.* * If external_network_bridge is not set then the agent* * can support multiple external networks and rescheduling is not needed* So there can still be agents which can handle only one ext net - for such agents rescheduling is needed. Thanks, Oleg On Wed, Dec 17, 2014 at 8:56 PM, Vasudevan, Swaminathan (PNB Roseville) < swaminathan.vasudevan at hp.com> wrote: > > Hi Folks, > > > > *Reschedule router if new external gateway is on other network* > > An L3 agent may be associated with just one external network. > > If router's new external gateway is on another network then the router > > needs to be rescheduled to the proper l3 agent > > > > This patch was introduced when there was no support for an L3-agent to handle > multiple external networks. > > > > Do we think we should still retain this original behavior even if we have > support for multiple external networks by a single L3-agent? > > > > Can anyone comment on this.
> > > Thanks > > Swaminathan Vasudevan > > Systems Software Engineer (TC) > > > > HP Networking > > Hewlett-Packard > > 8000 Foothills Blvd > > M/S 5541 > > Roseville, CA - 95747 > > tel: 916.785.0937 > > fax: 916.785.1815 > > email: swaminathan.vasudevan at hp.com > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eduard.matei at cloudfounders.com Thu Dec 18 08:27:18 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Thu, 18 Dec 2014 10:27:18 +0200 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> Message-ID: Hi, Seems i can't install using puppet on the jenkins master using install_master.sh from https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh because it's running Ubuntu 11.10 and it appears unsupported. I managed to install puppet manually on master but everything else fails. So i'm trying to manually install zuul and nodepool and jenkins job builder, see where i end up. The slave looks complete; got some errors on running install_slave so i ran parts of the script manually, changing some params, and it appears installed, but there's no way to test it without the master. Any ideas welcome. Thanks, Eduard On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy wrote: > Manually running the script requires a few environment settings. Take a > look at the README here: > > https://github.com/openstack-infra/devstack-gate > > Regarding cinder, I'm using this repo to run our cinder jobs (fork from > jaypipes).
> > https://github.com/rasselin/os-ext-testing > > Note that this solution doesn't use the Jenkins gerrit trigger plugin, > but zuul. > > There's a sample job for cinder here. It's in Jenkins Job Builder format. > > https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample > > You can ask more questions in IRC freenode #openstack-cinder. (irc# > asselin) > > Ramy > > *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com] *Sent:* Tuesday, December 16, 2014 12:41 AM *To:* Bailey, Darragh *Cc:* OpenStack Development Mailing List (not for usage questions); OpenStack *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi, Can someone point me to some working documentation on how to set up third party CI? (joinfu's instructions don't seem to work, and manually running devstack-gate scripts fails: Running gate_hook Job timeout set to: 163 minutes timeout: failed to run command '/opt/stack/new/devstack-gate/devstack-vm-gate.sh': No such file or directory ERROR: the main setup script run by this job failed - exit code: 127 please look at the relevant log files to determine the root cause Cleaning up host ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz) Build step 'Execute shell' marked build as failure. I have a working Jenkins slave with devstack and our internal libraries, i have Gerrit Trigger Plugin working and triggering on patches created, i just need the actual job contents so that it can get to comment with the test results.
> > > > Thanks, > > > > Eduard > > > > On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei < > eduard.matei at cloudfounders.com> wrote: > > Hi Darragh, thanks for your input > > > > I double checked the job settings and fixed it: > > - build triggers is set to Gerrit event > > - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin > and tested separately) > > - Trigger on: Patchset Created > > - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: > Type: Path, Pattern: ** (was Type Plain on both) > > Now the job is triggered by commit on openstack-dev/sandbox :) > > > > Regarding the Query and Trigger Gerrit Patches, i found my patch using > query: status:open project:openstack-dev/sandbox change:139585 and i can > trigger it manually and it executes the job. > > > > But i still have the problem: what should the job do? It doesn't actually > do anything, it doesn't run tests or comment on the patch. > > Do you have an example of job? > > > > Thanks, > > Eduard > > > > On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh wrote: > > Hi Eduard, > > > I would check the trigger settings in the job, particularly which "type" > of pattern matching is being used for the branches. Found it tends to be > the spot that catches most people out when configuring jobs with the > Gerrit Trigger plugin. If you're looking to trigger against all branches > then you would want "Type: Path" and "Pattern: **" appearing in the UI. > > If you have sufficient access using the 'Query and Trigger Gerrit > Patches' page accessible from the main view will make it easier to > confirm that your Jenkins instance can actually see changes in gerrit > for the given project (which should mean that it can see the > corresponding events as well). Can also use the same page to re-trigger > for PatchsetCreated events to see if you've set the patterns on the job > correctly. 
> > Regards, > Darragh Bailey > > "Nothing is foolproof to a sufficiently talented fool" - Unknown > > On 08/12/14 14:33, Eduard Matei wrote: > > Resending this to dev ML as it seems i get quicker response :) > > > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > > Patchset Created", chose as server the configured Gerrit server that > > was previously tested, then added the project openstack-dev/sandbox > > and saved. > > I made a change on dev sandbox repo but couldn't trigger my job. > > > > Any ideas? > > > > Thanks, > > Eduard > > > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > > > wrote: > > > > Hello everyone, > > > > Thanks to the latest changes to the creation of service accounts > > process we're one step closer to setting up our own CI platform > > for Cinder. > > > > So far we've got: > > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > > our storage solution) > > - Service account configured and tested (can manually connect to > > review.openstack.org and get events > > and publish comments) > > > > Next step would be to set up a job to do the actual testing, this > > is where we're stuck. > > Can someone please point us to a clear example on how a job should > > look like (preferably for testing Cinder on Kilo)? Most links > > we've found are broken, or tools/scripts are no longer working. > > Also, we cannot change the Jenkins master too much (it's owned by > > Ops team and they need a list of tools/scripts to review before > > installing/running so we're not allowed to experiment). 
> > > > Thanks, > > Eduard > > > > -- > > > > *Eduard Biceri Matei, Senior Software Developer* > > www.cloudfounders.com > > | eduard.matei at cloudfounders.com > > > > > > > > > > *CloudFounders, The Private Cloud Software Company* > > > > Disclaimer: > > This email and any files transmitted with it are confidential and > > intended solely for the use of the individual or entity to whom > > they are addressed.If you are not the named addressee or an > > employee or agent responsible for delivering this message to the > > named addressee, you are hereby notified that you are not > > authorized to read, print, retain, copy or disseminate this > > message or any part of it. If you have received this email in > > error we request you to notify us by reply e-mail and to delete > > all electronic files of the message. If you are not the intended > > recipient you are notified that disclosing, copying, distributing > > or taking any action in reliance on the contents of this > > information is strictly prohibited. E-mail transmission cannot be > > guaranteed to be secure or error free as information could be > > intercepted, corrupted, lost, destroyed, arrive late or > > incomplete, or contain viruses. The sender therefore does not > > accept liability for any errors or omissions in the content of > > this message, and shall have no liability for any loss or damage > > suffered by the user, which arise as a result of e-mail transmission. 
> > > > > > > > > > -- > > *Eduard Biceri Matei, Senior Software Developer* > > www.cloudfounders.com > > | eduard.matei at cloudfounders.com > > > > > > > > > > *CloudFounders, The Private Cloud Software Company* > > > > Disclaimer: > > This email and any files transmitted with it are confidential and > > intended solely for the use of the individual or entity to whom they > > are addressed.If you are not the named addressee or an employee or > > agent responsible for delivering this message to the named addressee, > > you are hereby notified that you are not authorized to read, print, > > retain, copy or disseminate this message or any part of it. If you > > have received this email in error we request you to notify us by reply > > e-mail and to delete all electronic files of the message. If you are > > not the intended recipient you are notified that disclosing, copying, > > distributing or taking any action in reliance on the contents of this > > information is strictly prohibited. E-mail transmission cannot be > > guaranteed to be secure or error free as information could be > > intercepted, corrupted, lost, destroyed, arrive late or incomplete, or > > contain viruses. The sender therefore does not accept liability for > > any errors or omissions in the content of this message, and shall have > > no liability for any loss or damage suffered by the user, which arise > > as a result of e-mail transmission. 
> > > > _______________________________________________ > > OpenStack-Infra mailing list > > OpenStack-Infra at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra > > > > -- > > *Eduard Biceri Matei, Senior Software Developer* > > www.cloudfounders.com > > | eduard.matei at cloudfounders.com > > > > *CloudFounders, The Private Cloud Software Company*
> _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-- *Eduard Biceri Matei, Senior Software Developer* www.cloudfounders.com | eduard.matei at cloudfounders.com *CloudFounders, The Private Cloud Software Company*
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From taget at linux.vnet.ibm.com Thu Dec 18 08:33:07 2014 From: taget at linux.vnet.ibm.com (Eli Qiao(Li Yong Qiao)) Date: Thu, 18 Dec 2014 16:33:07 +0800 Subject: [openstack-dev] [nova][resend with correct subject prefix] ask for usage of quota reserve Message-ID: <54929143.2070903@linux.vnet.ibm.com>
hi all, can anyone tell what happens if we call quotas.reserve() but never call quotas.commit() or quotas.rollback()? for example: 1. when doing a resize, we call quotas.reserve() to reserve a delta quota (new_flavor - old_flavor) 2. for some reason, nova-compute crashed, and there was no chance to call quotas.commit() or quotas.rollback() (called by finish_resize in nova/compute/manager.py) 3. the next time nova-compute restarts, is the delta quota still reserved, or do we need any other operation on quotas? Thanks in advance -Eli. ps: this is related to the patch: Handle RESIZE_PREP status when nova-compute does init_instance (https://review.openstack.org/#/c/132827/) -- Thanks Eli Qiao(qiaoly at cn.ibm.com) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pasquale.porreca at dektech.com.au Thu Dec 18 08:33:37 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Thu, 18 Dec 2014 09:33:37 +0100 Subject: [openstack-dev] [nova] Complexity check and v2 API In-Reply-To: References: <5491ACC2.6070207@dektech.com.au> <5491BACE.2010900@dektech.com.au> Message-ID: <54929161.8080305@dektech.com.au>
Yes, for v2.1 there is no such problem; moreover, the corresponding v2.1 server.py has much lower complexity than the v2 one. On 12/17/14 20:10, Christopher Yeoh wrote: > Hi, > > Given the timing (no spec approved) it sounds like a v2.1 plus > microversions (just merging) with no v2 changes at all. > > The v2.1 framework is more flexible and you should need no changes to > servers.py at all as there are hooks for adding extra parameters in > separate plugins. There are examples of this in the v3 directory which > is really v2.1 now. > > Chris > On Thu, 18 Dec 2014 at 3:49 am, Pasquale Porreca > > wrote: > > Thank you for the answer. > > My API proposal won't be merged in the Kilo release since the deadline for > approval is tomorrow, so I may propose the fix to lower the complexity > in another way; what do you think about a bug fix? > > On 12/17/14 18:05, Matthew Gilliard wrote: > > Hello Pasquale > > > > The problem is that you are trying to add a new if/else branch > into > > a method which is already ~250 lines long, and has the highest > > complexity of any function in the nova codebase. I assume that you > > didn't contribute much to that complexity, but we've recently > added a > > limit to stop it getting any worse. So, regarding your 4 > suggestions: > > > > 1/ As I understand it, v2.1 should be the same as v2 at the > > moment, so they need to be kept the same > > 2/ You can't ignore it - it will fail CI > > 3/ No thank you. This limit should only ever be lowered :-) > > 4/ This is 'the right way'. Your suggestion for the refactor > does > > sound good.
> > > > I suggest a single patch that refactors and lowers the limit in > tox.ini. Once you've done that, you can add the new > parameter in > a following patch. Please feel free to add me to any patches you > create. > > Matthew > > On Wed, Dec 17, 2014 at 4:18 PM, Pasquale Porreca > > wrote: > >> Hello > >> > >> I am working on an API extension that adds a parameter to the > create server > >> call; to implement the v2 API I added a few lines of code to > >> nova/api/openstack/compute/servers.py > >> > >> In particular, just adding something like > >> > >> new_param = None > >> if self.ext_mgr.is_loaded('os-new-param'): > >> new_param = server_dict.get('new_param') > >> > >> leads to a pep8 failure with the message 'Controller.create' is too > complex (47) > >> (note that in tox.ini the max complexity is fixed at 47, and > there is a note > >> specifying 46 is the max complexity present at the moment). > >> > >> It is quite easy to make this check pass by creating a new method > just to > >> execute these lines of code; however, all other extensions are > handled in that > >> way, and one of the most important style rules is to be > consistent with > >> surrounding code, so I don't think a separate function is the > way to go > >> (unless it implies a change in how all other extensions are > handled too). > >> > >> My thoughts on this situation: > >> > >> 1) New extensions should not consider v2 but only v2.1, so that > file should > >> not be touched > >> 2) Ignore this error and go on: if and when the extension is > merged, the > >> complexity in tox.ini will be changed too > >> 3) The complexity in tox.ini should be raised to allow new v2 > extensions > >> 4) The code of that module should be refactored to lower the > complexity > >> (i.e. move the loading of each extension into a separate function) > >> > >> I would like to know if any of my points is close to the correct > solution.
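Pasquale's option 4 — moving each extension's parameter handling into its own helper — is what keeps the cyclomatic complexity of create() flat as extensions are added. A minimal sketch of the idea follows; the ExtensionManager here is a toy stand-in, not nova's actual ext_mgr:

```python
# Sketch of option 4: each optional-extension lookup moves out of the big
# create() method into a small helper, so adding an extension no longer
# adds an if/else branch to create() itself. The ExtensionManager class
# is a toy stand-in for nova's extension manager.

class ExtensionManager:
    def __init__(self, loaded):
        self._loaded = set(loaded)

    def is_loaded(self, name):
        return name in self._loaded


def extract_ext_param(ext_mgr, ext_name, server_dict, key):
    """Return the value for ``key`` only if the extension is loaded."""
    if ext_mgr.is_loaded(ext_name):
        return server_dict.get(key)
    return None


def create(ext_mgr, server_dict):
    # create() stays a flat sequence of calls: its measured complexity
    # does not grow with the number of optional extensions.
    new_param = extract_ext_param(ext_mgr, 'os-new-param',
                                  server_dict, 'new_param')
    return {'new_param': new_param}


print(create(ExtensionManager(['os-new-param']), {'new_param': 'value'}))
print(create(ExtensionManager([]), {'new_param': 'value'}))
```

Each helper contributes one small, separately-counted function, which is exactly why the pep8 complexity check stops flagging create().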
> >> > >> -- > >> Pasquale Porreca > >> > >> DEK Technologies > >> Via dei Castelli Romani, 22 > >> 00040 Pomezia (Roma) > >> > >> Mobile +39 3394823805 > >> Skype paskporr > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From steven at wedontsleep.org Thu Dec 18 08:48:15 2014 From: steven at wedontsleep.org (Steve Kowalik) Date: Thu, 18 Dec 2014 19:48:15 +1100 Subject: [openstack-dev] [TripleO] How do the CI clouds work? Message-ID: <549294CF.2040307@wedontsleep.org>
Hai, I am finding myself at a loss to explain how the CI clouds that run the tripleo jobs work from end-to-end. I am clear that we have a tripleo deployment running on those racks, with a seed, a HA undercloud and overcloud, but then I'm left with a number of questions, such as: How do we run the testenv images on the overcloud? How do the testenv images interact with the nova-compute machines in the overcloud?
Are the machines running the testenv images meant to be long-running, or are they recycled after n number of runs? Cheers, -- Steve In the beginning was the word, and the word was content-type: text/plain
From eduard.matei at cloudfounders.com Thu Dec 18 08:56:17 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Thu, 18 Dec 2014 10:56:17 +0200 Subject: [openstack-dev] [OpenStack-dev][Cinder] Driver stats return value: infinite or unavailable Message-ID:
Hi everyone, We're in a bit of a predicament regarding review: https://review.openstack.org/#/c/130733/ Two days ago it got a -1 from John G asking to change infinite to unavailable, although the docs clearly say that "If the driver is unable to provide a value for free_capacity_gb or total_capacity_gb, keywords can be provided instead. Please use 'unknown' if the array cannot report the value or 'infinite' if the array has no upper limit." ( http://docs.openstack.org/developer/cinder/devref/drivers.html) After I changed it, Walter A. Boring IV gave another -1 saying we should return infinite. Since we use S3 as a backend and it has no upper limit (technically there is a limit, but for the purposes of our driver there's no limit, as the backend is "elastic"), we could return infinite. Anyway, the problem is that we have now missed the K-1 merge window although the driver passed all tests (including cert tests). So please can someone decide which is the correct value, so we can use that and get the patch approved (unless there are other issues). Thanks, Eduard -- *Eduard Biceri Matei, Senior Software Developer* www.cloudfounders.com | eduard.matei at cloudfounders.com *CloudFounders, The Private Cloud Software Company*
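The two keywords Eduard quotes from the Cinder docs plug into the driver's stats dict in the same place a numeric GB value would go. A minimal sketch of the choice — the function name and backend details here are illustrative assumptions, not the actual patch under review:

```python
# Minimal sketch of the capacity fields a Cinder driver reports. The
# keywords 'infinite' and 'unknown' stand in for a numeric GB value when
# the backend cannot report one. All names here are illustrative.

def build_volume_stats(backend_name, elastic_backend=True):
    if elastic_backend:
        # Backend has no practical upper limit (e.g. S3-style storage).
        total, free = 'infinite', 'infinite'
    else:
        # Backend exists but cannot report capacity numbers.
        total, free = 'unknown', 'unknown'
    return {
        'volume_backend_name': backend_name,
        'vendor_name': 'Example',
        'driver_version': '1.0',
        'storage_protocol': 'iSCSI',
        'total_capacity_gb': total,
        'free_capacity_gb': free,
    }

stats = build_volume_stats('s3_backend', elastic_backend=True)
print(stats['free_capacity_gb'])  # infinite
```

Per the docs Eduard cites, 'infinite' is for a backend with no upper limit and 'unknown' for one that simply cannot report a value, which is the distinction the reviewers are debating.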
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From jordan.pittier at scality.com Thu Dec 18 09:18:43 2014 From: jordan.pittier at scality.com (Jordan Pittier) Date: Thu, 18 Dec 2014 10:18:43 +0100 Subject: [openstack-dev] HTTPS for spice console In-Reply-To: References: Message-ID:
Hi, You'll need a recent version of spice-html5, because this commit http://cgit.freedesktop.org/spice/spice-html5/commit/?id=293d405e15a4499219fe81e830862cc2b1518e3e is recent. Jordan On Wed, Dec 17, 2014 at 11:29 PM, Akshik DBK wrote: > > Is there any recommended approach to configuring the spice console proxy on a secure [https] endpoint? I could not find proper documentation for the same. > > Can someone point me in the right direction? > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From keshava.a at hp.com Thu Dec 18 09:31:26 2014 From: keshava.a at hp.com (A, Keshava) Date: Thu, 18 Dec 2014 09:31:26 +0000 Subject: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration In-Reply-To: <54903638.20608@orange.com> References: <9EDBC83C95615E4A97C964A30A3E18AC2F68B9FB@ORD1EXD01.RACKSPACE.CORP> <891761EAFA335D44AD1FFDB9B4A8C063DA025B@G9W0733.americas.hpqcorp.net> <54903638.20608@orange.com> Message-ID: <891761EAFA335D44AD1FFDB9B4A8C063DB8635@G9W0733.americas.hpqcorp.net>
Hi Thomas, Basically, as per your thought, the 'vpn-label' is extended to OVS itself, so that when an MPLS-over-GRE packet comes from OVS, the incoming label is used to index the respective VPN table at the DC-Edge side? Questions: 1. Who tells OVS which label to use? Are you thinking of a BGP-VPN session between the DC-Edge and the Compute Node (OVS), so that OVS itself looks at the BGP-VPN table and, based on the destination, adds that VPN label as the MPLS label? Or will an ODL or OpenStack controller dictate which VPN label to use to both the DC-Edge and the CN (OVS)? 2. How much is gained by originating MPLS from OVS, compared with terminating VxLAN on the DC-edge and then originating MPLS from there? keshava -----Original Message----- From: Thomas Morin [mailto:thomas.morin at orange.com] Sent: Tuesday, December 16, 2014 7:10 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration
Hi Keshava, 2014-12-15 11:52, A, Keshava : > I have been thinking of "Starting MPLS right from CN" for L2VPN/EVPN scenario also. > > Below are my queries w.r.t supporting MPLS from OVS : > 1. MPLS will be used even for VM-VM traffic across CNs generated by OVS ? If E-VPN is used only to interconnect outside of a Neutron domain, then MPLS does not have to be used for traffic between VMs.
If E-VPN is used inside one DC for VM-VM traffic, then MPLS is *one* of the possible encapsulations only: E-VPN specs have been defined to use VXLAN (handy because there is native kernel support), and MPLS/GRE or MPLS/UDP are other possibilities. > 2. MPLS will be originated right from OVS and will be mapped at Gateway (it may be NN/Hardware router ) to SP network ? > So MPLS will carry 2 Labels ? (one for hop-by-hop, and other one > for end to identify network ?) On "will carry 2 Labels ?" : this would be one possibility, but not the one we target. We would actually favor MPLS/GRE (GRE used instead of what you call the MPLS "hop-by-hop" label) inside the DC -- this requires only one label. At the DC edge gateway, depending on the interconnection techniques used to connect the WAN, different options can be used (RFC4364 section 10): Option A with back-to-back VRFs (no MPLS label, but typically VLANs), or option B (with one MPLS label); a mix of A/B is also possible and sometimes called option D (one label); option C also exists, but is not a good fit here. Inside one DC, if vswitches see each other across an Ethernet segment, we can also use MPLS with just one label (the VPN label) without a GRE encap. In a way, you can say that in Option B the labels are "mapped" at the DC/WAN gateway(s), but this is really just MPLS label swapping, not to be misunderstood as mapping a DC label space to a WAN label space (see below, the label space is local to each device). > 3. MPLS will go over even the "network physical infrastructure" also ? The use of MPLS/GRE means we are doing an overlay, just like your typical VXLAN-based solution, and the network physical infrastructure does not need to be MPLS-aware (it just needs to be able to carry IP traffic) > 4. How the Labels will be mapped a/c virtual and physical world ? (I don't get the question, I'm not sure what you mean by "mapping labels") > 5. Who manages the label space ? Virtual world or physical world or > both ?
(OpenStack + ODL ?) In MPLS*, the label space is local to each device : a label is "downstream-assigned", i.e. allocated by the receiving device for a specific purpose (e.g. forwarding in a VRF). It is then (typically) advertised in a routing protocol; the sender device will use this label to send traffic to the receiving device for this specific purpose. As a result, a sender device may use label 42 to forward traffic in the context of VPN X to a receiving device A, and the same label 42 to forward traffic in the context of another VPN Y to another receiving device B, and locally use label 42 to receive traffic for VPN Z. There is no global label space to manage. So, while you can design a solution where the label space is managed in a centralized fashion, this is not required. You could design an SDN controller solution where the controller would manage one label space common to all nodes, or all the label spaces of all forwarding devices, but I think it's hard to derive any interesting property from such a design choice. In our BaGPipe distributed design (and this is also true in OpenContrail for instance) the label space is managed locally on each compute node (or network node, if the BGP speaker is on a network node), more precisely in the VPN implementation. If you take a step back, the only naming space that has to be "managed" in BGP VPNs is the Route Target space. This is only in the control plane. It is a very large space (48 bits), and it is structured (each AS has its own 32 bit space, and there are private AS numbers). The mapping to dataplane MPLS labels is per-device and purely local. (*: MPLS also allows "upstream-assigned" labels; it is more recent and only used in specific cases where downstream-assigned does not work well) > 6. The labels are nested (i.e. Like L3 VPN end to end MPLS connectivity ) will be established ? In solutions where MPLS/GRE is used the label stack typically has only one label (the VPN label). > 7.
Or it will be label stitching between Virtual-Physical network ? > How the end-to-end path will be setup ? > > Let me know your opinion for the same. > How the end-to-end path is set up may depend on the interconnection choice. With an inter-AS option B or A+B, you would have the following: - ingress DC overlay: one MPLS-over-GRE hop from vswitch to DC edge Keshava: Label coming from vSwitch is considered to select the respective VPN instance. But someone should tell which label to use to which VPN instance at OVS side right ? - ingress DC edge to WAN: one MPLS label (VPN label advertised by eBGP) - inside the WAN: (typically) two labels (e.g. LDP label to reach remote edge, and VPN label advertised via iBGP) - WAN to egress DC edge: one MPLS label (VPN label advertised by eBGP) - egress DC overlay: one MPLS-over-GRE hop from DC edge to vswitch Not sure how the above answers your questions; please keep asking if it does not ! ;) -Thomas > -----Original Message----- > From: Mathieu Rohon [mailto:mathieu.rohon at gmail.com] > Sent: Monday, December 15, 2014 3:46 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration > > Hi Ryan, > > We have been working on similar use cases to announce /32 with the Bagpipe BGPSpeaker that supports EVPN. > Please have a look at use case B in [1][2].
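Thomas's point that the MPLS label space is local to each receiving device — the same label value can be handed out by different devices for different VPNs, with no global registry — can be modelled in a few lines (a toy illustration only, not any real implementation):

```python
# Toy model of downstream-assigned MPLS labels: each receiving device
# allocates labels from its own local space, so the same label value can
# mean different VPNs on different devices without any global
# coordination. Purely illustrative; not a real forwarding plane.

class Device:
    def __init__(self, name):
        self.name = name
        self._next_label = 42          # arbitrary local starting point
        self.label_to_vpn = {}         # local forwarding context only

    def allocate_label(self, vpn):
        label = self._next_label
        self._next_label += 1
        self.label_to_vpn[label] = vpn
        return label                    # advertised (e.g. via BGP) to senders

    def receive(self, label):
        # Lookup happens in the receiver's own table, nowhere else.
        return self.label_to_vpn[label]

a, b = Device('A'), Device('B')
la = a.allocate_label('VPN-X')   # device A hands out 42 for VPN X
lb = b.allocate_label('VPN-Y')   # device B independently hands out 42 for VPN Y
print(la == lb, a.receive(la), b.receive(lb))  # True VPN-X VPN-Y
```

Senders simply use whatever label the receiver advertised, which is why no global label space needs managing.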
> Note also that the L2population Mechanism driver for ML2, that is compatible with OVS, Linuxbridge and ryu ofagent, is inspired by EVPN, and I'm sure it could help in your use case > > [1]http://fr.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe > [2]https://www.youtube.com/watch?v=q5z0aPrUZYc&sns > [3]https://blueprints.launchpad.net/neutron/+spec/l2-population > > Mathieu > > On Thu, Dec 4, 2014 at 12:02 AM, Ryan Clevenger wrote: >> Hi, >> >> At Rackspace, we have a need to create a higher level networking >> service primarily for the purpose of creating a Floating IP solution >> in our environment. The current solutions for Floating IPs, being tied >> to plugin implementations, does not meet our needs at scale for the following reasons: >> >> 1. Limited endpoint H/A mainly targeting failover only and not >> multi-active endpoints, 2. Lack of noisy neighbor and DDOS mitigation, >> 3. IP fragmentation (with cells, public connectivity is terminated >> inside each cell leading to fragmentation and IP stranding when cell >> CPU/Memory use doesn't line up with allocated IP blocks. Abstracting >> public connectivity away from nova installations allows for much more >> efficient use of those precious IPv4 blocks). >> 4. Diversity in transit (multiple encapsulation and transit types on a >> per floating ip basis). >> >> We realize that network infrastructures are often unique and such a >> solution would likely diverge from provider to provider. However, we >> would love to collaborate with the community to see if such a project >> could be built that would meet the needs of providers at scale. We >> believe that, at its core, this solution would boil down to >> terminating north<->south traffic temporarily at a massively >> horizontally scalable centralized core and then encapsulating traffic >> east<->west to a specific host based on the association setup via the current L3 router's extension's 'floatingips' >> resource. 
>> >> Our current idea, involves using Open vSwitch for header rewriting and >> tunnel encapsulation combined with a set of Ryu applications for management: >> >> https://i.imgur.com/bivSdcC.png >> >> The Ryu application uses Ryu's BGP support to announce up to the >> Public Routing layer individual floating ips (/32's or /128's) which >> are then summarized and announced to the rest of the datacenter. If a >> particular floating ip is experiencing unusually large traffic (DDOS, >> slashdot effect, etc.), the Ryu application could change the >> announcements up to the Public layer to shift that traffic to >> dedicated hosts setup for that purpose. It also announces a single /32 >> "Tunnel Endpoint" ip downstream to the TunnelNet Routing system which >> provides transit to and from the cells and their hypervisors. Since >> traffic from either direction can then end up on any of the FLIP >> hosts, a simple flow table to modify the MAC and IP in either the SRC >> or DST fields (depending on traffic direction) allows the system to be >> completely stateless. We have proven this out (with static routing and >> flows) to work reliably in a small lab setup. >> >> On the hypervisor side, we currently plumb networks into separate OVS >> bridges. Another Ryu application would control the bridge that handles >> overlay networking to selectively divert traffic destined for the >> default gateway up to the FLIP NAT systems, taking into account any >> configured logical routing and local L2 traffic to pass out into the >> existing overlay fabric undisturbed. >> >> Adding in support for L2VPN EVPN >> (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN >> Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) >> to the Ryu BGP speaker will allow the hypervisor side Ryu application >> to advertise up to the FLIP system reachability information to take >> into account VM failover, live-migrate, and supported encapsulation >> types. 
We believe that decoupling the tunnel endpoint discovery from >> the control plane >> (Nova/Neutron) will provide for a more robust solution as well as >> allow for use outside of openstack if desired. >> >> _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
From jichenjc at cn.ibm.com Thu Dec 18 09:33:04 2014 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Thu, 18 Dec 2014 17:33:04 +0800 Subject: [openstack-dev] [nova][resend with correct subject prefix] ask for usage of quota reserve In-Reply-To: <54929143.2070903@linux.vnet.ibm.com> References: <54929143.2070903@linux.vnet.ibm.com> Message-ID:
AFAIK, the quota reservation will expire in 24 hours: cfg.IntOpt('reservation_expire', default=86400, help='Number of seconds until a reservation expires'), Best Regards! Kevin (Chen) Ji ? ? Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82454158 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC
From: "Eli Qiao(Li Yong Qiao)" To: "OpenStack Development Mailing List (not for usage questions)" Date: 12/18/2014 04:34 PM Subject: [openstack-dev] [nova][resend with correct subject prefix] ask for usage of quota reserve
hi all, can anyone tell what happens if we call quotas.reserve() but never call quotas.commit() or quotas.rollback()? for example: 1. when doing a resize, we call quotas.reserve() to reserve a delta quota (new_flavor - old_flavor) 2. for some reason, nova-compute crashed, and there was no chance to call quotas.commit() or quotas.rollback() (called by finish_resize in nova/compute/manager.py) 3. the next time nova-compute restarts, is the delta quota still reserved, or do we need any other operation on quotas? Thanks in advance -Eli.
ps: this is related to the patch: Handle RESIZE_PREP status when nova-compute does init_instance (https://review.openstack.org/#/c/132827/) -- Thanks Eli Qiao(qiaoly at cn.ibm.com) _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
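The reserve/commit/rollback pattern under discussion, together with the reservation_expire safety net Kevin quotes (default 86400 seconds), can be sketched as follows. This is a deliberately simplified model, not nova's actual Quotas object:

```python
import time

# Simplified model of nova's quota reservations: reserve() holds a delta,
# commit()/rollback() finalize it, and any reservation neither committed
# nor rolled back is eventually dropped once it is older than
# reservation_expire seconds -- the cleanup that covers the case where
# nova-compute crashed mid-resize. This is an illustrative model only.

RESERVATION_EXPIRE = 86400  # seconds, nova's documented default


class Reservations:
    def __init__(self):
        self._pending = {}   # reservation id -> (delta, created_at)
        self._used = 0
        self._next_id = 0

    def reserve(self, delta, now=None):
        rid = self._next_id
        self._next_id += 1
        self._pending[rid] = (delta, now if now is not None else time.time())
        return rid

    def commit(self, rid):
        delta, _ = self._pending.pop(rid)
        self._used += delta

    def rollback(self, rid):
        self._pending.pop(rid)

    def expire(self, now):
        # Drop reservations older than RESERVATION_EXPIRE: an orphaned
        # resize reservation is eventually freed, not leaked forever.
        self._pending = {
            rid: (d, t) for rid, (d, t) in self._pending.items()
            if now - t < RESERVATION_EXPIRE
        }


q = Reservations()
rid = q.reserve(delta=2, now=0)      # e.g. new_flavor - old_flavor
q.expire(now=90000)                  # > 86400s later, never committed
print(rid in q._pending)             # False: the held delta is released
```

So a reservation stranded by a crash does not consume quota forever; it simply expires once the configured interval has passed.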
From anil.venkata at enovance.com Thu Dec 18 10:44:29 2014
From: anil.venkata at enovance.com (Anil Venkata)
Date: Thu, 18 Dec 2014 11:44:29 +0100 (CET)
Subject: [openstack-dev] [third party][neutron] - OpenDaylight CI failing for past 6 days
Message-ID: <895258305.824569.1418899469666.JavaMail.zimbra@enovance.com>

Hi All,

The last successful build on the OpenDaylight CI
( https://jenkins.opendaylight.org/ovsdb/job/openstack-gerrit/ ) was 6 days
ago. Since then, the OpenDaylight CI Jenkins job has been failing for all
patches. Can we remove the voting rights for the OpenDaylight CI until it is
fixed?

Thanks
Anil.Venakata

From punith.s at cloudbyte.com Thu Dec 18 11:12:10 2014
From: punith.s at cloudbyte.com (Punith S)
Date: Thu, 18 Dec 2014 16:42:10 +0530
Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI
In-Reply-To: 
References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net>
Message-ID: 

Hi Eduard,

we tried running
https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
on an Ubuntu 12.04 master, and it appears to be working fine there.

thanks

On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei <
eduard.matei at cloudfounders.com> wrote:

> Hi,
> Seems I can't install using puppet on the jenkins master using
> install_master.sh from
> https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
> because it's running Ubuntu 11.10 and that appears unsupported.
> I managed to install puppet manually on the master, but everything else
> fails, so I'm trying to manually install zuul, nodepool and jenkins job
> builder and see where I end up.
>
> The slave looks complete; I got some errors running install_slave, so I
> ran parts of the script manually, changing some params, and it appears
> installed, but there is no way to test it without the master.
>
> Any ideas welcome.
> Thanks,
>
> Eduard
>
> On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy wrote:
>
>> Manually running the script requires a few environment settings. Take a
>> look at the README here:
>> https://github.com/openstack-infra/devstack-gate
>>
>> Regarding cinder, I'm using this repo to run our cinder jobs (fork from
>> jaypipes):
>> https://github.com/rasselin/os-ext-testing
>>
>> Note that this solution doesn't use the Jenkins gerrit trigger plugin,
>> but zuul.
>>
>> There's a sample job for cinder here. It's in Jenkins Job Builder format:
>> https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample
>>
>> You can ask more questions in IRC freenode #openstack-cinder. (irc# asselin)
>>
>> Ramy
>>
>> *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com]
>> *Sent:* Tuesday, December 16, 2014 12:41 AM
>> *To:* Bailey, Darragh
>> *Cc:* OpenStack Development Mailing List (not for usage questions); OpenStack
>> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI
>>
>> Hi,
>>
>> Can someone point me to some working documentation on how to set up third
>> party CI? (joinfu's instructions don't seem to work, and manually running
>> the devstack-gate scripts fails:
>>
>> Running gate_hook
>> Job timeout set to: 163 minutes
>> timeout: failed to run command '/opt/stack/new/devstack-gate/devstack-vm-gate.sh': No such file or directory
>> ERROR: the main setup script run by this job failed - exit code: 127
>> please look at the relevant log files to determine the root cause
>> Cleaning up host
>> ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)
>> Build step 'Execute shell' marked build as failure.
>> I have a working Jenkins slave with devstack and our internal libraries;
>> I have the Gerrit Trigger Plugin working and triggering on patches
>> created. I just need the actual job contents so that it can comment with
>> the test results.
>>
>> Thanks,
>>
>> Eduard
>>
>> On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei <
>> eduard.matei at cloudfounders.com> wrote:
>>
>> Hi Darragh, thanks for your input.
>>
>> I double-checked the job settings and fixed it:
>> - Build Triggers is set to Gerrit event
>> - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger
>>   Plugin and tested separately)
>> - Trigger on: Patchset Created
>> - Gerrit Project: Type: Path, Pattern: openstack-dev/sandbox, Branches:
>>   Type: Path, Pattern: ** (was Type Plain on both)
>> Now the job is triggered by a commit on openstack-dev/sandbox :)
>>
>> Regarding the Query and Trigger Gerrit Patches, I found my patch using
>> the query: status:open project:openstack-dev/sandbox change:139585, and
>> I can trigger it manually and it executes the job.
>>
>> But I still have the problem: what should the job do? It doesn't actually
>> do anything; it doesn't run tests or comment on the patch.
>> Do you have an example of a job?
>>
>> Thanks,
>> Eduard
>>
>> On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh wrote:
>>
>> Hi Eduard,
>>
>> I would check the trigger settings in the job, particularly which "type"
>> of pattern matching is being used for the branches. Found it tends to be
>> the spot that catches most people out when configuring jobs with the
>> Gerrit Trigger plugin. If you're looking to trigger against all branches
>> then you would want "Type: Path" and "Pattern: **" appearing in the UI.
>> >> If you have sufficient access using the 'Query and Trigger Gerrit >> Patches' page accessible from the main view will make it easier to >> confirm that your Jenkins instance can actually see changes in gerrit >> for the given project (which should mean that it can see the >> corresponding events as well). Can also use the same page to re-trigger >> for PatchsetCreated events to see if you've set the patterns on the job >> correctly. >> >> Regards, >> Darragh Bailey >> >> "Nothing is foolproof to a sufficiently talented fool" - Unknown >> >> On 08/12/14 14:33, Eduard Matei wrote: >> > Resending this to dev ML as it seems i get quicker response :) >> > >> > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: >> > Patchset Created", chose as server the configured Gerrit server that >> > was previously tested, then added the project openstack-dev/sandbox >> > and saved. >> > I made a change on dev sandbox repo but couldn't trigger my job. >> > >> > Any ideas? >> > >> > Thanks, >> > Eduard >> > >> > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei >> > > > > wrote: >> > >> > Hello everyone, >> > >> > Thanks to the latest changes to the creation of service accounts >> > process we're one step closer to setting up our own CI platform >> > for Cinder. >> > >> > So far we've got: >> > - Jenkins master (with Gerrit plugin) and slave (with DevStack and >> > our storage solution) >> > - Service account configured and tested (can manually connect to >> > review.openstack.org and get events >> > and publish comments) >> > >> > Next step would be to set up a job to do the actual testing, this >> > is where we're stuck. >> > Can someone please point us to a clear example on how a job should >> > look like (preferably for testing Cinder on Kilo)? Most links >> > we've found are broken, or tools/scripts are no longer working. 
>> > Also, we cannot change the Jenkins master too much (it's owned by >> > Ops team and they need a list of tools/scripts to review before >> > installing/running so we're not allowed to experiment). >> > >> > Thanks, >> > Eduard >> > >> > -- >> > >> > *Eduard Biceri Matei, Senior Software Developer* >> > www.cloudfounders.com >> > | eduard.matei at cloudfounders.com >> > >> > >> > >> > >> > *CloudFounders, The Private Cloud Software Company* >> > >> > Disclaimer: >> > This email and any files transmitted with it are confidential and >> > intended solely for the use of the individual or entity to whom >> > they are addressed.If you are not the named addressee or an >> > employee or agent responsible for delivering this message to the >> > named addressee, you are hereby notified that you are not >> > authorized to read, print, retain, copy or disseminate this >> > message or any part of it. If you have received this email in >> > error we request you to notify us by reply e-mail and to delete >> > all electronic files of the message. If you are not the intended >> > recipient you are notified that disclosing, copying, distributing >> > or taking any action in reliance on the contents of this >> > information is strictly prohibited. E-mail transmission cannot be >> > guaranteed to be secure or error free as information could be >> > intercepted, corrupted, lost, destroyed, arrive late or >> > incomplete, or contain viruses. The sender therefore does not >> > accept liability for any errors or omissions in the content of >> > this message, and shall have no liability for any loss or damage >> > suffered by the user, which arise as a result of e-mail >> transmission. 
--
regards,
punith s
cloudbyte.com

From derekh at redhat.com Thu Dec 18 11:25:40 2014
From: derekh at redhat.com (Derek Higgins)
Date: Thu, 18 Dec 2014 11:25:40 +0000
Subject: [openstack-dev] [TripleO] Bug squashing followup
Message-ID: <5492B9B4.8010606@redhat.com>

While bug squashing yesterday, I went through quite a lot of bugs, closing
those that were already fixed or no longer relevant (around 40 in all). I
eventually ran out of time, but I'm pretty sure that if we split the task up
between us we could weed out a lot more.
What I'd like to do is, as a one-off, randomly split up all the bugs among a
group of volunteers (hopefully a large number of people). Each person gets
assigned X number of bugs and is then responsible just for deciding whether
each is still a relevant bug (or finding somebody who can help decide) and
closing it if necessary. Nothing needs to get fixed here; we just need to make
sure people have an up-to-date list of relevant bugs.

So who wants to volunteer? We probably need about 15+ people for this to be
split into manageable chunks. If you're willing to help out, just add your
name to this list:
https://etherpad.openstack.org/p/tripleo-bug-weeding

If we get enough people I'll follow up by splitting out the load and assigning
it to people.

The bug squashing day yesterday put a big dent in these, but wasn't entirely
focused on weeding out stale bugs: some people probably got caught up fixing
individual bugs, and it wasn't helped by a temporary failure of our CI jobs
(provoked by a pbr update; we were building pbr when we didn't need to be).

thanks,
Derek.

From kuvaja at hp.com Thu Dec 18 11:27:00 2014
From: kuvaja at hp.com (Kuvaja, Erno)
Date: Thu, 18 Dec 2014 11:27:00 +0000
Subject: [openstack-dev] [glance] Option to skip deleting images in use?
In-Reply-To: <0FBF5631AB7B504D89C7E6929695B6249302F2A2@ORD1EXD02.RACKSPACE.CORP>
References: <5490967C.8020801@gmail.com> <5490A5D8.2090804@catalyst.net.nz> <20141216231940.GA61726@HQSML-1081034.cable.comcast.com> <0FBF5631AB7B504D89C7E6929695B6249302F16F@ORD1EXD02.RACKSPACE.CORP>, <0FBF5631AB7B504D89C7E6929695B6249302F2A2@ORD1EXD02.RACKSPACE.CORP>
Message-ID: 

I think that's a horrible idea. How would we do that in a store-independent
way with the linking dependencies? We should not make a universal use case
like this depend on a limited subset of backends, especially non-OpenStack
ones. Neither Glance nor Nova should ever depend on having direct access to
the actual medium where the images are stored. I think this is a schoolbook
example of something called a database.
It is arguable whether this should be tracked in Glance or Nova, but it should
definitely not be a dirty hack expecting specific backend characteristics. As
mentioned before, the protected image property is there to ensure that the
image does not get deleted, and that is also easy to track when the images are
queried. Perhaps the record needs to track the original state of the
protected flag, the image id and a use count: a three-column table and a
couple of API calls. Let's at least not make it any more complicated than it
needs to be if such functionality is desired.

- Erno

From: Nikhil Komawar [mailto:nikhil.komawar at RACKSPACE.COM]
Sent: 17 December 2014 20:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?

Guess that's an implementation detail. Depends on the way you go about using
what's available now, I suppose.

Thanks,
-Nikhil

________________________________
From: Chris St. Pierre [chris.a.st.pierre at gmail.com]
Sent: Wednesday, December 17, 2014 2:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?

I was assuming atomic increment/decrement operations, in which case I'm not
sure I see the race conditions. Or is atomicity assuming too much?

On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar > wrote:
That looks like a decent alternative if it works. However, it would be too
racy unless we implement a test-and-set for such properties, or there is a
different job which queues up these requests and performs them sequentially
for each tenant.

Thanks,
-Nikhil

________________________________
From: Chris St. Pierre [chris.a.st.pierre at gmail.com]
Sent: Wednesday, December 17, 2014 10:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?

That's unfortunately too simple. You run into one of two cases: 1.
If the job automatically removes the protected attribute when an image is no longer in use, then you lose the ability to use "protected" on images that are not in use. I.e., there's no way to say, "nothing is currently using this image, but please keep it around." (This seems particularly useful for snapshots, for instance.) 2. If the job does not automatically remove the protected attribute, then an image would be protected if it had ever been in use; to delete an image, you'd have to manually un-protect it, which is a workflow that quite explicitly defeats the whole purpose of flagging images as protected when they're in use. It seems like flagging an image as *not* in use is actually a fairly difficult problem, since it requires consensus among all components that might be using images. The only solution that readily occurs to me would be to add something like a filesystem link count to images in Glance. Then when Nova spawns an instance, it increments the usage count; when the instance is destroyed, the usage count is decremented. And similarly with other components that use images. An image could only be deleted when its usage count was zero. There are ample opportunities to get out of sync there, but it's at least a sketch of something that might work, and isn't *too* horribly hackish. Thoughts? On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya > wrote: A simple solution that wouldn't require modification of glance would be a cron job that lists images and snapshots and marks them protected while they are in use. Vish On Dec 16, 2014, at 3:19 PM, Collins, Sean > wrote: > On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote: >> No, I'm looking to prevent images that are in use from being deleted. "In >> use" and "protected" are disjoint sets. > > I have seen multiple cases of images (and snapshots) being deleted while > still in use in Nova, which leads to some very, shall we say, > interesting bugs and support problems. 
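The usage-count ("link count") idea sketched above could look roughly like this. Purely illustrative, not Glance code: the class and method names are made up, and the lock stands in for whatever atomic DB-side increment a real implementation would use:

```python
import threading


class ImageRegistry:
    """Toy sketch of reference-counted image deletion: delete succeeds only
    when the usage count is zero. Illustrative names, not Glance's API."""

    def __init__(self):
        self._lock = threading.Lock()  # stand-in for an atomic DB update
        self._usage = {}  # image_id -> current use count

    def add(self, image_id):
        with self._lock:
            self._usage.setdefault(image_id, 0)

    def acquire(self, image_id):
        # a consumer (e.g. Nova spawning an instance) increments the count
        with self._lock:
            self._usage[image_id] += 1

    def release(self, image_id):
        # ...and decrements it when the instance is destroyed
        with self._lock:
            self._usage[image_id] -= 1

    def delete(self, image_id):
        # deletion is refused while any consumer still holds a reference
        with self._lock:
            if self._usage.get(image_id, 0) > 0:
                raise RuntimeError("image in use")
            self._usage.pop(image_id, None)
```

The "out of sync" worry in the thread is exactly the release() call: if a consumer crashes before decrementing, the count leaks and the image becomes undeletable until reconciled.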
> > I do think that we should try and determine a way forward on this, they > are indeed disjoint sets. Setting an image as protected is a proactive > measure, we should try and figure out a way to keep tenants from > shooting themselves in the foot if possible. > > -- > Sean M. Collins > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Chris St. Pierre _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Chris St. Pierre -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkolesni at redhat.com Thu Dec 18 12:06:08 2014 From: mkolesni at redhat.com (Mike Kolesnik) Date: Thu, 18 Dec 2014 07:06:08 -0500 (EST) Subject: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution In-Reply-To: <1480815754.245262.1418883815933.JavaMail.zimbra@redhat.com> References: <1480815754.245262.1418883815933.JavaMail.zimbra@redhat.com> Message-ID: <346648320.392086.1418904368217.JavaMail.zimbra@redhat.com> Hi Neutron community members. I wanted to query the community about a proposal of how to fix HA routers not working with L2Population (bug 1365476[1]). This bug is important to fix especially if we want to have HA routers and DVR routers working together. [1] https://bugs.launchpad.net/neutron/+bug/1365476 What's happening now? * HA routers use distributed ports, i.e. the port with the same IP & MAC details is applied on all nodes where an L3 agent is hosting this router. * Currently, the port details have a binding pointing to an arbitrary node and this is not updated. 
* L2pop takes this "potentially stale" information and uses it to create:
  1. A tunnel to the node.
  2. An FDB entry that directs traffic for that port to that node.
  3. If the ARP responder is on, ARP requests will not traverse the network.
* Problem is, the master router wouldn't necessarily be running on the
  reported agent.
  This means that traffic would not reach the master node but some arbitrary
  node where the router master might be running, but might be in another
  state (standby, fail).

What is proposed?
Basically the idea is not to do L2Pop for HA router ports that reside on the
tenant network.
Instead, we would create a tunnel to each node hosting the HA router so that
the normal learning switch functionality would take care of switching the
traffic to the master router.
This way, no matter where the master router is currently running, the data
plane would know how to forward traffic to it.
This solution requires changes on the controller only.

What's to gain?
* Data plane only solution, independent of the control plane.
* Lowest failover time (same as HA routers today).
* High backport potential:
  * No APIs changed/added.
  * No configuration changes.
  * No DB changes.
  * Changes localized to a single file and limited in scope.

What's the alternative?
An alternative solution would be to have the controller update the port
binding on the single port so that the plain old L2Pop happens and notifies
about the location of the master router.
This basically negates all the benefits of the proposed solution, but is
wider. This solution depends on the report-ha-router-master spec which is
currently in the implementation phase.

It's important to note that these two solutions don't collide and could be
done independently. The one I'm proposing just makes more sense from an HA
viewpoint because of its benefits, which fit the HA methodology of being fast
and having as little outside dependency as possible.
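The proposal could be sketched, very roughly, as a controller-side helper along these lines. Illustrative only, not the actual Neutron l2pop code: the function and argument names are made up, though the flooding-entry shape mirrors the constant l2pop uses:

```python
# MAC/IP pair l2pop uses to mean "flood to this endpoint" (assumed shape)
FLOODING_ENTRY = ('00:00:00:00:00:00', '0.0.0.0')


def fdb_entries_for_port(port, agent_ips, is_ha_port):
    """Return {tunnel_endpoint_ip: [fdb entries]} for one port.

    Sketch of the proposed behaviour: for an HA router port, emit only a
    flooding entry towards every L3 agent hosting the router and no unicast
    MAC/IP entry, so the data-plane learning switch decides which node is
    currently answering as master. Hypothetical helper, not Neutron code.
    """
    if is_ha_port:
        # one flooding-only entry per node hosting the HA router
        return {ip: [FLOODING_ENTRY] for ip in agent_ips}
    # normal port: flood entry plus a unicast entry on the single bound agent
    return {agent_ips[0]: [FLOODING_ENTRY, (port['mac'], port['ip'])]}
```

The key point the sketch captures is that no unicast FDB entry (and hence no ARP responder entry) is ever published for the HA port, so traffic floods to all hosting nodes and only the master answers.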
It could be done as an initial solution which solves the bug for mechanism drivers that support normal learning switch (OVS), and later kept as an optimization to the more general, controller based, solution which will solve the issue for any mechanism driver working with L2Pop (Linux Bridge, possibly others). Would love to hear your thoughts on the subject. Regards, Mike From derekh at redhat.com Thu Dec 18 12:30:57 2014 From: derekh at redhat.com (Derek Higgins) Date: Thu, 18 Dec 2014 12:30:57 +0000 Subject: [openstack-dev] [TripleO] How do the CI clouds work? In-Reply-To: <549294CF.2040307@wedontsleep.org> References: <549294CF.2040307@wedontsleep.org> Message-ID: <5492C901.7070001@redhat.com> On 18/12/14 08:48, Steve Kowalik wrote: > Hai, > > I am finding myself at a loss at explaining how the CI clouds that run > the tripleo jobs work from end-to-end. I am clear that we have a tripleo > deployment running on those racks, with a seed, a HA undercloud and > overcloud, but then I'm left with a number of questions, such as: Yup, this is correct, from a CI point of view all that is relevant is the overcloud and a set of baremetal test env hosts. The seed and undercloud are there because we used tripleo to deploy the thing in the first place. > > How do we run the testenv images on the overcloud? nodepool talks to our overcloud to create an instance where the jenkins jobs run. This "jenkins node" is where we build the images, jenkins doesn't manage and isn't aware of the testenvs hosts. The entry point for jenkins to run tripleo ci is toci_gate_test.sh, at the end of this script you'll see a call to testenv-client[1] testenv-client talks to gearman (an instance on our overcloud, a different gearman instance to what infra have), gearman responds with a json file representing one of the the testenv's that have been registered with it. testenv-client then runs the command "./toci_devtest.sh" and passes in the json file (via $TE_DATAFILE). 
To prevent two CI jobs using the same testenv, the testenv is now "locked"
until toci_devtest exits. The jenkins node now has all the relevant IPs and
MAC addresses to talk to the testenv.

> How do the testenv images interact with the nova-compute machines in
> the overcloud?

The images are built on instances in this cloud. The MAC address of eth1 on
the seed in the testenv has been registered with neutron on the overcloud, so
its IP is known (it's in the json file we got in $TE_DATAFILE). All traffic to
the other instances in the CI testenv is routed through the seed; its eth2
shares an OVS bridge with eth1 of the other VMs in the same testenv.

> Are the machines running the testenv images meant to be long-running,
> or are they recycled after n number of runs?

They are long-running and in theory shouldn't need to be recycled. In practice
they get recycled sometimes for one of two reasons:
1. The image needs to be updated (e.g. to increase the amount of RAM on the
   libvirt domains they host).
2. If one is experiencing a problem, I usually do a "nova rebuild" on it.
   This doesn't happen very frequently; we currently have 15 TE hosts on rh1,
   7 of which have an uptime over 80 days, while the others are new HW that
   was added last week. But problems we have encountered in the past causing
   a rebuild include a TE host losing its IP, or
   https://bugs.launchpad.net/tripleo/+bug/1335926
   https://bugs.launchpad.net/tripleo/+bug/1314709

> Cheers,

No problem. I tried to document this at one stage here[2], but feel free to
add more, point out where it's lacking, or ask questions here and I'll
attempt to answer.

thanks,
Derek.
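The testenv acquire/run/release cycle Derek describes can be modeled very roughly as follows. Illustrative only: the broker interface and function names are made up, not the real tripleo-ci/testenv-client code:

```python
def run_ci_job(testenv_broker, run_test):
    """Toy model of the testenv-client flow: ask the broker (gearman in the
    real setup) for a free test environment, hold its lock while the job
    script runs against the env description ($TE_DATAFILE), and release the
    env on exit, whether the job passed or failed. Names are hypothetical."""
    te_datafile = testenv_broker.acquire()   # blocks until an env is free
    try:
        # stands in for running "./toci_devtest.sh" with TE_DATAFILE set
        return run_test(te_datafile)
    finally:
        # the env is unlocked even if the job script raised/failed
        testenv_broker.release(te_datafile)
```

The try/finally mirrors the important property above: a testenv stays locked for exactly the lifetime of one job and can never be handed to two jobs at once.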
[1] http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/toci_gate_test.sh?id=3d86dd4c885a68eabddb7f73a6dbe6f3e75fde64#n69 [2] http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/docs/TripleO-ci.rst From gkotton at vmware.com Thu Dec 18 12:47:36 2014 From: gkotton at vmware.com (Gary Kotton) Date: Thu, 18 Dec 2014 12:47:36 +0000 Subject: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution Message-ID: On 12/18/14, 2:06 PM, "Mike Kolesnik" wrote: >Hi Neutron community members. > >I wanted to query the community about a proposal of how to fix HA routers >not >working with L2Population (bug 1365476[1]). >This bug is important to fix especially if we want to have HA routers and >DVR >routers working together. > >[1] https://bugs.launchpad.net/neutron/+bug/1365476 > >What's happening now? >* HA routers use distributed ports, i.e. the port with the same IP & MAC > details is applied on all nodes where an L3 agent is hosting this >router. >* Currently, the port details have a binding pointing to an arbitrary node > and this is not updated. >* L2pop takes this "potentially stale" information and uses it to create: > 1. A tunnel to the node. > 2. An FDB entry that directs traffic for that port to that node. > 3. If ARP responder is on, ARP requests will not traverse the network. >* Problem is, the master router wouldn't necessarily be running on the > reported agent. > This means that traffic would not reach the master node but some >arbitrary > node where the router master might be running, but might be in another > state (standby, fail). > >What is proposed? >Basically the idea is not to do L2Pop for HA router ports that reside on >the >tenant network. >Instead, we would create a tunnel to each node hosting the HA router so >that >the normal learning switch functionality would take care of switching the >traffic to the master router. In Neutron we just ensure that the MAC address is unique per network. 
Could a duplicate MAC address cause problems here? >This way no matter where the master router is currently running, the data >plane would know how to forward traffic to it. >This solution requires changes on the controller only. > >What's to gain? >* Data plane only solution, independent of the control plane. >* Lowest failover time (same as HA routers today). >* High backport potential: > * No APIs changed/added. > * No configuration changes. > * No DB changes. > * Changes localized to a single file and limited in scope. > >What's the alternative? >An alternative solution would be to have the controller update the port >binding >on the single port so that the plain old L2Pop happens and notifies about >the >location of the master router. >This basically negates all the benefits of the proposed solution, but is >wider. >This solution depends on the report-ha-router-master spec which is >currently in >the implementation phase. > >It's important to note that these two solutions don't collide and could >be done >independently. The one I'm proposing just makes more sense from an HA >viewpoint >because of it's benefits which fit the HA methodology of being fast & >having as >little outside dependency as possible. >It could be done as an initial solution which solves the bug for mechanism >drivers that support normal learning switch (OVS), and later kept as an >optimization to the more general, controller based, solution which will >solve >the issue for any mechanism driver working with L2Pop (Linux Bridge, >possibly >others). > >Would love to hear your thoughts on the subject. 
> >Regards, >Mike > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From vkramskikh at mirantis.com Thu Dec 18 13:14:18 2014 From: vkramskikh at mirantis.com (Vitaly Kramskikh) Date: Thu, 18 Dec 2014 14:14:18 +0100 Subject: [openstack-dev] [Fuel] Support of warnings in Fuel UI In-Reply-To: References: Message-ID: I also want to add that there is also a short form for this: restrictions: - "settings:common.libvirt_type.value != 'kvm'": "KVM only is supported" There are also a few restrictions in existing openstack.yaml like this: volumes_lvm: label: "Cinder LVM over iSCSI for volumes" restrictions: - "settings:storage.volumes_ceph.value == true or settings:common.libvirt_type.value == 'vcenter'" The restriction above is actually 2 restrictions for 2 unrelated things and it should be separated like this: restrictions: - "settings:storage.volumes_ceph.value == true": "This stuff cannot be used with Ceph" - "settings:common.libvirt_type.value == 'vcenter'": "This stuff cannot be used with vCenter" So please add these messages for your features to improve Fuel UX. 2014-12-18 10:56 GMT+01:00 Julia Aranovich : > > Hi All, > > First of all, I would like to inform you that support of warnings was > added on Settings tab in Fuel UI. > Now you can add 'message' attribute to setting restriction and it will be > displayed as a tooltip on the tab > if restriction > condition is satisfied. > > So, setting restriction should have the following format in openstack.yaml > > file: > > restrictions: > - condition: "settings:common.libvirt_type.value != 'kvm'" > message: "KVM only is supported" > > This format is also eligible for setting group restrictions and > restrictions of setting values (for setting with 'radio' type). 
> > Please also note that message attribute can be also added to role > restrictions and will be displayed as a tooltip on Add Nodes screen. > > > > And the second goal of my letter is to ask you to go through > openstack.yaml > file > and add an appropriate messages for restrictions. It will make Fuel UI more > clear and informative. > > Thank you in advance! > > Julia > > -- > Kind Regards, > Julia Aranovich, > Software Engineer, > Mirantis, Inc > +7 (905) 388-82-61 (cell) > Skype: juliakirnosova > www.mirantis.ru > jaranovich at mirantis.com > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pasquale.porreca at dektech.com.au Thu Dec 18 13:32:32 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Thu, 18 Dec 2014 14:32:32 +0100 Subject: [openstack-dev] [nova] Complexity check and v2 API In-Reply-To: <54929161.8080305@dektech.com.au> References: <5491ACC2.6070207@dektech.com.au> <5491BACE.2010900@dektech.com.au> <54929161.8080305@dektech.com.au> Message-ID: <5492D770.70308@dektech.com.au> I created a bug report and proposed a fix for this issue: https://bugs.launchpad.net/nova/+bug/1403586 @Matthew Gilliard: I added you as reviewer for my patch, since you asked for it. Thanks to anyone that will want to review the bug report and the patch. On 12/18/14 09:33, Pasquale Porreca wrote: > Yes, for v2.1 there is not this problem, moreover v2.1 corresponding > server.py has much lower complexity than v2 one. > > On 12/17/14 20:10, Christopher Yeoh wrote: >> Hi, >> >> Given the timing (no spec approved) it sounds like a v2.1 plus >> microversions (just merging) with no v2 changes at all. 
>> >> The v2.1 framework is more flexible and you should need no changes to >> servers.py at all as there are hooks for adding extra parameters in >> separate plugins. There are examples of this in the v3 directory >> which is really v2.1 now. >> >> Chris >> On Thu, 18 Dec 2014 at 3:49 am, Pasquale Porreca >> > > wrote: >> >> Thank you for the answer. >> >> my API proposal won't be merged in kilo release since the >> deadline for >> approval is tomorrow, so I may propose the fix to lower the >> complexity >> in another way, what do you think about a bug fix? >> >> On 12/17/14 18:05, Matthew Gilliard wrote: >> > Hello Pasquale >> > >> > The problem is that you are trying to add a new if/else >> branch into >> > a method which is already ~250 lines long, and has the highest >> > complexity of any function in the nova codebase. I assume that you >> > didn't contribute much to that complexity, but we've recently >> added a >> > limit to stop it getting any worse. So, regarding your 4 >> suggestions: >> > >> > 1/ As I understand it, v2.1 should be the same as v2 at the >> > moment, so they need to be kept the same >> > 2/ You can't ignore it - it will fail CI >> > 3/ No thank you. This limit should only ever be lowered :-) >> > 4/ This is 'the right way'. Your suggestion for the >> refactor does >> > sound good. >> > >> > I suggest a single patch that refactors and lowers the limit in >> > tox.ini. Once you've done that then you can add the new >> parameter in >> > a following patch. Please feel free to add me to any patches you >> > create. 
>> > >> > Matthew >> > >> > >> > >> > On Wed, Dec 17, 2014 at 4:18 PM, Pasquale Porreca >> > > > wrote: >> >> Hello >> >> >> >> I am working on an API extension that adds a parameter on >> create server >> >> call; to implement the v2 API I added few lines of code to >> >> nova/api/openstack/compute/servers.py >> >> >> >> In particular just adding something like >> >> >> >> new_param = None >> >> if self.ext_mgr.is_loaded('os-new-param'): >> >> new_param = server_dict.get('new_param') >> >> >> >> leads to a pep8 fail with message 'Controller.create' is too >> complex (47) >> >> (Note that in tox.ini the max complexity is fixed to 47 and >> there is a note >> >> specifying 46 is the max complexity present at the moment). >> >> >> >> It is quite easy to make this test pass creating a new method >> just to >> >> execute these lines of code, anyway all other extensions are >> handled in that >> >> way and one of most important stylish rule states to be >> consistent with >> >> surrounding code, so I don't think a separate function is the >> way to go >> >> (unless it implies a change in how all other extensions are >> handled too). >> >> >> >> My thoughts on this situation: >> >> >> >> 1) New extensions should not consider v2 but only v2.1, so >> that file should >> >> not be touched >> >> 2) Ignore this error and go on: if and when the extension will >> be merged the >> >> complexity in tox.ini will be changed too >> >> 3) The complexity in tox.ini should be raised to allow new v2 >> extensions >> >> 4) The code of that module should be refactored to lower the >> complexity >> >> (i.e. move the load of each extension in a separate function) >> >> >> >> I would like to know if any of my point is close to the >> correct solution. 
>> >> >> >> -- >> >> Pasquale Porreca >> >> >> >> DEK Technologies >> >> Via dei Castelli Romani, 22 >> >> 00040 Pomezia (Roma) >> >> >> >> Mobile +39 3394823805 >> >> Skype paskporr >> >> >> >> >> >> _______________________________________________ >> >> OpenStack-dev mailing list >> >> OpenStack-dev at lists.openstack.org >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -- >> Pasquale Porreca >> >> DEK Technologies >> Via dei Castelli Romani, 22 >> 00040 Pomezia (Roma) >> >> Mobile +39 3394823805 >> Skype paskporr >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Pasquale Porreca > > DEK Technologies > Via dei Castelli Romani, 22 > 00040 Pomezia (Roma) > > Mobile +39 3394823805 > Skype paskporr > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at sheep.art.pl Thu Dec 18 13:58:07 2014 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Thu, 18 Dec 2014 14:58:07 +0100 Subject: [openstack-dev] [horizon] static files handling, bower/ Message-ID: <5492DD6F.2020604@sheep.art.pl> Hello, revisiting the package management for the Horizon's static files again, I would like to propose a particular solution. Hopefully it will allow us to both simplify the whole setup, and use the popular tools for the job, without losing too much of benefits of our current process. The changes we would need to make are as follows: * get rid of XStatic entirely; * add to the repository a configuration file for Bower, with all the required bower packages listed and their versions specified; * add to the repository a static_settings.py file, with a single variable defined, STATICFILES_DIRS. That variable would be initialized to a list of pairs mapping filesystem directories to URLs within the /static tree. By default it would only have a single mapping, pointing to where Bower installs all the stuff by default; * add a line "from static_settings import STATICFILES_DIRS" to the settings.py file; * add jobs both to run_tests.sh and any gate scripts, that would run Bower; * add a check on the gate that makes sure that all direct and indirect dependencies of all required Bower packages are listed in its configuration files (pretty much what we have for requirements.txt now); That's all. Now, how that would be used. 1. The developers will just use Bower the way they would normally use it, being able to install and test any of the libraries in any versions they like. The only additional thing is that they would need to add any additional libraries or changed versions to the Bower configuration file before they push their patch for review and merge. 2. 
The packagers can read the list of all required packages from the Bower configuration file, and make sure they have all the required library packages in the required versions. Next, they replace the static_settings.py file with one they have prepared manually or automatically. The file lists the locations of all the library directories, and, in the case when the directory structure differs from what Bower provides, even mappings between subdirectories and individual files. 3. Security patches need to go into the Bower packages directly, which is good for the whole community. 4. If we ever need a library that is not packaged for Bower, we will package it just as we had with the XStatic packages, only for Bower, which has a much larger user base and more chance of other projects also using that package and helping with its testing. What do you think? Do you see any disastrous problems with this system? -- Radomir Dopieralski From Yuriy.Babenko at telekom.de Thu Dec 18 14:05:03 2014 From: Yuriy.Babenko at telekom.de (Yuriy.Babenko at telekom.de) Date: Thu, 18 Dec 2014 15:05:03 +0100 Subject: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework In-Reply-To: <891761EAFA335D44AD1FFDB9B4A8C063D8F9EE@G4W3216.americas.hpqcorp.net> References: <891761EAFA335D44AD1FFDB9B4A8C063D8F9EE@G4W3216.americas.hpqcorp.net> Message-ID: <45286C18B80CE54EAE4BDD29C033604E7FF6C96C5E@HE111647.EMEA1.CDS.T-INTERNAL.COM> Hi, in the IRC meeting yesterday we agreed to work on the use-case for service function chaining as it seems to be important for a lot of participants [1]. We will prepare the first draft and share it in the TelcoWG Wiki for discussion. 
There is one blueprint in openstack on that in [2] [1] http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt [2] https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining Kind regards/Mit freundlichen Grüßen Yuriy Babenko From: A, Keshava [mailto:keshava.a at hp.com] Sent: Wednesday, December 10, 2014 19:06 To: stephen.kf.wong at gmail.com; OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Hi Murali, There are many unknowns w.r.t. "Service-VM" and how it should be from an NFV perspective. In my opinion it was not decided how the Service-VM framework should be. Depending on this, we at OpenStack will also have an impact on "Service Chaining". Please find attached the mail w.r.t. that discussion with NFV on "Service-VM + Openstack OVS related discussion". Regards, keshava From: Stephen Wong [mailto:stephen.kf.wong at gmail.com] Sent: Wednesday, December 10, 2014 10:03 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Hi Murali, There is already a ServiceVM project (Tacker), currently under development on stackforge: https://wiki.openstack.org/wiki/ServiceVM If you are interested in this topic, please take a look at the wiki page above and see if the project's goals align with yours. If so, you are certainly welcome to join the IRC meeting and start to contribute to the project's direction and design. Thanks, - Stephen On Wed, Dec 10, 2014 at 7:01 AM, Murali B > wrote: Hi keshava, We would like to contribute towards service chaining and NFV. Could you please share the document if you have any related to service VMs? The service chain can be achieved if we are able to redirect the traffic to the service VM using ovs-flows; in this case we do not need to have routing enabled on the service VM (traffic is redirected at L2). 
All the tenant VM's in cloud could use this service VM services by adding the ovs-rules in OVS Thanks -Murali _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbelamaric at infoblox.com Thu Dec 18 14:23:06 2014 From: jbelamaric at infoblox.com (John Belamaric) Date: Thu, 18 Dec 2014 14:23:06 +0000 Subject: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled In-Reply-To: <106528797.491287.1418857575930.JavaMail.yahoo@jws10687.mail.bf1.yahoo.com> References: <106528797.491287.1418857575930.JavaMail.yahoo@jws10687.mail.bf1.yahoo.com> Message-ID: Hi Paddu, Take a look at what we are working on in Kilo [1] for external IPAM. While this does not address DHCP specifically, it does allow you to use an external source to allocate the IP that OpenStack uses, which may solve your problem. Another solution to your question is to invert the logic - you need to take the IP allocated by OpenStack and program the DHCP server to provide a fixed IP for that MAC. You may be interested in looking at this Etherpad [2] that Don Kehn put together gathering all the various DHCP blueprints and related info, and also at this BP [3] for including a DHCP relay so we can utilize external DHCP more easily. 
[1] https://blueprints.launchpad.net/neutron/+spec/neutron-ipam [2] https://etherpad.openstack.org/p/neutron-dhcp-org [3] https://blueprints.launchpad.net/neutron/+spec/dhcp-relay John From: Padmanabhan Krishnan > Reply-To: Padmanabhan Krishnan >, "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, December 17, 2014 at 6:06 PM To: "openstack-dev at lists.openstack.org" > Subject: Re: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled This means whatever tools the operators are using, it need to make sure the IP address assigned inside the VM matches with Openstack has assigned to the port. Bringing the question that i had in another thread on the same topic: If one wants to use the provider DHCP server and not have Openstack's DHCP or L3 agent/DVR, it may not be possible to do so even with DHCP disabled in Openstack network. Even if the provider DHCP server is configured with the same start/end range in the same subnet, there's no guarantee that it will match with Openstack assigned IP address for bulk VM launches or when there's a failure case. So, how does one deploy external DHCP with Openstack? If Openstack hasn't assigned a IP address when DHCP is disabled for a network, can't port_update be done with the provider DHCP specified IP address to put the anti-spoofing and security rules? With Openstack assigned IP address, port_update cannot be done since IP address aren't in sync and can overlap. Thanks, Paddu On 12/16/14 4:30 AM, "Pasquale Porreca" > wrote: >I understood and I agree that assigning the ip address to the port is >not a bug, however showing it to the user, at least in Horizon dashboard >where it pops up in the main instance screen without a specific search, >can be very confusing. > >On 12/16/14 12:25, Salvatore Orlando wrote: >> In Neutron IP address management and distribution are separated >>concepts. >> IP addresses are assigned to ports even when DHCP is disabled. 
That IP >> address is indeed used to configure anti-spoofing rules and security >>groups. >> >> It is however understandable that one wonders why an IP address is >>assigned >> to a port if there is no DHCP server to communicate that address. >>Operators >> might decide to use different tools to ensure the IP address is then >> assigned to the instance's ports. On XenServer for instance one could >>use a >> guest agent reading network configuration from XenStore; as another >> example, older versions of Openstack used to inject network >>configuration >> into the instance file system; I reckon that today's configdrive might >>also >> be used to configure instance's networking. >> >> Summarising I don't think this is a bug. Nevertheless if you have any >>idea >> regarding improvements on the API UX feel free to file a bug report. >> >> Salvatore >> >> On 16 December 2014 at 10:41, Pasquale Porreca < >> pasquale.porreca at dektech.com.au> wrote: >>> >>> Is there a specific reason for which a fixed ip is bound to a port on a >>> subnet where dhcp is disabled? it is confusing to have this info shown >>> when the instance doesn't have actually an ip on that port. >>> Should I fill a bug report, or is this a wanted behavior? 
>>> >>> -- >>> Pasquale Porreca >>> >>> DEK Technologies >>> Via dei Castelli Romani, 22 >>> 00040 Pomezia (Roma) >>> >>> Mobile +39 3394823805 >>> Skype paskporr >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >-- >Pasquale Porreca > >DEK Technologies >Via dei Castelli Romani, 22 >00040 Pomezia (Roma) > >Mobile +39 3394823805 >Skype paskporr > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mestery at mestery.com Thu Dec 18 14:35:02 2014 From: mestery at mestery.com (Kyle Mestery) Date: Thu, 18 Dec 2014 08:35:02 -0600 Subject: [openstack-dev] [third party][neutron] - OpenDaylight CI failing for past 6 days In-Reply-To: <895258305.824569.1418899469666.JavaMail.zimbra@enovance.com> References: <895258305.824569.1418899469666.JavaMail.zimbra@enovance.com> Message-ID: On Thu, Dec 18, 2014 at 4:44 AM, Anil Venkata wrote: > > Hi All > > Last successful build on OpenDaylight CI( > https://jenkins.opendaylight.org/ovsdb/job/openstack-gerrit/ ) was 6 days > back. > After that, OpenDaylight CI Jenkins job is failing for all the patches. > > Can we remove the voting rights for the OpenDaylight CI until it is fixed? > > I am working to disable this now. The OpenDaylight team has been working to get this running but I think they need a few more days. 
Thanks, Kyle > Thanks > Anil.Venakata > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstackgerrit at gmail.com Thu Dec 18 14:49:31 2014 From: openstackgerrit at gmail.com (Michael) Date: Thu, 18 Dec 2014 20:19:31 +0530 Subject: [openstack-dev] [All] Who needs a pair of hands to write tons of python code Message-ID: Hi all, I am looking to write tons of code in python and looking for guidance. There are a lot of projects in openstack but it is hard to choose one. It also becomes harder when some of the components are aiming to become more stable instead of adding new feature. Regards, Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From xyzjerry at gmail.com Thu Dec 18 14:50:06 2014 From: xyzjerry at gmail.com (Jerry Zhao) Date: Thu, 18 Dec 2014 06:50:06 -0800 Subject: [openstack-dev] [Neutron] vm can't get ipv6 address in ra mode:slaac + address mode: slaac In-Reply-To: <5492CDB1.3080004@gmail.com> References: <5492CDB1.3080004@gmail.com> Message-ID: <5492E99E.1090307@gmail.com> It seems that radvd was not spawned successfully in l3-agent log: Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: Stderr: '/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 radvd -C /var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf -p /var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd (no filter matched)\n' Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Traceback (most recent call last): Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent 
File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/utils.py", line 341, in call Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent return func(*args, **kwargs) Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py", line 902, in process_router Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent self.root_helper) Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py", line 111, in enable_ipv6_ra Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent _spawn_radvd(router_id, radvd_conf, router_ns, root_helper) Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py", line 95, in _spawn_radvd Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent radvd.enable(callback, True) Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/external_process.py", line 77, in enable Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent ip_wrapper.netns.execute(cmd, addl_env=self.cmd_addl_env) Dec 18 11:23:34 
ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 554, in execute Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent check_exit_code=check_exit_code, extra_ok_codes=extra_ok_codes) Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 82, in execute Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent raise RuntimeError(m) Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent RuntimeError: Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7', 'radvd', '-C', '/var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf', '-p', '/var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd'] Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Exit code: 99 Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Stdout: '' Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Stderr: '/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 radvd -C 
/var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf -p /var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd (no filter matched)\n' On 12/18/2014 04:50 AM, Jerry Zhao wrote: > Hi > I have configured a provider flat network with ipv6 subnet in ra mode > slaac and address mode slaac. However, when i launched a ubuntu trusty > VM, it couldn't get the ipv6 address but ipv4 only. I am running the > trunk code BTW. The command used are: > > neutron net-create --provider:network_type=flat > --provider:physical_network=datacentre --router:external=true > provider-net > neutron subnet-create --ip-version=6 --name=ipv6 > --ipv6-address-mode=slaac --ipv6-ra-mode=slaac provider-net > 2001:470:1f0e:cb4::0/64 --allocation-pool > start=2001:470:1f0e:cb4::20,end=2001:470:1f0e:cb4::fffe --gateway > 2001:470:1f0e:cb4::3 > neutron subnet-create --ip-version=4 --name=ipv4 provider-net > 162.3.122.0/24 --allocation-pool start=162.3.122.4,end=162.3.122.253 > neutron router-interface-add default-router ipv6 > neutron router-interface-add default-router ipv4 > > The vm is reachable when i configured the ipv6 address calculated by > neutron manually on the nic. > How can i get the auto configuration to work on the VM? > Thanks! From boris at pavlovic.me Thu Dec 18 14:59:24 2014 From: boris at pavlovic.me (Boris Pavlovic) Date: Thu, 18 Dec 2014 18:59:24 +0400 Subject: [openstack-dev] [All] Who needs a pair of hands to write tons of python code In-Reply-To: References: Message-ID: Michael, Rally project (https://github.com/stackforge/rally) needs hands! We have a billions of interesting, simple and complex tasks. Please join us at #openstack-rally IRC chat Thanks! Best regards, Boris Pavlovic On Thu, Dec 18, 2014 at 6:49 PM, Michael wrote: > > Hi all, > > I am looking to write tons of code in python and looking for guidance. > There are a lot of projects in openstack but it is hard to choose one. 
It > also becomes harder when some of the components are aiming to become more > stable instead of adding new feature. > > Regards, > Michael > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xyzjerry at gmail.com Thu Dec 18 15:02:19 2014 From: xyzjerry at gmail.com (Jerry Zhao) Date: Thu, 18 Dec 2014 07:02:19 -0800 Subject: [openstack-dev] [Neutron] vm can't get ipv6 address in ra mode:slaac + address mode: slaac In-Reply-To: <5492E99E.1090307@gmail.com> References: <5492CDB1.3080004@gmail.com> <5492E99E.1090307@gmail.com> Message-ID: <5492EC7B.5030001@gmail.com> I couldn't see anything wrong. in my l3.filters: [Filters] # arping arping: CommandFilter, arping, root # l3_agent sysctl: CommandFilter, sysctl, root route: CommandFilter, route, root radvd: CommandFilter, radvd, root # metadata proxy metadata_proxy: CommandFilter, neutron-ns-metadata-proxy, root # If installed from source (say, by devstack), the prefix will be # /usr/local instead of /usr/bin. 
metadata_proxy_local: CommandFilter, /usr/local/bin/neutron-ns-metadata-proxy, root # RHEL invocation of the metadata proxy will report /opt/stack/venvs/openstack/bin/python kill_metadata: KillFilter, root, python, -9 kill_metadata7: KillFilter, root, python2.7, -9 kill_radvd_usr: KillFilter, root, /usr/sbin/radvd, -9, -HUP kill_radvd: KillFilter, root, /sbin/radvd, -9, -HUP # ip_lib ip: IpFilter, ip, root ip_exec: IpNetnsExecFilter, ip, root On 12/18/2014 06:50 AM, Jerry Zhao wrote: > It seems that radvd was not spawned successfully > in l3-agent log: > > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: Stderr: '/usr/bin/neutron-rootwrap: Unauthorized > command: ip netns exec qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 > radvd -C > /var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf -p > /var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd > (no filter matched)\n' > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent Traceback (most recent call last): > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent File > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/utils.py", > line 341, in call > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent return func(*args, **kwargs) > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent File > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py", > line 902, in process_router > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent self.root_helper) > Dec 18 11:23:34 
ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent File > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py", > line 111, in enable_ipv6_ra > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent _spawn_radvd(router_id, radvd_conf, > router_ns, root_helper) > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent File > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py", > line 95, in _spawn_radvd > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent radvd.enable(callback, True) > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent File > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/external_process.py", > line 77, in enable > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent ip_wrapper.netns.execute(cmd, > addl_env=self.cmd_addl_env) > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent File > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", > line 554, in execute > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent check_exit_code=check_exit_code, > extra_ok_codes=extra_ok_codes) > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent File > 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py", > line 82, in execute > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent raise RuntimeError(m) > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent RuntimeError: > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent Command: ['sudo', '/usr/bin/neutron-rootwrap', > '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', > 'qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7', 'radvd', '-C', > '/var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf', > '-p', > '/var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd'] > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent Exit code: 99 > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent Stdout: '' > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > neutron.agent.l3_agent Stderr: '/usr/bin/neutron-rootwrap: > Unauthorized command: ip netns exec > qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 radvd -C > /var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf -p > /var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd > (no filter matched)\n' > > > On 12/18/2014 04:50 AM, Jerry Zhao wrote: >> Hi >> I have configured a provider flat network with ipv6 subnet in ra mode >> slaac and address mode slaac. However, when i launched a ubuntu >> trusty VM, it couldn't get the ipv6 address but ipv4 only. I am >> running the trunk code BTW. 
The command used are: >> >> neutron net-create --provider:network_type=flat >> --provider:physical_network=datacentre --router:external=true >> provider-net >> neutron subnet-create --ip-version=6 --name=ipv6 >> --ipv6-address-mode=slaac --ipv6-ra-mode=slaac provider-net >> 2001:470:1f0e:cb4::0/64 --allocation-pool >> start=2001:470:1f0e:cb4::20,end=2001:470:1f0e:cb4::fffe --gateway >> 2001:470:1f0e:cb4::3 >> neutron subnet-create --ip-version=4 --name=ipv4 provider-net >> 162.3.122.0/24 --allocation-pool start=162.3.122.4,end=162.3.122.253 >> neutron router-interface-add default-router ipv6 >> neutron router-interface-add default-router ipv4 >> >> The vm is reachable when i configured the ipv6 address calculated by >> neutron manually on the nic. >> How can i get the auto configuration to work on the VM? >> Thanks! > From ihrachys at redhat.com Thu Dec 18 15:03:23 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 18 Dec 2014 16:03:23 +0100 Subject: [openstack-dev] [Neutron] vm can't get ipv6 address in ra mode:slaac + address mode: slaac In-Reply-To: <5492E99E.1090307@gmail.com> References: <5492CDB1.3080004@gmail.com> <5492E99E.1090307@gmail.com> Message-ID: <5492ECBB.1000500@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 I suspect that's some Red Hat distro, and radvd lacks SELinux context set to allow neutron l3 agent to spawn it. 
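For the 'no filter matched' part of the error specifically: the l3 rootwrap filters excerpt at the top of this thread has KillFilter entries for radvd but no filter that allows spawning it, so rootwrap rejects the inner command of 'ip netns exec ... radvd'. A sketch of the kind of entry that is needed (the file path and exact spelling are assumptions; check your distro's packaging of the neutron filters):

```ini
# Assumed to live in /etc/neutron/rootwrap.d/l3.filters under [Filters].
# The ip_exec IpNetnsExecFilter already matches the "ip netns exec" part;
# this entry lets the wrapped radvd command itself through.
radvd: CommandFilter, radvd, root
```

On distros with SELinux enforcing, the point above still applies on top of this: the radvd binary also needs a context that the l3 agent is allowed to transition to.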
On 18/12/14 15:50, Jerry Zhao wrote: > It seems that radvd was not spawned successfully in l3-agent log: > > [same traceback as quoted in full above: neutron.agent.l3_agent raises > RuntimeError, Exit code: 99, Stderr: '/usr/bin/neutron-rootwrap: > Unauthorized command: ip netns exec > qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 radvd ... (no filter > matched)'] > > On 12/18/2014 04:50 AM, Jerry Zhao wrote: >> Hi I have configured a provider flat network with ipv6 subnet in >> ra mode slaac and address mode slaac. However, when i launched a >> ubuntu trusty VM, it couldn't get the ipv6 address but ipv4 only. >> I am running the trunk code BTW. 
The command used are: >> >> neutron net-create --provider:network_type=flat >> --provider:physical_network=datacentre --router:external=true >> provider-net neutron subnet-create --ip-version=6 --name=ipv6 >> --ipv6-address-mode=slaac --ipv6-ra-mode=slaac provider-net >> 2001:470:1f0e:cb4::0/64 --allocation-pool >> start=2001:470:1f0e:cb4::20,end=2001:470:1f0e:cb4::fffe >> --gateway 2001:470:1f0e:cb4::3 neutron subnet-create >> --ip-version=4 --name=ipv4 provider-net 162.3.122.0/24 >> --allocation-pool start=162.3.122.4,end=162.3.122.253 neutron >> router-interface-add default-router ipv6 neutron >> router-interface-add default-router ipv4 >> >> The vm is reachable when i configured the ipv6 address calculated >> by neutron manually on the nic. How can i get the auto >> configuration to work on the VM? Thanks! > > > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUkuy7AAoJEC5aWaUY1u57RLwIAKayW3wgCoyw4Qh06jRoK8Bx 7qBCbTKiyi2DdjiYXEyDMZc3wnm7j1pvpikaByNCOA2ybXj8uFfnQiwsoFYRTxPD PLwvYsm+Afv3Bwaz7FSj1LKA8NmxNaz0ZxqBai/6aC17HjJyNfRxxCt2ZUG+WeP/ Yj9/0jUIoOVwOGspTcAXPQ1eaFHbs2nH0afD6aX7s4/g2i7vnQgJOOLrgRuetInN oR/DtZ81XJFyN3q1hl6Pv5k6TO0sTbeECV1OwOjQ2wJwCCarTAZJbW1s7fF8LCFm 0m04XGuZuWxNeSDYoamdF7a21bml1DvWJ5XHHvnblewZrK+01TUmMqAOW6KAWDo= =//1f -----END PGP SIGNATURE----- From fungi at yuggoth.org Thu Dec 18 15:14:13 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 18 Dec 2014 15:14:13 +0000 Subject: [openstack-dev] [All] Who needs a pair of hands to write tons of python code In-Reply-To: References: Message-ID: <20141218151413.GG2497@yuggoth.org> On 2014-12-18 20:19:31 +0530 (+0530), Michael wrote: > I am looking to write tons of code in python and looking for > guidance. There are a lot of projects in openstack but it is hard > to choose one. 
It also becomes harder when some of the components > are aiming to become more stable instead of adding new feature. As you've noticed, OpenStack already has "tons of code in Python" and what we really need is help refining/fixing it rather than piling more on top of it. Where most projects could _actually_ benefit from help is to investigate open bugs and review proposed changes. Also here's a link to our Developer's Guide to get you started: http://docs.openstack.org/infra/manual/developers.html Hope that helps, and welcome aboard! -- Jeremy Stanley From krotscheck at gmail.com Thu Dec 18 15:30:33 2014 From: krotscheck at gmail.com (Michael Krotscheck) Date: Thu, 18 Dec 2014 15:30:33 +0000 Subject: [openstack-dev] [All] Who needs a pair of hands to write tons of python code References: Message-ID: StoryBoard is always looking for help, and we've got a nice roadmap that you can pull a feature from if you're so inclined: https://wiki.openstack.org/wiki/StoryBoard/Roadmap Come hang out on #storyboard and #openstack-infra :) Michael On Thu Dec 18 2014 at 6:52:28 AM Michael wrote: > Hi all, > > I am looking to write tons of code in python and looking for guidance. > There are a lot of projects in openstack but it is hard to choose one. It > also becomes harder when some of the components are aiming to become more > stable instead of adding new feature. > > Regards, > Michael > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From keshava.a at hp.com Thu Dec 18 15:38:46 2014 From: keshava.a at hp.com (A, Keshava) Date: Thu, 18 Dec 2014 15:38:46 +0000 Subject: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework In-Reply-To: <45286C18B80CE54EAE4BDD29C033604E7FF6C96C5E@HE111647.EMEA1.CDS.T-INTERNAL.COM> References: <891761EAFA335D44AD1FFDB9B4A8C063D8F9EE@G4W3216.americas.hpqcorp.net> <45286C18B80CE54EAE4BDD29C033604E7FF6C96C5E@HE111647.EMEA1.CDS.T-INTERNAL.COM> Message-ID: <891761EAFA335D44AD1FFDB9B4A8C063DBA053@G9W0733.americas.hpqcorp.net> Hi Yuriy Babenko, I am a little worried about the direction we need to take on Service Chaining. Will OpenStack focus on Service Chaining of its own internal features (like FWaaS, LBaaS, VPNaaS, L2 Gateway aaS, ...), or will it also consider Service Chaining of "Service-VM"s? A. If we are considering "Service-VM" service chaining, I have the points below to mention: 1. Does OpenStack need to worry about Service-VM capability? 2. Does OpenStack care whether a Service-VM also has OVS or not? 3. Does OpenStack care whether a Service-VM has its own routing instance running in it, which can reconfigure the OVS flows? 4. Can a Service-VM configure the OpenStack infrastructure OVS? 5. Can a Service-VM have multiple features in it (for example DPI + FW + NAT)? Is a Service-VM equal to a vNFVC? B. If we are thinking of service chaining of "OpenStack-only Services", then I have the points below. For a tenant: 1. Can Services be bound to a particular compute node (CN)? 2. A tenant may not want to run/enable all the Services on all CNs. A tenant may want to run FWaaS and VPNaaS on different CNs so that the tenant gets better infrastructure performance. Are we then considering chaining of Services per tenant? 3. If so, how do we control this? (Please consider that a tenant's VMs can get migrated to different CNs.) Let me know others' opinions. 
keshava From: Yuriy.Babenko at telekom.de [mailto:Yuriy.Babenko at telekom.de] Sent: Thursday, December 18, 2014 7:35 PM To: openstack-dev at lists.openstack.org; stephen.kf.wong at gmail.com Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Hi, in the IRC meeting yesterday we agreed to work on the use case for service function chaining, as it seems to be important for a lot of participants [1]. We will prepare the first draft and share it in the TelcoWG wiki for discussion. There is one blueprint in OpenStack on that in [2]. [1] http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt [2] https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining Kind regards/Mit freundlichen Grüßen Yuriy Babenko From: A, Keshava [mailto:keshava.a at hp.com] Sent: Wednesday, 10 December 2014 19:06 To: stephen.kf.wong at gmail.com; OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Hi Murali, There are many unknowns w.r.t. "Service-VM" and how it should look from an NFV perspective. In my opinion it has not been decided what the Service-VM framework should look like. Depending on this, we at OpenStack will also see an impact on "Service Chaining". Please find attached the mail with the NFV discussion on "Service-VM + OpenStack OVS". Regards, keshava From: Stephen Wong [mailto:stephen.kf.wong at gmail.com] Sent: Wednesday, December 10, 2014 10:03 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Hi Murali, There is already a ServiceVM project (Tacker), currently under development on stackforge: https://wiki.openstack.org/wiki/ServiceVM If you are interested in this topic, please take a look at the wiki page above and see if the project's goals align with yours. 
If so, you are certainly welcome to join the IRC meeting and start to contribute to the project's direction and design. Thanks, - Stephen On Wed, Dec 10, 2014 at 7:01 AM, Murali B > wrote: Hi keshava, We would like to contribute towards service chaining and NFV. Could you please share any documents you have related to service VMs? The service chain can be achieved if we are able to redirect the traffic to the service VM using OVS flows; in this case we do not need to have routing enabled on the service VM (traffic is redirected at L2). All the tenant VMs in the cloud could use this service VM's services by adding the OVS rules in OVS. Thanks -Murali _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From chhagarw at in.ibm.com Thu Dec 18 15:54:48 2014 From: chhagarw at in.ibm.com (Chhavi Agarwal) Date: Thu, 18 Dec 2014 21:24:48 +0530 Subject: [openstack-dev] [cinder] Multiple Backend for different volume_types Message-ID: Hi All, As per the link below on multi-backend support: https://wiki.openstack.org/wiki/Cinder-multi-backend it is mentioned that we currently only support passing a multi-backend provider per volume_type: "There can be > 1 backend per volume_type, and the capacity scheduler kicks in and keeps the backends of a particular volume_type" Is there a way to support a multi-backend provider across different volume_types? For example, I want my volume_type to have both the SVC and LVM drivers passed as my backend providers. Thanks & Regards, Chhavi Agarwal Cloud System Software Group. 
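For what it's worth, the usual way (per the Cinder-multi-backend wiki page linked above) to get two different drivers behind one volume_type is to give both backend sections the same volume_backend_name and point the type's extra spec at it. A hedged sketch (section names are made up, and the driver class paths are assumptions from the Juno-era tree, so verify them against your installed cinder):

```ini
# cinder.conf (sketch, not a real deployment)
enabled_backends = lvm-1,svc-1

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = MIXED

[svc-1]
volume_driver = cinder.volume.drivers.ibm.storwize_svc.StorwizeSVCDriver
volume_backend_name = MIXED
```

Then `cinder type-key mytype set volume_backend_name=MIXED` would let the scheduler place volumes of that type on either backend, while distinct volume_types would each point at their own volume_backend_name.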
From unmesh.gurjar at hp.com Thu Dec 18 15:55:53 2014 From: unmesh.gurjar at hp.com (Gurjar, Unmesh) Date: Thu, 18 Dec 2014 15:55:53 +0000 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <54923806.6040405@redhat.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> <548B8480.9010506@redhat.com> <4641310AFBEE10419D0A020273367C140CA3A305@G1W3645.americas.hpqcorp.net> <548FB135.30209@redhat.com> <54923806.6040405@redhat.com> Message-ID: > -----Original Message----- > From: Zane Bitter [mailto:zbitter at redhat.com] > Sent: Thursday, December 18, 2014 7:42 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > showdown > > On 17/12/14 13:05, Gurjar, Unmesh wrote: > >> I'm storing a tuple of its name and database ID. The data structure > >> is resource.GraphKey. I was originally using the name for something, > >> but I suspect I could probably drop it now and just store the > >> database ID, but I haven't tried it yet. (Having the name in there > >> definitely makes debugging more pleasant though ;) > >> > > > > I agree, having name might come in handy while debugging! > > > >> When I build the traversal graph each node is a tuple of the GraphKey > >> and a boolean to indicate whether it corresponds to an update or a > >> cleanup operation (both can appear for a single resource in the same > graph). > > > > Just to confirm my understanding, cleanup operation takes care of both: > > 1. 
resources which are deleted as a part of update and 2. previous > > versioned resource which was updated by replacing with a new resource > > (UpdateReplace scenario) > > Yes, correct. Also: > > 3. resource versions which failed to delete for whatever reason on a previous > traversal > > > Also, the cleanup operation is performed after the update completes > successfully. > > NO! They are not separate things! > > https://github.com/openstack/heat/blob/stable/juno/heat/engine/update. > py#L177-L198 > > >>> If I am correct, you are updating all resources on update regardless > >>> of their change which will be inefficient if stack contains a million > resource. > >> > >> I'm calling update() on all resources regardless of change, but > >> update() will only call handle_update() if something has changed > >> (unless the plugin has overridden Resource._needs_update()). > >> > >> There's no way to know whether a resource needs to be updated before > >> you're ready to update it, so I don't think of this as 'inefficient', just > 'correct'. > >> > >>> We have similar questions regarding other areas in your > >>> implementation, which we believe if we understand the outline of > >>> your implementation. It is difficult to get a hold on your approach > >>> just by looking > >> at code. Docs strings / Etherpad will help. > >>> > >>> > >>> About streams, Yes in a million resource stack, the data will be > >>> huge, but > >> less than template. > >> > >> No way, it's O(n^3) (cubed!) in the worst case to store streams for > >> each resource. > >> > >>> Also this stream is stored > >>> only In IN_PROGRESS resources. > >> > >> Now I'm really confused. Where does it come from if the resource > >> doesn't get it until it's already in progress? And how will that information > help it? > >> > > > > When an operation on stack is initiated, the stream will be identified. 
> OK, this may be one of the things I was getting confused about - I thought a > 'stream' belonged to one particular resource and just contained all of the > paths to reaching that resource. But here it seems like you're saying that a > 'stream' is a representation of the entire graph? > So it's essentially just a gratuitously bloated NIH serialisation of the > Dependencies graph? > > > To begin > > the operation, the action is initiated on the leaf (or root) > > resource(s) and the stream is stored (only) in this/these IN_PROGRESS > resource(s). > > How does that work? Does it get deleted again when the resource moves to > COMPLETE? > Yes, IMO, upon resource completion, the stream can be deleted. I do not foresee any situation wherein storing the stream is required, since when another operation is initiated on the stack, that template should be parsed and the new stream should be identified and used. > > The stream should then keep getting passed to the next/previous level > > of resource(s) as and when the dependencies for the next/previous level > of resource(s) are met. > > That sounds... identical to the way it's implemented in my prototype (passing > a serialisation of the graph down through the notification triggers), except for > the part about storing it in the Resource table. > Why would we persist to the database data that we only need for the > duration that we already have it in memory anyway? > Earlier we thought of passing it along while initiating the next level of resource(s). However, for a million-resource stack, it will be quite large and passing it around will be inefficient. So, we intended to have it stored in the database. Also, it can be used for resuming a stack operation when the processing engine goes down and another engine has to resume that stack operation. > If we're going to persist it we should do so once, in the Stack table, at the > time that we're preparing to start the traversal. 
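To make the single-graph traversal being argued for here concrete, a toy sketch (not Heat code; all names are invented) using the (resource key, forward) node encoding discussed in this thread, where forward=True is an update node and forward=False a cleanup node, so an UpdateReplace and the cleanup of the replaced resource are ordered by one ordinary topological sort:

```python
from collections import defaultdict

def traversal_order(edges):
    """Kahn's algorithm; nodes are (resource_key, forward) tuples."""
    indeg = defaultdict(int)
    succ = defaultdict(list)
    nodes = set()
    for pre, post in edges:
        nodes.update((pre, post))
        succ[pre].append(post)
        indeg[post] += 1
    ready = [n for n in nodes if indeg[n] == 0]
    order = []
    while ready:
        node = ready.pop()
        order.append(node)
        for nxt in succ[node]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return order

# UpdateReplace of server A: the new A (forward=True) depends on the
# updated net, and cleanup of the old A (forward=False) depends on the
# new A existing -- all in the same graph, no separate cleanup phase.
edges = [
    (('net', True), ('server_A_new', True)),
    (('server_A_new', True), ('server_A_old', False)),
]
print(traversal_order(edges))
# -> [('net', True), ('server_A_new', True), ('server_A_old', False)]
```

The same key appearing with both directions is what lets cleanups start as soon as nothing depends on them, rather than waiting for a second phase.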
> >>> The reason to have the entire dependency list is to reduce DB queries > >>> during a > >> stack update. > >> > >> But we never need to know that. We only need to know what just > >> happened and what to do next. > >> > > > > As mentioned earlier, each level of resources in a graph passes on the > > dependency list/stream to the next/previous level of resources. This > > information should further be used to determine what is to be done next > and drive the operation to completion. > > Why would we store *and* forward? > Sorry for causing the confusion. By pass on, I meant setting/storing the stream in the database for the next level of resources. > >>> When you have a singular dependency on each resource, similar to > >>> your implementation, then we will end up loading Dependencies one at a > >> time and > >>> altering almost all resources' dependencies regardless of their change. > >>> > >>> Regarding a 2-template approach for delete, it is not actually 2 > >>> different templates. It's just that we have a delete stream to be > >>> taken up > >> post-update. > >> > >> That would be a regression from Heat's current behaviour, where we > >> start cleaning up resources as soon as they have nothing depending on > them. > >> There's not even a reason to make it worse than what we already have, > >> because it's actually a lot _easier_ to treat update and clean up as > >> the same kind of operation and throw both into the same big graph. > >> The dual implementations and all of the edge cases go away and you > >> can just trust in the graph traversal to do the Right Thing in the most > parallel way possible. > >> > >>> (Any post operation will be handled as an update.) This approach is > >>> true when Rollback==True; we can always fall back to the regular stream > >>> (non-delete stream) if Rollback==False. > >> > >> I don't understand what you're saying here. 
> > > > Just to elaborate, in case of update with rollback, there will be 2 > > streams of > > operations: > > There really should not be. > > > 1. first is the create and update resource stream 2. second is the > > stream for deleting resources (which will be taken if the first stream > > completes successfully). > > We don't want to break it up into discrete steps. We want to treat it as one > single graph traversal - provided we set up the dependencies correctly then > the most optimal behaviour just falls out of our graph traversal algorithm for > free. > Yes, I agree. > In the existing Heat code I linked above, we use actual > heat.engine.resource.Resource objects as nodes in the dependency graph > and rely on figuring out whether they came from the old or new stack to > differentiate them. However, it's not possible (or desirable) to serialise a > graph containing those objects and send it to another process, so in my > convergence prototype I use (key, direction) tuples as the nodes so that the > same key may appear twice in the graph with different 'directions' > (forward=True for update, =False for cleanup - note that the direction is with > respect to the template... as far as the graph is concerned it's a single > traversal going in one direction). > > Artificially dividing things into separate update and cleanup phases is both > more complicated code to maintain and a major step backwards for our > users. > > I want to be very clear about this: treating the updates and cleanups as > separate, serial tasks is a -2 show stopper for any convergence design. > We *MUST* NOT do that to our users. > > > However, in case of an update without rollback, there will be a single > > stream of operation (no delete/cleanup stream required). > > By 'update without rollback' I assume you mean when the user issues an > update with disable_rollback=True? 
> > Actually it doesn't matter what you mean, because there is no way of > interpreting this that could make it correct. We *always* need to check all of > the pre-existing resources for clean up. The only exception is on create, and > then only because the set of pre-existing resources is empty. > > If your plan for handling UpdateReplace when rollback is disabled is just to > delete the old resource at the same time as creating the new one, then your > plan won't work because the dependencies are backwards. > And leaving the replaced resources around without even trying to clean > them up would be outright unethical, given how much money it would cost > our users. So -2 on 'no cleanup when rollback disabled' as well. > > cheers, > Zane. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kprad1 at yahoo.com Thu Dec 18 15:58:59 2014 From: kprad1 at yahoo.com (Padmanabhan Krishnan) Date: Thu, 18 Dec 2014 15:58:59 +0000 (UTC) Subject: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled In-Reply-To: References: Message-ID: <1451335840.835432.1418918340043.JavaMail.yahoo@jws10635.mail.bf1.yahoo.com> Hi John, Thanks for the pointers. I shall take a look and get back. Regards, Paddu On Thursday, December 18, 2014 6:23 AM, John Belamaric wrote: Hi Paddu, Take a look at what we are working on in Kilo [1] for external IPAM. While this does not address DHCP specifically, it does allow you to use an external source to allocate the IP that OpenStack uses, which may solve your problem. Another solution to your question is to invert the logic - you need to take the IP allocated by OpenStack and program the DHCP server to provide a fixed IP for that MAC. 
You may be interested in looking at this Etherpad [2] that Don Kehn put together gathering all the various DHCP blueprints and related info, and also at this BP [3] for including a DHCP relay so we can utilize external DHCP more easily. [1] https://blueprints.launchpad.net/neutron/+spec/neutron-ipam [2] https://etherpad.openstack.org/p/neutron-dhcp-org [3] https://blueprints.launchpad.net/neutron/+spec/dhcp-relay John From: Padmanabhan Krishnan Reply-To: Padmanabhan Krishnan , "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, December 17, 2014 at 6:06 PM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled This means that whatever tools the operators are using need to make sure the IP address assigned inside the VM matches what OpenStack has assigned to the port. Bringing up the question that I had in another thread on the same topic: if one wants to use the provider DHCP server and not have OpenStack's DHCP or L3 agent/DVR, it may not be possible to do so even with DHCP disabled in the OpenStack network. Even if the provider DHCP server is configured with the same start/end range in the same subnet, there's no guarantee that it will match the OpenStack-assigned IP address for bulk VM launches or when there's a failure case. So, how does one deploy external DHCP with OpenStack? If OpenStack hasn't assigned an IP address when DHCP is disabled for a network, can't port_update be done with the provider-DHCP-specified IP address to put the anti-spoofing and security rules in place? With an OpenStack-assigned IP address, port_update cannot be done since the IP addresses aren't in sync and can overlap. 
Thanks,Paddu On 12/16/14 4:30 AM, "Pasquale Porreca" wrote: >I understood and I agree that assigning the ip address to the port is >not a bug, however showing it to the user, at least in Horizon dashboard >where it pops up in the main instance screen without a specific search, >can be very confusing. > >On 12/16/14 12:25, Salvatore Orlando wrote: >> In Neutron IP address management and distribution are separated >>concepts. >> IP addresses are assigned to ports even when DHCP is disabled. That IP >> address is indeed used to configure anti-spoofing rules and security >>groups. >> >> It is however understandable that one wonders why an IP address is >>assigned >> to a port if there is no DHCP server to communicate that address. >>Operators >> might decide to use different tools to ensure the IP address is then >> assigned to the instance's ports. On XenServer for instance one could >>use a >> guest agent reading network configuration from XenStore; as another >> example, older versions of Openstack used to inject network >>configuration >> into the instance file system; I reckon that today's configdrive might >>also >> be used to configure instance's networking. >> >> Summarising I don't think this is a bug. Nevertheless if you have any >>idea >> regarding improvements on the API UX feel free to file a bug report. >> >> Salvatore >> >> On 16 December 2014 at 10:41, Pasquale Porreca < >> pasquale.porreca at dektech.com.au> wrote: >>> >>> Is there a specific reason for which a fixed ip is bound to a port on a >>> subnet where dhcp is disabled? it is confusing to have this info shown >>> when the instance doesn't have actually an ip on that port. >>> Should I fill a bug report, or is this a wanted behavior? 
>>> >>> -- >>> Pasquale Porreca >>> >>> DEK Technologies >>> Via dei Castelli Romani, 22 >>> 00040 Pomezia (Roma) >>> >>> Mobile +39 3394823805 >>> Skype paskporr >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >-- >Pasquale Porreca > >DEK Technologies >Via dei Castelli Romani, 22 >00040 Pomezia (Roma) > >Mobile +39 3394823805 >Skype paskporr > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.mitchell at rackspace.com Thu Dec 18 16:40:20 2014 From: kevin.mitchell at rackspace.com (Kevin L. Mitchell) Date: Thu, 18 Dec 2014 10:40:20 -0600 Subject: [openstack-dev] ask for usage of quota reserve In-Reply-To: <54928389.1020903@linux.vnet.ibm.com> References: <54928389.1020903@linux.vnet.ibm.com> Message-ID: <1418920820.5523.5.camel@einstein.kev> On Thu, 2014-12-18 at 15:34 +0800, Eli Qiao(Li Yong Qiao) wrote: > can anyone tell if we call quotas.reserve() but never call > quotas.commit() or quotas.rollback(). > what will happen? A reservation is always created with an expiration time; by default, this expiration time is 86400 seconds (1 day) after the time at which the reservation is created. Expired reservations are deleted by the _expire_reservations() periodic task, which is defined on the scheduler. Thus, if a resource is reserved, but never committed or rolled back, it should continue to affect quota requests for approximately one day, then be automatically rolled back by the scheduler. 
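The reservation lifecycle Kevin describes can be sketched in plain Python. This is purely illustrative (not Nova's actual code); only the 86400-second default and the commit/rollback/expire behavior are taken from the answer above, everything else is made up for the sketch:

```python
import time

# Illustrative sketch of the reservation lifecycle described above
# (not Nova's implementation). A reservation that is never committed
# nor rolled back is rolled back once its expiry passes.
EXPIRE_SECONDS = 86400  # default: 1 day after creation


class Reservation:
    def __init__(self, amount, now=None):
        now = time.time() if now is None else now
        self.amount = amount
        self.expires_at = now + EXPIRE_SECONDS
        self.state = "reserved"

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"


def expire_reservations(reservations, now):
    """Stand-in for the periodic task: roll back reservations past expiry."""
    for r in reservations:
        if r.state == "reserved" and now >= r.expires_at:
            r.rollback()


def in_use(reservations):
    """Quota consumed by live (reserved or committed) reservations."""
    return sum(r.amount for r in reservations
               if r.state in ("reserved", "committed"))


r = Reservation(amount=2, now=0)
rs = [r]
print(in_use(rs))   # counts against quota while merely reserved
expire_reservations(rs, now=EXPIRE_SECONDS + 1)
print(in_use(rs))   # expired -> rolled back, quota freed again
```

So a leaked reservation degrades quota accuracy for roughly a day, then the periodic task cleans it up.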
-- Kevin L. Mitchell Rackspace From mathieu.rohon at gmail.com Thu Dec 18 16:43:17 2014 From: mathieu.rohon at gmail.com (Mathieu Rohon) Date: Thu, 18 Dec 2014 17:43:17 +0100 Subject: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution In-Reply-To: References: Message-ID: Hi mike, thanks for working on this bug : On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton wrote: > > > On 12/18/14, 2:06 PM, "Mike Kolesnik" wrote: > >>Hi Neutron community members. >> >>I wanted to query the community about a proposal of how to fix HA routers >>not >>working with L2Population (bug 1365476[1]). >>This bug is important to fix especially if we want to have HA routers and >>DVR >>routers working together. >> >>[1] https://bugs.launchpad.net/neutron/+bug/1365476 >> >>What's happening now? >>* HA routers use distributed ports, i.e. the port with the same IP & MAC >> details is applied on all nodes where an L3 agent is hosting this >>router. >>* Currently, the port details have a binding pointing to an arbitrary node >> and this is not updated. >>* L2pop takes this "potentially stale" information and uses it to create: >> 1. A tunnel to the node. >> 2. An FDB entry that directs traffic for that port to that node. >> 3. If ARP responder is on, ARP requests will not traverse the network. >>* Problem is, the master router wouldn't necessarily be running on the >> reported agent. >> This means that traffic would not reach the master node but some >>arbitrary >> node where the router master might be running, but might be in another >> state (standby, fail). >> >>What is proposed? >>Basically the idea is not to do L2Pop for HA router ports that reside on >>the >>tenant network. >>Instead, we would create a tunnel to each node hosting the HA router so >>that >>the normal learning switch functionality would take care of switching the >>traffic to the master router. > > In Neutron we just ensure that the MAC address is unique per network. 
> Could a duplicate MAC address cause problems here? Gary, AFAIU, from a Neutron POV there is only one port, the router port, which is plugged twice: once per hosting node. I think that the capacity to bind a port to several hosts is also a prerequisite for a clean solution here. This will be provided by patches to this bug: https://bugs.launchpad.net/neutron/+bug/1367391 >>This way no matter where the master router is currently running, the data >>plane would know how to forward traffic to it. >>This solution requires changes on the controller only. >> >>What's to gain? >>* Data plane only solution, independent of the control plane. >>* Lowest failover time (same as HA routers today). >>* High backport potential: >> * No APIs changed/added. >> * No configuration changes. >> * No DB changes. >> * Changes localized to a single file and limited in scope. >> >>What's the alternative? >>An alternative solution would be to have the controller update the port >>binding >>on the single port so that the plain old L2Pop happens and notifies about >>the >>location of the master router. >>This basically negates all the benefits of the proposed solution, but is >>wider. >>This solution depends on the report-ha-router-master spec which is >>currently in >>the implementation phase. >> >>It's important to note that these two solutions don't collide and could >>be done >>independently. The one I'm proposing just makes more sense from an HA >>viewpoint >>because of its benefits which fit the HA methodology of being fast & >>having as >>little outside dependency as possible. >>It could be done as an initial solution which solves the bug for mechanism >>drivers that support a normal learning switch (OVS), and later kept as an >>optimization to the more general, controller-based solution which will >>solve >>the issue for any mechanism driver working with L2Pop (Linux Bridge, >>possibly >>others). >> >>Would love to hear your thoughts on the subject.
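For intuition, the control-plane change being proposed can be sketched as pseudologic in Python. This is not the actual l2pop mechanism driver; all names here are invented for illustration. The point is simply: for an HA router port, skip the unicast FDB/ARP entries (whose bound host may be stale) and only guarantee flood tunnels to every node hosting the router, letting MAC learning find the master:

```python
# Illustrative sketch of the proposed l2pop behaviour for HA router
# ports (names and structures are made up, not Neutron code).
def fdb_entries_for_port(port, hosting_agents):
    if port["is_ha_router_port"]:
        # No unicast FDB/ARP entry: we cannot know which node holds
        # the master. Just ensure a tunnel exists to every node
        # hosting the router; learning-switch behaviour then steers
        # traffic to wherever the master actually answers from.
        return [{"action": "ensure_tunnel", "host": agent}
                for agent in hosting_agents]
    # Regular port: unicast entry toward its single bound host.
    return [{"action": "fdb_add", "host": port["binding_host"],
             "mac": port["mac"], "ip": port["ip"]}]


entries = fdb_entries_for_port(
    {"is_ha_router_port": True}, ["node-1", "node-2", "node-3"])
print([e["host"] for e in entries])
```

This also makes Mathieu's objection below concrete: without the unicast/ARP-responder entries, ARP requests for the router get flooded over every tunnel.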
You will have to clearly update the docs to mention that deployments with Linuxbridge+l2pop are not compatible with HA. Moreover, this solution downgrades l2pop by disabling the ARP responder when VMs want to talk to an HA router. This means that ARP requests will be duplicated to every overlay tunnel to feed the OVS MAC learning table, which is something we were trying to avoid with l2pop. But maybe this is acceptable. I know that ofagent also uses l2pop; I would like to know whether ofagent deployments will be compatible with the workaround you are proposing. My concern is that, with DVR, there are now at least two major features that are not compatible with Linuxbridge. Linuxbridge is not running in the gate, and I don't know if anybody is running third-party testing with Linuxbridge deployments. If anybody does, it would be great to have it voting on Gerrit! But I really wonder what the future of Linuxbridge compatibility is: should we keep improving the OVS solution without taking the Linuxbridge implementation into account?
Regards, Mathieu >> >>Regards, >>Mike >> >>_______________________________________________ >>OpenStack-dev mailing list >>OpenStack-dev at lists.openstack.org >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Thu Dec 18 16:59:24 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 18 Dec 2014 11:59:24 -0500 Subject: [openstack-dev] [pecan] [WSME] Different content-type in request and response In-Reply-To: <1A4A612A-FEBF-40C4-94D6-CA8D2C4BD362@mirantis.com> References: <93DF0931-7060-43B8-8B59-3F9C160B2075@mirantis.com> <9726CFD3-F9C0-4BDF-885B-543F473D1376@doughellmann.com> <1A4A612A-FEBF-40C4-94D6-CA8D2C4BD362@mirantis.com> Message-ID: <834EF7CE-4EC1-496F-BFB1-464CEFD440A6@doughellmann.com> On Dec 18, 2014, at 2:53 AM, Renat Akhmerov wrote: > Doug, > > Sorry for trying to resurrect this thread again. It seems to be pretty important for us. Do you have some comments on that? Or if you need more context please also let us know. WSME has separate handlers for JSON and XML now. You could look into adding one for YAML. I think you?d want to start looking in http://git.openstack.org/cgit/stackforge/wsme/tree/wsme/rest By default WSME is going to want to encode the response in the same format as the inputs, because it?s going to expect the clients to want that. I?m not sure how hard it would be to change that assumption, or whether the other WSME developers would really think it?s a good idea. Doug > > Thanks > > Renat Akhmerov > @ Mirantis Inc. > > > >> On 27 Nov 2014, at 17:43, Renat Akhmerov wrote: >> >> Doug, thanks for your answer! >> >> My explanations below.. 
>> >> >>> On 26 Nov 2014, at 21:18, Doug Hellmann wrote: >>> >>> >>> On Nov 26, 2014, at 3:49 AM, Renat Akhmerov wrote: >>> >>>> Hi, >>>> >>>> I traced the WSME code and found a place [0] where it tries to get arguments from request body based on different mimetype. So looks like WSME supports only json, xml and ?application/x-www-form-urlencoded?. >>>> >>>> So my question is: Can we fix WSME to also support ?text/plain? mimetype? I think the first snippet that Nikolay provided is valid from WSME standpoint. >>> >>> WSME is intended for building APIs with structured arguments. It seems like the case of wanting to use text/plain for a single input string argument just hasn?t come up before, so this may be a new feature. >>> >>> How many different API calls do you have that will look like this? Would this be the only one in the API? Would it make sense to consistently use JSON, even though you only need a single string argument in this case? >> >> We have 5-6 API calls where we need it. >> >> And let me briefly explain the context. In Mistral we have a language (we call it DSL) to describe different object types: workflows, workbooks, actions. So currently when we upload say a workbook we run in a command line: >> >> mistral workbook-create my_wb.yaml >> >> where my_wb.yaml contains that DSL. The result is a table representation of actually create server side workbook. From technical perspective we now have: >> >> Request: >> >> POST /mistral_url/workbooks >> >> { >> ?definition?: ?escaped content of my_wb.yaml" >> } >> >> Response: >> >> { >> ?id?: ?1-2-3-4?, >> ?name?: ?my_wb_name?, >> ?description?: ?my workbook?, >> ... >> } >> >> The point is that if we use, for example, something like ?curl? we every time have to obtain that ?escaped content of my_wb.yaml? and create that, in fact, synthetic JSON to be able to send it to the server side. 
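The "synthetic JSON" step Renat describes can be shown in a few lines. This is a sketch only: the field name "definition" comes from the example request above, while the DSL content and workbook shape are invented for illustration:

```python
import json

# Sketch of the client-side step described above: the raw DSL (YAML)
# text must be escaped into a JSON body before POSTing to /workbooks.
# The "definition" key is from the example request above; the DSL
# content here is made up.
dsl = "name: my_wb\ntasks:\n  t1:\n    action: std.echo\n"
body = json.dumps({"definition": dsl})
print(body)

# With text/plain request support, a client could instead send the
# file bytes unmodified and still receive a JSON response, which is
# exactly the asymmetry being requested of WSME.
```

The round trip is lossless (`json.loads(body)["definition"]` gives back the original text), but it is this extra escaping step that makes plain `curl` usage awkward.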
>> >> So for us it would be much more convenient if we could just send a plain text but still be able to receive a JSON as response. I personally don?t want to use some other technology because generally WSME does it job and I like this concept of rest resources defined as classes. If it supported text/plain it would be just the best fit for us. >> >>>> >>>> Or if we don?t understand something in WSME philosophy then it?d nice to hear some explanations from WSME team. Will appreciate that. >>>> >>>> >>>> Another issue that previously came across is that if we use WSME then we can?t pass arbitrary set of parameters in a url query string, as I understand they should always correspond to WSME resource structure. So, in fact, we can?t have any dynamic parameters. In our particular use case it?s very inconvenient. Hoping you could also provide some info about that: how it can be achieved or if we can just fix it. >>> >>> Ceilometer uses an array of query arguments to allow an arbitrary number. >>> >>> On the other hand, it sounds like perhaps your desired API may be easier to implement using some of the other tools being used, such as JSONSchema. Are you extending an existing API or building something completely new? >> >> We want to improve our existing Mistral API. Basically, the idea is to be able to apply dynamic filters when we?re requesting a collection of objects using url query string. Yes, we could use JSONSchema if you say it?s absolutely impossible to do and doesn?t follow WSME concepts, that?s fine. But like I said generally I like the approach that WSME takes and don?t feel like jumping to another technology just because of this issue. >> >> Thanks for mentioning Ceilometer, we?ll look at it and see if that works for us. 
>> >> Renat > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From edgar.magana at workday.com Thu Dec 18 17:17:08 2014 From: edgar.magana at workday.com (Edgar Magana) Date: Thu, 18 Dec 2014 17:17:08 +0000 Subject: [openstack-dev] #PERSONAL# : Git checkout command for Blueprints submission In-Reply-To: References: Message-ID: It is git checkout -b bp/ Edgar From: Swati Shukla1 > Reply-To: "openstack-dev at lists.openstack.org" > Date: Tuesday, December 16, 2014 at 10:53 PM To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] #PERSONAL# : Git checkout command for Blueprints submission Hi All, Generally, for bug submissions, we use ""git checkout -b bug/"" What is the similar 'git checkout' command for blueprints submission? Swati Shukla Tata Consultancy Services Mailto: swati.shukla1 at tcs.com Website: http://www.tcs.com ____________________________________________ Experience certainty. IT Services Business Solutions Consulting ____________________________________________ =====-----=====-----===== Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you -------------- next part -------------- An HTML attachment was scrubbed... 
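Edgar's one-liner, expanded into a runnable sketch. The temporary repository setup and the blueprint name `my-blueprint` are placeholders for illustration; the convention mirrors the `bug/<bug-number>` branches used for bug fixes:

```shell
# Create a topic branch for a blueprint named "my-blueprint".
# (Temp repo only so the example is self-contained.)
repo=$(mktemp -d)
cd "$repo" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
git checkout -q -b bp/my-blueprint
git rev-parse --abbrev-ref HEAD   # -> bp/my-blueprint
```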
URL: From thinrichs at vmware.com Thu Dec 18 17:24:59 2014 From: thinrichs at vmware.com (Tim Hinrichs) Date: Thu, 18 Dec 2014 17:24:59 +0000 Subject: [openstack-dev] [Congress] Re: Placement and Scheduling via Policy In-Reply-To: <6084_1418799460_54912964_6084_19713_1_CCD65D20E73C3348AA859183AFEB3ECF14209493@PEXCVZYM12.corporate.adroot.infra.ftgroup> References: <6841EB3A-4304-4FA7-B5A2-3AB92AFAF89D@vmware.com> <2B1037E6-105A-47FE-A7CE-6E6EE9CEAB90@vmware.com> <018209AD-5009-4873-905D-8A7AE9F4A2C8@vmware.com> <98E78314-3F6D-4F06-A6EF-6BDA94E54B4B@vmware.com> <2c1d29c9ed6c41aabc8ae2efa964b9aa@BRMWP-EXMB12.corp.brocade.com> <6084_1418799460_54912964_6084_19713_1_CCD65D20E73C3348AA859183AFEB3ECF14209493@PEXCVZYM12.corporate.adroot.infra.ftgroup> Message-ID: Hi all, Responses inline. On Dec 16, 2014, at 10:57 PM, > > wrote: Hi Tim & All @Tim: I did not reply to openstack-dev. Do you think we could have an openstack list specific for ?congress? to which anybody may subscribe? Sending to openstack-dev is the right thing, as long as we put [Congress] in the subject. Everyone I know sets up filters on openstack-dev so they only get the mail they care about. I think you?re the only one in the group who isn?t subscribed to that list. 1) Enforcement: By this we mean ?how will the actions computed by the policy engine be executed by the concerned OpenStack functional module?. In this case, it is better to first work this out for a ?simpler? case, e.g. your running example concerning the network/groups. Note: some actions concern only some data base (e.g. insert the user within some group). 2) From Prabhakar?s mail ?Enforcement. That is with a large number of constraints in place for placement and scheduling, how does the policy engine communicate and enforce the placement constraints to nova scheduler. ? Nova scheduler (current): It assigns VMs to servers based on the policy set by the administrator (through filters and host aggregates). 
The administrator also configures a scheduling heuristic (implemented as a driver), for example ?round-robin? driver. Then the computed assignment is sent back to the requestor (API server) that interacts with nova-compute to provision the VM. The current nova-scheduler has another function: It updates the allocation status of each compute node on the DB (through another indirection called nova-conductor) So it is correct to re-interpret your statement as follows: - What is the entity with which the policy engine interacts for either proactive or reactive placement management? - How will the output from the policy engine (for example the placement matrix) be communicated back? o Proactive: this gives the mapping of VM to host o Reactive: this gives the new mapping of running VMs to hosts - How starting from the placement matrix, the correct migration plan will be executed? (for reactive case) 3) Currently openstack does not have ?automated management of reactive placement?: Hence if the policy engine is used for reactive placement, then there is a need for another ?orchestrator? that can interpret the new proposed placement configuration (mapping of VM to servers) and execute the reconfiguration workflow. 4) So with a policy-based ?placement engine? that is integrated with external solvers, then this engine will replace nova-scheduler? Could we converge on this? The notes from Yathiraj say that there is already a policy-based Nova scheduler we can use. I suggest we look into that. It could potentially simplify our problem to the point where we need only figure out how to convert a fragment of the Congress policy language into their policy language. But those of you who are experts in placement will know better. https://github.com/stackforge/nova-solver-scheduler Tim Regards Ruby De : Tim Hinrichs [mailto:thinrichs at vmware.com] Envoy? : mardi 16 d?cembre 2014 19:25 ? 
: Prabhakar Kudva Cc : KRISHNASWAMY Ruby IMT/OLPS; Ramki Krishnan (ramk at Brocade.com); Gokul B Kandiraju; openstack-dev Objet : [Congress] Re: Placement and Scheduling via Policy [Adding openstack-dev to this thread. For those of you just joining? We started kicking around ideas for how we might integrate a special-purpose VM placement engine into Congress.] Kudva: responses inline. On Dec 16, 2014, at 6:25 AM, Prabhakar Kudva > wrote: Hi, I am very interested in this. So, it looks like there are two parts to this: 1. Policy analysis when there are a significant mix of logical and builtin predicates (i.e., runtime should identify a solution space when there are arithmetic operators). This will require linear programming/ILP type solvers. There might be a need to have a function in runtime.py that specifically deals with this (Tim?) I think it?s right that we expect there to be a mix of builtins and standard predicates. But what we?re considering here is having the linear solver be treated as if it were a domain-specific policy engine. So that solver wouldn?t be embedded into the runtime.py necessarily. Rather, we?d delegate part of the policy to that domain-specific policy engine. 2. Enforcement. That is with a large number of constraints in place for placement and scheduling, how does the policy engine communicate and enforce the placement constraints to nova scheduler. I would imagine that we could delegate either enforcement or monitoring or both. Eventually we want enforcement here, but monitoring could be useful too. And yes you?re asking the right questions. I was trying to break the problem down into pieces in my bullet (1) below. But I think there is significant overlap in the questions we need to answer whether we?re delegating monitoring or enforcement. Both of these require some form of mathematical analysis. Would be happy and interested to discuss more on these lines. 
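One way to make the enforcement question concrete: treat the solver's output as a new [vm, server] table and diff it against the current one to obtain migration actions. This is a toy sketch; `migrate_vm` is a hypothetical stand-in for whatever API the orchestrator would actually call, and the tables are invented:

```python
# Sketch: derive migration calls from old vs. new [vm, server]
# assignments. "migrate_vm" is a hypothetical enforcement action,
# not a real API name.
def plan_migrations(current, proposed):
    return [("migrate_vm", vm, current[vm], host)
            for vm, host in sorted(proposed.items())
            if current.get(vm) != host]


current = {"vm1": "host4", "vm2": "host4", "vm3": "host4"}
proposed = {"vm1": "host4", "vm2": "host6", "vm3": "host6"}
print(plan_migrations(current, proposed))
# -> [('migrate_vm', 'vm2', 'host4', 'host6'),
#     ('migrate_vm', 'vm3', 'host4', 'host6')]
```

The hard parts discussed in this thread (ordering the moves, bounding migration cost, handling failures mid-plan) all sit on top of this basic diff.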
Maybe take a look at how I tried to breakdown the problem into separate questions in bullet (1) below and see if that makes sense. Tim Prabhakar From: Tim Hinrichs > To: "ruby.krishnaswamy at orange.com" > Cc: "Ramki Krishnan (ramk at Brocade.com)" >, Gokul B Kandiraju/Watson/IBM at IBMUS, Prabhakar Kudva/Watson/IBM at IBMUS Date: 12/15/2014 12:09 PM Subject: Re: Placement and Scheduling via Policy ________________________________ [Adding Prabhakar and Gokul, in case they are interested.] 1) Ruby, thinking about the solver as taking 1 matrix of [vm, server] and returning another matrix helps me understand what we?re talking about?thanks. I think you?re right that once we move from placement to optimization problems in general we?ll need to figure out how to deal with actions. But if it?s a placement-specific policy engine, then we can build VM-migration into it. It seems to me that the only part left is figuring out how to take an arbitrary policy, carve off the placement-relevant portion, and create the inputs the solver needs to generate that new matrix. Some thoughts... - My gut tells me that the placement-solver should basically say ?I enforce policies having to do with the schema nova:location.? This way the Congress policy engine knows to give it policies relevant to nova:location (placement). If we do that, I believe we can carve off the right sub theory. - That leaves taking a Datalog policy where we know nova:location is important and converting it to the input language required by a linear solver. We need to remember that the Datalog rules may reference tables from other services like Neutron, Ceilometer, etc. I think the key will be figuring out what class of policies we can actually do that for reliably. Cool?a concrete question. 2) We can definitely wait until January on this. I?ll be out of touch starting Friday too; it seems we all get back early January, which seems like the right time to resume our discussions. 
We have some concrete questions to answer, which was what I was hoping to accomplish before we all went on holiday. Happy Holidays! Tim On Dec 15, 2014, at 5:53 AM, > > wrote: Hi Tim ?Questions: 1) Is there any more data the solver needs? Seems like it needs something about CPU-load for each VM. 2) Which solver should we be using? What does the linear program that we feed it look like? How do we translate the results of the linear solver into a collection of ?migrate_VM? API calls?? Question (2) seems to me the first to address, in particular: ?how to prepare the input (variables, constraints, goal) and invoke the solver? => We need rules that represent constraints to give the solver (e.g. a technical constraint that a VM should not be assigned to more than one server or that more than maximum resource (cpu / mem ?) of a server cannot be assigned. ?how to translate the results of the linear solver into a collection of API calls?: => The output from the ?solver? will give the new placement plan (respecting the constraints in input)? o E.g. a table of [vm, server, true/false] => Then this depends on how ?action? is going to be implemented in Congress (whether an external solver is used or not) o Is the action presented as the ?final? DB rows that the system must produce as a result of the actions? o E.g. if current vm table is [vm3, host4] and the recomputed row says [vm3, host6], then the action is to move vm3 to host6? ?how will the solver be invoked?? => When will the optimization call be invoked? => Is it ?batched?, e.g. periodically invoke Congress to compute new assignments? Which solver to use: http://www.coin-or.org/projects/ and http://www.coin-or.org/projects/PuLP.xml I think it may be useful to pass through an interface (e.g. 
LP modeler to generate LP files in standard formats accepted by prevalent solvers). The mathematical program: we can (Orange) contribute to writing down, in an informal way, the program for this precise use case, if this can wait until January. Perhaps the objective should be "minimize the number of servers whose usage is less than 50%", since the original policy "not more than 1 server of type1 may have a load under 50%" need not necessarily have a solution. This may help to derive the "mappings" from Congress (rules to program equations, intermediary tables to program variables). For the "migration" use case: it may be useful to add some constraint representing the cost of migration, such that the solver computes the new assignment plan without exceeding a maximum migration cost. To start with, perhaps the number of migrations? I will be away from the end of the week until 5th January. I will also discuss with colleagues to see how we can formalize the contribution (congress+nfv PoC). Rgds Ruby

De : Tim Hinrichs [mailto:thinrichs at vmware.com] Envoyé : vendredi 12 décembre 2014 19:41 À : KRISHNASWAMY Ruby IMT/OLPS Cc : Ramki Krishnan (ramk at Brocade.com) Objet : Re: Placement and Scheduling via Policy

There's a ton of good stuff here! So if we took Ramki's initial use case and combined it with Ruby's HA constraint, we'd have something like the following policy.

// anti-affinity
error(server, VM1, VM2) :- same_ha_group(VM1, VM2), nova:location(VM1, server), nova:location(VM2, server)

// server-utilization
error(server) :- type1_server(server), ceilometer:average_utilization(server, "cpu-util", avg), avg < 50

As a start, this seems plenty complex to me. anti-affinity is great because it DOES NOT require a sophisticated solver; server-utilization is great because it DOES require a linear solver.
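For intuition, the two error rules in the policy above can be evaluated by brute force over small tables. This is a toy evaluator, nothing like Congress's actual runtime, and the table contents are invented. It also illustrates Tim's point: detecting violations is easy; only computing a *new* placement calls for an LP/ILP solver:

```python
# Toy evaluation of the two Datalog error rules above over invented
# data (not Congress code).
location = {"vm1": "srv1", "vm2": "srv1", "vm3": "srv2"}   # nova:location
same_ha_group = {("vm1", "vm2")}                            # policy table
type1_server = {"srv1", "srv2"}
cpu_util = {"srv1": 80.0, "srv2": 37.5}   # ceilometer "cpu-util" averages


def anti_affinity_errors():
    # error(server, VM1, VM2) :- same_ha_group(VM1, VM2),
    #   nova:location(VM1, server), nova:location(VM2, server)
    return [(s, v1, v2) for (v1, v2) in same_ha_group
            for s in [location[v1]] if location[v2] == s]


def utilization_errors():
    # error(server) :- type1_server(server),
    #   ceilometer:average_utilization(server, "cpu-util", avg), avg < 50
    return [s for s in sorted(type1_server) if cpu_util[s] < 50]


print(anti_affinity_errors())  # vm1/vm2 share srv1 -> violation
print(utilization_errors())    # srv2 under 50% -> violation
```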
Data the solver needs: - Ceilometer: cpu-utilization for all the servers - Nova: data as to where each VM is located - Policy: high-availability groups Questions: 1) Is there any more data the solver needs? Seems like it needs something about CPU-load for each VM. 2) Which solver should we be using? What does the linear program that we feed it look like? How do we translate the results of the linear solver into a collection of ?migrate_VM? API calls? Maybe another few emails and then we set up a phone call. Tim On Dec 11, 2014, at 1:33 AM, > > wrote: Hello A) First a small extension to the use case that Ramki proposes - Add high availability constraint. - Assuming server-a and server-b are of same size and same failure model. [Later: Assumption of identical failure rates can be loosened. Instead of considering only servers as failure domains, can introduce other failure domains ==> not just an anti-affinity policy but a calculation from 99,99.. requirement to VM placements, e.g. ] - For an exemplary maximum usage scenario, 53 physical servers could be under peak utilization (100%), 1 server (server-a) could be under partial utilization (50%) with 2 instances of type large.3 and 1 instance of type large.2, and 1 server (server-b) could be under partial utilization (37.5%) with 3 instances of type large.2. Call VM.one.large2 as the large2 VM in server-a Call VM.two.large2 as one of the large2 VM in server-b - VM.one.large2 and VM.two.large2 - When one of the large.3 instances mapped to server-a is deleted from physical server type 1, Policy 1 will be violated, since the overall utilization of server-a falls to 37,5%. - Various new placements(s) are described below VM.two.large2 must not be moved. Moving VM.two.large2 breaks non-affinity constraint. error (server, VM1, VM2) :- node (VM1, server1), node (VM2, server2), same_ha_group(VM1, VM2), equal(server1, server2); 1) New placement 1: Move 2 instances of large.2 to server-a. Overall utilization of server-a - 50%. 
Overall utilization of server-b - 12.5%. 2) New placement 2: Move 1 instance of large.3 to server-b. Overall utilization of server-a - 0%. Overall utilization of server-b - 62.5%. 3) New placement 3: Move 3 instances of large.2 to server-a. Overall utilization of server-a - 62.5%. Overall utilization of server-b - 0%. New placements 2 and 3 could be considered optimal, since they achieve maximal bin packing and open up the door for turning off server-a or server-b and maximizing energy efficiency. But new placement 3 breaks client policy. BTW: what happens if a given situation does not allow the policy violation to be removed? B) Ramki?s original use case can itself be extended: Adding additional constraints to the previous use case due to cases such as: - Server heterogeneity - CPU ?pinning? - ?VM groups? (and allocation - Application interference - Refining on the statement ?instantaneous energy consumption can be approximately measured using an overall utilization metric, which is a combination of CPU utilization, memory usage, I/O usage, and network usage? Let me know if this will interest you. Some (e.g. application interference) will need some time. E.G; benchmarking / profiling to class VMs etc. C) New placement plan execution - In Ramki?s original use case, violation is detected at events such as VM delete. While certainly this by itself is sufficiently complex, we may need to consider other triggering cases (periodic or when multiple VMs are deleted/added) - In this case, it may not be sufficient to compute the new placement plan that brings the system to a configuration that does not break policy, but also add other goals D) Let me know if a use case such as placing ?video conferencing servers? (geographically distributed clients) would suit you (multi site scenario) => Or is it too premature? Ruby De : Tim Hinrichs [mailto:thinrichs at vmware.com] Envoy? : mercredi 10 d?cembre 2014 19:44 ? 
: KRISHNASWAMY Ruby IMT/OLPS Cc : Ramki Krishnan (ramk at Brocade.com) Objet : Re: Placement and Scheduling via Policy Hi Ruby, Whatever information you think is important for the use case is good. Section 3 from one of the docs Ramki sent you covers his use case. https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1 >From my point of view, the keys things for the use case are? - The placement policy (i.e. the conditions under which VMs require migration). - A description of how we want to compute what specific migrations should be performed (a sketch of (i) the information that we need about current placements, policy violations, etc., (2) what systems/algorithms/etc. can utilize that input to figure out what migrations to perform. I think we want to focus on the end-user/customer experience (write a policy, and watch the VMs move around to obey that policy in response to environment changes) and then work out the details of how to implement that experience. That?s why I didn?t include things like delays, asynchronous/synchronous, architecture, applications, etc. in my 2 bullets above. Tim On Dec 10, 2014, at 8:55 AM, > > wrote: Hi Ramki, Tim By a ?format? for describing use cases, I meant to ask what sets of information to provide, for example, - what granularity in description of use case? - a specific placement policy (and perhaps citing reasons for needing such policy)? - Specific applications - Requirements on the placement manager itself (delay, ?)? o Architecture as well - Specific services from the placement manager (using Congress), such as, o Violation detection (load, security, ?) - Adapting (e.g. context-aware) of policies used In any case I will read the documents that Ramki has sent to not resend similar things. Regards Ruby De : Ramki Krishnan [mailto:ramk at Brocade.com] Envoy? : mercredi 10 d?cembre 2014 16:59 ? 
: Tim Hinrichs; KRISHNASWAMY Ruby IMT/OLPS Cc : Norival Figueira; Pierre Ettori; Alex Yip; dilikris at in.ibm.com Objet : RE: Placement and Scheduling via Policy Hi Tim, This sounds like a plan. It would be great if you could add the links below to the Congress wiki. I am all for discussing this in the openstack-dev mailing list and at this point this discussion is completely open. IRTF NFVRG Research Group: https://trac.tools.ietf.org/group/irtf/trac/wiki/nfvrg IRTF NFVRG draft on NFVIaaS placement/scheduling (includes system analysis for the PoC we are thinking): https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1 IRTF NFVRG draft on Policy Architecture and Framework (looking forward to your comments and thoughts): https://datatracker.ietf.org/doc/draft-norival-nfvrg-nfv-policy-arch/?include_text=1 Hi Ruby, Looking forward to your use cases. Thanks, Ramki _________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mkolesni at redhat.com Thu Dec 18 17:28:21 2014 From: mkolesni at redhat.com (Mike Kolesnik) Date: Thu, 18 Dec 2014 12:28:21 -0500 (EST) Subject: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution In-Reply-To: References: Message-ID: <1921367349.598949.1418923701155.JavaMail.zimbra@redhat.com> Hi Mathieu, Thanks for the quick reply, some comments inline.. Regards, Mike ----- Original Message ----- > Hi mike, > > thanks for working on this bug : > > On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton wrote: > > > > > > On 12/18/14, 2:06 PM, "Mike Kolesnik" wrote: > > > >>Hi Neutron community members. > >> > >>I wanted to query the community about a proposal of how to fix HA routers > >>not > >>working with L2Population (bug 1365476[1]). > >>This bug is important to fix especially if we want to have HA routers and > >>DVR > >>routers working together. > >> > >>[1] https://bugs.launchpad.net/neutron/+bug/1365476 > >> > >>What's happening now? > >>* HA routers use distributed ports, i.e. the port with the same IP & MAC > >> details is applied on all nodes where an L3 agent is hosting this > >>router. > >>* Currently, the port details have a binding pointing to an arbitrary node > >> and this is not updated. > >>* L2pop takes this "potentially stale" information and uses it to create: > >> 1. A tunnel to the node. > >> 2. An FDB entry that directs traffic for that port to that node. > >> 3. If ARP responder is on, ARP requests will not traverse the network. > >>* Problem is, the master router wouldn't necessarily be running on the > >> reported agent. > >> This means that traffic would not reach the master node but some > >>arbitrary > >> node where the router master might be running, but might be in another > >> state (standby, fail). > >> > >>What is proposed? > >>Basically the idea is not to do L2Pop for HA router ports that reside on > >>the > >>tenant network. 
> >>Instead, we would create a tunnel to each node hosting the HA router so > >>that > >>the normal learning switch functionality would take care of switching the > >>traffic to the master router. > > > > In Neutron we just ensure that the MAC address is unique per network. > > Could a duplicate MAC address cause problems here? > > Gary, AFAIU, from a Neutron POV, there is only one port, which is the > router port, which is plugged twice. One time per port. > I think that the capacity to bind a port to several hosts is also a > prerequisite for a clean solution here. This will be provided by > patches to this bug : > https://bugs.launchpad.net/neutron/+bug/1367391 > > > >>This way no matter where the master router is currently running, the data > >>plane would know how to forward traffic to it. > >>This solution requires changes on the controller only. > >> > >>What's to gain? > >>* Data plane only solution, independent of the control plane. > >>* Lowest failover time (same as HA routers today). > >>* High backport potential: > >> * No APIs changed/added. > >> * No configuration changes. > >> * No DB changes. > >> * Changes localized to a single file and limited in scope. > >> > >>What's the alternative? > >>An alternative solution would be to have the controller update the port > >>binding > >>on the single port so that the plain old L2Pop happens and notifies about > >>the > >>location of the master router. > >>This basically negates all the benefits of the proposed solution, but is > >>wider. > >>This solution depends on the report-ha-router-master spec which is > >>currently in > >>the implementation phase. > >> > >>It's important to note that these two solutions don't collide and could > >>be done > >>independently. The one I'm proposing just makes more sense from an HA > >>viewpoint > >>because of its benefits which fit the HA methodology of being fast & > >>having as > >>little outside dependency as possible. 
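To make the proposed data-plane behaviour concrete, here is a toy sketch of the idea of fanning the flooding entries out to every node hosting the HA router instead of trusting the single, possibly stale, port binding. All names and dict shapes here are invented for illustration; this is not the actual Neutron l2pop driver code or its RPC payloads.

```python
# Toy sketch (hypothetical names, invented data): for an HA router's
# distributed port, emit one flood-only entry per hosting node and no
# unicast/ARP-responder entry, so the learning switch finds the master.

def fdb_entries_for_ha_port(port, ha_hosting_agents):
    """Return one flooding entry per node hosting the HA router.

    port: dict with the 'mac' and 'ip' of the distributed router port.
    ha_hosting_agents: list of dicts, one 'tunnel_ip' per hosting node.
    """
    entries = []
    for agent in ha_hosting_agents:
        entries.append({
            'tunnel_ip': agent['tunnel_ip'],
            # No unicast MAC/IP entry and no ARP responder entry: the
            # data plane learns which tunnel leads to the master.
            'flood_only': True,
        })
    return entries

entries = fdb_entries_for_ha_port(
    {'mac': 'fa:16:3e:00:00:01', 'ip': '10.0.0.1'},
    [{'tunnel_ip': '192.0.2.10'}, {'tunnel_ip': '192.0.2.11'}])
print(entries)  # one flood-only entry per hosting node
```

The point of the sketch is only that the set of tunnel endpoints is derived from all L3 agents hosting the router, which is why the fix is independent of which instance currently holds mastership.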
> >>It could be done as an initial solution which solves the bug for mechanism > >>drivers that support normal learning switch (OVS), and later kept as an > >>optimization to the more general, controller based, solution which will > >>solve > >>the issue for any mechanism driver working with L2Pop (Linux Bridge, > >>possibly > >>others). > >> > >>Would love to hear your thoughts on the subject. > > You will have to clearly update the doc to mention that deployments > with Linuxbridge+l2pop are not compatible with HA. Yes this should be added and this is already the situation right now. However if anyone would like to work on a LB fix (the general one or some specific one) I would gladly help with reviewing it. > > Moreover, this solution is downgrading the l2pop solution, by > disabling the ARP-responder when VMs want to talk to a HA router. > This means that ARP requests will be duplicated to every overlay > tunnel to feed the OVS MAC learning table. > This is something that we were trying to avoid with l2pop. But maybe > this is acceptable. Yes basically you're correct, however this would be limited to only those tunnels that connect to the nodes where the HA router is hosted, so we would still limit the amount of traffic that is sent across the underlay. Also bear in mind that ARP is actually good (at least in the OVS case) since it helps the VM locate on which tunnel the master is, so once it receives the ARP response it records a flow that directs the traffic to the correct tunnel, so we just get hit by the one ARP broadcast but it's sort of a necessary evil in order to locate the master... > > I know that ofagent is also using l2pop, I would like to know if > ofagent deployment will be compatible with the workaround that you are > proposing. I would like to know that too, hopefully someone from OFagent can shed some light. > > My concern is that, with DVR, there are at least two major features > that are not compatible with Linuxbridge. 
> Linuxbridge is not running in the gate. I don't know if anybody is > running 3rd party testing with Linuxbridge deployments. If anybody > does, it would be great to have it voting on gerrit! > > But I really wonder what is the future of linuxbridge compatibility? > Should we keep on improving the OVS solution without taking into account > the linuxbridge implementation? I don't know actually, but my capability is to fix it for OVS in the best way possible. As I said the situation for LB won't become worse than it already is; legacy routers would still function as always... This fix also will not block fixing LB in any other way since it can be easily adjusted (if necessary) to work only for supporting mechanisms (OVS AFAIK). Also if anyone is willing to pick up the glove and implement the general controller based fix, or something more focused on LB, I will happily help review what I can. > > Regards, > > Mathieu > > >> > >>Regards, > >>Mike > >> > >>_______________________________________________ > >>OpenStack-dev mailing list > >>OpenStack-dev at lists.openstack.org > >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thinrichs at vmware.com Thu Dec 18 17:33:49 2014 From: thinrichs at vmware.com (Tim Hinrichs) Date: Thu, 18 Dec 2014 17:33:49 +0000 Subject: [openstack-dev] [Congress] Re: Placement and Scheduling via Policy In-Reply-To: References: Message-ID: <7589085A-9080-4584-95D6-D1EED927A565@vmware.com> Hi Yathi, Thanks for the reminder about the nova solver scheduler. It's definitely a cool idea to look at integrating the two systems! 
Ramki is definitely involved in this discussion. We thought placement was a good first example of a broad class of problems that a linear solver could help address, esp. in the NFV context. I like the idea of integrating Congress and the Nova solver scheduler and then generalizing what we learned to handle other kinds of optimization problems. So that's what I'm thinking long term. Tim On Dec 16, 2014, at 11:28 AM, Yathiraj Udupi (yudupi) > wrote: To add to what I mentioned below: We from the Solver Scheduler team are a small team here at Cisco, trying to drive this project and slowly adding more complex use cases for scheduling and policy-driven placements. We would really love to have some real contributions from everyone in the community and build this the right way. If it may interest you, some interesting scheduler use cases, based on one of our community meetings in IRC, are here - https://etherpad.openstack.org/p/SchedulerUseCases This could apply to Congress driving some of this too. I am leading the effort for the Solver Scheduler project (https://github.com/stackforge/nova-solver-scheduler), and if any of you are willing to contribute code, API, benchmarks, and also work on integration, my team and I can help guide you through this. We would be following the same processes under Stackforge at the moment. Thanks, Yathi. On 12/16/14, 11:14 AM, "Yathiraj Udupi (yudupi)" > wrote: Tim, I read the conversation thread below and this got me interested as it relates to the discussion we had at the Policy Summit (mid-cycle meet up) held in Palo Alto a few months ago. This relates to our project, Nova Solver Scheduler, which I had talked about at the Policy Summit. 
Please see this - https://github.com/stackforge/nova-solver-scheduler We already have a working constraints-based solver framework/engine that handles Nova placement, and we are currently active in Stackforge, and aim to get this integrated into the Gantt project (https://blueprints.launchpad.net/nova/+spec/solver-scheduler), based on our discussions in the Nova scheduler sub group. When I saw discussions around using linear programming (LP) solvers, PuLP, etc., I thought of pitching in here to say we have already demonstrated integrating an LP-based solver for Nova compute placements. Please see: https://www.youtube.com/watch?v=7QzDbhkk-BI#t=942 for a demo of this (from our talk at the Atlanta OpenStack summit). Based on this email thread, I believe Ramki, one of our early collaborators, is driving a similar solution in the NFV ETSI research group. Glad to know our Solver Scheduler project is getting interest now. As part of Congress integration, at the policy summit, I had suggested we can try to translate a Congress policy into our Solver Scheduler's constraints, and use this to enforce Nova placement policies. We can already demonstrate policy-driven nova placements using our pluggable constraints model. So it should be easy to integrate with Congress. The Nova solver scheduler team would be glad to help with any efforts with respect to trying out a Congress integration for Nova placements. Thanks, Yathi. On 12/16/14, 10:24 AM, "Tim Hinrichs" > wrote: [Adding openstack-dev to this thread. For those of you just joining: We started kicking around ideas for how we might integrate a special-purpose VM placement engine into Congress.] Kudva: responses inline. On Dec 16, 2014, at 6:25 AM, Prabhakar Kudva > wrote: Hi, I am very interested in this. So, it looks like there are two parts to this: 1. Policy analysis when there is a significant mix of logical and builtin predicates (i.e., runtime should identify a solution space when there are arithmetic operators). 
This will require linear programming/ILP type solvers. There might be a need to have a function in runtime.py that specifically deals with this (Tim?) I think it's right that we expect there to be a mix of builtins and standard predicates. But what we're considering here is having the linear solver be treated as if it were a domain-specific policy engine. So that solver wouldn't necessarily be embedded into runtime.py. Rather, we'd delegate part of the policy to that domain-specific policy engine. 2. Enforcement. That is, with a large number of constraints in place for placement and scheduling, how does the policy engine communicate and enforce the placement constraints to the nova scheduler? I would imagine that we could delegate either enforcement or monitoring or both. Eventually we want enforcement here, but monitoring could be useful too. And yes, you're asking the right questions. I was trying to break the problem down into pieces in my bullet (1) below. But I think there is significant overlap in the questions we need to answer whether we're delegating monitoring or enforcement. Both of these require some form of mathematical analysis. Would be happy and interested to discuss more on these lines. Maybe take a look at how I tried to break down the problem into separate questions in bullet (1) below and see if that makes sense. Tim Prabhakar From: Tim Hinrichs > To: "ruby.krishnaswamy at orange.com" > Cc: "Ramki Krishnan (ramk at Brocade.com)" >, Gokul B Kandiraju/Watson/IBM at IBMUS, Prabhakar Kudva/Watson/IBM at IBMUS Date: 12/15/2014 12:09 PM Subject: Re: Placement and Scheduling via Policy ________________________________ [Adding Prabhakar and Gokul, in case they are interested.] 1) Ruby, thinking about the solver as taking 1 matrix of [vm, server] and returning another matrix helps me understand what we're talking about; thanks. 
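That [vm, server] matrix framing can be made concrete with a toy example. The data below is invented for illustration; the `anti_affinity_errors` helper is just a hand-written check of the anti-affinity Datalog rule quoted later in this thread, not Congress's actual evaluation machinery.

```python
# Toy illustration of the [vm, server] matrix framing (invented data).
vms = ['vm1', 'vm2', 'vm3']
servers = ['server-a', 'server-b']
# placement[i][j] is True iff vms[i] runs on servers[j]
placement = [
    [True, False],   # vm1 -> server-a
    [True, False],   # vm2 -> server-a
    [False, True],   # vm3 -> server-b
]
same_ha_group = {('vm1', 'vm2')}

# Hand-coded check of the Datalog rule:
#   error(server, VM1, VM2) :- same_ha_group(VM1, VM2),
#       nova:location(VM1, server), nova:location(VM2, server)
def anti_affinity_errors(placement):
    errors = []
    for j, server in enumerate(servers):
        hosted = [vms[i] for i in range(len(vms)) if placement[i][j]]
        for v1, v2 in same_ha_group:
            if v1 in hosted and v2 in hosted:
                errors.append((server, v1, v2))
    return errors

violations = anti_affinity_errors(placement)
print(violations)  # [('server-a', 'vm1', 'vm2')]
```

A solver's job, in this framing, would be to return a new boolean matrix for which `anti_affinity_errors` (and the other constraint checks) come back empty.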
I think you're right that once we move from placement to optimization problems in general we'll need to figure out how to deal with actions. But if it's a placement-specific policy engine, then we can build VM-migration into it. It seems to me that the only part left is figuring out how to take an arbitrary policy, carve off the placement-relevant portion, and create the inputs the solver needs to generate that new matrix. Some thoughts... - My gut tells me that the placement-solver should basically say "I enforce policies having to do with the schema nova:location." This way the Congress policy engine knows to give it policies relevant to nova:location (placement). If we do that, I believe we can carve off the right sub-theory. - That leaves taking a Datalog policy where we know nova:location is important and converting it to the input language required by a linear solver. We need to remember that the Datalog rules may reference tables from other services like Neutron, Ceilometer, etc. I think the key will be figuring out what class of policies we can actually do that for reliably. Cool, a concrete question. 2) We can definitely wait until January on this. I'll be out of touch starting Friday too; it seems we all get back early January, which seems like the right time to resume our discussions. We have some concrete questions to answer, which was what I was hoping to accomplish before we all went on holiday. Happy Holidays! Tim On Dec 15, 2014, at 5:53 AM, > > wrote: Hi Tim "Questions: 1) Is there any more data the solver needs? Seems like it needs something about CPU-load for each VM. 2) Which solver should we be using? What does the linear program that we feed it look like? How do we translate the results of the linear solver into a collection of "migrate_VM" API calls?" Question (2) seems to me the first to address, in particular: "how to prepare the input (variables, constraints, goal) and invoke the solver" 
=> We need rules that represent constraints to give the solver (e.g. a technical constraint that a VM should not be assigned to more than one server, or that more than the maximum resource (cpu/mem, ...) of a server cannot be assigned). "how to translate the results of the linear solver into a collection of API calls": => The output from the "solver" will give the new placement plan (respecting the constraints in input): o E.g. a table of [vm, server, true/false] => Then this depends on how "action" is going to be implemented in Congress (whether an external solver is used or not) o Is the action presented as the "final" DB rows that the system must produce as a result of the actions? o E.g. if the current vm table is [vm3, host4] and the recomputed row says [vm3, host6], then the action is to move vm3 to host6? "how will the solver be invoked?" => When will the optimization call be invoked? => Is it "batched", e.g. periodically invoke Congress to compute new assignments? Which solver to use: http://www.coin-or.org/projects/ and http://www.coin-or.org/projects/PuLP.xml I think it may be useful to pass through an interface (e.g. an LP modeler to generate LP files in standard formats accepted by prevalent solvers) The mathematical program: We can (Orange) contribute to writing down in an informal way the program for this precise use case, if this can wait until January. Perhaps the objective may be to "minimize the number of servers whose usage is less than 50%", since the original policy "Not more than 1 server of type1 to have a load under 50%" need not necessarily have a solution. This may help to derive the "mappings" from Congress (rules to program equations, intermediary tables to program variables). For the "migration" use case: it may be useful to add some constraint representing the cost of migration, so that the solver computes a new assignment plan in which the maximum migration cost is not exceeded. To start with, perhaps the number of migrations? 
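As a toy rendering of the assignment problem sketched above, the following brute-forces the search that a real implementation would hand to an LP/ILP solver such as PuLP/CBC. All the VM demands, server capacities, and the objective (fewest servers under-used, then fewest migrations) are invented here purely for illustration.

```python
# Brute-force stand-in for the LP solver discussed above; a real system
# would express this as an (I)LP and hand it to PuLP/CBC. Data invented.
from itertools import product

vms = {'vm1': 30, 'vm2': 20, 'vm3': 40}        # cpu demand (%)
servers = {'s1': 100, 's2': 100}               # capacity (%)
current = {'vm1': 's1', 'vm2': 's1', 'vm3': 's2'}

def valid(assign):
    # Constraint: total demand on each server must fit its capacity.
    load = {s: 0 for s in servers}
    for vm, s in assign.items():
        load[s] += vms[vm]
    return all(load[s] <= servers[s] for s in servers)

def migrations(assign):
    # Cost term suggested above: number of VMs that must move.
    return sum(1 for vm in vms if assign[vm] != current[vm])

best = None
for combo in product(servers, repeat=len(vms)):
    assign = dict(zip(vms, combo))   # each VM on exactly one server
    if valid(assign):
        # Objective: first minimize servers used, then migration count.
        key = (len(set(assign.values())), migrations(assign))
        if best is None or key < best[0]:
            best = (key, assign)

plan = best[1]
print(plan)  # all three VMs fit on s1; only vm3 needs to move
```

Exhaustive search is exponential in the number of VMs, which is exactly why the thread is discussing a proper LP formulation; the sketch only shows what the variables ([vm, server] assignment), constraints (capacity), and objective would be.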
I will be away from the end of the week until 5th January. I will also discuss with colleagues to see how we can formalize contribution (congress+nfv poc). Rgds Ruby De : Tim Hinrichs [mailto:thinrichs at vmware.com] Envoyé : vendredi 12 décembre 2014 19:41 À : KRISHNASWAMY Ruby IMT/OLPS Cc : Ramki Krishnan (ramk at Brocade.com) Objet : Re: Placement and Scheduling via Policy There's a ton of good stuff here! So if we took Ramki's initial use case and combined it with Ruby's HA constraint, we'd have something like the following policy. // anti-affinity error (server, VM1, VM2) :- same_ha_group(VM1, VM2), nova:location(VM1, server), nova:location(VM2, server) // server-utilization error(server) :- type1_server(server), ceilometer:average_utilization(server, "cpu-util", avg), avg < 50 As a start, this seems plenty complex to me. Anti-affinity is great b/c it DOES NOT require a sophisticated solver; server-utilization is great because it DOES require a linear solver. Data the solver needs: - Ceilometer: cpu-utilization for all the servers - Nova: data as to where each VM is located - Policy: high-availability groups Questions: 1) Is there any more data the solver needs? Seems like it needs something about CPU-load for each VM. 2) Which solver should we be using? What does the linear program that we feed it look like? How do we translate the results of the linear solver into a collection of "migrate_VM" API calls? Maybe another few emails and then we set up a phone call. Tim On Dec 11, 2014, at 1:33 AM, > > wrote: Hello A) First a small extension to the use case that Ramki proposes - Add high availability constraint. - Assuming server-a and server-b are of same size and same failure model. [Later: Assumption of identical failure rates can be loosened. Instead of considering only servers as failure domains, can introduce other failure domains ==> not just an anti-affinity policy but a calculation from 99.99.. requirement to VM placements, e.g. 
] - For an exemplary maximum usage scenario, 53 physical servers could be under peak utilization (100%), 1 server (server-a) could be under partial utilization (50%) with 2 instances of type large.3 and 1 instance of type large.2, and 1 server (server-b) could be under partial utilization (37.5%) with 3 instances of type large.2. Call VM.one.large2 the large.2 VM in server-a, and call VM.two.large2 one of the large.2 VMs in server-b - VM.one.large2 and VM.two.large2 - When one of the large.3 instances mapped to server-a is deleted from physical server type 1, Policy 1 will be violated, since the overall utilization of server-a falls to 37.5%. - Various new placement(s) are described below VM.two.large2 must not be moved. Moving VM.two.large2 breaks the anti-affinity constraint. error (server, VM1, VM2) :- node (VM1, server1), node (VM2, server2), same_ha_group(VM1, VM2), equal(server1, server2); 1) New placement 1: Move 2 instances of large.2 to server-a. Overall utilization of server-a - 50%. Overall utilization of server-b - 12.5%. 2) New placement 2: Move 1 instance of large.3 to server-b. Overall utilization of server-a - 0%. Overall utilization of server-b - 62.5%. 3) New placement 3: Move 3 instances of large.2 to server-a. Overall utilization of server-a - 62.5%. Overall utilization of server-b - 0%. New placements 2 and 3 could be considered optimal, since they achieve maximal bin packing and open up the door for turning off server-a or server-b and maximizing energy efficiency. But new placement 3 breaks the client policy. BTW: what happens if a given situation does not allow the policy violation to be removed? B) Ramki's original use case can itself be extended: Adding additional constraints to the previous use case due to cases such as: - Server heterogeneity - CPU "pinning" - "VM groups" 
(and allocation) - Application interference - Refining on the statement "instantaneous energy consumption can be approximately measured using an overall utilization metric, which is a combination of CPU utilization, memory usage, I/O usage, and network usage" Let me know if this will interest you. Some (e.g. application interference) will need some time. E.g. benchmarking/profiling to classify VMs, etc. C) New placement plan execution - In Ramki's original use case, violation is detected at events such as VM delete. While certainly this by itself is sufficiently complex, we may need to consider other triggering cases (periodic or when multiple VMs are deleted/added) - In this case, it may not be sufficient to compute the new placement plan that brings the system to a configuration that does not break policy, but also add other goals D) Let me know if a use case such as placing "video conferencing servers" (geographically distributed clients) would suit you (multi-site scenario) => Or is it too premature? Ruby De : Tim Hinrichs [mailto:thinrichs at vmware.com] Envoyé : mercredi 10 décembre 2014 19:44 À : KRISHNASWAMY Ruby IMT/OLPS Cc : Ramki Krishnan (ramk at Brocade.com) Objet : Re: Placement and Scheduling via Policy Hi Ruby, Whatever information you think is important for the use case is good. Section 3 from one of the docs Ramki sent you covers his use case. https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1 From my point of view, the key things for the use case are: - The placement policy (i.e. the conditions under which VMs require migration). - A description of how we want to compute what specific migrations should be performed (a sketch of (1) the information that we need about current placements, policy violations, etc., and (2) what systems/algorithms/etc. can utilize that input to figure out what migrations to perform). 
I think we want to focus on the end-user/customer experience (write a policy, and watch the VMs move around to obey that policy in response to environment changes) and then work out the details of how to implement that experience. That's why I didn't include things like delays, asynchronous/synchronous, architecture, applications, etc. in my 2 bullets above. Tim On Dec 10, 2014, at 8:55 AM, > > wrote: Hi Ramki, Tim By a "format" for describing use cases, I meant to ask what sets of information to provide, for example: - what granularity in description of use case? - a specific placement policy (and perhaps citing reasons for needing such policy)? - Specific applications - Requirements on the placement manager itself (delay, ...)? o Architecture as well - Specific services from the placement manager (using Congress), such as: o Violation detection (load, security, ...) - Adapting (e.g. context-aware) of policies used In any case I will read the documents that Ramki has sent so as not to resend similar things. Regards Ruby De : Ramki Krishnan [mailto:ramk at Brocade.com] Envoyé : mercredi 10 décembre 2014 16:59 À : Tim Hinrichs; KRISHNASWAMY Ruby IMT/OLPS Cc : Norival Figueira; Pierre Ettori; Alex Yip; dilikris at in.ibm.com Objet : RE: Placement and Scheduling via Policy Hi Tim, This sounds like a plan. It would be great if you could add the links below to the Congress wiki. I am all for discussing this in the openstack-dev mailing list and at this point this discussion is completely open. 
IRTF NFVRG Research Group: https://trac.tools.ietf.org/group/irtf/trac/wiki/nfvrg IRTF NFVRG draft on NFVIaaS placement/scheduling (includes system analysis for the PoC we are thinking): https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1 IRTF NFVRG draft on Policy Architecture and Framework (looking forward to your comments and thoughts): https://datatracker.ietf.org/doc/draft-norival-nfvrg-nfv-policy-arch/?include_text=1 Hi Ruby, Looking forward to your use cases. Thanks, Ramki -------------- next part -------------- An HTML attachment was scrubbed... URL: From clint at fewbar.com Thu Dec 18 17:35:27 2014 From: clint at fewbar.com (Clint Byrum) Date: Thu, 18 Dec 2014 09:35:27 -0800 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <5490519A.3090804@hp.com> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> <548B8480.9010506@redhat.com> <548EFB12.2090303@hp.com> <1418660028-sup-6503@fewbar.com> <5490519A.3090804@hp.com> Message-ID: <1418923406-sup-4459@fewbar.com> Excerpts from Anant Patil's message of 2014-12-16 07:36:58 -0800: > On 16-Dec-14 00:59, Clint Byrum wrote: > > Excerpts from Anant Patil's message of 2014-12-15 07:15:30 -0800: > >> On 13-Dec-14 05:42, Zane Bitter wrote: > >>> On 12/12/14 05:29, Murugan, Visnusaran wrote: > >>>> > >>>> > >>>>> -----Original Message----- > >>>>> From: Zane Bitter [mailto:zbitter at redhat.com] > >>>>> Sent: Friday, December 12, 2014 6:37 AM > >>>>> To: 
openstack-dev at lists.openstack.org > >>>>> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept > >>>>> showdown > >>>>> > >>>>> On 11/12/14 08:26, Murugan, Visnusaran wrote: > >>>>>>>> [Murugan, Visnusaran] > >>>>>>>> In case of rollback where we have to cleanup earlier version of > >>>>>>>> resources, > >>>>>>> we could get the order from old template. We'd prefer not to have a > >>>>>>> graph table. > >>>>>>> > >>>>>>> In theory you could get it by keeping old templates around. But that > >>>>>>> means keeping a lot of templates, and it will be hard to keep track > >>>>>>> of when you want to delete them. It also means that when starting an > >>>>>>> update you'll need to load every existing previous version of the > >>>>>>> template in order to calculate the dependencies. It also leaves the > >>>>>>> dependencies in an ambiguous state when a resource fails, and > >>>>>>> although that can be worked around it will be a giant pain to implement. > >>>>>>> > >>>>>> > >>>>>> Agree that looking at all templates for a delete is not good. But > >>>>>> barring complexity, we feel we could achieve it by way of having an > >>>>>> update and a delete stream for a stack update operation. I will > >>>>>> elaborate in detail in the etherpad sometime tomorrow :) > >>>>>> > >>>>>>> I agree that I'd prefer not to have a graph table. After trying a > >>>>>>> couple of different things I decided to store the dependencies in the > >>>>>>> Resource table, where we can read or write them virtually for free > >>>>>>> because it turns out that we are always reading or updating the > >>>>>>> Resource itself at exactly the same time anyway. > >>>>>>> > >>>>>> > >>>>>> Not sure how this will work in an update scenario when a resource does > >>>>>> not change and its dependencies do. > >>>>> > >>>>> We'll always update the requirements, even when the properties don't > >>>>> change. > >>>>> > >>>> > >>>> Can you elaborate a bit on rollback. 
> >>> I didn't do anything special to handle rollback. It's possible that we > >>> need to - obviously the difference in the UpdateReplace + rollback case > >>> is that the replaced resource is now the one we want to keep, and yet > >>> the replaced_by/replaces dependency will force the newer (replacement) > >>> resource to be checked for deletion first, which is an inversion of the > >>> usual order. > >>> > >> > >> This is where the version is so handy! For UpdateReplaced ones, there is > >> an older version to go back to. This version could just be template ID, > >> as I mentioned in another e-mail. All resources are at the current > >> template ID if they are found in the current template, even if there is > >> no need to update them. Otherwise, they need to be cleaned-up in the > >> order given in the previous templates. > >> > >> I think the template ID is used as version as far as I can see in Zane's > >> PoC. If the resource template key doesn't match the current template > >> key, the resource is deleted. The version is a misnomer here, but that > >> field (template id) is used as though we had versions of resources. > >> > >>> However, I tried to think of a scenario where that would cause problems > >>> and I couldn't come up with one. Provided we know the actual, real-world > >>> dependencies of each resource I don't think the ordering of those two > >>> checks matters. > >>> > >>> In fact, I currently can't think of a case where the dependency order > >>> between replacement and replaced resources matters at all. It matters in > >>> the current Heat implementation because resources are artificially > >>> segmented into the current and backup stacks, but with a holistic view > >>> of dependencies that may well not be required. I tried taking that line > >>> out of the simulator code and all the tests still passed. If anybody can > >>> think of a scenario in which it would make a difference, I would be very > >>> interested to hear it. 
> >>> > >>> In any event though, it should be no problem to reverse the direction of > >>> that one edge in these particular circumstances if it does turn out to > >>> be a problem. > >>> > >>>> We had an approach with depends_on > >>>> and needed_by columns in ResourceTable. But dropped it when we figured out > >>>> we had too many DB operations for Update. > >>> > >>> Yeah, I initially ran into this problem too - you have a bunch of nodes > >>> that are waiting on the current node, and now you have to go look them > >>> all up in the database to see what else they're waiting on in order to > >>> tell if they're ready to be triggered. > >>> > >>> It turns out the answer is to distribute the writes but centralise the > >>> reads. So at the start of the update, we read all of the Resources, > >>> obtain their dependencies and build one central graph[1]. We then make > >>> that graph available to each resource (either by passing it as a > >>> notification parameter, or storing it somewhere central in the DB that > >>> they will all have to read anyway, i.e. the Stack). But when we update a > >>> dependency we don't update the central graph, we update the individual > >>> Resource so there's no global lock required. > >>> > >>> [1] > >>> https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/stack.py#L166-L168 > >>> > >> > >> A centralized graph and decision making will make the implementation far > >> simpler than distributed. This looks academic, but the simplicity > >> beats everything! When each worker has to decide, there needs to be a > >> lock; DB transactions alone are not enough. In contrast, when the > >> decision making is centralized, that particular critical section can be > >> attempted in a transaction and re-attempted if needed. > >> > > > > I'm concerned that we're losing sight of the whole point of convergence. > > > > Yes, concurrency is hard, and state management is really the only thing > > hard about concurrency. 
> > > > What Zane is suggesting is a lock-free approach commonly called 'Read > > Copy Update' or "RCU". It has high reader concurrency, but relies on > > there only being one writer. It is quite simple, and has proven itself > > enough to even be included in the Linux kernel: > > > > http://lwn.net/Articles/263130/ > > > I am afraid Zane is saying it the other way around: "distribute the writes but > centralize the reads". Nevertheless, what I meant by having centralized Oh I missed that the writes were happening by anybody, but the reads weren't. Thanks. > decision making is the same thing you are pointing to. By centralized I > mean: > 1. Have the graph in DB, not the edges distributed in the resource > table. Also, having the graph in DB doesn't mean we need to lock it each > time we update traversal information or compute the next set of ready > resources/tasks. > 2. Take the responsibility of what-to-do-next out of the worker. This is the > "writer" part, where upon receiving a notification the engine will > update the graph in DB and compute the next set of resources to be > converged (having all dependencies resolved). I agree that there will be > multiple instances of engine which could execute this section of code (this > code will become the critical section), so in that sense it is not > centralized. But this approach differs from workers making the decision, > where each worker reads from DB and updates the sync point. Instead, if > the workers send a "done" notification to the engine, the engine can update > the graph traversal and issue a request for the next set of tasks. All the > workers and observers read from DB, send notifications to the engine, while > the engine writes to DB. This may not be strictly followed. Please also > note that updating a graph doesn't mean that we have to lock anything. > As the notifications arrive, the graph is updated, in a TX, and > re-attempted if two notifications try to update the same row and the > TX fails.
The workers are *part of* the engine but not really *the > engine* itself. Given an instance of the engine, and multiple workers (greenlet > threads), I can see that it turns out to be what you are suggesting > about RCU. Please correct me if I am wrong. > So this is definitely interesting. RCU works extremely well if you have a high ratio of reads to writes and a natural single writer. For instance, if you have a game where there is a scoring system and no reader does anything that depends on writes that it causes itself. The views of the scores are all read and displayed, but updates to the score can just be thrown into an async queue and when the writer gets to them, the score increases, and the players display the new score the next time they read the score. In this case, though, reads are tightly coupled to writes, so I can say now after thinking about it that RCU is not a great fit. Letting each entity do an atomic transaction using the DB's row-level locks will scale out better than a single-writer pattern. I do see one advantage to having writes separate from workers. That is the fact that the Heat database will likely scale differently than the endpoints that Heat's plugins call. So it may make sense to have 100 workers doing calls to nova/swift/glance/cinder/etc., but you might only have a database that can reasonably do 10 transactions at once. This is premature optimization IMO, but it might make sense to have a disciplined approach to the code so that moving those bits apart later will be possible. > I do not insist that it has to be the way I am saying. I am merely > brainstorming to see if the "centralized" write makes sense in this > case. I also admit that I do not have performance data, and I am not a > DB expert. > Indeed, thanks for your thoughts, they are much appreciated. We have quite a few experts that may be reading this thread so if we say something stupid (likely) they will hopefully correct us.
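[Editorial note: the "update in a TX, re-attempt on conflict" scheme being discussed can be sketched as a guarded UPDATE. This is a minimal illustration using SQLite; the `resource` table, its columns, and the function name are hypothetical, not Heat's actual schema or code.]

```python
import sqlite3

def complete_resource(conn, resource_id, expected_version):
    """Optimistically mark one resource COMPLETE.

    The WHERE clause doubles as the conflict check: if another writer
    already completed (or replaced) this row, rowcount is 0 and the
    caller re-reads the graph and retries, instead of holding any lock.
    Hypothetical schema, for illustration only.
    """
    cur = conn.execute(
        "UPDATE resource SET state = 'COMPLETE' "
        "WHERE id = ? AND version = ? AND state != 'COMPLETE'",
        (resource_id, expected_version))
    conn.commit()
    return cur.rowcount == 1
```

A caller that gets `False` back treats the notification as already handled and moves on; that is the re-attempt path described above, with the database's row-level atomicity doing the arbitration.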
Until then we can discuss things logically and test our hypotheses as necessary. :) > >> With the distributed approach, I see the following drawbacks: > >> 1. Every time a resource is done, the peer resources (siblings) are > >> checked to see if they are done and the parent is propagated. This > >> happens for each resource. > > > > This is slightly inaccurate. > > > > Every time a resource is done, resources that depend on that resource > > are checked to see if they still have any unresolved dependencies. > > > > Yes. If we have a graph in DB, we should be able to easily compute this. > If the edges are distributed and kept along with the resources in the > resources, we might have to execute multiple queries or keep the graph > in memory. > > >> 2. The worker has to run through all the resources to see if the stack > >> is done, to mark it as completed. > > > > If a worker completes a resource which has no dependent resources, it > > only needs to check to see if all of the other edges of the graph are > > complete to mark the state as complete. There is no full traversal > > unless you want to make sure nothing regressed, which is not the way any > > of the specs read. > > > > I agree. For this same reason I am in favor of having the graph in DB. I > also noted that Zane is not against keeping the graph in DB, but only > that we store the *actual* dependencies in resources (may be the physical > resource ID?). This is fine I think, though we will be looking at the graph > table sometimes (create/update) and looking at these dependencies in the > resource table at other times (delete/clean-up?). The graph can > instantly tell if it is done or not when we look at it, but for > clean-ups and deletes we would have to rely on the resource table also. > It's important to get a schema that makes sense, but if need be, we can > adapt as we learn what does and doesn't work. I'd go with the simplest > schema that makes sense at first.
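[Editorial note: the check described above — when a resource finishes, look only at the resources that depend on it — amounts to recomputing readiness against the central graph. A toy sketch; the names are hypothetical and not taken from the Heat code.]

```python
def ready_nodes(requires, done):
    """Return nodes whose dependencies are all satisfied and that are
    not done themselves.  `requires` maps node -> set of nodes it
    depends on (the central graph read once at the start of the
    update); `done` is the set of completed nodes."""
    return {node for node, deps in requires.items()
            if node not in done and deps <= done}
```

Marking the stack complete falls out of the same data: the stack is done exactly when `done` covers every node in `requires`, with no full traversal beyond that set comparison.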
IMO one table for one entity is better than two tables for one entity. 1:1 relationships always seem to be about optimization, not simplification. > >> 3. The decision to converge is made by each worker resulting in lot of > >> contention. The centralized graph restricts the contention point to one > >> place where we can use DB transactions. It is easier to maintain code > >> where particular decisions are made at a place rather than at many > >> places. > > > > Why not let multiple workers use DB transactions? The contention happens > > _only if it needs to happen to preserve transaction consistency_ instead > > of _always_. > > > > Sure! When the workers are done with the current resource they can > update the DB and pick-up the parent if it is ready. The DB interactions > can happen as a TX. But then there would be no > one-writer-multiple-reader if we follow this. All the workers write and > read. > > >> 4. The complex part we are trying to solve is to decide on what to do > >> next when a resource is done. With centralized graph, this is abstracted > >> out to the DB API. The API will return the next set of nodes. A smart > >> SQL query can reduce a lot of logic currently being coded in > >> worker/engine. > > > > Having seen many such "smart" SQL queries, I have to say, this is > > terrifying. Complexity in database access is by far the single biggest > > obstacle to scaling out. > > > > I don't really know why you'd want logic to move into the database. It > > is the one place that you must keep simple in order to scale. We can > > scale out python like crazy.. but SQL is generally a single threaded, > > impossible to debug component. So make the schema obvious and accesses > > to it straight forward. > > > > I think we need to land somewhere between the two approaches though. > > Here is my idea for DB interaction, I realize now it's been in my head > > for a while but I never shared: > > > > CREATE TABLE resource ( > > id ..., > > ! 
all the stuff now > > version int, > > replaced_by int, > > complete_order int, > > primary key (id, version), > > key idx_replaced_by (replaced_by)); > > > > CREATE TABLE resource_dependencies ( > > id ...., > > version int, > > needed_by ... > > primary key (id, version, needed_by)); > > > > Then completion happens something like this: > > > > BEGIN > > SELECT @complete_order := MAX(complete_order) FROM resource WHERE stack_id = :stack_id: > > SET @complete_order := @complete_order + 1 > > UPDATE resource SET complete_order = @complete_order, state='COMPLETE' WHERE id=:id: AND version=:version:; > > ! if there is a replaced_version > > UPDATE resource SET replaced_by=:version: WHERE id=:id: AND version=:replaced_version:; > > SELECT DISTINCT r.id FROM resource r INNER JOIN resource_dependencies rd > > ON r.id = rd.resource_id AND r.version = rd.version > > WHERE r.version=:version: AND rd.needed_by=:id: AND r.state != 'COMPLETE' > > > > for id in results: > > convergequeue.submit(id) > > > > COMMIT > > > > Perhaps I've missed some revelation that makes this hard or impossible. > > But I don't see a ton of database churn (one update per completion is > > meh). I also don't see a lot of complexity in the query. The > > complete_order can be used to handle deletes in the right order later > > (note that I know that is probably the wrong way to do it and sequences > > are a thing that can be used for this). > > > >> 5. What would be the starting point for resource clean-up? The clean-up > >> has to start when all the resources are updated. With no centralized > >> graph, the DB has to be searched for all the resources with no > >> dependencies and with older versions (or having older template keys) and > >> start removing them. With centralized graph, this would be a simpler > >> with a SQL queries returning what needs to be done. The search space for > >> where to start with clean-up will be huge. > > > > "Searched" may be the wrong way. 
With the table structure above, you can > > find everything to delete with this query: > > > > ! Outright deletes > > SELECT r_old.id > > FROM resource r_old LEFT OUTER JOIN resource r_new > > ON r_old.id = r_new.id AND r_old.version = :cleanup_version: > > WHERE r_new.id IS NULL OR r_old.replaced_by IS NOT NULL > > ORDER BY r_old.complete_order DESC; > > > > That should delete everything in more or less the right order. I think > > for that one you can just delete the rows as they're confirmed deleted > > from the plugins, no large transaction needed since we'd not expect > > these rows to be updated anymore. > > > > Well, I meant simple SQL queries where we JOIN the graph table and > resource table to see if a resource can be taken up for convergence. It > is possible that the graph says a resource is ready since it has all the > dependencies satisfied, but has a previous version still in progress > from a previous update. By smart, I never meant any complex queries, but > only the ones which solve _our problem_. The queries which you have > suggested above are what I meant. I was not sure we were using the DB in > the way you are suggesting, hence called for utilizing it in a better > way. > Understood, and thanks for enduring my fear of SQL logic bombs. :) > >> 6. When the engine restarts, the search space on where to start will be > >> huge. With a centralized graph, the abstracted API to get the next set of > >> nodes makes the implementation of the decision simpler. > >> > >> I am convinced enough that it is simpler to assign the responsibility to > >> the engine on what needs to be done next. No locks will be required, not > >> even resource locks! It is simpler from an implementation, understanding > >> and maintenance perspective. > >> > > > > I thought you started saying you would need locks, but now saying you > > won't. > > No. We wanted to get rid of the stack lock we were using to avoid the > concurrency issues, and you had suggested using DB transactions.
We had > other ideas but DB TX looked cleaner to us and we are proceeding with > it. > Awesome. :)
From rakhmerov at mirantis.com Thu Dec 18 17:36:33 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Thu, 18 Dec 2014 23:36:33 +0600 Subject: [openstack-dev] [pecan] [WSME] Different content-type in request and response In-Reply-To: <834EF7CE-4EC1-496F-BFB1-464CEFD440A6@doughellmann.com> References: <93DF0931-7060-43B8-8B59-3F9C160B2075@mirantis.com> <9726CFD3-F9C0-4BDF-885B-543F473D1376@mirantis.com> <1A4A612A-FEBF-40C4-94D6-CA8D2C4BD362@mirantis.com> <834EF7CE-4EC1-496F-BFB1-464CEFD440A6@doughellmann.com> Message-ID: <540CE80F-1F73-4C37-8CC7-970ECAACEAC0@mirantis.com>
Ok, Doug, we'll look into it. Thanks Renat Akhmerov @ Mirantis Inc. > On 18 Dec 2014, at 22:59, Doug Hellmann wrote: > > > On Dec 18, 2014, at 2:53 AM, Renat Akhmerov > wrote: > >> Doug, >> >> Sorry for trying to resurrect this thread again. It seems to be pretty important for us. Do you have some comments on that? Or if you need more context please also let us know. > > WSME has separate handlers for JSON and XML now. You could look into adding one for YAML. I think you'd want to start looking in http://git.openstack.org/cgit/stackforge/wsme/tree/wsme/rest > > By default WSME is going to want to encode the response in the same format as the inputs, because it's going to expect the clients to want that. I'm not sure how hard it would be to change that assumption, or whether the other WSME developers would really think it's a good idea. > > Doug > >> >> Thanks >> >> Renat Akhmerov >> @ Mirantis Inc. >> >> >> >>> On 27 Nov 2014, at 17:43, Renat Akhmerov wrote: >>> >>> Doug, thanks for your answer! >>> >>> My explanations below..
>>> >>> >>>> On 26 Nov 2014, at 21:18, Doug Hellmann wrote: >>>> >>>> >>>> On Nov 26, 2014, at 3:49 AM, Renat Akhmerov wrote: >>>> >>>>> Hi, >>>>> >>>>> I traced the WSME code and found a place [0] where it tries to get arguments from the request body based on different mimetypes. So it looks like WSME supports only json, xml and "application/x-www-form-urlencoded". >>>>> >>>>> So my question is: Can we fix WSME to also support the "text/plain" mimetype? I think the first snippet that Nikolay provided is valid from the WSME standpoint. >>>> >>>> WSME is intended for building APIs with structured arguments. It seems like the case of wanting to use text/plain for a single input string argument just hasn't come up before, so this may be a new feature. >>>> >>>> How many different API calls do you have that will look like this? Would this be the only one in the API? Would it make sense to consistently use JSON, even though you only need a single string argument in this case? >>> >>> We have 5-6 API calls where we need it. >>> >>> And let me briefly explain the context. In Mistral we have a language (we call it DSL) to describe different object types: workflows, workbooks, actions. So currently when we upload, say, a workbook we run on the command line: >>> >>> mistral workbook-create my_wb.yaml >>> >>> where my_wb.yaml contains that DSL. The result is a table representation of the actually created server-side workbook. From a technical perspective we now have: >>> >>> Request: >>> >>> POST /mistral_url/workbooks >>> >>> { >>> "definition": "escaped content of my_wb.yaml" >>> } >>> >>> Response: >>> >>> { >>> "id": "1-2-3-4", >>> "name": "my_wb_name", >>> "description": "my workbook", >>> ... >>> } >>> >>> The point is that if we use, for example, something like "curl" we have to obtain that "escaped content of my_wb.yaml" every time and create that, in fact, synthetic JSON to be able to send it to the server side.
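[Editorial note: the escaping step being described can be automated on the client side; this hypothetical helper (not part of Mistral or its client) builds the `{"definition": ...}` envelope from the raw DSL text.]

```python
import json

def definition_payload(dsl_text):
    """Wrap the raw DSL text (the content of my_wb.yaml) in the JSON
    envelope the API above expects; json.dumps performs the quote and
    newline escaping that is tedious to do by hand with curl."""
    return json.dumps({"definition": dsl_text})
```

The returned string would be POSTed with `Content-Type: application/json`, and the server still replies with the JSON workbook representation.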
>>> So for us it would be much more convenient if we could just send plain text but still be able to receive JSON as the response. I personally don't want to use some other technology because generally WSME does its job and I like this concept of rest resources defined as classes. If it supported text/plain it would be just the best fit for us. >>> >>>>> >>>>> Or if we don't understand something in the WSME philosophy then it'd be nice to hear some explanations from the WSME team. Will appreciate that. >>>>> >>>>> >>>>> Another issue that previously came up is that if we use WSME then we can't pass an arbitrary set of parameters in a url query string; as I understand it, they should always correspond to the WSME resource structure. So, in fact, we can't have any dynamic parameters. In our particular use case it's very inconvenient. Hoping you could also provide some info about that: how it can be achieved or if we can just fix it. >>>> >>>> Ceilometer uses an array of query arguments to allow an arbitrary number. >>>> >>>> On the other hand, it sounds like perhaps your desired API may be easier to implement using some of the other tools being used, such as JSONSchema. Are you extending an existing API or building something completely new? >>> >>> We want to improve our existing Mistral API. Basically, the idea is to be able to apply dynamic filters when we're requesting a collection of objects using a url query string. Yes, we could use JSONSchema if you say it's absolutely impossible to do and doesn't follow WSME concepts, that's fine. But like I said, generally I like the approach that WSME takes and don't feel like jumping to another technology just because of this issue. >>> >>> Thanks for mentioning Ceilometer, we'll look at it and see if that works for us.
>>> Renat >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL:
From ramy.asselin at hp.com Thu Dec 18 17:42:50 2014 From: ramy.asselin at hp.com (Asselin, Ramy) Date: Thu, 18 Dec 2014 17:42:50 +0000 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4240C5@G4W3223.americas.hpqcorp.net>
Yes, Ubuntu 12.04 is tested as mentioned in the readme [1]. Note that the referenced script is just a wrapper that pulls all the latest from various locations in openstack-infra, e.g. [2]. Ubuntu 14.04 support is WIP [3] FYI, there's a spec to get an in-tree 3rd party ci solution [4]. Please add your comments if this interests you. [1] https://github.com/rasselin/os-ext-testing/blob/master/README.md [2] https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29 [3] https://review.openstack.org/#/c/141518/ [4] https://review.openstack.org/#/c/139745/ From: Punith S [mailto:punith.s at cloudbyte.com] Sent: Thursday, December 18, 2014 3:12 AM To: OpenStack Development Mailing List (not for usage questions); Eduard Matei Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi Eduard we tried running https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh on ubuntu master 12.04, and it appears to be working fine on 12.04.
thanks On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei > wrote: Hi, Seems I can't install using puppet on the jenkins master using install_master.sh from https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh because it's running Ubuntu 11.10 and it appears unsupported. I managed to install puppet manually on master and everything else fails. So I'm trying to manually install zuul and nodepool and jenkins job builder, see where I end up. The slave looks complete, got some errors on running install_slave so I ran parts of the script manually, changing some params and it appears installed but no way to test it without the master. Any ideas welcome. Thanks, Eduard On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy > wrote: Manually running the script requires a few environment settings. Take a look at the README here: https://github.com/openstack-infra/devstack-gate Regarding cinder, I'm using this repo to run our cinder jobs (fork from jaypipes). https://github.com/rasselin/os-ext-testing Note that this solution doesn't use the Jenkins gerrit trigger plugin, but zuul. There's a sample job for cinder here. It's in Jenkins Job Builder format. https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample You can ask more questions in IRC freenode #openstack-cinder. (irc# asselin) Ramy From: Eduard Matei [mailto:eduard.matei at cloudfounders.com] Sent: Tuesday, December 16, 2014 12:41 AM To: Bailey, Darragh Cc: OpenStack Development Mailing List (not for usage questions); OpenStack Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi, Can someone point me to some working documentation on how to set up third party CI?
(joinfu's instructions don't seem to work, and manually running devstack-gate scripts fails: Running gate_hook Job timeout set to: 163 minutes timeout: failed to run command '/opt/stack/new/devstack-gate/devstack-vm-gate.sh': No such file or directory ERROR: the main setup script run by this job failed - exit code: 127 please look at the relevant log files to determine the root cause Cleaning up host ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz) Build step 'Execute shell' marked build as failure. I have a working Jenkins slave with devstack and our internal libraries, I have Gerrit Trigger Plugin working and triggering on patches created, I just need the actual job contents so that it can get to comment with the test results. Thanks, Eduard On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei > wrote: Hi Darragh, thanks for your input. I double-checked the job settings and fixed it: - build triggers is set to Gerrit event - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin and tested separately) - Trigger on: Patchset Created - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: Type: Path, Pattern: ** (was Type Plain on both) Now the job is triggered by commit on openstack-dev/sandbox :) Regarding the Query and Trigger Gerrit Patches, I found my patch using query: status:open project:openstack-dev/sandbox change:139585 and I can trigger it manually and it executes the job. But I still have the problem: what should the job do? It doesn't actually do anything, it doesn't run tests or comment on the patch. Do you have an example of a job? Thanks, Eduard On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh > wrote: Hi Eduard, I would check the trigger settings in the job, particularly which "type" of pattern matching is being used for the branches. Found it tends to be the spot that catches most people out when configuring jobs with the Gerrit Trigger plugin.
If you're looking to trigger against all branches then you would want "Type: Path" and "Pattern: **" appearing in the UI. If you have sufficient access, using the 'Query and Trigger Gerrit Patches' page accessible from the main view will make it easier to confirm that your Jenkins instance can actually see changes in gerrit for the given project (which should mean that it can see the corresponding events as well). You can also use the same page to re-trigger for PatchsetCreated events to see if you've set the patterns on the job correctly. Regards, Darragh Bailey "Nothing is foolproof to a sufficiently talented fool" - Unknown On 08/12/14 14:33, Eduard Matei wrote: > Resending this to dev ML as it seems I get a quicker response :) > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > Patchset Created", chose as server the configured Gerrit server that > was previously tested, then added the project openstack-dev/sandbox > and saved. > I made a change on dev sandbox repo but couldn't trigger my job. > > Any ideas? > > Thanks, > Eduard > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > >> wrote: > > Hello everyone, > > Thanks to the latest changes to the creation of service accounts > process we're one step closer to setting up our own CI platform > for Cinder. > > So far we've got: > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > our storage solution) > - Service account configured and tested (can manually connect to > review.openstack.org and get events > and publish comments) > > Next step would be to set up a job to do the actual testing, this > is where we're stuck. > Can someone please point us to a clear example of how a job should > look (preferably for testing Cinder on Kilo)? Most links > we've found are broken, or tools/scripts are no longer working.
> Also, we cannot change the Jenkins master too much (it's owned by > Ops team and they need a list of tools/scripts to review before > installing/running so we're not allowed to experiment). > > Thanks, > Eduard > > -- > > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > > > > > *CloudFounders, The Private Cloud Software Company* > > Disclaimer: > This email and any files transmitted with it are confidential and > intended solely for the use of the individual or entity to whom > they are addressed.If you are not the named addressee or an > employee or agent responsible for delivering this message to the > named addressee, you are hereby notified that you are not > authorized to read, print, retain, copy or disseminate this > message or any part of it. If you have received this email in > error we request you to notify us by reply e-mail and to delete > all electronic files of the message. If you are not the intended > recipient you are notified that disclosing, copying, distributing > or taking any action in reliance on the contents of this > information is strictly prohibited. E-mail transmission cannot be > guaranteed to be secure or error free as information could be > intercepted, corrupted, lost, destroyed, arrive late or > incomplete, or contain viruses. The sender therefore does not > accept liability for any errors or omissions in the content of > this message, and shall have no liability for any loss or damage > suffered by the user, which arise as a result of e-mail transmission. 
> > > -- > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > *CloudFounders, The Private Cloud Software Company* > > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
-- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.matei at cloudfounders.com CloudFounders, The Private Cloud Software Company
_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-- regards, punith s cloudbyte.com -------------- next part -------------- An HTML attachment was scrubbed... URL:
From eduard.matei at cloudfounders.com Thu Dec 18 17:52:33 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Thu, 18 Dec 2014 19:52:33 +0200 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4240C5@G4W3223.americas.hpqcorp.net> References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4240C5@G4W3223.americas.hpqcorp.net> Message-ID:
Thanks for the input. I managed to get another master working (on Ubuntu 13.10), again with some issues since it was already set up. I'm now working towards setting up the slave. Will add comments to those reviews. Thanks, Eduard On Thu, Dec 18, 2014 at 7:42 PM, Asselin, Ramy wrote: > Yes, Ubuntu 12.04 is tested as mentioned in the readme [1]. Note that > the referenced script is just a wrapper that pulls all the latest from > various locations in openstack-infra, e.g. [2]. > > Ubuntu 14.04 support is WIP [3] > > FYI, there's a spec to get an in-tree 3rd party ci solution [4]. Please > add your comments if this interests you.
>
> [1] https://github.com/rasselin/os-ext-testing/blob/master/README.md
>
> [2] https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29
>
> [3] https://review.openstack.org/#/c/141518/
>
> [4] https://review.openstack.org/#/c/139745/
>
>
> *From:* Punith S [mailto:punith.s at cloudbyte.com]
> *Sent:* Thursday, December 18, 2014 3:12 AM
> *To:* OpenStack Development Mailing List (not for usage questions); Eduard Matei
>
> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI
>
> Hi Eduard
>
> we tried running
> https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
> on ubuntu master 12.04, and it appears to be working fine on 12.04.
>
> thanks
>
> On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei <
> eduard.matei at cloudfounders.com> wrote:
>
> Hi,
> Seems i can't install using puppet on the jenkins master using
> install_master.sh from
> https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
> because it's running Ubuntu 11.10 and it appears unsupported.
> I managed to install puppet manually on master and everything else fails
> So i'm trying to manually install zuul and nodepool and jenkins job
> builder, see where i end up.
>
> The slave looks complete, got some errors on running install_slave so i
> ran parts of the script manually, changing some params and it appears
> installed but no way to test it without the master.
>
> Any ideas welcome.
>
> Thanks,
>
> Eduard
>
> On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy
> wrote:
>
> Manually running the script requires a few environment settings. Take a
> look at the README here:
> https://github.com/openstack-infra/devstack-gate
>
> Regarding cinder, I'm using this repo to run our cinder jobs (fork from
> jaypipes).
>
> https://github.com/rasselin/os-ext-testing
>
> Note that this solution doesn't use the Jenkins gerrit trigger plugin,
> but zuul.
>
> There's a sample job for cinder here. It's in Jenkins Job Builder format.
> https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample
>
> You can ask more questions in IRC freenode #openstack-cinder. (irc# asselin)
>
> Ramy
>
> *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com]
> *Sent:* Tuesday, December 16, 2014 12:41 AM
> *To:* Bailey, Darragh
> *Cc:* OpenStack Development Mailing List (not for usage questions); OpenStack
> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI
>
> Hi,
>
> Can someone point me to some working documentation on how to set up third
> party CI? (joinfu's instructions don't seem to work, and manually running
> devstack-gate scripts fails:
>
> Running gate_hook
> Job timeout set to: 163 minutes
> timeout: failed to run command '/opt/stack/new/devstack-gate/devstack-vm-gate.sh': No such file or directory
> ERROR: the main setup script run by this job failed - exit code: 127
> please look at the relevant log files to determine the root cause
> Cleaning up host
> ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)
> Build step 'Execute shell' marked build as failure.
>
> I have a working Jenkins slave with devstack and our internal libraries, i
> have Gerrit Trigger Plugin working and triggering on patches created, i
> just need the actual job contents so that it can get to comment with the
> test results.
> > > > Thanks, > > > > Eduard > > > > On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei < > eduard.matei at cloudfounders.com> wrote: > > Hi Darragh, thanks for your input > > > > I double checked the job settings and fixed it: > > - build triggers is set to Gerrit event > > - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin > and tested separately) > > - Trigger on: Patchset Created > > - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: > Type: Path, Pattern: ** (was Type Plain on both) > > Now the job is triggered by commit on openstack-dev/sandbox :) > > > > Regarding the Query and Trigger Gerrit Patches, i found my patch using > query: status:open project:openstack-dev/sandbox change:139585 and i can > trigger it manually and it executes the job. > > > > But i still have the problem: what should the job do? It doesn't actually > do anything, it doesn't run tests or comment on the patch. > > Do you have an example of job? > > > > Thanks, > > Eduard > > > > On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh wrote: > > Hi Eduard, > > > I would check the trigger settings in the job, particularly which "type" > of pattern matching is being used for the branches. Found it tends to be > the spot that catches most people out when configuring jobs with the > Gerrit Trigger plugin. If you're looking to trigger against all branches > then you would want "Type: Path" and "Pattern: **" appearing in the UI. > > If you have sufficient access using the 'Query and Trigger Gerrit > Patches' page accessible from the main view will make it easier to > confirm that your Jenkins instance can actually see changes in gerrit > for the given project (which should mean that it can see the > corresponding events as well). Can also use the same page to re-trigger > for PatchsetCreated events to see if you've set the patterns on the job > correctly. 
> > Regards, > Darragh Bailey > > "Nothing is foolproof to a sufficiently talented fool" - Unknown > > On 08/12/14 14:33, Eduard Matei wrote: > > Resending this to dev ML as it seems i get quicker response :) > > > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > > Patchset Created", chose as server the configured Gerrit server that > > was previously tested, then added the project openstack-dev/sandbox > > and saved. > > I made a change on dev sandbox repo but couldn't trigger my job. > > > > Any ideas? > > > > Thanks, > > Eduard > > > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > > > wrote: > > > > Hello everyone, > > > > Thanks to the latest changes to the creation of service accounts > > process we're one step closer to setting up our own CI platform > > for Cinder. > > > > So far we've got: > > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > > our storage solution) > > - Service account configured and tested (can manually connect to > > review.openstack.org and get events > > and publish comments) > > > > Next step would be to set up a job to do the actual testing, this > > is where we're stuck. > > Can someone please point us to a clear example on how a job should > > look like (preferably for testing Cinder on Kilo)? Most links > > we've found are broken, or tools/scripts are no longer working. > > Also, we cannot change the Jenkins master too much (it's owned by > > Ops team and they need a list of tools/scripts to review before > > installing/running so we're not allowed to experiment). 
> >
> > Thanks,
> > Eduard
> >
> > --
> >
> > *Eduard Biceri Matei, Senior Software Developer*
> > www.cloudfounders.com | eduard.matei at cloudfounders.com
> >
> > *CloudFounders, The Private Cloud Software Company*
> >
> > --
> > *Eduard Biceri Matei, Senior Software Developer*
> > www.cloudfounders.com | eduard.matei at cloudfounders.com
> >
> > *CloudFounders, The Private Cloud Software Company*
> >
> > _______________________________________________
> > OpenStack-Infra mailing list
> > OpenStack-Infra at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>
> --
> *Eduard Biceri Matei, Senior Software Developer*
> www.cloudfounders.com | eduard.matei at cloudfounders.com
>
> *CloudFounders, The Private Cloud Software Company*
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> *Eduard Biceri Matei, Senior Software Developer*
> www.cloudfounders.com | eduard.matei at cloudfounders.com
>
> *CloudFounders, The Private Cloud Software Company*
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> regards,
> punith s
> cloudbyte.com

--
Eduard Biceri Matei, Senior Software Developer
www.cloudfounders.com | eduard.matei at cloudfounders.com

CloudFounders, The Private Cloud Software Company
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From john.griffith8 at gmail.com Thu Dec 18 18:09:20 2014
From: john.griffith8 at gmail.com (John Griffith)
Date: Thu, 18 Dec 2014 11:09:20 -0700
Subject: [openstack-dev] [OpenStack-dev][Cinder] Driver stats return value: infinite or unavailable
In-Reply-To: References: Message-ID:

On Thu, Dec 18, 2014 at 1:56 AM, Eduard Matei wrote:
> Hi everyone,
>
> We're in a bit of a predicament regarding review:
> https://review.openstack.org/#/c/130733/
>
> Two days ago it got a -1 from John G asking to change infinite to
> unavailable although the docs clearly say that "If the driver is unable to
> provide a value for free_capacity_gb or total_capacity_gb, keywords can be
> provided instead. Please use 'unknown' if the array cannot report the value
> or 'infinite' if the array has no upper limit."
> (http://docs.openstack.org/developer/cinder/devref/drivers.html)
>
> After i changed it, came Walter A. Boring IV and gave another -1 saying we
> should return infinite.
>
> Since we use S3 as a backend and it has no upper limit (technically there is
> a limit but for the purposes of our driver there's no limit as the backend
> is "elastic") we could return infinite.
>
> Anyway, the problem is that now we missed the K-1 merge window although the
> driver passed all tests (including cert tests).
>
> Thanks,
> Eduard
> --
>
> Eduard Biceri Matei, Senior Software Developer
> www.cloudfounders.com | eduard.matei at cloudfounders.com
>
> CloudFounders, The Private Cloud Software Company
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi Eduard,

First, I owe you an apology; I stated "unavailable" in the review, but should've stated "unknown". We're in the process of trying to eliminate the reporting of infinite as it screwed up the weighing scheduler.

Note that Zhiteng adjusted the scheduler so this isn't such a big deal anymore by down-grading the handling of infinite and unknown [1].
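[For context, a minimal sketch of the "unknown" reporting under discussion: the stats keys follow the Cinder devref quoted earlier in the thread, but the driver class itself is a hypothetical stand-in, not the driver from the review.]

```python
# Hedged sketch: a driver whose elastic backend cannot report a real
# capacity ceiling returns the keyword "unknown" (rather than "infinite")
# for the capacity fields, per the devref guidance quoted above.
class S3BackedDriver:
    """Toy stand-in for a Cinder volume driver with an elastic backend."""

    def get_volume_stats(self, refresh=False):
        # The backend has no meaningful upper limit, so report "unknown"
        # instead of a number or the soon-to-be-removed "infinite".
        return {
            'volume_backend_name': 'elastic_s3',   # hypothetical name
            'driver_version': '1.0',
            'storage_protocol': 'file',
            'total_capacity_gb': 'unknown',
            'free_capacity_gb': 'unknown',
        }
```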
Anyway, my suggestion to not use infinite is because in the coming weeks I'd like to remove infinite from the stats reporting altogether, and for those backends that for whatever reason don't know how much capacity they have use a more accurate report of "unknown".

Sorry for the confusion, I think the comments on your review have been updated to reflect this, if not I'll do that next.

Thanks,
John

[1]: https://github.com/openstack/cinder/commit/ee9d30a73a74a2e1905eacc561c1b5188b62ca75

From eduard.matei at cloudfounders.com Thu Dec 18 18:41:16 2014
From: eduard.matei at cloudfounders.com (Eduard Matei)
Date: Thu, 18 Dec 2014 20:41:16 +0200
Subject: [openstack-dev] [OpenStack-dev][Cinder] Driver stats return value: infinite or unavailable
In-Reply-To: References: Message-ID:

Thanks John, I updated to unknown.
Eduard

On Thu, Dec 18, 2014 at 8:09 PM, John Griffith wrote:
> On Thu, Dec 18, 2014 at 1:56 AM, Eduard Matei
> wrote:
> > Hi everyone,
> >
> > We're in a bit of a predicament regarding review:
> > https://review.openstack.org/#/c/130733/
> >
> > Two days ago it got a -1 from John G asking to change infinite to
> > unavailable although the docs clearly say that "If the driver is unable to
> > provide a value for free_capacity_gb or total_capacity_gb, keywords can be
> > provided instead. Please use 'unknown' if the array cannot report the value
> > or 'infinite' if the array has no upper limit."
> > (http://docs.openstack.org/developer/cinder/devref/drivers.html)
> >
> > After i changed it, came Walter A. Boring IV and gave another -1 saying we
> > should return infinite.
> >
> > Since we use S3 as a backend and it has no upper limit (technically there is
> > a limit but for the purposes of our driver there's no limit as the backend
> > is "elastic") we could return infinite.
> >
> > Anyway, the problem is that now we missed the K-1 merge window although the
> > driver passed all tests (including cert tests).
> >
> > So please can someone decide which is the correct value so we can use that
> > and get the patch approved (unless there are other issues).
> >
> > Thanks,
> > Eduard
> > --
> >
> > Eduard Biceri Matei, Senior Software Developer
> > www.cloudfounders.com | eduard.matei at cloudfounders.com
> >
> > CloudFounders, The Private Cloud Software Company
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> Hi Eduard,
>
> First, I owe you an apology; I stated "unavailable" in the review, but
> should've stated "unknown".
We're in the process of trying to
> eliminate the reporting of infinite as it screwed up the weighing
> scheduler.
>
> Note that Zhiteng adjusted the scheduler so this isn't such a big deal
> anymore by down-grading the handling of infinite and unknown [1].
>
> Anyway, my suggestion to not use infinite is because in the coming
> weeks I'd like to remove infinite from the stats reporting altogether,
> and for those backends that for whatever reason don't know how much
> capacity they have use a more accurate report of "unknown".
>
> Sorry for the confusion, I think the comments on your review have been
> updated to reflect this, if not I'll do that next.
>
> Thanks,
> John
>
> [1]: https://github.com/openstack/cinder/commit/ee9d30a73a74a2e1905eacc561c1b5188b62ca75
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Eduard Biceri Matei, Senior Software Developer
www.cloudfounders.com | eduard.matei at cloudfounders.com

CloudFounders, The Private Cloud Software Company
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From engg.sanj at gmail.com Thu Dec 18 19:03:54 2014
From: engg.sanj at gmail.com (Satyasanjibani Rautaray)
Date: Fri, 19 Dec 2014 00:33:54 +0530
Subject: [openstack-dev] [Fuel] Adding code to add node to fuel UI
In-Reply-To: References: Message-ID:

Hi Mike,

This document helped a lot. I may be missing something, for which I need some help; below are the details.

I have a vim.pp file for testing which will install vim on a particular node which is not part of the controller, compute, or any other OpenStack component nodes.

The current zabbix-server support under manager.py did something like this:

    from nailgun.utils.zabbix import ZabbixManager

    @classmethod
    def get_zabbix_url(cls, cluster):
        zabbix_node = ZabbixManager.get_zabbix_node(cluster)
        if zabbix_node is None:
            return None
        ip_cidr = cls.get_node_network_by_netname(
            zabbix_node, 'public'
        )['ip']
        ip = ip_cidr.split('/')[0]
        return 'http://{0}/zabbix'.format(ip)

at receiver.py:

    zabbix_url = objects.Cluster.get_network_manager(
        task.cluster
    ).get_zabbix_url(task.cluster)
    if zabbix_url:
        zabbix_suffix = " Access Zabbix dashboard at {0}".format(
            zabbix_url
        )
        message += zabbix_suffix

at task.py:

    from nailgun.utils.zabbix import ZabbixManager

    # check if there's a zabbix server in an environment
    # and if there is, remove hosts
    if ZabbixManager.get_zabbix_node(task.cluster):
        zabbix_credentials = ZabbixManager.get_zabbix_credentials(
            task.cluster
        )
        logger.debug("Removing nodes %s from zabbix" % (nodes_to_delete))
        try:
            ZabbixManager.remove_from_zabbix(
                zabbix_credentials, nodes_to_delete
            )
        except (errors.CannotMakeZabbixRequest,
                errors.ZabbixRequestError) as e:
            logger.warning("%s, skipping removing nodes from Zabbix", e)

and https://review.openstack.org/#/c/84408/39/nailgun/nailgun/utils/zabbix.py

I am not able to get how I can connect to the vim.pp file.

Thanks
Satya

On Wed, Dec 17, 2014 at 7:27 AM, Mike Scherbakov wrote:
>
> Hi,
> did you come across
> http://docs.mirantis.com/fuel-dev/develop/addition_examples.html ?
>
> I believe it should cover your use case.
>
> Thanks,
>
> On Tue, Dec 16, 2014 at 11:43 PM, Satyasanjibani Rautaray <
> engg.sanj at gmail.com> wrote:
>>
>> I just need to deploy the node and install my required packages.
>> On 17-Dec-2014 1:31 am, "Andrey Danin" wrote:
>>
>>> Hello.
>>>
>>> What version of Fuel do you use? Did you reupload openstack.yaml into
>>> Nailgun? Do you want just to deploy an operating system and configure a
>>> network on a new node?
>>>
>>> I would really appreciate it if you use a period at the end of sentences.
>>>
>>> On Tuesday, December 16, 2014, Satyasanjibani Rautaray <
>>> engg.sanj at gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> *i am in a process of creating an additional node by editing the code
>>>> where the new node will be solving a different purpose than installing
>>>> openstack components just for testing currently the new node will install
>>>> vim for me please help me what else i need to look into to create the
>>>> complete setup and deploy with fuel i have edited openstack.yaml at
>>>> /root/fuel-web/nailgun/nailgun/fixtures http://pastebin.com/P1MmDBzP
>>>> *
>>>> --
>>>> Thanks
>>>> Satya
>>>> Mob:9844101001
>>>>
>>>> No one is the best by birth, Its his brain/ knowledge which make him
>>>> the best.
>>>> >>> >>> >>> -- >>> Andrey Danin >>> adanin at mirantis.com >>> skype: gcon.monolake >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > -- > Mike Scherbakov > #mihgen > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Thanks Satya Mob:9844101001 No one is the best by birth, Its his brain/ knowledge which make him the best. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.a.st.pierre at gmail.com Thu Dec 18 19:08:10 2014 From: chris.a.st.pierre at gmail.com (Chris St. Pierre) Date: Thu, 18 Dec 2014 13:08:10 -0600 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: References: <5490967C.8020801@gmail.com> <5490A5D8.2090804@catalyst.net.nz> <20141216231940.GA61726@HQSML-1081034.cable.comcast.com> <0FBF5631AB7B504D89C7E6929695B6249302F16F@ORD1EXD02.RACKSPACE.CORP> <0FBF5631AB7B504D89C7E6929695B6249302F2A2@ORD1EXD02.RACKSPACE.CORP> Message-ID: I wasn't suggesting that we *actually* use filesystem link count, and make hard links or whatever for every time the image is used. That's prima facie absurd, for a lot more reasons that you point out. I was suggesting a new database field that tracks the number of times an image is in use, by *analogy* with filesystem link counts. (If I wanted to be unnecessarily abrasive I might say, "This is a textbook example of something called an analogy," but I'm not interested in being unnecessarily abrasive.) 
Overloading the protected flag is *still* a terrible hack. Even if we tracked the initial state of "protected" and restored that state when an image went out of use, that would negate the ability to make an image protected while it was in use and expect that change to remain in place. So that just violates the principle of least surprise.

Of course, we could have glance modify the "original_protected_state" flag when that flag is non-null and the user changes the actual "protected" flag, but this is just layering hacks upon hacks.

By actually tracking the number of times an image is in use, we can preserve the ability to protect images *and* avoid deleting images in use.

On Thu, Dec 18, 2014 at 5:27 AM, Kuvaja, Erno wrote:
> I think that's a horrible idea. How do we do that store-independent with
> the linking dependencies?
>
> We should not depend a universal use case like this on a limited subset of
> backends, specially non-OpenStack ones. Glance (nor Nova) should never
> depend on having direct access to the actual medium where the images are
> stored. I think this is a school book example for something called a database.
> Well arguable if this should be tracked at Glance or Nova, but definitely
> not a dirty hack expecting specific backend characteristics.
>
> As mentioned before the protected image property is to ensure that the
> image does not get deleted; that is also easy to track when the images are
> queried. Perhaps the record needs to track the original state of the protected
> flag, image id and use count. 3 column table and a couple of API calls. Let's
> not at least make it any more complicated than it needs to be if such
> functionality is desired.
>
> - Erno
>
> *From:* Nikhil Komawar [mailto:nikhil.komawar at RACKSPACE.COM]
> *Sent:* 17 December 2014 20:34
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [glance] Option to skip deleting images in use?
> > > > Guess that's an implementation detail. Depends on the way you go about > using what's available now, I suppose. > > > > Thanks, > -Nikhil > ------------------------------ > > *From:* Chris St. Pierre [chris.a.st.pierre at gmail.com] > *Sent:* Wednesday, December 17, 2014 2:07 PM > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [glance] Option to skip deleting images in > use? > > I was assuming atomic increment/decrement operations, in which case I'm > not sure I see the race conditions. Or is atomism assuming too much? > > > > On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar < > nikhil.komawar at rackspace.com> wrote: > > That looks like a decent alternative if it works. However, it would be > too racy unless we implement a test-and-set for such properties or there > is a different job which queues up these requests and performs sequentially > for each tenant. > > > > Thanks, > -Nikhil > ------------------------------ > > *From:* Chris St. Pierre [chris.a.st.pierre at gmail.com] > *Sent:* Wednesday, December 17, 2014 10:23 AM > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [glance] Option to skip deleting images in > use? > > That's unfortunately too simple. You run into one of two cases: > > > > 1. If the job automatically removes the protected attribute when an image > is no longer in use, then you lose the ability to use "protected" on images > that are not in use. I.e., there's no way to say, "nothing is currently > using this image, but please keep it around." (This seems particularly > useful for snapshots, for instance.) > > > > 2. If the job does not automatically remove the protected attribute, then > an image would be protected if it had ever been in use; to delete an image, > you'd have to manually un-protect it, which is a workflow that quite > explicitly defeats the whole purpose of flagging images as protected when > they're in use. 
> > > > It seems like flagging an image as *not* in use is actually a fairly > difficult problem, since it requires consensus among all components that > might be using images. > > > > The only solution that readily occurs to me would be to add something like > a filesystem link count to images in Glance. Then when Nova spawns an > instance, it increments the usage count; when the instance is destroyed, > the usage count is decremented. And similarly with other components that > use images. An image could only be deleted when its usage count was zero. > > > > There are ample opportunities to get out of sync there, but it's at least > a sketch of something that might work, and isn't *too* horribly hackish. > Thoughts? > > > > On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya > wrote: > > A simple solution that wouldn?t require modification of glance would be a > cron job > that lists images and snapshots and marks them protected while they are in > use. > > Vish > > > On Dec 16, 2014, at 3:19 PM, Collins, Sean < > Sean_Collins2 at cable.comcast.com> wrote: > > > On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote: > >> No, I'm looking to prevent images that are in use from being deleted. > "In > >> use" and "protected" are disjoint sets. > > > > I have seen multiple cases of images (and snapshots) being deleted while > > still in use in Nova, which leads to some very, shall we say, > > interesting bugs and support problems. > > > > I do think that we should try and determine a way forward on this, they > > are indeed disjoint sets. Setting an image as protected is a proactive > > measure, we should try and figure out a way to keep tenants from > > shooting themselves in the foot if possible. > > > > -- > > Sean M. 
Collins > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > > Chris St. Pierre > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > > Chris St. Pierre > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Chris St. Pierre -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Thu Dec 18 19:22:03 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Thu, 18 Dec 2014 14:22:03 -0500 Subject: [openstack-dev] [python-keystoneclient] python-keystoneclient 1.0.0 release Message-ID: <4DC25AB5-0EF0-4D3C-A60A-751F7A11EAED@gmail.com> The Keystone development community would like to announce the release of python-keystoneclient 1.0.0. The move to the 1.x.x development branch was made to match the perception that the library has long been considered stable. Beyond the move to the stable release version, this release is no different than any other python-keystoneclient release. 
This release can be installed from the following locations: * http://tarballs.openstack.org/python-keystoneclient * https://pypi.python.org/pypi/python-keystoneclient 1.0.0 ------- * Registered CLI Options will no longer use the default values instead of the ENV variables (if present) * The `curl` examples from the debug output now includes `--globoff` for ipv6 urls * HTTPClient will no longer incorrectly raise AttributeError if authentication has not occurred when checking if `.has_service_catalog` Detailed changes in this release beyond what is listed above: https://launchpad.net/python-keystoneclient/+milestone/1.0.0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Thu Dec 18 19:26:20 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Thu, 18 Dec 2014 14:26:20 -0500 Subject: [openstack-dev] [keystonemiddleware] Keystone Middleware 1.3.0 release Message-ID: <85602107-FA84-41DC-80E1-B66D7E742C53@gmail.com> The Keystone development community would like to announce the 1.3.0 release of the keystone middleware package. This release can be installed from the following locations: * http://tarballs.openstack.org/keystonemiddleware * https://pypi.python.org/pypi/keystonemiddleware 1.3.0 ------- * http_connect_timeout option is now an integer instead of a boolean. * The service user for auth_token middlware can now be in a domain other than the default domain. Detailed changes in this release beyond what is listed above: https://launchpad.net/keystonemiddleware/+milestone/1.3.0 From adanin at mirantis.com Thu Dec 18 20:32:36 2014 From: adanin at mirantis.com (Andrey Danin) Date: Fri, 19 Dec 2014 00:32:36 +0400 Subject: [openstack-dev] [Fuel] Adding code to add node to fuel UI In-Reply-To: References: Message-ID: It's enough for you to just create a new role in openstack.yaml and maybe some descriptions in UI components. Then you should capture this role in Puppet manifests. 
Look at the 'case' operator [1]. Just add a new case for your role and call your 'vim' class here. [1] https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/osnailyfacter/manifests/cluster_simple.pp#L227 On Thu, Dec 18, 2014 at 10:03 PM, Satyasanjibani Rautaray < engg.sanj at gmail.com> wrote: > > Hi Mike > > this Document helped a lot > > I may be missing something thing for which i need some help below is the > details for which i require some help. > > i have a vim.pp file for testing which will install vim on the particular > node which is not a part of controller or compute or any openstack > component nodes > > The current zabbix-server under the manager.py did something as below > > from nailgun.utils.zabbix import ZabbixManager > > @classmethod > def get_zabbix_url(cls, cluster): > zabbix_node = ZabbixManager.get_zabbix_node(cluster) > if zabbix_node is None: > return None > ip_cidr = cls.get_node_network_by_netname( > zabbix_node, 'public' > )['ip'] > ip = ip_cidr.split('/')[0] > return 'http://{0}/zabbix'.format(ip) > > > > at receiver.py > > > zabbix_url = objects.Cluster.get_network_manager( > task.cluster > ).get_zabbix_url(task.cluster) > > if zabbix_url: > zabbix_suffix = " Access Zabbix dashboard at {0}".format( > zabbix_url > ) > message += zabbix_suffix > > > > at task.py > > > > from nailgun.utils.zabbix import ZabbixManager > # check if there's a zabbix server in an environment > # and if there is, remove hosts > if ZabbixManager.get_zabbix_node(task.cluster): > zabbix_credentials = ZabbixManager.get_zabbix_credentials( > task.cluster > ) > logger.debug("Removing nodes %s from zabbix" % (nodes_to_delete)) > try: > ZabbixManager.remove_from_zabbix( > zabbix_credentials, nodes_to_delete > ) > except (errors.CannotMakeZabbixRequest, > errors.ZabbixRequestError) as e: > logger.warning("%s, skipping removing nodes from Zabbix", e) > > > > and > > https://review.openstack.org/#/c/84408/39/nailgun/nailgun/utils/zabbix.py > > > i am 
not able to get how can i connect to the vim.pp file > > Thanks > Satya > > > On Wed, Dec 17, 2014 at 7:27 AM, Mike Scherbakov > wrote: >> >> Hi, >> did you come across >> http://docs.mirantis.com/fuel-dev/develop/addition_examples.html ? >> >> I believe it should cover your use case. >> >> Thanks, >> >> On Tue, Dec 16, 2014 at 11:43 PM, Satyasanjibani Rautaray < >> engg.sanj at gmail.com> wrote: >>> >>> I just need to deploy the node and install my required packages. >>> On 17-Dec-2014 1:31 am, "Andrey Danin" wrote: >>> >>>> Hello. >>>> >>>> What version of Fuel do you use? Did you reupload openstack.yaml into >>>> Nailgun? Do you want just to deploy an operating system and configure a >>>> network on a new node? >>>> >>>> I would really appreciate if you use a period at the end of sentences. >>>> >>>> On Tuesday, December 16, 2014, Satyasanjibani Rautaray < >>>> engg.sanj at gmail.com> wrote: >>>> >>>>> Hi, >>>>> >>>>> *i am in a process of creating an additional node by editing the code >>>>> where the new node will be solving a different propose than installing >>>>> openstack components just for testing currently the new node will install >>>>> vim for me please help me what else i need to look into to create the >>>>> complete setup and deploy with fuel i have edited openstack.yaml at >>>>> /root/fuel-web/nailgun/nailgun/fixtures http://pastebin.com/P1MmDBzP >>>>> * >>>>> -- >>>>> Thanks >>>>> Satya >>>>> Mob:9844101001 >>>>> >>>>> No one is the best by birth, Its his brain/ knowledge which make him >>>>> the best. 
>>>>> >>>> >>>> >>>> -- >>>> Andrey Danin >>>> adanin at mirantis.com >>>> skype: gcon.monolake >>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> -- >> Mike Scherbakov >> #mihgen >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > -- > Thanks > Satya > Mob:9844101001 > > No one is the best by birth, Its his brain/ knowledge which make him the > best. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Andrey Danin adanin at mirantis.com skype: gcon.monolake -------------- next part -------------- An HTML attachment was scrubbed... URL: From smelikyan at mirantis.com Thu Dec 18 20:56:34 2014 From: smelikyan at mirantis.com (Serg Melikyan) Date: Thu, 18 Dec 2014 12:56:34 -0800 Subject: [openstack-dev] [Murano] Kilo-1 Dev Milestone Released! Message-ID: Hi folks, I am happy to announce that first Kilo milestone is now available. You can download the kilo-1 release and review the changes here: https://launchpad.net/murano/kilo/kilo-1 With this milestone we release several new and important features, that I would like to kindly ask to try and play with: - Handle auth expiration for long-running deployments - Per-class configuration files We added support for long-running deployments in Murano. 
Previously deployment time was restricted by token expiration time, in case when user started deployment close to token expiration time, deployment was failing on Heat stack creation. We've also implemented support for per-class configuration files during this milestone. Murano may be easily extended, for example with support for different third-party services, like monitoring or firewall. You can find demo-example of such extension here: https://github.com/sergmelikyan/murano/tree/third-party In this example we add ZabbixApi class that handles interaction with Zabbix monitoring system installed outside of the cloud and exposes API to all the applications in the catalog, giving ability to configure monitoring for themselves: https://github.com/sergmelikyan/murano/blob/third-party/murano/engine/contrib/zabbix.py Obviously we need to store credentials for Zabbix somewhere, and previously it was done in main Murano configuration file. Now each class may have own configuration file, with nice ability to automatically fill class properties by configuration values. Unfortunately these features are not yet documented, please refer to commit messages and implementation for details. We would be happy for any contribution to Murano and especially contribution to our documentation. * https://review.openstack.org/134183 * https://review.openstack.org/119042 Thank you! -- Serg Melikyan, Senior Software Engineer at Mirantis, Inc. http://mirantis.com | smelikyan at mirantis.com +7 (495) 640-4904, 0261 +7 (903) 156-0836 -------------- next part -------------- An HTML attachment was scrubbed... URL: From smelikyan at mirantis.com Thu Dec 18 21:13:45 2014 From: smelikyan at mirantis.com (Serg Melikyan) Date: Thu, 18 Dec 2014 13:13:45 -0800 Subject: [openstack-dev] [Murano] Kilo-1 Dev Milestone Released! 
In-Reply-To: References: Message-ID: I forgot to mention that we also released handful of well-tested Murano application, that can be used both as example and as ready applications for your cloud. Application are available in our new repository: https://github.com/stackforge/murano-apps We released following applications in this milestone: - WordPress - Zabbix Monitoring - Apache HttpServer - Apache Tomcat - MySql - PostgreSql On Thu, Dec 18, 2014 at 12:56 PM, Serg Melikyan wrote: > > Hi folks, > > I am happy to announce that first Kilo milestone is now available. You can > download the kilo-1 release and review the changes here: > https://launchpad.net/murano/kilo/kilo-1 > > With this milestone we release several new and important features, that I > would like to kindly ask to try and play with: > > - Handle auth expiration for long-running deployments > - Per-class configuration files > > We added support for long-running deployments in Murano. Previously > deployment time was restricted by token expiration time, in case when user > started deployment close to token expiration time, deployment was failing > on Heat stack creation. > > We've also implemented support for per-class configuration files during > this milestone. Murano may be easily extended, for example with support for > different third-party services, like monitoring or firewall. You can find > demo-example of such extension here: > https://github.com/sergmelikyan/murano/tree/third-party > > In this example we add ZabbixApi class that handles interaction with > Zabbix monitoring system installed outside of the cloud and exposes API to > all the applications in the catalog, giving ability to configure monitoring > for themselves: > https://github.com/sergmelikyan/murano/blob/third-party/murano/engine/contrib/zabbix.py > > Obviously we need to store credentials for Zabbix somewhere, and > previously it was done in main Murano configuration file. 
Now each class > may have own configuration file, with nice ability to automatically fill > class properties by configuration values. > > Unfortunately these features are not yet documented, please refer to > commit messages and implementation for details. We would be happy for any > contribution to Murano and especially contribution to our documentation. > > * https://review.openstack.org/134183 > * https://review.openstack.org/119042 > > Thank you! > > -- > Serg Melikyan, Senior Software Engineer at Mirantis, Inc. > http://mirantis.com | smelikyan at mirantis.com > > +7 (495) 640-4904, 0261 > +7 (903) 156-0836 > -- Serg Melikyan, Senior Software Engineer at Mirantis, Inc. http://mirantis.com | smelikyan at mirantis.com +7 (495) 640-4904, 0261 +7 (903) 156-0836 -------------- next part -------------- An HTML attachment was scrubbed... URL: From baoli at cisco.com Thu Dec 18 22:18:47 2014 From: baoli at cisco.com (Robert Li (baoli)) Date: Thu, 18 Dec 2014 22:18:47 +0000 Subject: [openstack-dev] [nova][sriov] SRIOV related specs pending for approval Message-ID: Hi, During the Kilo summit, the folks in the pci passthrough and SR-IOV groups discussed what we?d like to achieve in this cycle, and the result was documented in this Etherpad: https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough To get the work going, we?ve submitted a few design specs: Nova: Live migration with macvtap SR-IOV https://blueprints.launchpad.net/nova/+spec/sriov-live-migration Nova: sriov interface attach/detach https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach Nova: Api specify vnic_type https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type Neutron-Network settings support for vnic-type https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type Nova: SRIOV scheduling with stateless offloads https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads Now that the specs deadline is approaching, I?d like to 
bring them up here for exception considerations. A lot of work has been put into them, and we'd like to see them get through for Kilo. Regarding CI for PCI passthrough and SR-IOV, see the attached thread. thanks, Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded message was scrubbed... From: Irena Berezovsky Subject: RE: CI for NUMA, SR-IOV, and other features that can't be tested on current infra. Date: Sun, 16 Nov 2014 13:31:02 +0000 Size: 9231 URL: From mriedem at linux.vnet.ibm.com Thu Dec 18 22:18:51 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Thu, 18 Dec 2014 16:18:51 -0600 Subject: [openstack-dev] [nova][neutron] 12/18 nova meeting and k-1 milestone target recap highlights (and low-lights) Message-ID: <549352CB.9020903@linux.vnet.ibm.com> In the Nova meeting today [1] we went over the k-1 milestone targets [2] in open discussion. I updated the etherpad with my notes. For the most part things are progressing nicely. The only thing that probably needs to be mentioned here is that there was apparently some disconnect between the summit and now about who was doing what for the nova-network -> neutron migration, or how, i.e. was the spec supposed to be in neutron or nova? The summit etherpad is here [3]. Kyle Mestery said he'd figure out what's going on there and get some information back to the mailing list. This is listed as a project priority for Kilo [4] and if nova needed a spec the approval deadline was today, so we'd have to talk about exceptions in k-2 for this. Speaking of, the k-2 spec exception process is going to be discussed the first week of January, then expect the details in the mailing list after that. If you feel there is nothing to do between now and then, remember we have an etherpad with review priorities [5]. 
Or, you know, close your laptop and spend time with friends and family over the break that at least some of us should be taking. :) [1] http://eavesdrop.openstack.org/meetings/nova/2014/nova.2014-12-18-21.00.log.html [2] https://etherpad.openstack.org/p/kilo-nova-milestone-targets [3] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron [4] http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html#nova-network-neutron-migration [5] https://etherpad.openstack.org/p/kilo-nova-priorities-tracking -- Thanks, Matt Riedemann From boris at pavlovic.me Thu Dec 18 22:16:16 2014 From: boris at pavlovic.me (Boris Pavlovic) Date: Fri, 19 Dec 2014 02:16:16 +0400 Subject: [openstack-dev] [rally] Rally scenario for network scale with VMs In-Reply-To: References: Message-ID: Ajay, Sorry for long reply. At this point Rally supports only benchmarking from temporary created users and tenants. Fortunately today we merged this Network context class: https://review.openstack.org/#/c/103306/96 it creates any amount of networks for each rally temporary tenant. So basically you can use it and extend current benchmark scenarios in rally/benchmark/scenarios/nova/ or add new one that will attach N networks to created VM (which is just few lines of code). So task is quite easy resolvable now. Best regards, Boris Pavlovic On Wed, Nov 26, 2014 at 9:54 PM, Ajay Kalambur (akalambu) < akalambu at cisco.com> wrote: > > Hi > Is there a Rally scenario under works where we create N networks and > associate N Vms with each network. 
> This would be a decent stress tests of neutron > Is there any such scale scenario in works > I see scenario for N networks, subnet creation and a separate one for N VM > bootups > I am looking for an integration of these 2 > > > > Ajay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mestery at mestery.com Thu Dec 18 22:57:34 2014 From: mestery at mestery.com (Kyle Mestery) Date: Thu, 18 Dec 2014 16:57:34 -0600 Subject: [openstack-dev] [nova][neutron] 12/18 nova meeting and k-1 milestone target recap highlights (and low-lights) In-Reply-To: <549352CB.9020903@linux.vnet.ibm.com> References: <549352CB.9020903@linux.vnet.ibm.com> Message-ID: On Thu, Dec 18, 2014 at 4:18 PM, Matt Riedemann wrote: > > In the Nova meeting today [1] we went over the k-1 milestone targets [2] > in open discussion. I updated the etherpad with my notes. > > For the most part things are progressing nicely, the only thing that > probably needs to be mentioned here is there was apparently some disconnect > between the summit and now about who was doing what for the nova-network -> > neutron migration, or how, i.e. was the spec supposed to be in neutron or > nova? The summit etherpad is here [3]. > > Kyle Mestery said he'd figure out what's going on there and get some > information back to the mailing list. This is listed as a project priority > for Kilo [4] and if nova needed a spec the approval deadline was today, so > we'd have to talk about exceptions in k-2 for this. > > We have a first cut spec for this in neutron now [6]. I'd encourage all nova folks to review this one and provide comments there. It's a WIP now, we'll work to get it in shape. It's unclear if we'll need a nova side spec, we'll get that sorted by tomorrow. 
Thanks, Kyle [6] https://review.openstack.org/#/c/142456/2 > Speaking of, the k-2 spec exception process is going to be discussed the > first week of January, then expect the details in the mailing list after > that. If you feel there is nothing to do between now and then, remember we > have an etherpad with review priorities [5]. Or, you know, close your > laptop and spend time with friends and family over the break that at least > some of us should be taking. :) > > [1] http://eavesdrop.openstack.org/meetings/nova/2014/nova. > 2014-12-18-21.00.log.html > [2] https://etherpad.openstack.org/p/kilo-nova-milestone-targets > [3] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron > [4] http://specs.openstack.org/openstack/nova-specs/ > priorities/kilo-priorities.html#nova-network-neutron-migration > [5] https://etherpad.openstack.org/p/kilo-nova-priorities-tracking > > -- > > Thanks, > > Matt Riedemann > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dzimine at stackstorm.com Thu Dec 18 23:39:30 2014 From: dzimine at stackstorm.com (Dmitri Zimine) Date: Thu, 18 Dec 2014 15:39:30 -0800 Subject: [openstack-dev] [Mistral] For-each In-Reply-To: References: <58CB487D-D8AB-401C-8ABC-CF3F088552DC@stackstorm.com> Message-ID: <435DE1CD-82A0-48E2-B69C-1853A864ACAD@stackstorm.com> Based on the feedback so far, I updated the document and added some more details from the comments and discussions. We still think for-each as a keyword confuses people by setting up behavior expectations (e.g., that it will run sequentially, that you can work with data inside the loop, that you can "nest" for-each loops) - while it's not a loop at all, just a way to run actions that do not accept arrays of data, with arrays of data. 
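(To make the distinction concrete, here is a plain-Python sketch — not Mistral code, and `run_for_each` with its arguments is an invented name: the construct behaves like a bounded concurrent map, where each action receives a single item and the results are published once, as an array aligned with the input.)

```python
from concurrent.futures import ThreadPoolExecutor

def run_for_each(action, items, concurrency=2):
    # Not a loop body the workflow can "step through": each item is handed
    # to the action independently, with at most `concurrency` in flight —
    # concurrency is a policy of how the engine runs the task, not part of
    # the workflow logic itself.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        # Executor.map() preserves input order, so what gets "published"
        # at the end is one array of per-item results, analogous to the
        # "$" array of iteration results discussed above.
        return list(pool.map(action, items))

# A toy "action" that accepts a single item, not an array:
results = run_for_each(lambda host: {"host": host, "status": "ok"},
                       ["node-1", "node-2", "node-3"])
# results == [{"host": "node-1", "status": "ok"},
#             {"host": "node-2", "status": "ok"},
#             {"host": "node-3", "status": "ok"}]
```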
But no better idea on the keyword just yet. DZ. On Dec 15, 2014, at 10:53 PM, Renat Akhmerov wrote: > Thanks Nikolay, > > I also left my comments and tend to like Alt2 better than others. Agree with Dmitri that the "all-permutations" thing can just be a different construct in the language and "concurrency" should be rather a policy than a property of "for-each" because it doesn't have any impact on workflow logic itself, it only influences the way the engine runs a task. So again, policies are engine capabilities, not workflow ones. > > One tricky question that's still in the air is how to deal with publishing. I mean in terms of requirements it's pretty clear: we need to apply "publish" once after all iterations and be able to access an array of iteration results as $. But technically, it may be a problem to implement such behavior, need to think about it more. > > Renat Akhmerov > @ Mirantis Inc. > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From akalambu at cisco.com Fri Dec 19 00:06:48 2014 From: akalambu at cisco.com (Ajay Kalambur (akalambu)) Date: Fri, 19 Dec 2014 00:06:48 +0000 Subject: [openstack-dev] [rally] Rally scenario for network scale with VMs In-Reply-To: Message-ID: I created a new scenario which scales on network and VM at the same time. If you have no objection I would like to send out a review this week. I actually have the following reviews to do next week: 1. Scenario for network stress, i.e. VM + network with subnet with unique cidr. So in future we can add a router to this and do apt-get update from all VMs and test network scale 2. Scenario for booting VMs on every compute node in the cloud. This has a dependency right now: admin access. So for this I need Item 3 3. Provide an ability to make rally created users have admin access for things like forced host scheduling. Planning to add this to user context 4. 
Iperf scenario we discussed If you have objection to these I can submit reviews for these. Have the code need to write unit tests for the scenarios since looking at other reviews that seems to be the case Ajay From: Boris Pavlovic > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, December 18, 2014 at 2:16 PM To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] [rally] Rally scenario for network scale with VMs Ajay, Sorry for long reply. At this point Rally supports only benchmarking from temporary created users and tenants. Fortunately today we merged this Network context class: https://review.openstack.org/#/c/103306/96 it creates any amount of networks for each rally temporary tenant. So basically you can use it and extend current benchmark scenarios in rally/benchmark/scenarios/nova/ or add new one that will attach N networks to created VM (which is just few lines of code). So task is quite easy resolvable now. Best regards, Boris Pavlovic On Wed, Nov 26, 2014 at 9:54 PM, Ajay Kalambur (akalambu) > wrote: Hi Is there a Rally scenario under works where we create N networks and associate N Vms with each network. This would be a decent stress tests of neutron Is there any such scale scenario in works I see scenario for N networks, subnet creation and a separate one for N VM bootups I am looking for an integration of these 2 Ajay _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From bpavlovic at mirantis.com Fri Dec 19 00:40:30 2014 From: bpavlovic at mirantis.com (Boris Pavlovic) Date: Fri, 19 Dec 2014 04:40:30 +0400 Subject: [openstack-dev] [rally] Rally scenario for network scale with VMs In-Reply-To: References: Message-ID: Ajay, Oh looks you are working hard on Rally. 
I am excited about your patches. By the way, one small advice is to share results and ideas that you have with community ASAP. it's perfectly to publish "not ideal" patches on review even if they don't have unit tests and doesn't work at all. Because community can help with advices and safe a lot of your time. 2. Scenario for booting Vms on every compute node in the cloud?.This has a > dependency right now this is admin access. So for this I need Item 3 > 3. Provide an ability to make rally created users have admin access for > things like forced host scheduling . Planning to add this to user context Not quite sure that you need point 3. If you put on scenario: @validation.required_openstack(admin=True, users=True). You'll have admin in scenario. So you can execute from him commands. E.g.: self.admin_clients("neutron").... (it's similar to self.clients but with admin access) That's how live_migrate benchmark is implemented: https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L253 Does this make sense? 4. Iperf scenario we discussed Nice! Best regards, Boris Pavlovic On Fri, Dec 19, 2014 at 4:06 AM, Ajay Kalambur (akalambu) < akalambu at cisco.com> wrote: > > I created a new scenario which scales on network and VM at same time. > If you have no objection I would like to send out a review this week > I actually have following reviews to do next week > > 1. Scenario for network stress I.e VM+ network with subnet with unique > cidr. So in future we can add a router to this and do apt-get update from > all Vms and test network scale > 2. Scenario for booting Vms on every compute node in the cloud?.This > has a dependency right now this is admin access. So for this I need Item 3 > 3. Provide an ability to make rally created users have admin access > for things like forced host scheduling . Planning to add this to user > context > 4. Iperf scenario we discussed > > If you have objection to these I can submit reviews for these. 
Have the > code need to write unit tests for the scenarios since looking at other > reviews that seems to be the case > > Ajay > > > From: Boris Pavlovic > Reply-To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > Date: Thursday, December 18, 2014 at 2:16 PM > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > Subject: Re: [openstack-dev] [rally] Rally scenario for network scale > with VMs > > Ajay, > > Sorry for long reply. > > At this point Rally supports only benchmarking from temporary created > users and tenants. > > Fortunately today we merged this Network context class: > https://review.openstack.org/#/c/103306/96 > it creates any amount of networks for each rally temporary tenant. > > So basically you can use it and extend current benchmark scenarios in > rally/benchmark/scenarios/nova/ or add new one that will attach N networks > to created VM (which is just few lines of code). So task is quite easy > resolvable now. > > > Best regards, > Boris Pavlovic > > > On Wed, Nov 26, 2014 at 9:54 PM, Ajay Kalambur (akalambu) < > akalambu at cisco.com> wrote: >> >> Hi >> Is there a Rally scenario under works where we create N networks and >> associate N Vms with each network. 
>> This would be a decent stress tests of neutron >> Is there any such scale scenario in works >> I see scenario for N networks, subnet creation and a separate one for N >> VM bootups >> I am looking for an integration of these 2 >> >> >> >> Ajay >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Fri Dec 19 00:44:31 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 18 Dec 2014 19:44:31 -0500 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: References: <5490967C.8020801@gmail.com> <5490A5D8.2090804@catalyst.net.nz> <20141216231940.GA61726@HQSML-1081034.cable.comcast.com> <0FBF5631AB7B504D89C7E6929695B6249302F16F@ORD1EXD02.RACKSPACE.CORP> <0FBF5631AB7B504D89C7E6929695B6249302F2A2@ORD1EXD02.RACKSPACE.CORP> Message-ID: <549374EF.8040804@gmail.com> On 12/18/2014 02:08 PM, Chris St. Pierre wrote: > I wasn't suggesting that we *actually* use filesystem link count, and > make hard links or whatever for every time the image is used. That's > prima facie absurd, for a lot more reasons that you point out. I was > suggesting a new database field that tracks the number of times an image > is in use, by *analogy* with filesystem link counts. (If I wanted to be > unnecessarily abrasive I might say, "This is a textbook example of > something called an analogy," but I'm not interested in being > unnecessarily abrasive.) > > Overloading the protected flag is *still* a terrible hack. 
Even if we > tracked the initial state of "protected" and restored that state when an > image went out of use, that would negate the ability to make an image I guess I don't understand what you consider to be overloading of the protected flag. The original purpose of the protected flag was to protect images from being deleted. Best, -jay > protected while it was in use and expect that change to remain in place. > So that just violates the principle of least surprise. Of course, we > could have glance modify the "original_protected_state" flag when that > flag is non-null and the user changes the actual "protected" flag, but > this is just layering hacks upon hacks. By actually tracking the number > of times an image is in use, we can preserve the ability to protect > images *and* avoid deleting images in use. > > On Thu, Dec 18, 2014 at 5:27 AM, Kuvaja, Erno > > wrote: > > I think that's horrible idea. How do we do that store independent > with the linking dependencies? > > We should not depend universal use case like this on limited subset > of backends, specially non-OpenStack ones. Glance (nor Nova) should > never depend having direct access to the actual medium where the > images are stored. I think this is school book example for something > called database. Well arguable if this should be tracked at Glance > or Nova, but definitely not a dirty hack expecting specific backend > characteristics. > > As mentioned before the protected image property is to ensure that > the image does not get deleted, that is also easy to track when the > images are queried. Perhaps the record needs to track the original > state of protected flag, image id and use count. 3 column table and > couple of API calls.
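Erno's "3 column table and couple of API calls", combined with Chris's point about atomic increment/decrement, could be sketched like this. Schema and function names are invented for illustration (this is not Glance's real schema), and SQLite merely stands in for whatever database would actually back it:

```python
import sqlite3

# One small table recording each image's original "protected" flag plus a
# use count. Increment and decrement are single UPDATE statements, so
# concurrent callers never perform a read-modify-write race in
# application code.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE image_usage (
                  image_id TEXT PRIMARY KEY,
                  orig_protected INTEGER NOT NULL,
                  use_count INTEGER NOT NULL)""")

def acquire(image_id, protected=False):
    # Upsert: first use inserts the row, later uses atomically bump the count.
    db.execute("""INSERT INTO image_usage (image_id, orig_protected, use_count)
                  VALUES (?, ?, 1)
                  ON CONFLICT(image_id)
                  DO UPDATE SET use_count = use_count + 1""",
               (image_id, int(protected)))

def release(image_id):
    db.execute("UPDATE image_usage SET use_count = use_count - 1 "
               "WHERE image_id = ? AND use_count > 0", (image_id,))

def deletable(image_id):
    row = db.execute("SELECT orig_protected, use_count FROM image_usage "
                     "WHERE image_id = ?", (image_id,)).fetchone()
    if row is None:
        return True  # never used; only the ordinary protected flag applies
    orig_protected, count = row
    return count == 0 and not orig_protected
```

Because the original protected state is stored separately from the use count, "protected because an operator said so" and "undeletable because in use" stay distinct, which is exactly the overloading problem being argued about above.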
Lets not at least make it any more complicated > than it needs to be if such functionality is desired.____ > > __ __ > > __-__Erno____ > > __ __ > > *From:*Nikhil Komawar [mailto:nikhil.komawar at RACKSPACE.COM > ] > *Sent:* 17 December 2014 20:34 > > > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [glance] Option to skip deleting > images in use?____ > > __ __ > > Guess that's a implementation detail. Depends on the way you go > about using what's available now, I suppose.____ > > __ __ > > Thanks, > -Nikhil____ > > ------------------------------------------------------------------------ > > *From:*Chris St. Pierre [chris.a.st.pierre at gmail.com > ] > *Sent:* Wednesday, December 17, 2014 2:07 PM > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [glance] Option to skip deleting > images in use?____ > > I was assuming atomic increment/decrement operations, in which case > I'm not sure I see the race conditions. Or is atomism assuming too > much?____ > > __ __ > > On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar > > > wrote:____ > > That looks like a decent alternative if it works. However, it > would be too racy unless we we implement a test-and-set for such > properties or there is a different job which queues up these > requests and perform sequentially for each tenant.____ > > __ __ > > Thanks, > -Nikhil____ > > ------------------------------------------------------------------------ > > *From:*Chris St. Pierre [chris.a.st.pierre at gmail.com > ] > *Sent:* Wednesday, December 17, 2014 10:23 AM > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [glance] Option to skip deleting > images in use?____ > > That's unfortunately too simple. You run into one of two cases: ____ > > __ __ > > 1. 
If the job automatically removes the protected attribute when > an image is no longer in use, then you lose the ability to use > "protected" on images that are not in use. I.e., there's no way > to say, "nothing is currently using this image, but please keep > it around." (This seems particularly useful for snapshots, for > instance.) > > 2. If the job does not automatically remove the protected > attribute, then an image would be protected if it had ever been > in use; to delete an image, you'd have to manually un-protect > it, which is a workflow that quite explicitly defeats the whole > purpose of flagging images as protected when they're in use. > > It seems like flagging an image as *not* in use is actually a > fairly difficult problem, since it requires consensus among all > components that might be using images. > > The only solution that readily occurs to me would be to add > something like a filesystem link count to images in Glance. Then > when Nova spawns an instance, it increments the usage count; > when the instance is destroyed, the usage count is decremented. > And similarly with other components that use images. An image > could only be deleted when its usage count was zero. > > There are ample opportunities to get out of sync there, but it's > at least a sketch of something that might work, and isn't *too* > horribly hackish. Thoughts? > > On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya > > wrote: > > A simple solution that wouldn't require modification of > glance would be a cron job > that lists images and snapshots and marks them protected > while they are in use. > > Vish > > > On Dec 16, 2014, at 3:19 PM, Collins, Sean > > wrote: > > > On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote: > >> No, I'm looking to prevent images that are in use from being deleted. "In > >> use" and "protected" are disjoint sets.
> > I have seen multiple cases of images (and snapshots) being deleted while > > still in use in Nova, which leads to some very, shall we say, > > interesting bugs and support problems. > > > > I do think that we should try and determine a way forward on this, they > > are indeed disjoint sets. Setting an image as protected is a proactive > > measure, we should try and figure out a way to keep tenants from > > shooting themselves in the foot if possible. > > > > -- > > Sean M. Collins > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Chris St. Pierre > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From akalambu at cisco.com Fri Dec 19 01:32:19 2014 From: akalambu at cisco.com (Ajay Kalambur (akalambu)) Date: Fri, 19 Dec 2014 01:32:19 +0000 Subject: [openstack-dev] [rally] Rally scenario for network scale with VMs In-Reply-To: Message-ID:
You should see some reviews coming in next week I will try the admin=True option and see if that works for me Ajay From: Boris Pavlovic > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, December 18, 2014 at 4:40 PM To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] [rally] Rally scenario for network scale with VMs Ajay, Oh looks you are working hard on Rally. I am excited about your patches. By the way, one small advice is to share results and ideas that you have with community ASAP. it's perfectly to publish "not ideal" patches on review even if they don't have unit tests and doesn't work at all. Because community can help with advices and safe a lot of your time. 2. Scenario for booting Vms on every compute node in the cloud?.This has a dependency right now this is admin access. So for this I need Item 3 3. Provide an ability to make rally created users have admin access for things like forced host scheduling . Planning to add this to user context Not quite sure that you need point 3. If you put on scenario: @validation.required_openstack(admin=True, users=True). You'll have admin in scenario. So you can execute from him commands. E.g.: self.admin_clients("neutron").... (it's similar to self.clients but with admin access) That's how live_migrate benchmark is implemented: https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L253 Does this make sense? 4. Iperf scenario we discussed Nice! Best regards, Boris Pavlovic On Fri, Dec 19, 2014 at 4:06 AM, Ajay Kalambur (akalambu) > wrote: I created a new scenario which scales on network and VM at same time. If you have no objection I would like to send out a review this week I actually have following reviews to do next week 1. Scenario for network stress I.e VM+ network with subnet with unique cidr. 
So in future we can add a router to this and do apt-get update from all Vms and test network scale 2. Scenario for booting Vms on every compute node in the cloud?.This has a dependency right now this is admin access. So for this I need Item 3 3. Provide an ability to make rally created users have admin access for things like forced host scheduling . Planning to add this to user context 4. Iperf scenario we discussed If you have objection to these I can submit reviews for these. Have the code need to write unit tests for the scenarios since looking at other reviews that seems to be the case Ajay From: Boris Pavlovic > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, December 18, 2014 at 2:16 PM To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] [rally] Rally scenario for network scale with VMs Ajay, Sorry for long reply. At this point Rally supports only benchmarking from temporary created users and tenants. Fortunately today we merged this Network context class: https://review.openstack.org/#/c/103306/96 it creates any amount of networks for each rally temporary tenant. So basically you can use it and extend current benchmark scenarios in rally/benchmark/scenarios/nova/ or add new one that will attach N networks to created VM (which is just few lines of code). So task is quite easy resolvable now. Best regards, Boris Pavlovic On Wed, Nov 26, 2014 at 9:54 PM, Ajay Kalambur (akalambu) > wrote: Hi Is there a Rally scenario under works where we create N networks and associate N Vms with each network. 
This would be a decent stress test of neutron. Is there any such scale scenario in the works? I see a scenario for N networks/subnet creation and a separate one for N VM bootups; I am looking for an integration of these 2. Ajay _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From xyzjerry at gmail.com Fri Dec 19 01:44:29 2014 From: xyzjerry at gmail.com (Jerry Xinyu Zhao) Date: Thu, 18 Dec 2014 17:44:29 -0800 Subject: [openstack-dev] [Neutron] vm can't get ipv6 address in ra mode:slaac + address mode: slaac In-Reply-To: <5492ECBB.1000500@redhat.com> References: <5492CDB1.3080004@gmail.com> <5492E99E.1090307@gmail.com> <5492ECBB.1000500@redhat.com> Message-ID: I also saw that Bugzilla bug report, but my VM is Ubuntu 14.04, and I have also tried to run the rootwrap command manually with sudo, to no avail. On Thu, Dec 18, 2014 at 7:03 AM, Ihar Hrachyshka wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > I suspect that's some Red Hat distro, and radvd lacks SELinux context > set to allow neutron l3 agent to spawn it.
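For reference, a "(no filter matched)" failure like the one in the traceback below comes from neutron-rootwrap itself: the command the l3 agent tried to run (ip netns exec ... radvd ...) has to be whitelisted in the agent's rootwrap filter files. A hedged sketch of the kind of entries involved follows; the exact file name, filter names, and stock contents vary by Neutron version and distro packaging, so treat this as illustrative rather than the authoritative l3.filters content:

```ini
# /etc/neutron/rootwrap.d/l3.filters -- illustrative excerpt
[Filters]
# let the agent enter router namespaces
ip: IpFilter, ip, root
ip_exec: IpNetnsExecFilter, ip, root
# let the agent spawn (and later signal) radvd for IPv6 router advertisements
radvd: CommandFilter, radvd, root
kill_radvd: KillFilter, root, radvd, -15, -9, -HUP
```

If the installed filter files predate the IPv6/radvd work, or rootwrap.conf's filters_path does not include the directory they were installed to, every radvd invocation is rejected exactly as in the log, independent of any SELinux policy.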
> > On 18/12/14 15:50, Jerry Zhao wrote: > > It seems that radvd was not spawned successfully in l3-agent log: > > > > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > > neutron-l3-agent: Stderr: '/usr/bin/neutron-rootwrap: Unauthorized > > command: ip netns exec qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 > > radvd -C > > /var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf > > -p > > > /var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd > > > > > (no filter matched)\n' > > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > > neutron.agent.l3_agent Traceback (most recent call last): Dec 18 > > 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: > > 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent File > > > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/utils.py", > > > > > line 341, in call > > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > > neutron.agent.l3_agent return func(*args, **kwargs) Dec 18 > > 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: > > 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent File > > > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py", > > > > > line 902, in process_router > > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > > neutron.agent.l3_agent self.root_helper) Dec 18 11:23:34 > > ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 > > 11:23:34.611 18015 TRACE neutron.agent.l3_agent File > > > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py", > > > > > line 111, in enable_ipv6_ra > > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > > 
neutron.agent.l3_agent _spawn_radvd(router_id, radvd_conf, > > router_ns, root_helper) Dec 18 11:23:34 > > ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 > > 11:23:34.611 18015 TRACE neutron.agent.l3_agent File > > > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py", > > > > > line 95, in _spawn_radvd > > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > > neutron.agent.l3_agent radvd.enable(callback, True) Dec 18 11:23:34 > > ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 > > 11:23:34.611 18015 TRACE neutron.agent.l3_agent File > > > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/external_process.py", > > > > > line 77, in enable > > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > > neutron.agent.l3_agent ip_wrapper.netns.execute(cmd, > > addl_env=self.cmd_addl_env) Dec 18 11:23:34 > > ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 > > 11:23:34.611 18015 TRACE neutron.agent.l3_agent File > > > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", > > > > > line 554, in execute > > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > > neutron.agent.l3_agent check_exit_code=check_exit_code, > > extra_ok_codes=extra_ok_codes) Dec 18 11:23:34 > > ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 > > 11:23:34.611 18015 TRACE neutron.agent.l3_agent File > > > "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py", > > > > > line 82, in execute > > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > > neutron.agent.l3_agent raise RuntimeError(m) Dec 18 11:23:34 > > 
ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 > > 11:23:34.611 18015 TRACE neutron.agent.l3_agent RuntimeError: Dec > > 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > > neutron.agent.l3_agent Command: ['sudo', > > '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', > > 'netns', 'exec', 'qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7', > > 'radvd', '-C', > > '/var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf', > > > > > '-p', > > > '/var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd'] > > > > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 > > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE > > neutron.agent.l3_agent Exit code: 99 Dec 18 11:23:34 > > ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 > > 11:23:34.611 18015 TRACE neutron.agent.l3_agent Stdout: '' Dec 18 > > 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: > > 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Stderr: > > '/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec > > qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 radvd -C > > /var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf > > -p > > > /var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd > > > > > (no filter matched)\n' > > > > > > On 12/18/2014 04:50 AM, Jerry Zhao wrote: > >> Hi I have configured a provider flat network with ipv6 subnet in > >> ra mode slaac and address mode slaac. However, when i launched a > >> ubuntu trusty VM, it couldn't get the ipv6 address but ipv4 only. > >> I am running the trunk code BTW. 
The command used are: > >> > >> neutron net-create --provider:network_type=flat > >> --provider:physical_network=datacentre --router:external=true > >> provider-net neutron subnet-create --ip-version=6 --name=ipv6 > >> --ipv6-address-mode=slaac --ipv6-ra-mode=slaac provider-net > >> 2001:470:1f0e:cb4::0/64 --allocation-pool > >> start=2001:470:1f0e:cb4::20,end=2001:470:1f0e:cb4::fffe > >> --gateway 2001:470:1f0e:cb4::3 neutron subnet-create > >> --ip-version=4 --name=ipv4 provider-net 162.3.122.0/24 > >> --allocation-pool start=162.3.122.4,end=162.3.122.253 neutron > >> router-interface-add default-router ipv6 neutron > >> router-interface-add default-router ipv4 > >> > >> The vm is reachable when i configured the ipv6 address calculated > >> by neutron manually on the nic. How can i get the auto > >> configuration to work on the VM? Thanks! > > > > > > _______________________________________________ OpenStack-dev > > mailing list OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > > iQEcBAEBCgAGBQJUkuy7AAoJEC5aWaUY1u57RLwIAKayW3wgCoyw4Qh06jRoK8Bx > 7qBCbTKiyi2DdjiYXEyDMZc3wnm7j1pvpikaByNCOA2ybXj8uFfnQiwsoFYRTxPD > PLwvYsm+Afv3Bwaz7FSj1LKA8NmxNaz0ZxqBai/6aC17HjJyNfRxxCt2ZUG+WeP/ > Yj9/0jUIoOVwOGspTcAXPQ1eaFHbs2nH0afD6aX7s4/g2i7vnQgJOOLrgRuetInN > oR/DtZ81XJFyN3q1hl6Pv5k6TO0sTbeECV1OwOjQ2wJwCCarTAZJbW1s7fF8LCFm > 0m04XGuZuWxNeSDYoamdF7a21bml1DvWJ5XHHvnblewZrK+01TUmMqAOW6KAWDo= > =//1f > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From joe.gordon0 at gmail.com Fri Dec 19 02:13:58 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Thu, 18 Dec 2014 18:13:58 -0800 Subject: [openstack-dev] [nova][sriov] SRIOV related specs pending for approval In-Reply-To: References: Message-ID: On Thu, Dec 18, 2014 at 2:18 PM, Robert Li (baoli) wrote: > Hi, > > During the Kilo summit, the folks in the pci passthrough and SR-IOV > groups discussed what we?d like to achieve in this cycle, and the result > was documented in this Etherpad: > https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough > > To get the work going, we?ve submitted a few design specs: > > Nova: Live migration with macvtap SR-IOV > https://blueprints.launchpad.net/nova/+spec/sriov-live-migration > > Nova: sriov interface attach/detach > https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach > > Nova: Api specify vnic_type > https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type > > Neutron-Network settings support for vnic-type > > https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type > > Nova: SRIOV scheduling with stateless offloads > > https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads > > Now that the specs deadline is approaching, I?d like to bring them up in > here for exception considerations. A lot of works have been put into them. > And we?d like to see them get through for Kilo. > We haven't started the spec exception process yet. > > Regarding CI for PCI passthrough and SR-IOV, see the attached thread. > Can you share this via a link to something on http://lists.openstack.org/pipermail/openstack-dev/ > > thanks, > Robert > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.a.st.pierre at gmail.com Fri Dec 19 02:27:47 2014 From: chris.a.st.pierre at gmail.com (Chris St. Pierre) Date: Thu, 18 Dec 2014 20:27:47 -0600 Subject: [openstack-dev] [glance] Option to skip deleting images in use? In-Reply-To: <549374EF.8040804@gmail.com> References: <5490967C.8020801@gmail.com> <5490A5D8.2090804@catalyst.net.nz> <20141216231940.GA61726@HQSML-1081034.cable.comcast.com> <0FBF5631AB7B504D89C7E6929695B6249302F16F@ORD1EXD02.RACKSPACE.CORP> <0FBF5631AB7B504D89C7E6929695B6249302F2A2@ORD1EXD02.RACKSPACE.CORP> <549374EF.8040804@gmail.com> Message-ID: Presumably to prevent images from being deleted for arbitrary reasons that are left to the administrator(s) of each individual implementation of OpenStack, though. Using the protected flag to prevent images that are in use from being deleted obviates the ability to use it for arbitrary protection. That is, it can either be used as a general purpose flag to prevent deletion of an image; or it can be used as a flag for images that are in use and thus must not be deleted; but it cannot be used for both. (At least, not without a wild and woolly network of hacks to ensure that it can serve both purposes.) Given the general-purpose nature of the flag, I don't think that something that should be taken away from the administrators. And yet, it's very desirable to prevent deletion of images that are in use. I think both of these things should be supported, at the same time on the same installation. I do not consider it a solution to the problem that images can be deleted in use to take the "protected" flag away from arbitrary, bespoke use. On Thu, Dec 18, 2014 at 6:44 PM, Jay Pipes wrote: > On 12/18/2014 02:08 PM, Chris St. Pierre wrote: > >> I wasn't suggesting that we *actually* use filesystem link count, and >> make hard links or whatever for every time the image is used. That's >> prima facie absurd, for a lot more reasons that you point out. 
I was >> suggesting a new database field that tracks the number of times an image >> is in use, by *analogy* with filesystem link counts. (If I wanted to be >> unnecessarily abrasive I might say, "This is a textbook example of >> something called an analogy," but I'm not interested in being >> unnecessarily abrasive.) >> >> Overloading the protected flag is *still* a terrible hack. Even if we >> tracked the initial state of "protected" and restored that state when an >> image went out of use, that would negate the ability to make an image >> > > I guess I don't understand what you consider to be overloading of the > protected flag. The original purpose of the protected flag was to protect > images from being deleted. > > Best, > -jay > > protected while it was in use and expect that change to remain in place. >> So that just violates the principle of least surprise. Of course, we >> could have glance modify the "original_protected_state" flag when that >> flag is non-null and the user changes the actual "protected" flag, but >> this is just layering hacks upon hacks. By actually tracking the number >> of times an image is in use, we can preserve the ability to protect >> images *and* avoid deleting images in use. >> >> On Thu, Dec 18, 2014 at 5:27 AM, Kuvaja, Erno > > wrote: >> >> I think that?s horrible idea. How do we do that store independent >> with the linking dependencies?____ >> >> __ __ >> >> We should not depend universal use case like this on limited subset >> of backends, specially non-OpenStack ones. Glance (nor Nova) should >> never depend having direct access to the actual medium where the >> images are stored. I think this is school book example for something >> called database. 
Well arguable if this should be tracked at Glance >> or Nova, but definitely not a dirty hack expecting specific backend >> characteristics.____ >> >> __ __ >> >> As mentioned before the protected image property is to ensure that >> the image does not get deleted, that is also easy to track when the >> images are queried. Perhaps the record needs to track the original >> state of protected flag, image id and use count. 3 column table and >> couple of API calls. Lets not at least make it any more complicated >> than it needs to be if such functionality is desired.____ >> >> __ __ >> >> __-__Erno____ >> >> __ __ >> >> *From:*Nikhil Komawar [mailto:nikhil.komawar at RACKSPACE.COM >> ] >> *Sent:* 17 December 2014 20:34 >> >> >> *To:* OpenStack Development Mailing List (not for usage questions) >> *Subject:* Re: [openstack-dev] [glance] Option to skip deleting >> images in use?____ >> >> __ __ >> >> Guess that's a implementation detail. Depends on the way you go >> about using what's available now, I suppose.____ >> >> __ __ >> >> Thanks, >> -Nikhil____ >> >> ------------------------------------------------------------ >> ------------ >> >> *From:*Chris St. Pierre [chris.a.st.pierre at gmail.com >> ] >> *Sent:* Wednesday, December 17, 2014 2:07 PM >> *To:* OpenStack Development Mailing List (not for usage questions) >> *Subject:* Re: [openstack-dev] [glance] Option to skip deleting >> images in use?____ >> >> I was assuming atomic increment/decrement operations, in which case >> I'm not sure I see the race conditions. Or is atomism assuming too >> much?____ >> >> __ __ >> >> On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar >> > >> wrote:____ >> >> That looks like a decent alternative if it works. 
However, it >> would be too racy unless we we implement a test-and-set for such >> properties or there is a different job which queues up these >> requests and perform sequentially for each tenant.____ >> >> __ __ >> >> Thanks, >> -Nikhil____ >> >> ------------------------------------------------------------ >> ------------ >> >> *From:*Chris St. Pierre [chris.a.st.pierre at gmail.com >> ] >> *Sent:* Wednesday, December 17, 2014 10:23 AM >> *To:* OpenStack Development Mailing List (not for usage questions) >> *Subject:* Re: [openstack-dev] [glance] Option to skip deleting >> images in use?____ >> >> That's unfortunately too simple. You run into one of two cases: >> ____ >> >> __ __ >> >> 1. If the job automatically removes the protected attribute when >> an image is no longer in use, then you lose the ability to use >> "protected" on images that are not in use. I.e., there's no way >> to say, "nothing is currently using this image, but please keep >> it around." (This seems particularly useful for snapshots, for >> instance.)____ >> >> __ __ >> >> 2. If the job does not automatically remove the protected >> attribute, then an image would be protected if it had ever been >> in use; to delete an image, you'd have to manually un-protect >> it, which is a workflow that quite explicitly defeats the whole >> purpose of flagging images as protected when they're in use.____ >> >> __ __ >> >> It seems like flagging an image as *not* in use is actually a >> fairly difficult problem, since it requires consensus among all >> components that might be using images.____ >> >> __ __ >> >> The only solution that readily occurs to me would be to add >> something like a filesystem link count to images in Glance. Then >> when Nova spawns an instance, it increments the usage count; >> when the instance is destroyed, the usage count is decremented. >> And similarly with other components that use images. 
An image >> could only be deleted when its usage count was zero.____ >> >> __ __ >> >> There are ample opportunities to get out of sync there, but it's >> at least a sketch of something that might work, and isn't *too* >> horribly hackish. Thoughts?____ >> >> __ __ >> >> On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya >> > wrote:____ >> >> A simple solution that wouldn?t require modification of >> glance would be a cron job >> that lists images and snapshots and marks them protected >> while they are in use. >> >> Vish____ >> >> >> On Dec 16, 2014, at 3:19 PM, Collins, Sean >> > > wrote: >> >> > On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre >> wrote: >> >> No, I'm looking to prevent images that are in use from >> being deleted. "In >> >> use" and "protected" are disjoint sets. >> > >> > I have seen multiple cases of images (and snapshots) being >> deleted while >> > still in use in Nova, which leads to some very, shall we >> say, >> > interesting bugs and support problems. >> > >> > I do think that we should try and determine a way forward >> on this, they >> > are indeed disjoint sets. Setting an image as protected is >> a proactive >> > measure, we should try and figure out a way to keep tenants >> from >> > shooting themselves in the foot if possible. >> > >> > -- >> > Sean M. Collins >> > _______________________________________________ >> > OpenStack-dev mailing list >> >OpenStack-dev at lists.openstack.org >> >> >http://lists.openstack.org/cgi-bin/mailman/listinfo/ >> openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >> openstack-dev____ >> >> >> >> ____ >> >> __ __ >> >> -- ____ >> >> Chris St. 
Pierre____ >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >> openstack-dev____ >> >> >> >> ____ >> >> __ __ >> >> -- ____ >> >> Chris St. Pierre____ >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> -- >> Chris St. Pierre >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Chris St. Pierre -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From harlowja at outlook.com  Fri Dec 19 02:47:19 2014
From: harlowja at outlook.com (Joshua Harlow)
Date: Thu, 18 Dec 2014 18:47:19 -0800
Subject: [openstack-dev] taskflow 0.6.0 released
Message-ID: 

The Oslo team is pleased to announce the release of:

taskflow 0.6.0: Taskflow structured state management library.

For more details, please see the git log history below and:

    http://launchpad.net/taskflow/+milestone/0.6.0

Please report issues through launchpad:

    http://bugs.launchpad.net/taskflow/

Notable changes
---------------

* Dialects are now correctly supported in the persistence backends (mainly
  the ability to use sqlalchemy dialects; for example 'mysql+pymysql').

* The task *local* notification mechanism (the autobind, trigger, bind,
  unbind, listeners_iter methods) has been replaced with direct usage of
  the notification type.
A transition will be needed (if you are currently using these functions) to
the newer property and equivalent methods/functions that the notification
type provides. To help perform this transition, the *nearly* equivalent
method mapping is the following:

  =================== ============================
  Old method          New property/method/function
  =================== ============================
  task.autobind       notifier.register_deregister
  task.bind           task.notifier.register
  task.unbind         task.notifier.deregister
  task.trigger        task.notifier.notify
  task.listeners_iter task.notifier.listeners_iter
  =================== ============================

* The 'EngineBase' and 'ListenerBase' classes have been renamed to 'Engine'
  and 'Listener' (removal of the 'Base' post-fix); the existing classes
  (with the 'Base' post-fix) are marked as deprecated and will be subject
  to removal in a future version.

* The existing listeners in taskflow/listeners now take a new constructor
  parameter, 'retry_listen_for', which specifies what notifications to
  listen for/receive (by default ANY or '*'); existing derivatives of this
  class have been updated to pass this along (subclasses that have been
  created can omit it if they choose to).

* A new 'blather()/BLATHER' logging level has been added and used to avoid
  the low-level scope and runtime information that is being emitted from
  taskflow during compilation time and at engine runtime. This should
  reduce the amount of noise that is generated (and is really only useful
  to taskflow developers).

* This log level is currently set at number 5 (all log levels have
  equivalent numbers), which appears to be a common pattern shared by the
  multiprocessing logger and kazoo (the library), which use it for a
  similar purpose.
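To illustrate the bind/trigger to notifier transition shown in the mapping table above, here is a toy stand-in. This is deliberately *not* taskflow's implementation (taskflow's real notifier type is richer and lives in its types module); only the register/deregister/notify/listeners_iter names mirror the mapping, and the event name used is made up for the example:

```python
# Toy sketch of the register/deregister/notify pattern from the mapping
# table above -- an illustration only, not taskflow's actual notifier type.


class ToyNotifier(object):
    ANY = '*'  # default "listen for everything" event type

    def __init__(self):
        self._listeners = {}  # event_type -> list of callbacks

    def register(self, event_type, callback):
        # Stands in for the old task.bind().
        self._listeners.setdefault(event_type, []).append(callback)

    def deregister(self, event_type, callback):
        # Stands in for the old task.unbind().
        self._listeners.get(event_type, []).remove(callback)

    def notify(self, event_type, details):
        # Stands in for the old task.trigger(); callbacks registered
        # under ANY receive every event type.
        for et in (event_type, self.ANY):
            for callback in self._listeners.get(et, []):
                callback(event_type, details)

    def listeners_iter(self):
        # Stands in for the old task.listeners_iter().
        for event_type, callbacks in self._listeners.items():
            for callback in callbacks:
                yield (event_type, callback)


received = []
notifier = ToyNotifier()
notifier.register(ToyNotifier.ANY, lambda et, details: received.append(et))
notifier.notify('task.update_progress', {'progress': 0.5})
print(received)  # ['task.update_progress']
```

In spirit, call sites that previously did `task.bind(event, cb)` would now do `task.notifier.register(event, cb)` (exact event names should be checked against the 0.6.0 documentation).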
* The engine helper run/load 'engine_conf' dictionary keyword argument has
  been marked as deprecated and should now be replaced with usage of the
  'engine=' format (where the URI contains the engine type to use and any
  specific engine options in the URI's parameters). The 'engine_conf'
  keyword argument will be subject to removal in a future version.

* Tasks can now be copied via a copy() method (this will be useful in an
  upcoming ability for an engine to run tasks using the multiprocessing
  library; therefore creating a nice middle ground between thread based
  engines and remote worker-based engines).

* A claims listener has been added that can be used to connect jobboards to
  engines, as well as a dynamic/useful logging listener that can adjust the
  logging levels based on notifications received.

* The engine 'task_notifier' has been renamed to its more general name of
  'atom_notifier'; 'task_notifier' has been marked as deprecated and will
  be subject to removal in a future version.

* The greenthread executor (previously in a utility module) has been moved
  to the types/futures.py module, where it should now be acceptable to use
  it as a first-class type (previously it was not acceptable to use
  internal utility classes/modules externally).

* A new types folder + useful helper modules aid taskflow (and likely can
  aid other projects as well); some of these modules are splitting off into
  their own projects for more general usage (this is happening on an
  as-needed/ongoing basis).

* Storage methods 'ensure_retry' and 'ensure_task' have been replaced with
  the type agnostic 'ensure_atom' (which is now the internally supported
  and used API for ensuring the storage unit has allocated the details
  about a given atom, retry or task...).

* New and improved symbol scoping/finding support!

* New and improved documentation and examples!

Changes in /homes/harlowja/dev/os/taskflow 0.5.0..0.6.0
-------------------------------------------------------

NOTE: Skipping requirement commits...
4e514f4 Move over to using oslo.utils [reflection, uuidutils] 6520b9c Add a basic map/reduce example to show how this can be done cafa3b2 Add a parallel table mutation example 1b06183 Add a 'can_be_registered' method that checks before notifying 97b4e18 Base task executor should provide 'wait_for_any' e9ecdc7 Replace autobind with a notifier module helper function aa8d55d Cleanup some doc warnings/bad/broken links 1f4dd72 Use the notifier type in the task class/module directly cdfd8ec Use a tiny clamp helper to clamp the 'on_progress' value 624d966 Retain the existence of a 'EngineBase' until 0.7 or later f5060ff Remove the base postfix from the internal task executor b4e4e21 Remove usage of listener base postfix a440ec4 Add a moved_inheritable_class deprecation helper 1c7d242 Avoid holding the lock while scanning for existing jobs 79ff9e7 Remove the base postfix for engine abstract base class 880f7d2 Avoid popping while another entity is iterating fda6fde Use explict 'attr_dict' when adding provider->consumer edge eaf4995 Properly handle and skip empty intermediary flows f1457a0 Ensure message gets processed correctly b275c51 Just assign a empty collection instead of copy/clear f333e1b Remove rtype from task clone() doc 14431bc Add and use a new simple helper logging module a113368 Have the sphinx copyright date be dynamic 2f78ecf Add appropriate links into README.rst 4eb0ca2 Use condition variables using 'with' 50b866c Use an appropriate ``extract_traceback`` limit c4d3279 Allow all deprecation helpers to take a stacklevel cd664bd Correctly identify stack level in ``_extract_engine`` 5f0b514 Stop returning atoms from execute/revert methods dc4262e Have tasks be able to provide copy() methods e978eca Allow stopwatches to be restarted af62f4c Ensure that failures can be pickled e168f44 Rework pieces of the task callback capability dc39351 Just use 4 spaces for classifier indents 4707ac7 Move atom action handlers to there own subfolder/submodule 2033d01 Workflow 
documentation is now in infra-manual 2f03736 Ensure frozen attribute is set in fsm clones/copies 8150553 Fix split on "+" for connection strings that specify dialects b8e975e Update listeners to ensure they correctly handle all atoms cf45a70 Allow for the notifier to provide a 'details_filter' c698842 Be explicit about publish keyword arguments 6a6aa79 Some package additions and adjustments to the env_builder.sh a692138 Cache immutable visible scopes in the runtime component 14ecaa4 Raise value errors instead of asserts 1de8bbd Add a claims listener that connects job claims to engines 1e8fabd Split the scheduler into sub-schedulers 35fcd90 Use a module level constant to provide the DEFAULT_LISTEN_FOR 178f279 Move the _pformat() method to be a classmethod 9675964 Add link to issue 17911 e07fb21 Avoid deepcopying exception values 95e94f7 Include documentation of the utility modules 265181f Use a metaclass to dynamically add testcases to example runner cf85dd0 Remove default setting of 'mysql_traditional_mode' bb8ea56 Move scheduler and completer classes to there own modules 2832d6e Ensure that the zookeeper backend creates missing atoms 148723b Use the deprecation utility module instead of warnings 49d7a51 Tweaks to setup.cfg 17fb4b4 Add a jobboard high level architecture diagram 487cc51 Mark 'task_notifier' as renamed to 'atom_notifier' 2f7e582 Revert wrapt usage until further notice 2a7ca47 Add a history retry object, makes retry histories easier to use 7d199e0 Format failures via a static method c543dc2 When creating daemon threads use the bundled threading_utils 613af61 Ensure failure types contain only immutable items b24656c Mark 'task_notifier' as renamed to 'atom_notifier' f3e4bb0 Use wrapt to provide the deprecated class proxy c8b0f25 Reduce the worker-engine joint testing time 543b6a0 Link bug in requirements so people understand why pbr is listed 34b358a Use standard threading locks in the cache types edb9212 Handle the case where '_exc_type_names' is 
empty 5671868 Add pbr to installation requirements 3c9871d Remove direct usage of the deprecated failure location 1f12ab3 Fix the example 'default_provides' ac8eefd Use constants for retry automatically provided kwargs 58f27fc Remove direct usage of the deprecated notifier location 7fe6bf0 Remove attrdict and just use existing types ca101d1 Use the mock that finds a working implementation b014fc7 Add a futures type that can unify our future functionality ac77b4d Bump the deprecation version number 7ca6313 Use and verify event and latch wait() return using timeouts d433a53 Deprecate `engine_conf` and prefer `engine` instead 3a8a78e Use constants for link metadata keys d638c8f Bump up the sqlalchemy version for py26 bf84288 Hoist the notifier to its own module f2ea4f1 Move failure to its own type specific module a15e07a Use constants for revert automatically provided kwargs 8e66177 Improve some of the task docstrings bcae66b We can now use PyMySQL in py3.x tests c90e360 Add the database schema to the sqlalchemy docs 52494e7 Change messaging from handler connection timeouts -> operation timeouts b86b7e1 Switch to a custom NotImplementedError derivative 94b4b60 Allow the worker banner to be written to an arbitrary location 8d14318 Update engine class names to better reflect there usage 95b30d6 Refactor parts of the job lock/job condition zookeeper usage cf1e468 Add a more dynamic/useful logging listener be254ea Use timeutils functions instead of misc.wallclock eedc335 Expose only `ensure_atom` from storage dc688c1 Increase robustness of WBE message and request processing 7640b09 Bring in a newer optional eventlet 9537f52 Document more function/class/method params 7fe2f51 Remove no longer needed r/w lock interface base class 8bbc2fd Better handle the tree freeze method c5c2211 Ensure state machine can be frozen 97e6bb1 Link a few of the classes to implemented features/bugs in python 6bbf85b Add a timing listener that also prints the results c5aa2f9 Remove useless 
__exit__ return 26793dc Add a state machine copy() method d98f23d Add a couple of scope shadowing test cases d6ef687 Relax the graph flow symbol constraints 76641d8 Relax the unordered flow symbol constraints 2339bac Relax the linear flow symbol constraints fa077c9 Revamp the symbol lookup mechanism e68d72f Be smarter about required flow symbols 296e660 Have the dispatch_job function return a future 8dc6e4f Add a sample script that can be used to build a test environment be4fac3 Extract the state changes from the ensure storage method Diffstat (except docs and test files) ------------------------------------- CONTRIBUTING.rst | 4 +- README.rst | 5 +- openstack-common.conf | 1 - requirements-py2.txt | 7 +- requirements-py3.txt | 7 +- setup.cfg | 14 +- taskflow/atom.py | 26 +- taskflow/conductors/base.py | 26 +- taskflow/conductors/single_threaded.py | 26 +- taskflow/engines/action_engine/actions/retry.py | 124 + taskflow/engines/action_engine/actions/task.py | 145 + taskflow/engines/action_engine/compiler.py | 456 +- taskflow/engines/action_engine/completer.py | 114 + taskflow/engines/action_engine/engine.py | 53 +- taskflow/engines/action_engine/executor.py | 84 +- taskflow/engines/action_engine/retry_action.py | 86 - taskflow/engines/action_engine/runner.py | 15 +- taskflow/engines/action_engine/runtime.py | 228 +- taskflow/engines/action_engine/scheduler.py | 114 + taskflow/engines/action_engine/scopes.py | 113 + taskflow/engines/action_engine/task_action.py | 116 - taskflow/engines/base.py | 49 +- taskflow/engines/helpers.py | 201 +- taskflow/engines/worker_based/cache.py | 4 +- taskflow/engines/worker_based/dispatcher.py | 6 +- taskflow/engines/worker_based/endpoint.py | 15 +- taskflow/engines/worker_based/engine.py | 30 +- taskflow/engines/worker_based/executor.py | 52 +- taskflow/engines/worker_based/protocol.py | 51 +- taskflow/engines/worker_based/proxy.py | 55 +- taskflow/engines/worker_based/server.py | 123 +- taskflow/engines/worker_based/worker.py | 17 
+- taskflow/examples/calculate_in_parallel.py | 2 +- taskflow/examples/create_parallel_volume.py | 13 +- taskflow/examples/delayed_return.py | 6 +- taskflow/examples/fake_billing.py | 12 +- taskflow/examples/graph_flow.py | 4 +- .../examples/jobboard_produce_consume_colors.py | 7 +- taskflow/examples/parallel_table_multiply.py | 129 + taskflow/examples/persistence_example.py | 13 +- taskflow/examples/resume_vm_boot.py | 22 +- taskflow/examples/resume_volume_create.py | 6 +- taskflow/examples/run_by_iter.py | 13 +- taskflow/examples/run_by_iter_enumerate.py | 8 +- taskflow/examples/simple_linear_pass.py | 2 +- taskflow/examples/simple_map_reduce.py | 115 + taskflow/examples/timing_listener.py | 59 + taskflow/examples/wbe_mandelbrot.py | 5 +- taskflow/examples/wbe_simple_linear.py | 19 +- taskflow/examples/wrapped_exception.py | 20 +- taskflow/exceptions.py | 91 +- taskflow/flow.py | 48 +- taskflow/jobs/backends/__init__.py | 8 +- taskflow/jobs/backends/impl_zookeeper.py | 90 +- taskflow/jobs/job.py | 3 +- taskflow/jobs/jobboard.py | 4 +- taskflow/listeners/base.py | 199 +- taskflow/listeners/claims.py | 100 + taskflow/listeners/logging.py | 175 +- taskflow/listeners/printing.py | 16 +- taskflow/listeners/timing.py | 55 +- taskflow/logging.py | 92 + taskflow/openstack/common/__init__.py | 17 - taskflow/openstack/common/uuidutils.py | 37 - taskflow/patterns/graph_flow.py | 177 +- taskflow/patterns/linear_flow.py | 50 +- taskflow/patterns/unordered_flow.py | 61 +- taskflow/persistence/backends/__init__.py | 13 +- taskflow/persistence/backends/impl_dir.py | 2 +- taskflow/persistence/backends/impl_memory.py | 3 +- taskflow/persistence/backends/impl_sqlalchemy.py | 11 +- taskflow/persistence/backends/impl_zookeeper.py | 8 +- taskflow/persistence/backends/sqlalchemy/models.py | 2 +- taskflow/persistence/logbook.py | 21 +- taskflow/retry.py | 141 +- taskflow/storage.py | 343 +- taskflow/task.py | 205 +- taskflow/test.py | 78 + taskflow/types/cache.py | 25 +- 
taskflow/types/failure.py | 342 + taskflow/types/fsm.py | 42 +- taskflow/types/futures.py | 206 + taskflow/types/graph.py | 22 + taskflow/types/latch.py | 17 +- taskflow/types/notifier.py | 278 + taskflow/types/timing.py | 41 +- taskflow/types/tree.py | 24 +- taskflow/utils/async_utils.py | 82 +- taskflow/utils/deprecation.py | 257 + taskflow/utils/eventlet_utils.py | 192 - taskflow/utils/kazoo_utils.py | 10 +- taskflow/utils/lock_utils.py | 67 +- taskflow/utils/misc.py | 627 +- taskflow/utils/persistence_utils.py | 4 +- taskflow/utils/reflection.py | 251 - taskflow/utils/threading_utils.py | 20 + test-requirements.txt | 13 +- tools/env_builder.sh | 126 + tools/schema_generator.py | 83 + tox.ini | 8 +- 158 files changed, 7662 insertions(+), 12475 deletions(-) Requirements updates -------------------- diff --git a/requirements-py2.txt b/requirements-py2.txt index 6134877..e142007 100644 --- a/requirements-py2.txt +++ b/requirements-py2.txt @@ -4,0 +5,3 @@ +# See: https://bugs.launchpad.net/pbr/+bug/1384919 for why this is here... +pbr>=0.6,!=0.7,<1.0 + @@ -17 +20 @@ networkx>=1.8 -stevedore>=1.0.0 # Apache-2.0 +stevedore>=1.1.0 # Apache-2.0 @@ -26 +29 @@ jsonschema>=2.0.0,<3.0.0 -oslo.utils>=1.0.0 # Apache-2.0 +oslo.utils>=1.1.0 # Apache-2.0 diff --git a/requirements-py3.txt b/requirements-py3.txt index ea30582..d827a17 100644 --- a/requirements-py3.txt +++ b/requirements-py3.txt @@ -4,0 +5,3 @@ +# See: https://bugs.launchpad.net/pbr/+bug/1384919 for why this is here... 
+pbr>=0.6,!=0.7,<1.0 + @@ -14 +17 @@ networkx>=1.8 -stevedore>=1.0.0 # Apache-2.0 +stevedore>=1.1.0 # Apache-2.0 @@ -20 +23 @@ jsonschema>=2.0.0,<3.0.0 -oslo.utils>=1.0.0 # Apache-2.0 +oslo.utils>=1.1.0 # Apache-2.0 diff --git a/test-requirements.txt b/test-requirements.txt index 19856bb..96ab944 100644 --- a/test-requirements.txt +++ b/test-requirements.txt @@ -6 +6 @@ hacking>=0.9.2,<0.10 -oslotest>=1.1.0 # Apache-2.0 +oslotest>=1.2.0 # Apache-2.0 @@ -8 +8 @@ mock>=1.0 -testtools>=0.9.34 +testtools>=0.9.36,!=1.2.0 @@ -22 +22,6 @@ kazoo>=1.3.1 -alembic>=0.6.4 +# +# Explict mysql drivers are also not listed here so that we can test against +# PyMySQL or MySQL-python depending on the python version the tests are being +# ran in (MySQL-python is currently preferred for 2.x environments, since +# it has been used in openstack for the longest). +alembic>=0.7.1 @@ -26 +31 @@ psycopg2 -sphinx>=1.1.2,!=1.2.0,<1.3 +sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 From zbitter at redhat.com Fri Dec 19 03:28:54 2014 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 18 Dec 2014 22:28:54 -0500 Subject: [openstack-dev] [Heat] Convergence proof-of-concept showdown In-Reply-To: <4641310AFBEE10419D0A020273367C140CA3A305@G1W3645.americas.hpqcorp.net> References: <54768A81.7010007@redhat.com> <4641310AFBEE10419D0A020273367C140CA06093@G1W3656.americas.hpqcorp.net> <547C1285.7090909@hp.com> <547FEEEB.3070507@redhat.com> <4641310AFBEE10419D0A020273367C140CA293D1@G2W2436.americas.hpqcorp.net> <54862401.3020508@redhat.com> <4641310AFBEE10419D0A020273367C140CA2B42C@G2W2436.americas.hpqcorp.net> <54888721.50404@redhat.com> <4641310AFBEE10419D0A020273367C140CA3954A@G1W3645.americas.hpqcorp.net> <548A3FB8.9030007@redhat.com> <4641310AFBEE10419D0A020273367C140CA39ACC@G1W3645.americas.hpqcorp.net> <548B8480.9010506@redhat.com> <4641310AFBEE10419D0A020273367C140CA3A305@G1W3645.americas.hpqcorp.net> Message-ID: <54939B76.60701@redhat.com> On 15/12/14 07:47, Murugan, Visnusaran wrote: > We have similar 
questions regarding other > areas in your implementation, which we believe if we understand the outline of your implementation. It is difficult to get > a hold on your approach just by looking at code. Docs strings / Etherpad will help. I added a bunch of extra docstrings and comments: https://github.com/zaneb/heat-convergence-prototype/commit/5d79e009196dc224bd588e19edef5f0939b04607 I also implemented a --pdb option that will automatically set breakpoints at the beginning of all of the asynchronous events, so that you'll be dropped into the debugger and can single-step through the code, look at variables and so on: https://github.com/zaneb/heat-convergence-prototype/commit/2a7a56dde21cad979fae25acc9fb01c6b4d9c6f7 I hope that helps. If you have more questions, please feel free to ask. cheers, Zane. From chenrui.momo at gmail.com Fri Dec 19 03:36:03 2014 From: chenrui.momo at gmail.com (Rui Chen) Date: Fri, 19 Dec 2014 11:36:03 +0800 Subject: [openstack-dev] [nova] Volunteer for BP 'Improve Nova KVM IO support' Message-ID: Hi, Is Anybody still working on this nova BP 'Improve Nova KVM IO support'? https://blueprints.launchpad.net/nova/+spec/improve-nova-kvm-io-support I willing to complement nova-spec and implement this feature in kilo or subsequent versions. Feel free to assign this BP to me, thanks:) Best Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From everett.toews at RACKSPACE.COM Fri Dec 19 04:19:41 2014 From: everett.toews at RACKSPACE.COM (Everett Toews) Date: Fri, 19 Dec 2014 04:19:41 +0000 Subject: [openstack-dev] [api] Analysis of current API design Message-ID: <36742172-3C0F-4195-993F-D935000726EC@rackspace.com> Hi All, At the recent API WG meeting [1] we discussed the need for more analysis of current API design. We need to get better at doing analysis of current API design as part of our guideline proposals. We are not creating these guidelines in a vacuum. 
The current design should be analyzed and taken into account. Naturally the type of analysis will vary from guideline to guideline, but backing your proposals with some kind of analysis will only make them better. Let's take some examples.

1. Anne Gentle and I want to improve the consistency of service catalogs across cloud providers both public and private. This is going to require the analysis of many providers and we've got a start on it here [2]. Hopefully a guideline for the service catalog should fall out of the analysis of the many providers.

2. There's a guideline for metadata up for review [3]. I wasn't aware of all of the places where the concept of metadata is used in OpenStack so I did some analysis [4]. I found that the representation was pretty consistent but how metadata was CRUDed wasn't as consistent. I hope that information can help the review along.

3. This Guideline for collection resources' representation structures [5] basically codifies in a guideline what was found in the analysis. Good stuff and it has definitely helped the review along.

For more information about analysis of current API design see #1 of How to Contribute [6].

Any thoughts or feedback on the above?

Thanks,
Everett

[1] http://eavesdrop.openstack.org/meetings/api_wg/2014/api_wg.2014-12-18-16.00.log.html
[2] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog
[3] https://review.openstack.org/#/c/141229/
[4] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Metadata
[5] https://review.openstack.org/#/c/133660/
[6] https://wiki.openstack.org/wiki/API_Working_Group#How_to_Contribute

From asalkeld at mirantis.com  Fri Dec 19 05:07:54 2014
From: asalkeld at mirantis.com (Angus Salkeld)
Date: Fri, 19 Dec 2014 15:07:54 +1000
Subject: [openstack-dev] [Mistral] For-each
In-Reply-To: 
References: 
Message-ID: 

On Mon, Dec 15, 2014 at 8:00 PM, Nikolay Makhotkin wrote: > > Hi, > > Here is the doc with suggestions on specification for for-each feature.
> > You are free to comment and ask questions. > > > https://docs.google.com/document/d/1iw0OgQcU0LV_i3Lnbax9NqAJ397zSYA3PMvl6F_uqm0/edit?usp=sharing > > > Just as a drive-by comment, there is a Heat spec for a "for-each": https://review.openstack.org/#/c/140849/ (there hasn't been a lot of feedback for it yet tho') Nice to have these somewhat consistent. -Angus > > -- > Best Regards, > Nikolay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From steven.tran2 at hp.com  Fri Dec 19 07:05:07 2014
From: steven.tran2 at hp.com (Tran, Steven)
Date: Fri, 19 Dec 2014 07:05:07 +0000
Subject: [openstack-dev] [tox] Connection to pypi.python.org timed out when installing markupsafe in tox
Message-ID: <928760976E4ACC46B58B36866F60B0DD080A7E79@G1W3642.americas.hpqcorp.net>

Hi,
   I'm new to OpenStack & tox and I have run into this issue with tox, so hopefully someone can point me in a direction on how to resolve it.
   When I run tox, out of many packages, tox times out installing "markupsafe" but not the packages before it. In fact, the failure is on oslo.db, and markupsafe is one of its dependencies. I put "markupsafe" in a separate requirements.txt and tried to install markupsafe before oslo.db, and I still hit the timeout installing "markupsafe". However, if I run the pip install manually with that same requirements.txt that contains "markupsafe", the install is successful.
   I suspect it's a proxy issue under tox. But how do I include the proxy in tox? I have the environment variables $http_proxy & $https_proxy set up properly. I tried to add the proxy to pip with "install_command = pip install -U --proxy {opts} {packages}" under tox.ini but it doesn't help. I also increased the timeout for pip and tried using an older version of pip (1.4) as someone suggested, but that doesn't help either.
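One thing worth checking, sketched here as a guess rather than a confirmed fix: tox releases that isolate the test environment strip variables such as $http_proxy/$https_proxy before invoking pip inside the virtualenv, so they have to be whitelisted explicitly. Assuming a tox version new enough to support `passenv` (added around tox 2.0; the tox 1.6.1 shown in the log below passes the full environment through, in which case the proxy host itself is the likelier suspect):

```ini
# tox.ini -- a sketch, not a verified fix. Whitelist the proxy variables
# so pip inside the virtualenv can see them; both lower- and upper-case
# spellings are listed since tools check both.
[testenv]
passenv = http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY
```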
I'm running Ubuntu 14.04.1. pip 1.5.6. stack at tv-156:/opt/stack/congress$ tox -e pep8 -v using tox.ini: /opt/stack/congress/tox.ini using tox-1.6.1 from /usr/local/lib/python2.7/dist-packages/tox/__init__.pyc GLOB sdist-make: /opt/stack/congress/setup.py /opt/stack/congress$ /usr/bin/python /opt/stack/congress/setup.py sdist --formats=zip --dist-dir /opt/stack/congress/.tox/dist >/opt/stack/congress/.tox/log/tox-0.log pep8 create: /opt/stack/congress/.tox/pep8 /opt/stack/congress/.tox$ /usr/bin/python /usr/lib/python2.7/dist-packages/virtualenv.py --setuptools --python /usr/bin/python pep8 >/opt/stack/congress/.tox/pep8/log/pep8-0.log pep8 installdeps: -r/opt/stack/congress/requirements.txt, -r/opt/stack/congress/requirements2.txt, -r/opt/stack/congress/requirements3.txt, -r/opt/stack/congress/test-requirements.txt /opt/stack/congress$ /opt/stack/congress/.tox/pep8/bin/pip install -U -r/opt/stack/congress/requirements.txt -r/opt/stack/congress/requirements2.txt -r/opt/stack/congress/requirements3.txt -r/opt/stack/congress/test-requirements.txt >/opt/stack/congress/.tox/pep8/log/pep8-1.log ERROR: invocation failed, logfile: /opt/stack/congress/.tox/pep8/log/pep8-1.log ERROR: actionid=pep8 msg=getenv cmdargs=[local('/opt/stack/congress/.tox/pep8/bin/pip'), 'install', '-U', '-r/opt/stack/congress/requirements.txt', '-r/opt/stack/congress/requirements2.txt', '-r/opt/stack/congress/requirements3.txt', '-r/opt/stack/congress/test-requirements.txt'] env={'PYTHONIOENCODING': 'utf_8', 'NO_PROXY': 'localhost,127.0.0.1,localaddress,.localdomain.com,192.168.178.88', 'http_proxy': 'http://proxy.houston.hp.com:8080', 'FTP_PROXY': 'http://proxy.houston.hp.com:8080', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'SSH_CLIENT': '192.168.2.49 62999 22', 'LOGNAME': 'stack', 'USER': 'stack', 'PATH': '/opt/stack/congress/.tox/pep8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games', 'HOME': '/home/stack', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm', 
'SHELL': '/bin/bash', 'LANGUAGE': 'en_US:en', 'HTTPS_PROXY': 'https://proxy.houston.hp.com:8080', 'SHLVL': '1', 'https_proxy': 'https://proxy.houston.hp.com:8080', 'XDG_RUNTIME_DIR': '/run/user/1000', 'VIRTUAL_ENV': '/opt/stack/congress/.tox/pep8', 'ftp_proxy': 'http://proxy.houston.hp.com:8080', 'LC_ALL': 'C', 'XDG_SESSION_ID': '16', '_': '/usr/local/bin/tox', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'SSH_TTY': '/dev/pts/19', 'OLDPWD': '/home/stack', 'HTTP_PROXY': 'http://proxy.houston.hp.com:8080', 'no_proxy': 'localhost,127.0.0.1,localaddress,.localdomain.com,192.168.178.88', 'PWD': '/opt/stack/congress', 'MAIL': '/var/mail/stack', 'SSH_CONNECTION': '192.168.2.49 62999 
192.168.178.88 22'} Downloading/unpacking argparse from https://pypi.python.org/packages/2.7/a/argparse/argparse-1.3.0-py2.py3-none-any.whl#md5=c89395a1a43b61ca6a116aed5e3b1d59 (from -r /opt/stack/congress/requirements.txt (line 4)) Downloading argparse-1.3.0-py2.py3-none-any.whl Downloading/unpacking Babel>=1.3 (from -r /opt/stack/congress/requirements.txt (line 5)) Running setup.py (path:/opt/stack/congress/.tox/pep8/build/Babel/setup.py) egg_info for package Babel warning: no previously-included files matching '*' found under directory 'docs/_build' warning: no previously-included files matching '*.pyc' found under directory 'tests' warning: no previously-included files matching '*.pyo' found under directory 'tests' Downloading/unpacking eventlet>=0.15.2 (from -r /opt/stack/congress/requirements.txt (line 6)) Downloading/unpacking keystonemiddleware>=1.0.0 (from -r /opt/stack/congress/requirements.txt (line 7)) Downloading/unpacking mox>=0.5.3 (from -r /opt/stack/congress/requirements.txt (line 8)) Downloading mox-0.5.3.tar.gz Running setup.py (path:/opt/stack/congress/.tox/pep8/build/mox/setup.py) egg_info for package mox Downloading/unpacking Paste (from -r /opt/stack/congress/requirements.txt (line 9)) Running setup.py (path:/opt/stack/congress/.tox/pep8/build/Paste/setup.py) egg_info for package Paste warning: no previously-included files matching '*' found under directory 'docs/_build/_sources' Downloading/unpacking PasteDeploy>=1.5.0 (from -r /opt/stack/congress/requirements.txt (line 10)) Downloading PasteDeploy-1.5.2-py2.py3-none-any.whl Downloading/unpacking pbr>=0.6,!=0.7,<1.0 (from -r /opt/stack/congress/requirements.txt (line 11)) Downloading/unpacking posix-ipc (from -r /opt/stack/congress/requirements.txt (line 12)) Running setup.py (path:/opt/stack/congress/.tox/pep8/build/posix-ipc/setup.py) egg_info for package posix-ipc Downloading/unpacking python-keystoneclient>=0.11.1 (from -r /opt/stack/congress/requirements.txt (line 13)) 
Downloading/unpacking python-novaclient>=2.18.0 (from -r /opt/stack/congress/requirements.txt (line 14)) Downloading/unpacking python-neutronclient>=2.3.6,<3 (from -r /opt/stack/congress/requirements.txt (line 15)) Downloading/unpacking python-ceilometerclient>=1.0.6 (from -r /opt/stack/congress/requirements.txt (line 16)) Downloading/unpacking python-cinderclient>=1.1.0 (from -r /opt/stack/congress/requirements.txt (line 17)) Downloading/unpacking python-swiftclient>=2.2.0 (from -r /opt/stack/congress/requirements.txt (line 18)) Downloading/unpacking alembic>=0.6.4 (from -r /opt/stack/congress/requirements.txt (line 19)) Running setup.py (path:/opt/stack/congress/.tox/pep8/build/alembic/setup.py) egg_info for package alembic warning: no files found matching '*.jpg' under directory 'docs' warning: no files found matching '*.sty' under directory 'docs' warning: no files found matching '*.dat' under directory 'tests' no previously-included directories found matching 'docs/build/output' Downloading/unpacking python-glanceclient>=0.14.0 (from -r /opt/stack/congress/requirements.txt (line 20)) Downloading/unpacking Routes>=1.12.3,!=2.0 (from -r /opt/stack/congress/requirements.txt (line 22)) Running setup.py (path:/opt/stack/congress/.tox/pep8/build/Routes/setup.py) egg_info for package Routes warning: no previously-included files matching '.DS_Store' found anywhere in distribution warning: no previously-included files matching '*.hgignore' found anywhere in distribution warning: no previously-included files matching '*.hgtags' found anywhere in distribution Downloading/unpacking six>=1.7.0 (from -r /opt/stack/congress/requirements.txt (line 23)) Downloading six-1.8.0-py2.py3-none-any.whl Downloading/unpacking oslo.config>=1.4.0 (from -r /opt/stack/congress/requirements.txt (line 24)) Downloading oslo.config-1.5.0-py2.py3-none-any.whl Downloading/unpacking markupsafe>=0.9.2 (from -r /opt/stack/congress/requirements2.txt (line 1)) Cleaning up... 
Exception:
Traceback (most recent call last):
  File "/opt/stack/congress/.tox/pep8/local/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/opt/stack/congress/.tox/pep8/local/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/opt/stack/congress/.tox/pep8/local/lib/python2.7/site-packages/pip/req.py", line 1197, in prepare_files
    do_download,
  File "/opt/stack/congress/.tox/pep8/local/lib/python2.7/site-packages/pip/req.py", line 1375, in unpack_url
    self.session,
  File "/opt/stack/congress/.tox/pep8/local/lib/python2.7/site-packages/pip/download.py", line 546, in unpack_http_url
    resp = session.get(target_url, stream=True)
  File "/opt/stack/congress/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 395, in get
    return self.request('GET', url, **kwargs)
  File "/opt/stack/congress/.tox/pep8/local/lib/python2.7/site-packages/pip/download.py", line 237, in request
    return super(PipSession, self).request(method, url, *args, **kwargs)
  File "/opt/stack/congress/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 383, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/stack/congress/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 486, in send
    r = adapter.send(request, **kwargs)
  File "/opt/stack/congress/.tox/pep8/local/lib/python2.7/site-packages/pip/_vendor/requests/adapters.py", line 387, in send
    raise Timeout(e)
Timeout: (, 'Connection to pypi.python.org timed out.
(connect timeout=30.0)')
Storing debug log for failure in /tmp/tmpVXPUw4

ERROR: could not install deps [-r/opt/stack/congress/requirements.txt, -r/opt/stack/congress/requirements2.txt, -r/opt/stack/congress/requirements3.txt, -r/opt/stack/congress/test-requirements.txt]
________________ summary ________________
ERROR: pep8: could not install deps [-r/opt/stack/congress/requirements.txt, -r/opt/stack/congress/requirements2.txt, -r/opt/stack/congress/requirements3.txt, -r/opt/stack/congress/test-requirements.txt]

And ..

stack@tv-156:/opt/stack/congress$ /opt/stack/congress/.tox/pep8/bin/pip install -U -r/opt/stack/congress/requirements2.txt
Downloading/unpacking markupsafe>=0.9.2 (from -r /opt/stack/congress/requirements2.txt (line 1))
  Downloading MarkupSafe-0.23.tar.gz
  Running setup.py (path:/opt/stack/congress/.tox/pep8/build/markupsafe/setup.py) egg_info for package markupsafe
Installing collected packages: markupsafe
  Running setup.py install for markupsafe
    building 'markupsafe._speedups' extension
    x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c markupsafe/_speedups.c -o build/temp.linux-x86_64-2.7/markupsafe/_speedups.o
    x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/markupsafe/_speedups.o -o build/lib.linux-x86_64-2.7/markupsafe/_speedups.so
Successfully installed markupsafe
Cleaning up...

stack@tv-156:/opt/stack/congress$ cat /opt/stack/congress/requirements.txt
# The order of packages is significant, because pip processes them in the order
# of appearance.
Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
argparse
Babel>=1.3
eventlet>=0.15.2
keystonemiddleware>=1.0.0
mox>=0.5.3
Paste
PasteDeploy>=1.5.0
pbr>=0.6,!=0.7,<1.0
posix_ipc
python-keystoneclient>=0.11.1
python-novaclient>=2.18.0
python-neutronclient>=2.3.6,<3
python-ceilometerclient>=1.0.6
python-cinderclient>=1.1.0
python-swiftclient>=2.2.0
alembic>=0.6.4
python-glanceclient>=0.14.0
Routes>=1.12.3,!=2.0
six>=1.7.0
oslo.config>=1.4.0 # Apache-2.0
# markupsafe>=0.9.2
# oslo.db>=1.1.0 # Apache-2.0
# oslo.serialization>=1.0.0 # Apache-2.0
# WebOb>=1.2.3

stack@tv-156:/opt/stack/congress$ cat /opt/stack/congress/requirements2.txt
markupsafe>=0.9.2

Thanks,
-Steven

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amit.das at cloudbyte.com Fri Dec 19 07:41:00 2014
From: amit.das at cloudbyte.com (Amit Das)
Date: Fri, 19 Dec 2014 13:11:00 +0530
Subject: [openstack-dev] [cinder] [driver] DB operations
Message-ID: 

Hi Stackers,

I have been developing a Cinder driver for CloudByte storage and have come across some scenarios where the driver needs to do create, read & update operations on the cinder database (the volume_admin_metadata table). This is required to establish a mapping between OpenStack IDs and the backend storage IDs.

Now, I have got some review comments w.r.t. the usage of DB-related operations, especially w.r.t. raising the context to admin. In short, it has been advised not to use "context.get_admin_context()".
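The AttributeError pasted below ('module' object has no attribute 'is_admin') usually means the cinder `context` *module* (what `from cinder import context` gives you) was passed where a per-request `RequestContext` *instance* was expected. A minimal, self-contained sketch of that confusion, using stand-in classes rather than the real cinder code:

```python
import types

class RequestContext(object):
    """Stand-in for cinder.context.RequestContext (not the real class)."""
    def __init__(self, is_admin=False):
        self.is_admin = is_admin

def is_admin_context(context):
    """Mirrors the attribute check done in cinder/db/sqlalchemy/api.py."""
    return context.is_admin

# Passing a module object where a context instance is expected blows up,
# because modules have no `is_admin` attribute:
fake_context_module = types.ModuleType('context')
try:
    is_admin_context(fake_context_module)
except AttributeError as e:
    print(e)  # the exact message wording varies by Python version

# Passing the per-request context object that the volume manager hands to
# the driver works as intended:
assert is_admin_context(RequestContext(is_admin=True)) is True
```

So the usual pattern is to thread the `RequestContext` the manager passes into the driver call down to any DB helper, rather than importing the module or minting a fresh admin context inside the driver.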
https://review.openstack.org/#/c/102511/15/cinder/volume/drivers/cloudbyte/cloudbyte.py

However, I get errors trying to use the default context, as shown below:

2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 103, in is_admin_context
2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher     return context.is_admin
2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher AttributeError: 'module' object has no attribute 'is_admin'

So what is the proper way to run these DB operations from within a driver?

Regards,
Amit
CloudByte Inc.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sahid.ferdjaoui at redhat.com Fri Dec 19 08:01:49 2014
From: sahid.ferdjaoui at redhat.com (Sahid Orentino Ferdjaoui)
Date: Fri, 19 Dec 2014 09:01:49 +0100
Subject: [openstack-dev] [nova] Volunteer for BP 'Improve Nova KVM IO support'
In-Reply-To: 
References: 
Message-ID: <20141219080149.GA2160@redhat.redhat.com>

On Fri, Dec 19, 2014 at 11:36:03AM +0800, Rui Chen wrote:
> Hi,
>
> Is anybody still working on this nova BP 'Improve Nova KVM IO support'?
> https://blueprints.launchpad.net/nova/+spec/improve-nova-kvm-io-support

This feature is already in review. Since it only adds an option to libvirt, I guess we can consider not requiring a spec, but I may be wrong.

https://review.openstack.org/#/c/117442/

s.

> I am willing to complete the nova-spec and implement this feature in kilo or
> subsequent versions.
>
> Feel free to assign this BP to me, thanks :)
>
> Best Regards.
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From y-goto at jp.fujitsu.com Fri Dec 19 08:02:04 2014
From: y-goto at jp.fujitsu.com (Yasunori Goto)
Date: Fri, 19 Dec 2014 17:02:04 +0900
Subject: [openstack-dev] [Heat] How can I write at milestone section of blueprint?
Message-ID: <20141219170200.0619.E1E9C6FF@jp.fujitsu.com>

Hello,

This is my first mail to the OpenStack community, and I have a small question about how to write a blueprint for Heat.

Currently our team would like to propose two interfaces for user operations in HOT. (One is an "Event handler", which notifies a user-defined event to Heat; the other is a definition of the action Heat takes when it catches that notification.) So I'm preparing the blueprint for it.

However, I cannot find guidance on what to write in the milestone section of the blueprint.

The Heat blueprint template has a section for Milestones:
"Milestones -- Target Milestone for completeion:"

But I don't think I can decide it by myself. In my understanding, it should be decided by the PTL. In addition, the above request will probably not be finished by Kilo; I suppose it will land in the "L" version or later.

So, what should I write in this section? "Kilo-x", "L version", or leave it empty?

Thanks,

--
Yasunori Goto

From skywalker.nick at gmail.com Fri Dec 19 08:19:19 2014
From: skywalker.nick at gmail.com (Li Ma)
Date: Fri, 19 Dec 2014 16:19:19 +0800
Subject: [openstack-dev] [oslo.messaging][devstack] ZeroMQ driver maintenance next steps
In-Reply-To: <69DE76EB-8BCD-4D35-9099-13B71160AF80@doughellmann.com>
References: <546A058C.7090800@ubuntu.com> <1C39F2D6-C600-4BA5-8F95-79E2E7DA7172@doughellmann.com> <6F5E7E0C-656F-4616-A816-BC2E340299AD@doughellmann.com> <37ecc12f660ddca2b5a76930c06c1ce6@sileht.net> <546B4DC7.9090309@ubuntu.com> <548679CD.4030205@gmail.com> <69DE76EB-8BCD-4D35-9099-13B71160AF80@doughellmann.com>
Message-ID: <5493DF87.9000901@gmail.com>

On 2014/12/9 22:07, Doug Hellmann wrote:
> On Dec 8, 2014, at 11:25 PM, Li Ma wrote:
>
>> Hi all, I tried to deploy zeromq by devstack and it definitely failed with lots of problems, like dependencies, topics, matchmaker setup, etc. I've already registered a blueprint for devstack-zeromq [1].
>
> I added the [devstack] tag to the subject of this message so that team will see the thread.
> >> Besides, I suggest to build a wiki page in order to trace all the workitems related with ZeroMQ. The general sections may be [Why ZeroMQ], [Current Bugs & Reviews], [Future Plan & Blueprints], [Discussions], [Resources], etc.
>
> Coordinating the work on this via a wiki page makes sense. Please post the link when you're ready.
>
> Doug

Hi all,

I collected the current status of the ZeroMQ driver and posted a wiki link [1] for it. For those bugs that are marked as Critical & High, as far as I know, some developers are working on them. Patches are coming soon. BTW, I'm also working on the devstack support. Hope to land everything in the kilo cycle.

[1] https://wiki.openstack.org/wiki/ZeroMQ

From thierry at openstack.org Fri Dec 19 08:20:04 2014
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 19 Dec 2014 09:20:04 +0100
Subject: [openstack-dev] Kilo-1 development milestone available
Message-ID: <5493DFB4.3090000@openstack.org>

Hi everyone,

The first milestone of the Kilo development cycle, "kilo-1", is now reached for Keystone, Glance, Nova, Horizon, Neutron, Cinder, Ceilometer, Heat, Trove, Sahara, and Ironic! It contains all the new features and bugfixes that have been added since the Juno Feature Freeze in September.

You can find the full list of new features and fixed bugs, as well as tarball downloads, at:

https://launchpad.net/keystone/kilo/kilo-1
https://launchpad.net/glance/kilo/kilo-1
https://launchpad.net/nova/kilo/kilo-1
https://launchpad.net/horizon/kilo/kilo-1
https://launchpad.net/neutron/kilo/kilo-1
https://launchpad.net/cinder/kilo/kilo-1
https://launchpad.net/ceilometer/kilo/kilo-1
https://launchpad.net/heat/kilo/kilo-1
https://launchpad.net/trove/kilo/kilo-1
https://launchpad.net/sahara/kilo/kilo-1
https://launchpad.net/ironic/kilo/kilo-1

91 blueprints were implemented and no less than 1056 bugs were fixed during this milestone.
And that is not even counting all the features and bugfixes that were pushed to Oslo libraries in the same timeframe.

The next development milestone, kilo-2, is scheduled for February 5th. You can further track upcoming features and Kilo release cycle status at:
http://status.openstack.org/release/

Enjoy your local end-of-year holidays!

--
Thierry Carrez (ttx)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: 

From thomas.spatzier at de.ibm.com Fri Dec 19 08:27:10 2014
From: thomas.spatzier at de.ibm.com (Thomas Spatzier)
Date: Fri, 19 Dec 2014 09:27:10 +0100
Subject: [openstack-dev] [Heat] How can I write at milestone section of blueprint?
In-Reply-To: <20141219170200.0619.E1E9C6FF@jp.fujitsu.com>
References: <20141219170200.0619.E1E9C6FF@jp.fujitsu.com>
Message-ID: 

Hi Yasunori,

you can submit a blueprint spec as a gerrit review to the heat-specs repository [1]. I would suggest to have a look at some existing specs that already got accepted to have an example for the format, important sections, etc.

All kilo related specs are in a kilo sub-directory in the repo, and the proposed milestone is mentioned in the spec itself.

[1] https://github.com/openstack/heat-specs

Regards,
Thomas

> From: Yasunori Goto
> To: Openstack-Dev-ML
> Date: 19/12/2014 09:05
> Subject: [openstack-dev] [Heat] How can I write at milestone section
> of blueprint?
> > Heat blueprint template has a section for Milestones. > "Milestones -- Target Milestone for completeion:" > > But I don't think I can decide it by myself. > In my understanding, it should be decided by PTL. > > In addition, probably the above our request will not finish > by Kilo. I suppose it will be "L" version or later. > > So, what should I write at this section? > "Kilo-x", "L version", or empty? > > Thanks, > > -- > Yasunori Goto > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From y-goto at jp.fujitsu.com Fri Dec 19 08:57:47 2014 From: y-goto at jp.fujitsu.com (Yasunori Goto) Date: Fri, 19 Dec 2014 17:57:47 +0900 Subject: [openstack-dev] [Heat] How can I write at milestone section of blueprint? In-Reply-To: References: <20141219170200.0619.E1E9C6FF@jp.fujitsu.com> Message-ID: <20141219175743.7CC8.E1E9C6FF@jp.fujitsu.com> Hi, Thomas-san, Thank you for your response. > you can submit a blueprint spec as a gerrit review to the heat-specs > repository [1]. > I would suggest to have a look at some existing specs that already got > accepted to have an example for the format, important sections etc. > > All kilo related specs are in a kilo sub-directory in the repo, and the > proposed milestone is mentioned in the spec itself. > > [1] https://github.com/openstack/heat-specs Hmm. However, "Kilo-1", "Kilo-2", or "Kilo-3" is written at the section of "Target Milestone for completion:" in these blueprint. I don't find I can decide such concrete Milestone for my new blueprint. It is why I asked... Thanks, > > Regards, > Thomas > > > > From: Yasunori Goto > > To: Openstack-Dev-ML > > Date: 19/12/2014 09:05 > > Subject: [openstack-dev] [Heat] How can I write at milestone section > > of blueprint? 
> > > > > > Hello, > > > > This is the first mail at Openstack community, > > and I have a small question about how to write blueprint for Heat. > > > > Currently our team would like to propose 2 interfaces > > for users operation in HOT. > > (One is "Event handler" which is to notify user's defined event to heat. > > Another is definitions of action when heat catches the above > notification.) > > So, I'm preparing the blueprint for it. > > > > However, I can not find how I can write at the milestone section of > blueprint. > > > > Heat blueprint template has a section for Milestones. > > "Milestones -- Target Milestone for completeion:" > > > > But I don't think I can decide it by myself. > > In my understanding, it should be decided by PTL. > > > > In addition, probably the above our request will not finish > > by Kilo. I suppose it will be "L" version or later. > > > > So, what should I write at this section? > > "Kilo-x", "L version", or empty? > > > > Thanks, > > > > -- > > Yasunori Goto > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Yasunori Goto From sbogatkin at mirantis.com Fri Dec 19 09:00:09 2014 From: sbogatkin at mirantis.com (Stanislaw Bogatkin) Date: Fri, 19 Dec 2014 13:00:09 +0400 Subject: [openstack-dev] [FUEL] Bootstrap NTP sync. Message-ID: Hi guys, We have a little concern related to Fuel bootstrap node NTP sync. 
Currently we try to sync time on the bootstrap node with the master node, but the problem is that the NTP protocol has a long convergence time. So if we just install the master node and right after that try to start some bootstrap node, the bootstrap fails to sync time with the master, because at that moment the master does not yet appear as a trusted time source.

How we can solve that problem:

1. We can start the bootstrap node a long time after the master (once the master has converged its time). This seems like a bad idea: the master node's convergence time depends on the upstream NTP servers and may be quite long, and the user shouldn't have to wait that long just to start a bootstrap node.

2. We can forcibly use the master's local time as "trusted". Actually, we already do that for the case when the master is a bare-metal node. We can do it for a virtual node too; it is not as bad an idea as many would say, especially when the master node's stratum will be low (10-12).

3. We can mask the return value of the bootstrap node's ntpdate service in such a way that it always returns success. It's a dirty hack; it will calm down customers, but it doesn't solve the problem - time will stay unsynced.

As for me, the second option is best. What do you think about it?

-------------- next part --------------
An HTML attachment was scrubbed...
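For option 2, ntpd's undisciplined local-clock driver is the usual mechanism. A sketch of what that could look like in the master node's ntp.conf (the upstream server names here are placeholders, and the exact fudged stratum is a deployment choice):

```
# /etc/ntp.conf on the master node (sketch only; upstream servers are
# placeholders for whatever the deployment actually configures)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst

# Undisciplined local clock: lets ntpd serve its own time to bootstrap
# nodes even before upstream sync converges. Fudging it to stratum 10
# keeps it from being preferred once real upstream sources are reachable.
server 127.127.1.0
fudge 127.127.1.0 stratum 10
```

Newer ntpd versions offer "tos orphan <stratum>" as an alternative to the local-clock driver for the same fallback behaviour.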
URL: 

From khanh-toan.tran at cloudwatt.com Fri Dec 19 09:03:31 2014
From: khanh-toan.tran at cloudwatt.com (Khanh-Toan Tran)
Date: Fri, 19 Dec 2014 09:03:31 +0000 (UTC)
Subject: [openstack-dev] [Congress] Re: Placement and Scheduling via Policy
In-Reply-To: 
References: <98E78314-3F6D-4F06-A6EF-6BDA94E54B4B@vmware.com> <2c1d29c9ed6c41aabc8ae2efa964b9aa@BRMWP-EXMB12.corp.brocade.com> <6084_1418799460_54912964_6084_19713_1_CCD65D20E73C3348AA859183AFEB3ECF14209493@PEXCVZYM12.corporate.adroot.infra.ftgroup>
Message-ID: <1696083167.22523772.1418979811149.JavaMail.zimbra@cloudwatt.com>

Hi all,

I did an analysis a while ago of how to use the SolverScheduler with a policy engine:

https://docs.google.com/document/d/1RfP7jRsw1mXMjd7in72ARjK0fTrsQv1bqolOriIQB2Y

Basically, there should be a plugin that translates the policy into constraints for the solver to solve. This was done using the Policy-Based Engine [1], but it works well with Congress too.

[1] https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler

----- Original Mail -----
> From: "Tim Hinrichs"
> To: "ruby krishnaswamy"
> Cc: "Prabhakar Kudva", "openstack-dev", "Gokul B Kandiraju"
> Sent: Thursday, 18 December 2014 18:24:59
> Subject: Re: [openstack-dev] [Congress] Re: Placement and Scheduling via Policy
>
> Hi all,
>
> Responses inline.
>
> On Dec 16, 2014, at 10:57 PM, wrote:
>
> Hi Tim & All
>
> @Tim: I did not reply to openstack-dev. Do you think we could have an openstack list specific for "congress" to which anybody may subscribe?
> > > In this case, it is better to first work this out for a ?simpler? case, > e.g. your running example concerning the network/groups. > Note: some actions concern only some data base (e.g. insert the > user within some group). > > > > 2) From Prabhakar?s mail > > ?Enforcement. That is with a large number of constraints in place for > placement and > scheduling, how does the policy engine communicate and enforce the placement > constraints to nova scheduler. ? > > Nova scheduler (current): It assigns VMs to servers based on the > policy set by the administrator (through filters and host > aggregates). > > The administrator also configures a scheduling heuristic (implemented as a > driver), for example ?round-robin? driver. > Then the computed assignment > is sent back to the > requestor (API server) that > interacts with nova-compute > to provision the VM. > The current nova-scheduler > has another function: It > updates the allocation > status of each compute node > on the DB (through another > indirection called > nova-conductor) > > So it is correct to re-interpret your statement as follows: > > - What is the entity with which the policy engine interacts for either > proactive or reactive placement management? > > - How will the output from the policy engine (for example the placement > matrix) be communicated back? > > o Proactive: this gives the mapping of VM to host > > o Reactive: this gives the new mapping of running VMs to hosts > > - How starting from the placement matrix, the correct migration plan > will be executed? (for reactive case) > > > > 3) Currently openstack does not have ?automated management of reactive > placement?: Hence if the policy engine is used for reactive placement, then > there is a need for another ?orchestrator? that can interpret the new > proposed placement configuration (mapping of VM to servers) and execute the > reconfiguration workflow. > > > 4) So with a policy-based ?placement engine? 
that is integrated with external solvers, then this engine will replace nova-scheduler?
> Could we converge on this?

The notes from Yathiraj say that there is already a policy-based Nova scheduler we can use. I suggest we look into that. It could potentially simplify our problem to the point where we need only figure out how to convert a fragment of the Congress policy language into their policy language. But those of you who are experts in placement will know better.

https://github.com/stackforge/nova-solver-scheduler

Tim

> Regards
> Ruby
>
> From: Tim Hinrichs [mailto:thinrichs at vmware.com]
> Sent: Tuesday, 16 December 2014 19:25
> To: Prabhakar Kudva
> Cc: KRISHNASWAMY Ruby IMT/OLPS; Ramki Krishnan (ramk at Brocade.com); Gokul B Kandiraju; openstack-dev
> Subject: [Congress] Re: Placement and Scheduling via Policy

[Adding openstack-dev to this thread. For those of you just joining: we started kicking around ideas for how we might integrate a special-purpose VM placement engine into Congress.]

Kudva: responses inline.

> On Dec 16, 2014, at 6:25 AM, Prabhakar Kudva wrote:
>
> Hi,
>
> I am very interested in this.
>
> So, it looks like there are two parts to this:
> 1. Policy analysis when there are a significant mix of logical and builtin predicates (i.e.,
That is with a large number of constraints in place for > placement and > scheduling, how does the policy engine communicate and enforce the placement > constraints to nova scheduler. > > I would imagine that we could delegate either enforcement or monitoring or > both. Eventually we want enforcement here, but monitoring could be useful > too. > > And yes you?re asking the right questions. I was trying to break the problem > down into pieces in my bullet (1) below. But I think there is significant > overlap in the questions we need to answer whether we?re delegating > monitoring or enforcement. > > > Both of these require some form of mathematical analysis. > > Would be happy and interested to discuss more on these lines. > > Maybe take a look at how I tried to breakdown the problem into separate > questions in bullet (1) below and see if that makes sense. > > Tim > > > Prabhakar > > > > > > > From: Tim Hinrichs > > To: > "ruby.krishnaswamy at orange.com" > > > Cc: "Ramki Krishnan (ramk at Brocade.com)" > >, Gokul B > Kandiraju/Watson/IBM at IBMUS, Prabhakar Kudva/Watson/IBM at IBMUS > Date: 12/15/2014 12:09 PM > Subject: Re: Placement and Scheduling via Policy > ________________________________ > > > > [Adding Prabhakar and Gokul, in case they are interested.] > > 1) Ruby, thinking about the solver as taking 1 matrix of [vm, server] and > returning another matrix helps me understand what we?re talking > about?thanks. I think you?re right that once we move from placement to > optimization problems in general we?ll need to figure out how to deal with > actions. But if it?s a placement-specific policy engine, then we can build > VM-migration into it. > > It seems to me that the only part left is figuring out how to take an > arbitrary policy, carve off the placement-relevant portion, and create the > inputs the solver needs to generate that new matrix. Some thoughts... 
> > - My gut tells me that the placement-solver should basically say ?I enforce > policies having to do with the schema nova:location.? This way the Congress > policy engine knows to give it policies relevant to nova:location > (placement). If we do that, I believe we can carve off the right sub > theory. > > - That leaves taking a Datalog policy where we know nova:location is > important and converting it to the input language required by a linear > solver. We need to remember that the Datalog rules may reference tables > from other services like Neutron, Ceilometer, etc. I think the key will be > figuring out what class of policies we can actually do that for reliably. > Cool?a concrete question. > > > 2) We can definitely wait until January on this. I?ll be out of touch > starting Friday too; it seems we all get back early January, which seems > like the right time to resume our discussions. We have some concrete > questions to answer, which was what I was hoping to accomplish before we all > went on holiday. > > Happy Holidays! > Tim > > > On Dec 15, 2014, at 5:53 AM, > > > > wrote: > > Hi Tim > > ?Questions: > 1) Is there any more data the solver needs? Seems like it needs something > about CPU-load for each VM. > 2) Which solver should we be using? What does the linear program that we > feed it look like? How do we translate the results of the linear solver > into a collection of ?migrate_VM? API calls?? > > > > Question (2) seems to me the first to address, in particular: > ?how to prepare the input (variables, constraints, goal) and invoke the > solver? > => We need rules that represent constraints to give the solver (e.g. a > technical constraint that a VM should not be assigned to more than one > server or that more than maximum resource (cpu / mem ?) of a server cannot > be assigned. > > ?how to translate the results of the linear solver into a collection of > API calls?: > => The output from the ?solver? 
will give the new placement plan (respecting the constraints in input).
> o E.g. a table of [vm, server, true/false]
> => Then this depends on how "action" is going to be implemented in Congress (whether an external solver is used or not):
> o Is the action presented as the "final" DB rows that the system must produce as a result of the actions?
> o E.g. if the current vm table is [vm3, host4] and the recomputed row says [vm3, host6], then the action is to move vm3 to host6?
>
> "how will the solver be invoked?"
> => When will the optimization call be invoked?
> => Is it "batched", e.g. periodically invoking Congress to compute new assignments?
>
> Which solver to use:
> http://www.coin-or.org/projects/
> and
> http://www.coin-or.org/projects/PuLP.xml
> I think it may be useful to pass through an interface (e.g. an LP modeler to generate LP files in standard formats accepted by prevalent solvers).
>
> The mathematical program:
> We can (Orange) contribute to writing down, in an informal way, the program for this precise use case, if this can wait until January.
> Perhaps the objective may be to "minimize the number of servers whose usage is less than 50%", since the original policy "Not more than 1 server of type1 to have a load under 50%" need not necessarily have a solution.
> This may help to derive the "mappings" from Congress (rules to program equations, intermediary tables to program variables).
>
> For the "migration" use case: it may be useful to add some constraint representing the cost of migration, such that the solver computes the new assignment plan such that the maximum migration cost is not exceeded. To start with, perhaps the number of migrations?
>
> I will be away from the end of the week until 5th January. I will also discuss with colleagues to see how we can formalize contribution (congress+nfv poc).
>
> Rgds
> Ruby
>
> From: Tim Hinrichs [mailto:thinrichs at vmware.com]
> Sent: Friday, 12 December 2014 19:41
> To
: KRISHNASWAMY Ruby IMT/OLPS
> Cc: Ramki Krishnan (ramk at Brocade.com)
> Subject: Re: Placement and Scheduling via Policy

There's a ton of good stuff here!

So if we took Ramki's initial use case and combined it with Ruby's HA constraint, we'd have something like the following policy.

// anti-affinity
error (server, VM1, VM2) :-
    same_ha_group(VM1, VM2),
    nova:location(VM1, server),
    nova:location(VM2, server)

// server-utilization
error(server) :-
    type1_server(server),
    ceilometer:average_utilization(server, "cpu-util", avg),
    avg < 50

As a start, this seems plenty complex to me. Anti-affinity is great b/c it DOES NOT require a sophisticated solver; server-utilization is great because it DOES require a linear solver.

Data the solver needs:
- Ceilometer: cpu-utilization for all the servers
- Nova: data as to where each VM is located
- Policy: high-availability groups

Questions:
1) Is there any more data the solver needs? Seems like it needs something about CPU-load for each VM.
2) Which solver should we be using? What does the linear program that we feed it look like? How do we translate the results of the linear solver into a collection of "migrate_VM" API calls?

Maybe another few emails and then we set up a phone call.

Tim

> On Dec 11, 2014, at 1:33 AM, wrote:
>
> Hello
>
> A) First a small extension to the use case that Ramki proposes
>
> - Add a high availability constraint.
> - Assuming server-a and server-b are of the same size and same failure model.
> [Later: the assumption of identical failure rates can be loosened. Instead of considering only servers as failure domains, we can introduce other failure domains ==> not just an anti-affinity policy but a calculation from a 99.99... requirement to VM placements, e.g.
]
- For an exemplary maximum usage scenario, 53 physical servers could be under peak utilization (100%), 1 server (server-a) could be under partial utilization (50%) with 2 instances of type large.3 and 1 instance of type large.2, and 1 server (server-b) could be under partial utilization (37.5%) with 3 instances of type large.2.
  Call VM.one.large2 the large2 VM in server-a.
  Call VM.two.large2 one of the large2 VMs in server-b.
- VM.one.large2 and VM.two.large2
- When one of the large.3 instances mapped to server-a is deleted from physical server type 1, Policy 1 will be violated, since the overall utilization of server-a falls to 37.5%.
- Various new placement(s) are described below.

VM.two.large2 must not be moved. Moving VM.two.large2 breaks the non-affinity constraint:

error (server, VM1, VM2) :-
    node (VM1, server1),
    node (VM2, server2),
    same_ha_group(VM1, VM2),
    equal(server1, server2);

1) New placement 1: Move 2 instances of large.2 to server-a. Overall utilization of server-a: 50%. Overall utilization of server-b: 12.5%.

2) New placement 2: Move 1 instance of large.3 to server-b. Overall utilization of server-a: 0%. Overall utilization of server-b: 62.5%.

3) New placement 3: Move 3 instances of large.2 to server-a. Overall utilization of server-a: 62.5%. Overall utilization of server-b: 0%.

New placements 2 and 3 could be considered optimal, since they achieve maximal bin packing and open up the door for turning off server-a or server-b and maximizing energy efficiency.

But new placement 3 breaks the client policy.

BTW: what happens if a given situation does not allow the policy violation to be removed?

B) Ramki's original use case can itself be extended:

Adding additional constraints to the previous use case due to cases such as:

- Server heterogeneity
- CPU "pinning"
- "VM groups"
(and allocation > > - Application interference > > - Refining on the statement "instantaneous energy consumption can be > approximately measured using an overall utilization metric, which is a > combination of CPU utilization, memory usage, I/O usage, and network usage" > > > Let me know if this will interest you. Some (e.g. application interference) > will need some time. E.g. benchmarking / profiling to classify VMs etc. > > > C) New placement plan execution > > - In Ramki's original use case, violation is detected at events such as > VM delete. > While certainly this by itself is sufficiently complex, we may need to > consider other triggering cases (periodic or when multiple VMs are > deleted/added) > - In this case, it may not be sufficient to compute the new placement > plan that brings the system to a configuration that does not break policy, > but also add other goals > > > > D) Let me know if a use case such as placing "video conferencing servers" > (geographically distributed clients) would suit you (multi site scenario) > > => Or is it too premature? > > Ruby > > De : Tim Hinrichs [mailto:thinrichs at vmware.com] > Envoyé : mercredi 10 décembre 2014 19:44 > À : KRISHNASWAMY Ruby IMT/OLPS > Cc : Ramki Krishnan (ramk at Brocade.com) > Objet : Re: Placement and Scheduling via Policy > > Hi Ruby, > > Whatever information you think is important for the use case is good. > Section 3 from one of the docs Ramki sent you covers his use case. > https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1 > > From my point of view, the key things for the use case are: > > - The placement policy (i.e. the conditions under which VMs require > migration). > > - A description of how we want to compute what specific migrations should be > performed (a sketch of (1) the information that we need about current > placements, policy violations, etc., (2) what systems/algorithms/etc.
can > utilize that input to figure out what migrations to perform. > > I think we want to focus on the end-user/customer experience (write a policy, > and watch the VMs move around to obey that policy in response to environment > changes) and then work out the details of how to implement that experience. > That's why I didn't include things like delays, asynchronous/synchronous, > architecture, applications, etc. in my 2 bullets above. > > Tim > > On Dec 10, 2014, at 8:55 AM, > > > > wrote: > > > > Hi Ramki, Tim > > > By a "format" for describing use cases, I meant to ask what sets of > information to provide, for example, > - what granularity in description of use case? > - a specific placement policy (and perhaps citing reasons for needing > such policy)? > - Specific applications > - Requirements on the placement manager itself (delay, ...)? > o Architecture as well > - Specific services from the placement manager (using Congress), such > as, > o Violation detection (load, security, ...) > - Adapting (e.g. context-aware) of policies used > > > In any case I will read the documents that Ramki has sent to not resend > similar things. > > Regards > Ruby > > De : Ramki Krishnan [mailto:ramk at Brocade.com] > Envoyé : mercredi 10 décembre 2014 16:59 > À : Tim Hinrichs; KRISHNASWAMY Ruby IMT/OLPS > Cc : Norival Figueira; Pierre Ettori; Alex Yip; > dilikris at in.ibm.com > Objet : RE: Placement and Scheduling via Policy > > Hi Tim, > > This sounds like a plan. It would be great if you could add the links below > to the Congress wiki. I am all for discussing this in the openstack-dev > mailing list and at this point this discussion is completely open.
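[Editor's note: the anti-affinity rule quoted earlier in this thread can be prototyped outside Congress as a plain join over the same three relations. The sketch below is purely illustrative: the table contents and function names are invented here, and this is not Congress, Nova, or Ceilometer API.]

```python
# Hypothetical stand-in for the Datalog rule quoted earlier in the thread:
#   error(server, VM1, VM2) :- same_ha_group(VM1, VM2),
#                              nova:location(VM1, server),
#                              nova:location(VM2, server)
# All data below is invented for illustration.
from itertools import combinations

def anti_affinity_errors(location, ha_groups):
    """Return (server, vm1, vm2) triples that violate anti-affinity.

    location:  dict mapping VM name -> server it runs on (nova:location)
    ha_groups: iterable of sets of VM names (same_ha_group)
    """
    errors = []
    for group in ha_groups:
        for vm1, vm2 in combinations(sorted(group), 2):
            # Two VMs of one HA group on the same server is a violation.
            if location.get(vm1) is not None and location.get(vm1) == location.get(vm2):
                errors.append((location[vm1], vm1, vm2))
    return errors

location = {"VM.one.large2": "server-a", "VM.two.large2": "server-a"}
ha_groups = [{"VM.one.large2", "VM.two.large2"}]
print(anti_affinity_errors(location, ha_groups))
# -> [('server-a', 'VM.one.large2', 'VM.two.large2')]
```

As Tim notes, this check needs no solver at all; only the utilization rule calls for a linear program.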
> > IRTF NFVRG Research Group: > https://trac.tools.ietf.org/group/irtf/trac/wiki/nfvrg > > IRTF NFVRG draft on NFVIaaS placement/scheduling (includes system analysis > for the PoC we are thinking): > https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1 > > IRTF NFVRG draft on Policy Architecture and Framework (looking forward to > your comments and thoughts): > https://datatracker.ietf.org/doc/draft-norival-nfvrg-nfv-policy-arch/?include_text=1 > > Hi Ruby, > > Looking forward to your use cases. > > Thanks, > Ramki > > > > _________________________________________________________________________________________________________________________ > > This message and its attachments may contain confidential or privileged > information that may be protected by law; > they should not be distributed, used or copied without authorisation. > If you have received this email in error, please notify the sender and delete > this message and its attachments. > As emails may be altered, Orange is not liable for messages that have been > modified, changed or falsified. > Thank you.
> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From chenrui.momo at gmail.com Fri Dec 19 09:03:52 2014 From: chenrui.momo at gmail.com (Rui Chen) Date: Fri, 19 Dec 2014 17:03:52 +0800 Subject: [openstack-dev] [nova] Volunteer for BP 'Improve Nova KVM IO support' In-Reply-To: <20141219080149.GA2160@redhat.redhat.com> References: <20141219080149.GA2160@redhat.redhat.com> Message-ID: Thank @Sahid, I will help to review this patch :) 2014-12-19 16:01 GMT+08:00 Sahid Orentino Ferdjaoui < sahid.ferdjaoui at redhat.com>: > > On Fri, Dec 19, 2014 at 11:36:03AM +0800, Rui Chen wrote: > > Hi, > > > > Is anybody still working on this nova BP 'Improve Nova KVM IO support'? > > https://blueprints.launchpad.net/nova/+spec/improve-nova-kvm-io-support > > This feature is already in review; since it only adds an option > to libvirt, I guess we can consider not requiring a spec, but I > may be wrong. > > https://review.openstack.org/#/c/117442/ > > s. > > > I am willing to complete the nova-spec and implement this feature in kilo or > > subsequent versions. > > > > Feel free to assign this BP to me, thanks :) > > > > Best Regards. > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pasquale.porreca at dektech.com.au Fri Dec 19 09:07:14 2014 From: pasquale.porreca at dektech.com.au (Pasquale Porreca) Date: Fri, 19 Dec 2014 10:07:14 +0100 Subject: [openstack-dev] #PERSONAL# : Git checkout command for Blueprints submission In-Reply-To: References: Message-ID: <5493EAC2.4070403@dektech.com.au> I think you meant git checkout -b bp/ :) @Swati: when you commit, add the message: Implements: blueprint For more info you can give a look at gerrit workflow wiki: https://wiki.openstack.org/wiki/Gerrit_Workflow On 12/18/14 18:17, Edgar Magana wrote: > It is git checkout -b bp/ > > Edgar > > From: Swati Shukla1 > > Reply-To: "openstack-dev at lists.openstack.org > " > > > Date: Tuesday, December 16, 2014 at 10:53 PM > To: "openstack-dev at lists.openstack.org > " > > > Subject: [openstack-dev] #PERSONAL# : Git checkout command for > Blueprints submission > > > Hi All, > > Generally, for bug submissions, we use ""git checkout -b > bug/"" > > What is the similar 'git checkout' command for blueprints submission? > > Swati Shukla > Tata Consultancy Services > Mailto: swati.shukla1 at tcs.com > Website: http://www.tcs.com > ____________________________________________ > Experience certainty. IT Services > Business Solutions > Consulting > ____________________________________________ > > =====-----=====-----===== > Notice: The information contained in this e-mail > message and/or attachments to it may contain > confidential or privileged information. If you are > not the intended recipient, any dissemination, use, > review, distribution, printing or copying of the > information contained in this e-mail message > and/or attachments to it are strictly prohibited. If > you have received this communication in error, > please notify us by reply e-mail or telephone and > immediately and permanently delete the message > and any attachments. 
Thank you > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr -------------- next part -------------- An HTML attachment was scrubbed... URL: From pshchelokovskyy at mirantis.com Fri Dec 19 09:13:28 2014 From: pshchelokovskyy at mirantis.com (Pavlo Shchelokovskyy) Date: Fri, 19 Dec 2014 11:13:28 +0200 Subject: [openstack-dev] [Heat] How can I write at milestone section of blueprint? In-Reply-To: <20141219175743.7CC8.E1E9C6FF@jp.fujitsu.com> References: <20141219170200.0619.E1E9C6FF@jp.fujitsu.com> <20141219175743.7CC8.E1E9C6FF@jp.fujitsu.com> Message-ID: Hi Yasunori, that's the point of using code review for specs - you make your best bet / effort, and we then as a community decide if it should be changed ;) So feel free to submit it with e.g. a K-3 target; we'll figure out the correct thing during review. Best regards, Pavlo Shchelokovskyy Software Engineer Mirantis Inc www.mirantis.com On Fri, Dec 19, 2014 at 10:57 AM, Yasunori Goto wrote: > > Hi, Thomas-san, > > Thank you for your response. > > > you can submit a blueprint spec as a gerrit review to the heat-specs > > repository [1]. > > I would suggest to have a look at some existing specs that already got > > accepted to have an example for the format, important sections etc. > > > > All kilo related specs are in a kilo sub-directory in the repo, and the > > proposed milestone is mentioned in the spec itself. > > > > [1] https://github.com/openstack/heat-specs > > Hmm. > However, "Kilo-1", "Kilo-2", or "Kilo-3" is written in the > "Target Milestone for completion:" section of these blueprints. > > I don't see how I can decide on such a concrete milestone for my new blueprint. > That is why I asked...
> > Thanks, > > > > > > Regards, > > Thomas > > > > > > > From: Yasunori Goto > > > To: Openstack-Dev-ML > > > Date: 19/12/2014 09:05 > > > Subject: [openstack-dev] [Heat] How can I write at milestone section > > > of blueprint? > > > > > > > > > Hello, > > > > > > This is the first mail at Openstack community, > > > and I have a small question about how to write blueprint for Heat. > > > > > > Currently our team would like to propose 2 interfaces > > > for users operation in HOT. > > > (One is "Event handler" which is to notify user's defined event to > heat. > > > Another is definitions of action when heat catches the above > > notification.) > > > So, I'm preparing the blueprint for it. > > > > > > However, I can not find how I can write at the milestone section of > > blueprint. > > > > > > Heat blueprint template has a section for Milestones. > > > "Milestones -- Target Milestone for completeion:" > > > > > > But I don't think I can decide it by myself. > > > In my understanding, it should be decided by PTL. > > > > > > In addition, probably the above our request will not finish > > > by Kilo. I suppose it will be "L" version or later. > > > > > > So, what should I write at this section? > > > "Kilo-x", "L version", or empty? 
> > > > > > Thanks, > > > > > > -- > > > Yasunori Goto > > > > > > > > > > > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Yasunori Goto > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From y-goto at jp.fujitsu.com Fri Dec 19 09:20:45 2014 From: y-goto at jp.fujitsu.com (Yasunori Goto) Date: Fri, 19 Dec 2014 18:20:45 +0900 Subject: [openstack-dev] [Heat] How can I write at milestone section of blueprint? In-Reply-To: References: <20141219175743.7CC8.E1E9C6FF@jp.fujitsu.com> Message-ID: <20141219182042.7CCB.E1E9C6FF@jp.fujitsu.com> Hi, Pavlo-san > Hi Yasunori, > > that's the point of using code review for specs - you make your best > bet / effort, and we then as a community decide if it should be changed ;) > > So feel free to submit it with e.g. a K-3 target, we'll figure out the > correct thing during review. Ok, I got it. Thanks, -- Yasunori Goto From duncan.thomas at gmail.com Fri Dec 19 09:20:55 2014 From: duncan.thomas at gmail.com (Duncan Thomas) Date: Fri, 19 Dec 2014 01:20:55 -0800 Subject: [openstack-dev] [cinder] [driver] DB operations In-Reply-To: References: Message-ID: So our general advice has historically been 'drivers should not be accessing the db directly'.
I haven't had a chance to look at your driver code yet, I've been on vacation, but my suggestion is that if you absolutely must store something in the admin metadata rather than somewhere that is covered by the model update (generally provider location and provider auth) then writing some helper methods that wrap the context bump and db call would be better than accessing it directly from the driver. Duncan Thomas On Dec 18, 2014 11:41 PM, "Amit Das" wrote: > Hi Stackers, > > I have been developing a Cinder driver for CloudByte storage and have come > across some scenarios where the driver needs to do create, read & update > operations on the cinder database (volume_admin_metadata table). This is > required to establish a mapping between OpenStack IDs and the backend > storage IDs. > > Now, I have got some review comments w.r.t the usage of DB related > operations esp. w.r.t raising the context to admin. > > In short, it has been advised not to use "*context.get_admin_context()*". > > > https://review.openstack.org/#/c/102511/15/cinder/volume/drivers/cloudbyte/cloudbyte.py > > However, I get errors trying to use the default context as shown below: > > *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher File > "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 103, in > is_admin_context* > *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher return > context.is_admin* > *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher > AttributeError: 'module' object has no attribute 'is_admin'* > > So what is the proper way to run these DB operations from within a driver? > > > Regards, > Amit > *CloudByte Inc.* > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jp at jamezpolley.com Fri Dec 19 09:21:17 2014 From: jp at jamezpolley.com (James Polley) Date: Fri, 19 Dec 2014 10:21:17 +0100 Subject: [openstack-dev] [TripleO] Weekly meeting roundup - midcycle confirmation, nomergepy blockage Message-ID: We had a few topics come up in this week's meeting which we thought were worth drawing everyone's attention to You can see the full meeting logs at http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-12-16-19.01.log.html (the shorter meeting notes don't have much information) *No mergepy - blocked by bug 1401929* As part of our plans to move away from needing merge.py, Tomas Sedovic raised https://review.openstack.org/#/c/140375/ to make the f20 job use the no-mergepy options. Unfortunately this has run into a roadblock, discussed in https://bugs.launchpad.net/heat/+bug/1401929 (Overzealous validation of images in empty ResourceGroups). Steve Hardy is (I believe) working on a fix, targeted at kilo-2. In the meantime it'd be nice if we could find a workaround... *Mid-cycle dates confirmed - please RSVP* Clint has confirmed that we have the Seattle office booked for Feb 18-20. We have limited space, so if you're planning to attend, *please visit https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup and add your name to the confirmed list, even if you previously indicated you were coming. *Currently we only have 8 confirmed attendees. The etherpad also has a draft schedule and venue details. We aren't planning to arrange a group rate for the hotel, but there are many good hotels within a short distance of the HP office. -------------- next part -------------- An HTML attachment was scrubbed...
If the instance is in the server group with affinity policy, the instance can't evacuate out the failed compute node. I know there is soft affinity policy under development, but think of if the instance in server group with hard affinity means no way to get it back when compute node failed, it's really confuse. I guess there should be some people concern that will violate the affinity policy. But I think the compute node already down, all the instance in that server group are down also, so I think we needn't care about the policy anymore. I wrote up a patch can fix this problem: https://review.openstack.org/#/c/135607/ We have some discussion on the gerrit (Thanks Sylvain for discuss with me), but we still not sure we are on the right direction. So I bring this up at here. Thanks Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From vivekanandan.narasimhan at hp.com Fri Dec 19 09:44:42 2014 From: vivekanandan.narasimhan at hp.com (Narasimhan, Vivekanandan) Date: Fri, 19 Dec 2014 09:44:42 +0000 Subject: [openstack-dev] Request for comments for a possible solution In-Reply-To: <1921367349.598949.1418923701155.JavaMail.zimbra@redhat.com> References: <1921367349.598949.1418923701155.JavaMail.zimbra@redhat.com> Message-ID: <289BD3B977EE7247AB06499D1B4AF1C426A988D8@G5W2725.americas.hpqcorp.net> Hi Mike, Few clarifications inline [Vivek] -----Original Message----- From: Mike Kolesnik [mailto:mkolesni at redhat.com] Sent: Thursday, December 18, 2014 10:58 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution Hi Mathieu, Thanks for the quick reply, some comments inline.. Regards, Mike ----- Original Message ----- > Hi mike, > > thanks for working on this bug : > > On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton wrote: > > > > > > On 12/18/14, 2:06 PM, "Mike Kolesnik" wrote: > > > >>Hi Neutron community members. 
> >> > >>I wanted to query the community about a proposal of how to fix HA > >>routers not working with L2Population (bug 1365476[1]). > >>This bug is important to fix especially if we want to have HA > >>routers and DVR routers working together. > >> > >>[1] https://bugs.launchpad.net/neutron/+bug/1365476 > >> > >>What's happening now? > >>* HA routers use distributed ports, i.e. the port with the same IP & > >>MAC > >> details is applied on all nodes where an L3 agent is hosting this > >>router. > >>* Currently, the port details have a binding pointing to an > >>arbitrary node > >> and this is not updated. > >>* L2pop takes this "potentially stale" information and uses it to create: > >> 1. A tunnel to the node. > >> 2. An FDB entry that directs traffic for that port to that node. > >> 3. If ARP responder is on, ARP requests will not traverse the network. > >>* Problem is, the master router wouldn't necessarily be running on > >>the > >> reported agent. > >> This means that traffic would not reach the master node but some > >>arbitrary > >> node where the router master might be running, but might be in > >>another > >> state (standby, fail). > >> > >>What is proposed? > >>Basically the idea is not to do L2Pop for HA router ports that > >>reside on the tenant network. > >>Instead, we would create a tunnel to each node hosting the HA router > >>so that the normal learning switch functionality would take care of > >>switching the traffic to the master router. > > > > In Neutron we just ensure that the MAC address is unique per network. > > Could a duplicate MAC address cause problems here? > > gary, AFAIU, from a Neutron POV, there is only one port, which is the > router Port, which is plugged twice. One time per port. > I think that the capacity to bind a port to several host is also a > prerequisite for a clean solution here. 
This will be provided by > patches to this bug : > https://bugs.launchpad.net/neutron/+bug/1367391 > > > >>This way no matter where the master router is currently running, the > >>data plane would know how to forward traffic to it. > >>This solution requires changes on the controller only. > >> > >>What's to gain? > >>* Data plane only solution, independent of the control plane. > >>* Lowest failover time (same as HA routers today). > >>* High backport potential: > >> * No APIs changed/added. > >> * No configuration changes. > >> * No DB changes. > >> * Changes localized to a single file and limited in scope. > >> > >>What's the alternative? > >>An alternative solution would be to have the controller update the > >>port binding on the single port so that the plain old L2Pop happens > >>and notifies about the location of the master router. > >>This basically negates all the benefits of the proposed solution, > >>but is wider. > >>This solution depends on the report-ha-router-master spec which is > >>currently in the implementation phase. > >> > >>It's important to note that these two solutions don't collide and > >>could be done independently. The one I'm proposing just makes more > >>sense from an HA viewpoint because of it's benefits which fit the HA > >>methodology of being fast & having as little outside dependency as > >>possible. > >>It could be done as an initial solution which solves the bug for > >>mechanism drivers that support normal learning switch (OVS), and > >>later kept as an optimization to the more general, controller based, > >>solution which will solve the issue for any mechanism driver working > >>with L2Pop (Linux Bridge, possibly others). > >> > >>Would love to hear your thoughts on the subject. > > You will have to clearly update the doc to mention that deployment > with Linuxbridge+l2pop are not compatible with HA. Yes this should be added and this is already the situation right now. 
However if anyone would like to work on a LB fix (the general one or some specific one) I would gladly help with reviewing it. > > Moreover, this solution is downgrading the l2pop solution, by > disabling the ARP-responder when VMs want to talk to a HA router. > This means that ARP requests will be duplicated to every overlay > tunnel to feed the OVS Mac learning table. > This is something that we were trying to avoid with l2pop. But may be > this is acceptable. Yes basically you're correct, however this would be only limited to those tunnels that connect to the nodes where the HA router is hosted, so we would still limit the amount of traffic that is sent across the underlay. Also bear in mind that ARP is actually good (at least in OVS case) since it helps the VM locate on which tunnel the master is, so once it receives the ARP response it records a flow that directs the traffic to the correct tunnel, so we just get hit by the one ARP broadcast but it's sort of a necessary evil in order to locate the master.. [Vivek] When the failover happens, the VMs would be actually sending traffic to the old master node. They won't be getting any response back. At this time does the VMs redo an ARP request for the HA Router? And that again sets up the learned rules correctly again in br-tun, so that the routed traffic from VM continues on to the new master.. > > I know that ofagent is also using l2pop, I would like to know if > ofagent deployment will be compatible with the workaround that you are > proposing. I would like to know that too, hopefully someone from OFagent can shed some light. > > My concern is that, with DVR, there are at least two major features > that are not compatible with Linuxbridge. > Linuxbridge is not running in the gate. I don't know if anybody is > running a 3rd party testing with Linuxbridge deployments. If anybody > does, it would be great to have it voting on gerrit! > > But I really wonder what is the future of linuxbridge compatibility? 
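[Editor's note: the data-plane behaviour being debated here - a learning switch locating the master via the master's own traffic, then being re-pointed after failover - can be modelled in a few lines. This toy simulation is purely illustrative; the port and MAC names below are made up and no Neutron or OVS code is involved.]

```python
# Toy model of the behaviour described above: with no l2pop FDB entry for
# the HA router port, a learning switch discovers which tunnel leads to
# the master, and a frame from the new master (e.g. a gratuitous ARP)
# re-points the entry after failover.

class LearningSwitch:
    def __init__(self):
        self.fdb = {}  # MAC -> port the MAC was last seen on

    def receive(self, src_mac, in_port):
        # Learn (or re-learn) the source MAC; a gratuitous ARP is just
        # another frame from the router MAC, so it refreshes this entry.
        self.fdb[src_mac] = in_port

    def out_port(self, dst_mac):
        # Known MAC -> forward on the learned port; unknown -> flood.
        return self.fdb.get(dst_mac, "flood")

ROUTER_MAC = "fa:16:3e:00:00:01"  # invented MAC for the HA router port
sw = LearningSwitch()

sw.receive(ROUTER_MAC, "tunnel-to-node1")  # master replies via node1
print(sw.out_port(ROUTER_MAC))             # -> tunnel-to-node1

sw.receive(ROUTER_MAC, "tunnel-to-node2")  # failover: frame from node2
print(sw.out_port(ROUTER_MAC))             # -> tunnel-to-node2
```

The sketch also shows why the initial flood is a necessary evil: before any frame from the master arrives, the switch has no entry and must flood the tunnels to the nodes hosting the router.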
> should we keep on improving OVS solution without taking into account > the linuxbridge implementation? I don't know actually, but my capability is to fix it for OVS the best way possible. As I said the situation for LB won't become worse than it already is, legacy routers would till function as always.. This fix also will not block fixing LB in any other way since it can be easily adjusted (if necessary) to work only for supporting mechanisms (OVS AFAIK). Also if anyone is willing to pick up the glove and implement the general controller based fix, or something more focused on LB I will happily help review what I can. [Vivek] Also by this proposal, will the HA Router be able to co-operate with DVR which actually mandates L2-Pop? -- Thanks, Vivek > > Regards, > > Mathieu > > >> > >>Regards, > >>Mike > >> > >>_______________________________________________ > >>OpenStack-dev mailing list > >>OpenStack-dev at lists.openstack.org > >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mathieu.rohon at gmail.com Fri Dec 19 09:58:23 2014 From: mathieu.rohon at gmail.com (Mathieu Rohon) Date: Fri, 19 Dec 2014 10:58:23 +0100 Subject: [openstack-dev] Request for comments for a possible solution In-Reply-To: <289BD3B977EE7247AB06499D1B4AF1C426A988D8@G5W2725.americas.hpqcorp.net> References: <1921367349.598949.1418923701155.JavaMail.zimbra@redhat.com> 
<289BD3B977EE7247AB06499D1B4AF1C426A988D8@G5W2725.americas.hpqcorp.net> Message-ID: Hi vivek, On Fri, Dec 19, 2014 at 10:44 AM, Narasimhan, Vivekanandan wrote: > Hi Mike, > > Few clarifications inline [Vivek] > > -----Original Message----- > From: Mike Kolesnik [mailto:mkolesni at redhat.com] > Sent: Thursday, December 18, 2014 10:58 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution > > Hi Mathieu, > > Thanks for the quick reply, some comments inline.. > > Regards, > Mike > > ----- Original Message ----- >> Hi mike, >> >> thanks for working on this bug : >> >> On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton wrote: >> > >> > >> > On 12/18/14, 2:06 PM, "Mike Kolesnik" wrote: >> > >> >>Hi Neutron community members. >> >> >> >>I wanted to query the community about a proposal of how to fix HA >> >>routers not working with L2Population (bug 1365476[1]). >> >>This bug is important to fix especially if we want to have HA >> >>routers and DVR routers working together. >> >> >> >>[1] https://bugs.launchpad.net/neutron/+bug/1365476 >> >> >> >>What's happening now? >> >>* HA routers use distributed ports, i.e. the port with the same IP & >> >>MAC >> >> details is applied on all nodes where an L3 agent is hosting this >> >>router. >> >>* Currently, the port details have a binding pointing to an >> >>arbitrary node >> >> and this is not updated. >> >>* L2pop takes this "potentially stale" information and uses it to create: >> >> 1. A tunnel to the node. >> >> 2. An FDB entry that directs traffic for that port to that node. >> >> 3. If ARP responder is on, ARP requests will not traverse the network. >> >>* Problem is, the master router wouldn't necessarily be running on >> >>the >> >> reported agent. 
>> >> This means that traffic would not reach the master node but some >> >>arbitrary >> >> node where the router master might be running, but might be in >> >>another >> >> state (standby, fail). >> >> >> >>What is proposed? >> >>Basically the idea is not to do L2Pop for HA router ports that >> >>reside on the tenant network. >> >>Instead, we would create a tunnel to each node hosting the HA router >> >>so that the normal learning switch functionality would take care of >> >>switching the traffic to the master router. >> > >> > In Neutron we just ensure that the MAC address is unique per network. >> > Could a duplicate MAC address cause problems here? >> >> gary, AFAIU, from a Neutron POV, there is only one port, which is the >> router Port, which is plugged twice. One time per port. >> I think that the capacity to bind a port to several host is also a >> prerequisite for a clean solution here. This will be provided by >> patches to this bug : >> https://bugs.launchpad.net/neutron/+bug/1367391 >> >> >> >>This way no matter where the master router is currently running, the >> >>data plane would know how to forward traffic to it. >> >>This solution requires changes on the controller only. >> >> >> >>What's to gain? >> >>* Data plane only solution, independent of the control plane. >> >>* Lowest failover time (same as HA routers today). >> >>* High backport potential: >> >> * No APIs changed/added. >> >> * No configuration changes. >> >> * No DB changes. >> >> * Changes localized to a single file and limited in scope. >> >> >> >>What's the alternative? >> >>An alternative solution would be to have the controller update the >> >>port binding on the single port so that the plain old L2Pop happens >> >>and notifies about the location of the master router. >> >>This basically negates all the benefits of the proposed solution, >> >>but is wider. >> >>This solution depends on the report-ha-router-master spec which is >> >>currently in the implementation phase. 
>> >> >> >>It's important to note that these two solutions don't collide and >> >>could be done independently. The one I'm proposing just makes more >> >>sense from an HA viewpoint because of it's benefits which fit the HA >> >>methodology of being fast & having as little outside dependency as >> >>possible. >> >>It could be done as an initial solution which solves the bug for >> >>mechanism drivers that support normal learning switch (OVS), and >> >>later kept as an optimization to the more general, controller based, >> >>solution which will solve the issue for any mechanism driver working >> >>with L2Pop (Linux Bridge, possibly others). >> >> >> >>Would love to hear your thoughts on the subject. >> >> You will have to clearly update the doc to mention that deployment >> with Linuxbridge+l2pop are not compatible with HA. > > Yes this should be added and this is already the situation right now. > However if anyone would like to work on a LB fix (the general one or some specific one) I would gladly help with reviewing it. > >> >> Moreover, this solution is downgrading the l2pop solution, by >> disabling the ARP-responder when VMs want to talk to a HA router. >> This means that ARP requests will be duplicated to every overlay >> tunnel to feed the OVS Mac learning table. >> This is something that we were trying to avoid with l2pop. But may be >> this is acceptable. > > Yes basically you're correct, however this would be only limited to those tunnels that connect to the nodes where the HA router is hosted, so we would still limit the amount of traffic that is sent across the underlay. > > Also bear in mind that ARP is actually good (at least in OVS case) since it helps the VM locate on which tunnel the master is, so once it receives the ARP response it records a flow that directs the traffic to the correct tunnel, so we just get hit by the one ARP broadcast but it's sort of a necessary evil in order to locate the master.. 
> > [Vivek] When the failover happens, the VMs would actually be sending traffic to the old master node. > They won't be getting any response back. > > At this time do the VMs redo an ARP request for the HA Router? > And does that again set up the learned rules correctly in br-tun, so that the routed traffic > from the VM continues on to the new master.. The new master will send a gARP packet to update the learning tables. > > >> >> I know that ofagent is also using l2pop, I would like to know if >> ofagent deployments will be compatible with the workaround that you are >> proposing. > > I would like to know that too, hopefully someone from OFagent can shed some light. > >> >> My concern is that, with DVR, there are at least two major features >> that are not compatible with Linuxbridge. >> Linuxbridge is not running in the gate. I don't know if anybody is >> running 3rd party testing with Linuxbridge deployments. If anybody >> does, it would be great to have it voting on gerrit! >> >> But I really wonder what is the future of linuxbridge compatibility? >> Should we keep on improving the OVS solution without taking into account >> the linuxbridge implementation? > > I don't know actually, but what I can do is fix it for OVS the best way possible. > As I said the situation for LB won't become worse than it already is, legacy routers would still function as always.. This fix also will not block fixing LB in any other way since it can be easily adjusted (if > necessary) to work only for supporting mechanisms (OVS AFAIK). > > Also if anyone is willing to pick up the glove and implement the general controller based fix, or something more focused on LB I will happily help review what I can. > > [Vivek] Also, by this proposal, will the HA Router be able to co-operate with DVR, which actually mandates L2-Pop?
> > -- > Thanks, > > Vivek > >> >> Regards, >> >> Mathieu >> >> >> >> >>Regards, >> >>Mike >> >> >> >>_______________________________________________ >> >>OpenStack-dev mailing list >> >>OpenStack-dev at lists.openstack.org >> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From shardy at redhat.com Fri Dec 19 10:00:08 2014 From: shardy at redhat.com (Steven Hardy) Date: Fri, 19 Dec 2014 10:00:08 +0000 Subject: [openstack-dev] [TripleO] Weekly meeting roundup - midcycle confirmation, nomergepy blockage In-Reply-To: References: Message-ID: <20141219100008.GC22503@t430slt.redhat.com> On Fri, Dec 19, 2014 at 10:21:17AM +0100, James Polley wrote: > We had a few topics come up in this week's meeting which we thought are > worth drawing everyone's attention to > You can see the full meeting logs > at http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-12-16-19.01.log.html > (the shorter meeting notes don't have much information) > No mergepy - blocked by bug 1401929 > As part of our plans to move away from needing merge.py, Tomas Sedovic > raised https://review.openstack.org/#/c/140375/ to make the f20 job use > the no-mergepy options.
> > Unfortunately this has run into a roadblock, discussed > in https://bugs.launchpad.net/heat/+bug/1401929 (Overzealous validation > of images in empty ResourceGroups). > Steve Hardy is (I believe) working on a fix, targeted at kilo-2. In the > meantime it'd be nice if we could find a workaround... The obvious workaround is just passing in image parameters that reference images which exist in glance (e.g. any image, just so the validation passes). Not sure it's worth spending time on that though, as I have a fix for it in heat, just working on tests, then this will be ready for review today: https://review.openstack.org/#/c/141444/ Apologies for the inconvenience, hopefully we can get this unblocked today. Steve From matthew.gilliard at gmail.com Fri Dec 19 10:05:44 2014 From: matthew.gilliard at gmail.com (Matthew Gilliard) Date: Fri, 19 Dec 2014 10:05:44 +0000 Subject: [openstack-dev] [libvirt][vpnaas] IRC Meeting Clash Message-ID: Hello, At the moment, both Libvirt [1] and VPNaaS [2] are down as having meetings in #openstack-meeting-3 at 1500UTC on Tuesdays.
Of course, > there can be only one - and it looks as if the VPN meeting is the one > that actually takes place there. > > What's the status of the libvirt meetings? Have they moved, or are > they not happening any more? They happen, but when there are no agenda items that need discussing they're done pretty quickly. Given that the VPN meeting was only added to the wiki a week ago, I think it should be the meeting that changes time or place. NB identifying free slots from the wiki is a total PITA. It is easier to install the iCal calendar feed, which gives you a visual representation of what the free/busy slots are during the week. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From shardy at redhat.com Fri Dec 19 10:17:49 2014 From: shardy at redhat.com (Steven Hardy) Date: Fri, 19 Dec 2014 10:17:49 +0000 Subject: [openstack-dev] [Heat] How can I write at milestone section of blueprint? In-Reply-To: <20141219170200.0619.E1E9C6FF@jp.fujitsu.com> References: <20141219170200.0619.E1E9C6FF@jp.fujitsu.com> Message-ID: <20141219101748.GD22503@t430slt.redhat.com> On Fri, Dec 19, 2014 at 05:02:04PM +0900, Yasunori Goto wrote: > > Hello, > > This is my first mail to the OpenStack community, Welcome! :) > and I have a small question about how to write a blueprint for Heat. > > Currently our team would like to propose 2 interfaces > for user operations in HOT. > (One is "Event handler" which is to notify user's defined event to heat. > Another is definitions of action when heat catches the above notification.) > So, I'm preparing the blueprint for it. Please include details of the exact use-case, e.g. the problem you're trying to solve (not just the proposed solution), as it's possible we can suggest solutions based on existing interfaces.
> However, I can not find how I should fill in the milestone section of the blueprint. > > The Heat blueprint template has a section for Milestones. > "Milestones -- Target Milestone for completion:" > > But I don't think I can decide it by myself. > In my understanding, it should be decided by the PTL. Normally, it's decided by when the person submitting the spec expects to finish writing the code. The PTL doesn't really have much control over that ;) > In addition, our request above will probably not be finished > by Kilo. I suppose it will be the "L" version or later. So to clarify, you want to propose the feature, but you're not planning on working on it (e.g. implementing it) yourself? > So, what should I write at this section? > "Kilo-x", "L version", or empty? As has already been mentioned, it doesn't matter that much - I see it as a statement of intent from developers. If you're just requesting a feature, you can even leave it blank if you want and we'll update it when an assignee is found (e.g. during the spec review). Thanks, Steve From chmouel at chmouel.com Fri Dec 19 10:17:27 2014 From: chmouel at chmouel.com (Chmouel Boudjnah) Date: Fri, 19 Dec 2014 11:17:27 +0100 Subject: [openstack-dev] [openstack-announce] [keystonemiddleware] keystonemiddlware release 1.3.0 In-Reply-To: References: Message-ID: On Thu, Dec 18, 2014 at 8:25 PM, Morgan Fainberg wrote: > > * http_connect_timeout option is now an integer instead of a boolean. > * The service user for auth_token middleware can now be in a domain other > than the default domain. > fyi it has a fix in there as well so you can now use it in your py3 based wsgi service. Chmouel -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mathieu.rohon at gmail.com Fri Dec 19 10:26:26 2014 From: mathieu.rohon at gmail.com (Mathieu Rohon) Date: Fri, 19 Dec 2014 11:26:26 +0100 Subject: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution In-Reply-To: <1921367349.598949.1418923701155.JavaMail.zimbra@redhat.com> References: <1921367349.598949.1418923701155.JavaMail.zimbra@redhat.com> Message-ID: Mike, I'm not even sure that your solution works without being able to bind a router HA port to several hosts. What's happening currently is that you: 1. create the router on two l3 agents. 2. Those l3 agents trigger the sync_router() on the l3plugin. 3. l3plugin.sync_routers() will trigger l2plugin.update_port(host=l3agent). 4. ML2 will bind the port to the host mentioned in the last update_port(). From an l2pop perspective, this will result in creating only one tunnel, to the host specified last. I can't find any code that forces that only the master router binds its router port. So we don't even know if the host which binds the router port is hosting the master router or the slave one, and thus whether l2pop is creating the tunnel to the master or to the slave. Can you confirm that the above sequence is correct, or am I missing something? Without the capacity to bind a port to several hosts, l2pop won't be able to create tunnels correctly; that's the reason why I was saying that a prerequisite for a smart solution would be to first fix the bug: https://bugs.launchpad.net/neutron/+bug/1367391 DVR had the same issue. Their workaround was to create a new port_binding table that manages the capacity for one DVR port to be bound to several hosts. As mentioned in bug 1367391, this adds technical debt in ML2, which has to be tackled as a priority from my POV. On Thu, Dec 18, 2014 at 6:28 PM, Mike Kolesnik wrote: > Hi Mathieu, > > Thanks for the quick reply, some comments inline..
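The four-step sequence Mathieu describes above reduces to a last-writer-wins update. A toy model follows; this is hypothetical, not the real ML2 code, which stores bindings in the database via mechanism drivers:

```python
# Both l3 agents report the same HA router port; ML2 keeps only the host
# from the *last* update_port(), and l2pop then builds a single tunnel to
# that host, which may be the standby node rather than the master.

bindings = {}

def update_port(port_id, host):
    bindings[port_id] = host        # last writer wins, earlier host is lost

def l2pop_tunnel_hosts(port_id):
    return [bindings[port_id]]      # only one tunnel results

update_port("ha-router-port", "node-1")  # l3 agent on the master node
update_port("ha-router-port", "node-2")  # l3 agent on the standby syncs later

print(l2pop_tunnel_hosts("ha-router-port"))  # -> ['node-2'], possibly the standby
```

Nothing in the model distinguishes the master's update from the standby's, which is exactly the problem: whichever agent syncs last owns the binding.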
> > Regards, > Mike > > ----- Original Message ----- >> Hi mike, >> >> thanks for working on this bug : >> >> On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton wrote: >> > >> > >> > On 12/18/14, 2:06 PM, "Mike Kolesnik" wrote: >> > >> >>Hi Neutron community members. >> >> >> >>I wanted to query the community about a proposal of how to fix HA routers >> >>not >> >>working with L2Population (bug 1365476[1]). >> >>This bug is important to fix especially if we want to have HA routers and >> >>DVR >> >>routers working together. >> >> >> >>[1] https://bugs.launchpad.net/neutron/+bug/1365476 >> >> >> >>What's happening now? >> >>* HA routers use distributed ports, i.e. the port with the same IP & MAC >> >> details is applied on all nodes where an L3 agent is hosting this >> >>router. >> >>* Currently, the port details have a binding pointing to an arbitrary node >> >> and this is not updated. >> >>* L2pop takes this "potentially stale" information and uses it to create: >> >> 1. A tunnel to the node. >> >> 2. An FDB entry that directs traffic for that port to that node. >> >> 3. If ARP responder is on, ARP requests will not traverse the network. >> >>* Problem is, the master router wouldn't necessarily be running on the >> >> reported agent. >> >> This means that traffic would not reach the master node but some >> >>arbitrary >> >> node where the router master might be running, but might be in another >> >> state (standby, fail). >> >> >> >>What is proposed? >> >>Basically the idea is not to do L2Pop for HA router ports that reside on >> >>the >> >>tenant network. >> >>Instead, we would create a tunnel to each node hosting the HA router so >> >>that >> >>the normal learning switch functionality would take care of switching the >> >>traffic to the master router. >> > >> > In Neutron we just ensure that the MAC address is unique per network. >> > Could a duplicate MAC address cause problems here? 
>> >> gary, AFAIU, from a Neutron POV, there is only one port, which is the >> router port, which is plugged twice. One time per port. >> I think that the capacity to bind a port to several hosts is also a >> prerequisite for a clean solution here. This will be provided by >> patches to this bug: >> https://bugs.launchpad.net/neutron/+bug/1367391 >> >> >> >>This way no matter where the master router is currently running, the data >> >>plane would know how to forward traffic to it. >> >>This solution requires changes on the controller only. >> >> >> >>What's to gain? >> >>* Data plane only solution, independent of the control plane. >> >>* Lowest failover time (same as HA routers today). >> >>* High backport potential: >> >> * No APIs changed/added. >> >> * No configuration changes. >> >> * No DB changes. >> >> * Changes localized to a single file and limited in scope. >> >> >> >>What's the alternative? >> >>An alternative solution would be to have the controller update the port >> >>binding >> >>on the single port so that the plain old L2Pop happens and notifies about >> >>the >> >>location of the master router. >> >>This basically negates all the benefits of the proposed solution, but is >> >>wider. >> >>This solution depends on the report-ha-router-master spec which is >> >>currently in >> >>the implementation phase. >> >> >> >>It's important to note that these two solutions don't collide and could >> >>be done >> >>independently. The one I'm proposing just makes more sense from an HA >> >>viewpoint >> >>because of its benefits which fit the HA methodology of being fast & >> >>having as >> >>little outside dependency as possible.
>> >>It could be done as an initial solution which solves the bug for mechanism >> >>drivers that support normal learning switch (OVS), and later kept as an >> >>optimization to the more general, controller based, solution which will >> >>solve >> >>the issue for any mechanism driver working with L2Pop (Linux Bridge, >> >>possibly >> >>others). >> >> >> >>Would love to hear your thoughts on the subject. >> >> You will have to clearly update the doc to mention that deployments >> with Linuxbridge+l2pop are not compatible with HA. > > Yes, this should be added, and this is already the situation right now. > However if anyone would like to work on a LB fix (the general one or some > specific one) I would gladly help with reviewing it. > >> >> Moreover, this solution is downgrading the l2pop solution, by >> disabling the ARP-responder when VMs want to talk to a HA router. >> This means that ARP requests will be duplicated to every overlay >> tunnel to feed the OVS MAC learning table. >> This is something that we were trying to avoid with l2pop. But maybe >> this is acceptable. > > Yes, basically you're correct; however this would be limited to those > tunnels that connect to the nodes where the HA router is hosted, so we > would still limit the amount of traffic that is sent across the underlay. > > Also bear in mind that ARP is actually good (at least in the OVS case) since > it helps the VM locate which tunnel the master is on, so once it receives > the ARP response it records a flow that directs the traffic to the correct > tunnel, so we just get hit by the one ARP broadcast but it's sort of a > necessary evil in order to locate the master..
> >> >> I know that ofagent is also using l2pop, I would like to know if >> ofagent deployments will be compatible with the workaround that you are >> proposing. > > I would like to know that too, hopefully someone from OFagent can shed > some light. > >> >> My concern is that, with DVR, there are at least two major features >> that are not compatible with Linuxbridge. >> Linuxbridge is not running in the gate. I don't know if anybody is >> running 3rd party testing with Linuxbridge deployments. If anybody >> does, it would be great to have it voting on gerrit! >> >> But I really wonder what is the future of linuxbridge compatibility? >> Should we keep on improving the OVS solution without taking into account >> the linuxbridge implementation? > > I don't know actually, but what I can do is fix it for OVS the best > way possible. > As I said the situation for LB won't become worse than it already is, > legacy routers would still function as always.. This fix also will not > block fixing LB in any other way since it can be easily adjusted (if > necessary) to work only for supporting mechanisms (OVS AFAIK). > > Also if anyone is willing to pick up the glove and implement > the general controller based fix, or something more focused on LB I will > happily help review what I can.
> >> >> Regards, >> >> Mathieu >> >> >> >> >>Regards, >> >>Mike >> >> >> >>_______________________________________________ >> >>OpenStack-dev mailing list >> >>OpenStack-dev at lists.openstack.org >> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From akuznetsova at mirantis.com Fri Dec 19 10:39:25 2014 From: akuznetsova at mirantis.com (Anastasia Kuznetsova) Date: Fri, 19 Dec 2014 13:39:25 +0300 Subject: [openstack-dev] [Mistral] Plans to load and performance testing Message-ID: Hello everyone, I want to announce that Mistral team has started work on load and performance testing in this release cycle. Brief information about scope of our work can be found here: https://wiki.openstack.org/wiki/Mistral/Testing#Load_and_Performance_Testing First results are published here: https://etherpad.openstack.org/p/mistral-rally-testing-results Thanks, Anastasia Kuznetsova @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From akuznetsova at mirantis.com Fri Dec 19 11:11:42 2014 From: akuznetsova at mirantis.com (Anastasia Kuznetsova) Date: Fri, 19 Dec 2014 14:11:42 +0300 Subject: [openstack-dev] [Mistral] ActionProvider In-Reply-To: References: Message-ID: Winson, Renat, I think that it is a good idea. 
Moreover, it is relevant: about a month ago someone in our IRC channel asked how it would work if other 3rd party systems which provide their own client bindings (in Python) wanted to integrate with Mistral. At that moment we just thought about it, but had no blueprints or discussions. Thanks, Anastasia Kuznetsova @ Mirantis Inc. On Thu, Dec 18, 2014 at 9:33 AM, Renat Akhmerov wrote: > > Winson, > > The idea itself makes a lot of sense to me because we've had a number of > discussions about how we could make the action subsystem even more pluggable > and flexible. One of the questions that we'd like to solve is to be able to > add actions "on the fly" and at the same time stay safe. I think this whole > thing is about specific technical details so I would like to see more of > them. Generally speaking, you're right about actions residing in a > database; about 3 months ago we made this refactoring and put all actions > into the db, but it may not be 100% necessary. Btw, we already have a concept of > an action generator that we use to automatically build OpenStack actions, so > you can take a look at how they work. Long story short, we've already made > some steps towards being more flexible and have some facilities that could > be further improved. > > Again, the idea is very interesting to me (and not only to me). Please > share the details. > > Thanks > > Renat Akhmerov > @ Mirantis Inc. > > > > > On 17 Dec 2014, at 13:22, W Chan wrote: > > > > Renat, > > > > We want to introduce the concept of an ActionProvider to Mistral. We > are thinking that with an ActionProvider, a third party system can extend > Mistral with its own action catalog and set of dedicated and specialized > action executors. The ActionProvider will return its own list of actions > via an abstract interface. This minimizes the complexity and latency in > managing and sync'ing the Action table.
In the DSL, we can define provider > specific context/configuration separately and apply it to all provider > specific actions without explicitly passing it as inputs. WDYT? > > Winson > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tnapierala at mirantis.com Fri Dec 19 11:12:21 2014 From: tnapierala at mirantis.com (Tomasz Napierala) Date: Fri, 19 Dec 2014 12:12:21 +0100 Subject: [openstack-dev] [FUEL] Bootstrap NTP sync. In-Reply-To: References: Message-ID: > On 19 Dec 2014, at 10:00, Stanislaw Bogatkin wrote: > > Hi guys, > > We have a little concern related to Fuel bootstrap node NTP sync. Currently we try to sync time on the bootstrap node with the master node, but the problem is that the NTP protocol has a long convergence time, so if we just install the master node and right after that try to start some bootstrap node, the bootstrap fails to sync time with the master due to the fact that the master doesn't appear as a "trusted time source" at that moment. > How can we solve that problem: > > 1. We can start the bootstrap node a long time after the master (once the master has converged its time) - this seems like a bad idea, because the master node's convergence time depends on upstream NTP servers and may be quite long - the user shouldn't have to wait that long just to start a bootstrap node. > > 2. We can use the master's local time as "trusted" forcibly - actually, we already do that for the case when the master is a bare metal node. We can do it for a virtual node too; it is not as bad an idea as many might say, especially when the master node stratum will be low (10-12). > > 3.
We can mask the return value of the bootstrap node's ntpdate service in such a way that it always returns success - it's a dirty hack; it will calm down customers, but it doesn't solve the problem - time will remain unsynced. > > As for me - the second option is best. What do you think about it? Second option looks best, although it's still against standards. I guess that if we provide the possibility to define an external NTP server as an alternative, we are safe here and can live with that. Regards, -- Tomasz 'Zen' Napierala Sr. OpenStack Engineer tnapierala at mirantis.com From eduard.matei at cloudfounders.com Fri Dec 19 11:35:54 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Fri, 19 Dec 2014 13:35:54 +0200 Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4240C5@G4W3223.americas.hpqcorp.net> Message-ID: Hi all, After a little struggle with the config scripts i managed to get a working setup that is able to process openstack-dev/sandbox and run noop-check-communication.
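For the NTP thread above, option 2 (forcibly trusting the master's local clock) is conventionally expressed with ntpd's undisciplined local clock driver. The lines below are a sketch, with placeholder upstream servers and the stratum range mentioned in the thread; the actual Fuel configuration may differ:

```
# /etc/ntp.conf on the master node (sketch, not the actual Fuel config)
server 0.pool.ntp.org iburst      # placeholder upstream servers
server 1.pool.ntp.org iburst

# Undisciplined local clock: lets the master serve time to bootstrap
# nodes before the upstream servers have converged.  A high stratum
# (10-12) keeps real upstream servers preferred once they are reachable.
server 127.127.1.0
fudge 127.127.1.0 stratum 10
```

The trade-off is the one Tomasz notes: clients will accept the master's unverified clock during the convergence window, which is against the spirit of the standard but bounded by the high stratum.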
Then, i tried enabling the dsvm-tempest-full job but it keeps returning "NOT_REGISTERED" 2014-12-19 12:07:14,683 INFO zuul.IndependentPipelineManager: Change depends on changes [] 2014-12-19 12:07:14,683 INFO zuul.Gearman: Launch job noop-check-communication for change with dependent changes [] 2014-12-19 12:07:14,693 INFO zuul.Gearman: Launch job dsvm-tempest-full for change with dependent changes [] 2014-12-19 12:07:14,694 ERROR zuul.Gearman: Job is not registered with Gearman 2014-12-19 12:07:14,694 INFO zuul.Gearman: Build complete, result NOT_REGISTERED 2014-12-19 12:07:14,765 INFO zuul.Gearman: Build started 2014-12-19 12:07:14,910 INFO zuul.Gearman: Build complete, result SUCCESS 2014-12-19 12:07:14,916 INFO zuul.IndependentPipelineManager: Reporting change , actions: [, {'verified': -1}>] Neither nodepool's logs nor Jenkins' logs show any reference to dsvm-tempest-full. Any idea how to enable this job? Also, i got the "Cloud provider" setup and i can access it from the jenkins master. Any idea how i can manually trigger the dsvm-tempest-full job to run and test the cloud provider without having to push a review to Gerrit? Thanks, Eduard On Thu, Dec 18, 2014 at 7:52 PM, Eduard Matei < eduard.matei at cloudfounders.com> wrote:
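Regarding the NOT_REGISTERED result earlier in the thread: zuul reports that when no gearman worker (the Jenkins master, via the gearman-plugin) has registered a function with that name, typically because no Jenkins job called dsvm-tempest-full exists yet. A minimal Jenkins Job Builder definition that would register such a job might look like the sketch below; the node label and build script are assumptions for this particular setup, not a canonical job:

```yaml
# Hypothetical JJB sketch: the job name must match the one referenced in
# zuul's layout.yaml, and "node" must match an existing slave label.
- job:
    name: dsvm-tempest-full
    node: devstack_slave            # assumed nodepool/slave label
    builders:
      - shell: |
          #!/bin/bash -xe
          # devstack-gate expects several ZUUL_* variables; see its README.
          git clone https://git.openstack.org/openstack-infra/devstack-gate
          ./devstack-gate/devstack-vm-gate-wrap.sh
```

Once the job is deployed (e.g. with `jenkins-jobs update`) and the gearman-plugin reconnects, the function should appear as registered and the NOT_REGISTERED result should go away.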
>> >> >> >> [1] https://github.com/rasselin/os-ext-testing/blob/master/README.md >> >> [2] >> https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29 >> >> [3] https://review.openstack.org/#/c/141518/ >> >> [4] https://review.openstack.org/#/c/139745/ >> >> >> >> >> >> *From:* Punith S [mailto:punith.s at cloudbyte.com] >> *Sent:* Thursday, December 18, 2014 3:12 AM >> *To:* OpenStack Development Mailing List (not for usage questions); >> Eduard Matei >> >> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need >> help setting up CI >> >> >> >> Hi Eduard >> >> >> >> we tried running >> https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh >> >> on ubuntu master 12.04, and it appears to be working fine on 12.04. >> >> >> >> thanks >> >> >> >> On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei < >> eduard.matei at cloudfounders.com> wrote: >> >> Hi, >> >> Seems i can't install using puppet on the jenkins master using >> install_master.sh from >> https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh >> because it's running Ubuntu 11.10 and it appears unsupported. >> >> I managed to install puppet manually on master and everything else fails >> >> So i'm trying to manually install zuul and nodepool and jenkins job >> builder, see where i end up. >> >> >> >> The slave looks complete, got some errors on running install_slave so i >> ran parts of the script manually, changing some params and it appears >> installed but no way to test it without the master. >> >> >> >> Any ideas welcome. >> >> >> >> Thanks, >> >> >> >> Eduard >> >> >> >> On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy >> wrote: >> >> Manually running the script requires a few environment settings. Take >> a look at the README here: >> >> https://github.com/openstack-infra/devstack-gate >> >> >> >> Regarding cinder, I?m using this repo to run our cinder jobs (fork from >> jaypipes). 
>> >> [1] https://github.com/rasselin/os-ext-testing/blob/master/README.md >> >> [2] >> https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29 >> >> [3] https://review.openstack.org/#/c/141518/ >> >> [4] https://review.openstack.org/#/c/139745/ >> >> >> >> *From:* Punith S [mailto:punith.s at cloudbyte.com] >> *Sent:* Thursday, December 18, 2014 3:12 AM >> *To:* OpenStack Development Mailing List (not for usage questions); >> Eduard Matei >> >> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need >> help setting up CI >> >> >> >> Hi Eduard >> >> >> >> we tried running >> https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh >> >> on ubuntu master 12.04, and it appears to be working fine on 12.04. >> >> >> >> thanks >> >> >> >> On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei < >> eduard.matei at cloudfounders.com> wrote: >> >> Hi, >> >> Seems i can't install using puppet on the jenkins master using >> install_master.sh from >> https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh >> because it's running Ubuntu 11.10 and it appears unsupported. >> >> I managed to install puppet manually on master and everything else fails. >> So i'm trying to manually install zuul and nodepool and jenkins job >> builder, see where i end up. >> >> >> >> The slave looks complete, got some errors on running install_slave so i >> ran parts of the script manually, changing some params and it appears >> installed but no way to test it without the master. >> >> >> >> Any ideas welcome. >> >> >> >> Thanks, >> >> >> >> Eduard >> >> >> >> On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy >> wrote: >> >> Manually running the script requires a few environment settings. Take >> a look at the README here: >> >> https://github.com/openstack-infra/devstack-gate >> >> >> >> Regarding cinder, I'm using this repo to run our cinder jobs (fork from >> jaypipes).
>> >> https://github.com/rasselin/os-ext-testing >> >> >> >> Note that this solution doesn't use the Jenkins gerrit trigger plugin, >> but zuul. >> >> >> >> There's a sample job for cinder here. It's in Jenkins Job Builder format. >> >> >> https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample >> >> >> >> You can ask more questions in IRC freenode #openstack-cinder. (irc# >> asselin) >> >> >> >> Ramy >> >> >> >> *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com] >> *Sent:* Tuesday, December 16, 2014 12:41 AM >> *To:* Bailey, Darragh >> *Cc:* OpenStack Development Mailing List (not for usage questions); >> OpenStack >> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need >> help setting up CI >> >> >> >> Hi, >> >> >> >> Can someone point me to some working documentation on how to set up third >> party CI? (joinfu's instructions don't seem to work, and manually running >> devstack-gate scripts fails: >> >> Running gate_hook >> >> Job timeout set to: 163 minutes >> >> timeout: failed to run command '/opt/stack/new/devstack-gate/devstack-vm-gate.sh': No such file or directory >> >> ERROR: the main setup script run by this job failed - exit code: 127 >> >> please look at the relevant log files to determine the root cause >> >> Cleaning up host >> >> ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz) >> >> Build step 'Execute shell' marked build as failure. >> >> >> >> I have a working Jenkins slave with devstack and our internal libraries, >> i have Gerrit Trigger Plugin working and triggering on patches created, i >> just need the actual job contents so that it can get to comment with the >> test results.
>> >> Regards, >> Darragh Bailey >> >> "Nothing is foolproof to a sufficiently talented fool" - Unknown >> >> On 08/12/14 14:33, Eduard Matei wrote: >> > Resending this to dev ML as it seems i get quicker response :) >> > >> > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: >> > Patchset Created", chose as server the configured Gerrit server that >> > was previously tested, then added the project openstack-dev/sandbox >> > and saved. >> > I made a change on dev sandbox repo but couldn't trigger my job. >> > >> > Any ideas? >> > >> > Thanks, >> > Eduard >> > >> > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei >> > > > > wrote: >> > >> > Hello everyone, >> > >> > Thanks to the latest changes to the creation of service accounts >> > process we're one step closer to setting up our own CI platform >> > for Cinder. >> > >> > So far we've got: >> > - Jenkins master (with Gerrit plugin) and slave (with DevStack and >> > our storage solution) >> > - Service account configured and tested (can manually connect to >> > review.openstack.org and get events >> > and publish comments) >> > >> > Next step would be to set up a job to do the actual testing, this >> > is where we're stuck. >> > Can someone please point us to a clear example on how a job should >> > look like (preferably for testing Cinder on Kilo)? Most links >> > we've found are broken, or tools/scripts are no longer working. >> > Also, we cannot change the Jenkins master too much (it's owned by >> > Ops team and they need a list of tools/scripts to review before >> > installing/running so we're not allowed to experiment). 
>> > >> > Thanks, >> > Eduard >> > >> > -- >> > >> > *Eduard Biceri Matei, Senior Software Developer* >> > www.cloudfounders.com >> > | eduard.matei at cloudfounders.com >> > >> > >> > >> > >> > *CloudFounders, The Private Cloud Software Company* >> > >> > Disclaimer: >> > This email and any files transmitted with it are confidential and >> > intended solely for the use of the individual or entity to whom >> > they are addressed.If you are not the named addressee or an >> > employee or agent responsible for delivering this message to the >> > named addressee, you are hereby notified that you are not >> > authorized to read, print, retain, copy or disseminate this >> > message or any part of it. If you have received this email in >> > error we request you to notify us by reply e-mail and to delete >> > all electronic files of the message. If you are not the intended >> > recipient you are notified that disclosing, copying, distributing >> > or taking any action in reliance on the contents of this >> > information is strictly prohibited. E-mail transmission cannot be >> > guaranteed to be secure or error free as information could be >> > intercepted, corrupted, lost, destroyed, arrive late or >> > incomplete, or contain viruses. The sender therefore does not >> > accept liability for any errors or omissions in the content of >> > this message, and shall have no liability for any loss or damage >> > suffered by the user, which arise as a result of e-mail >> transmission. 
>> > >> > >> > >> > >> > -- >> > *Eduard Biceri Matei, Senior Software Developer* >> > www.cloudfounders.com >> > | eduard.matei at cloudfounders.com >> > >> > >> > >> > >> > *CloudFounders, The Private Cloud Software Company* >> > >> > Disclaimer: >> > This email and any files transmitted with it are confidential and >> > intended solely for the use of the individual or entity to whom they >> > are addressed.If you are not the named addressee or an employee or >> > agent responsible for delivering this message to the named addressee, >> > you are hereby notified that you are not authorized to read, print, >> > retain, copy or disseminate this message or any part of it. If you >> > have received this email in error we request you to notify us by reply >> > e-mail and to delete all electronic files of the message. If you are >> > not the intended recipient you are notified that disclosing, copying, >> > distributing or taking any action in reliance on the contents of this >> > information is strictly prohibited. E-mail transmission cannot be >> > guaranteed to be secure or error free as information could be >> > intercepted, corrupted, lost, destroyed, arrive late or incomplete, or >> > contain viruses. The sender therefore does not accept liability for >> > any errors or omissions in the content of this message, and shall have >> > no liability for any loss or damage suffered by the user, which arise >> > as a result of e-mail transmission. 
>> > >> > >> >> > _______________________________________________ >> > OpenStack-Infra mailing list >> > OpenStack-Infra at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>>
>> -- >> >> *Eduard Biceri Matei, Senior Software Developer* >> >> www.cloudfounders.com >> >> | eduard.matei at cloudfounders.com >> >> *CloudFounders, The Private Cloud Software Company* 
>> >> >> >> >> >> -- >> >> *Eduard Biceri Matei, Senior Software Developer* >> >> www.cloudfounders.com >> >> | eduard.matei at cloudfounders.com >> >> >> >> >> >> >> >> *CloudFounders, The Private Cloud Software Company* >> >> >> >> Disclaimer: >> >> This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. >> If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. >> E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> -- >> >> *Eduard Biceri Matei, Senior Software Developer* >> >> www.cloudfounders.com >> >> | eduard.matei at cloudfounders.com >> >> >> >> >> >> >> >> *CloudFounders, The Private Cloud Software Company* >> >> >> >> Disclaimer: >> >> This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. 
>> If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. >> E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> -- >> >> regards, >> >> >> >> punith s >> >> cloudbyte.com >> > > > > -- > > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > > > *CloudFounders, The Private Cloud Software Company* > > Disclaimer: > This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. > If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. 
If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. > E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. > > -- *Eduard Biceri Matei, Senior Software Developer* www.cloudfounders.com | eduard.matei at cloudfounders.com *CloudFounders, The Private Cloud Software Company* Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. 
The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Fri Dec 19 11:43:23 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Fri, 19 Dec 2014 17:43:23 +0600 Subject: [openstack-dev] [mistral] Mistral Kilo-1 development milestone released Message-ID: <7BEBD9E2-0593-4340-82DB-2A224C2940EF@mirantis.com> Hi, Mistral Kilo-1 (tagged as 2015.1.0b1) has been released. In this release we completed 10 blueprints and fixed 18 bugs. The most noticeable achievements are: Fully completed direct workflow ?join? control which remarkably enriches variety of ways that user can build workflows. ?join" allows to synchronize multiple routes running in parallel in a workflow and merge their results their results for further processing. Two examples of that have been added into mistral-extra project (see ?send_tenant_stat_join? workflow in tenant_statistics.yaml workbook and workflow create_vm_and_volume.yaml) Any workflow now finally can be correctly resumed if it was previously paused. Partially completed ?for-each? task execution pattern that allows processing multiple items passed as a collection to a task. This feature is now experimental and the team is actively working on stabilizing it: both design and implementation. ?pause-before? task policy which in conjunction with workflow resume can be used to run workflows in what can be called ?debug mode? or step by step. This, however, is not all that the Team is planning to do on that topic. Even more exciting capabilities like running workflows in ?dry run? mode and allowing "human interventions in case of errors and resuming workflows" are on the roadmap. Task-executor affinity with using ?target? task property. 
Tasks can be configured to run only on a certain group of executors. Fixed a number of bugs with authentication and multitenancy support. Significantly improved integration testing. And finally, we started HA testing & benchmarking with Rally and got the first results. This work will be going on consistently. All ideas and results on testing will be shared, and everyone is welcome to contribute their vision and experience.

Mistral Server and Mistral Client release launchpad pages: https://launchpad.net/mistral/kilo/kilo-1 https://launchpad.net/python-mistralclient/kilo/kilo-1

Thanks

P.S.: We've made a decision to follow all the release procedures adopted for official OpenStack projects, to simplify future incubation and integration, and from now on all terms, principles and versioning will correspond to them.

Renat Akhmerov @ Mirantis Inc.
-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From rakhmerov at mirantis.com Fri Dec 19 11:46:17 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Fri, 19 Dec 2014 17:46:17 +0600 Subject: [openstack-dev] [Mistral] ActionProvider In-Reply-To: References: Message-ID: 

Yeah, we had a very productive discussion with Winson today and he's going to prepare a more formal specification of what he's suggesting. I'd like to say in advance, though, that I really, really like it in terms of what it will allow us to do.

Renat Akhmerov @ Mirantis Inc.

> On 19 Dec 2014, at 17:11, Anastasia Kuznetsova wrote: > > Winson, Renat, > > I think that it is a good idea. Moreover, it is relevant, because about a month ago there was a question from one guy in our IRC channel about how it would work if some other 3rd party system that provides its own client bindings (in Python) wanted to integrate with Mistral. At that moment we just thought about it, but didn't have any blueprints or discussions. > > Thanks, > Anastasia Kuznetsova > @ Mirantis Inc. 
> > > On Thu, Dec 18, 2014 at 9:33 AM, Renat Akhmerov > wrote: > Winson, > > The idea itself makes a lot of sense to me because we've had a number of discussions about how we could make the action subsystem even more pluggable and flexible. One of the questions that we'd like to solve is being able to add actions 'on the fly' and at the same time stay safe. I think this whole thing is about specific technical details, so I would like to see more of them. Generally speaking, you're right about actions residing in a database; about 3 months ago we made this refactoring and put all actions into the db, but it may not be 100% necessary. Btw, we already have a concept of an action generator that we use to automatically build OpenStack actions, so you can take a look at how they work. Long story short: we've already made some steps towards being more flexible and have some facilities that could be further improved. > > Again, the idea is very interesting to me (and not only to me). Please share the details. > > Thanks > > Renat Akhmerov > @ Mirantis Inc.
> > > > > On 17 Dec 2014, at 13:22, W Chan > wrote: > > > > Renat, > > > > We want to introduce the concept of an ActionProvider to Mistral. We are thinking that with an ActionProvider, a third party system can extend Mistral with its own action catalog and set of dedicated and specialized action executors. The ActionProvider will return its own list of actions via an abstract interface. This minimizes the complexity and latency in managing and sync'ing the Action table. In the DSL, we can define provider-specific context/configuration separately and apply it to all provider-specific actions without explicitly passing it as inputs. WDYT? 
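[As background for readers, the 'join' control announced in the Kilo-1 release notes earlier in this digest looks roughly like this in Mistral's v2 DSL. A sketch with made-up workflow and task names; consult the mistral-extra examples mentioned in the announcement for the real workbooks:]

```yaml
version: '2.0'

collect_stats:
  type: direct
  tasks:
    query_branch_a:
      action: std.noop            # imagine a real action call here
      on-success: [aggregate]
    query_branch_b:
      action: std.noop            # a second branch, running in parallel
      on-success: [aggregate]
    aggregate:
      # 'join: all' makes this task wait until every inbound route has
      # finished, then the branches' published results are available for
      # further processing.
      join: all
      action: std.echo output="both branches done"
```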
> > > > Winson
> >
> > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From sbogatkin at mirantis.com Fri Dec 19 12:07:19 2014 From: sbogatkin at mirantis.com (Stanislaw Bogatkin) Date: Fri, 19 Dec 2014 16:07:19 +0400 Subject: [openstack-dev] [FUEL] Bootstrap NTP sync. In-Reply-To: References: Message-ID: 

Hi Tomasz, External NTP is good, but we should be able to deploy w/o internet access (though slaves should be in sync with the master node), so syncing with the master node is better. I understand that it's slightly against standards, but at that moment we just need to be in sync with the master node, because some of the following tasks depend on synced time. 

On Fri, Dec 19, 2014 at 2:12 PM, Tomasz Napierala wrote: > > > On 19 Dec 2014, at 10:00, Stanislaw Bogatkin > wrote: > > Hi guys, > > We have a little concern related to Fuel bootstrap node NTP sync. Currently we try to sync time on the bootstrap node with the master node, but the problem is that the NTP protocol has a long convergence time, so if we just install the master node and right after that try to start some bootstrap node, the bootstrap fails to sync time with the master due to the fact that the master doesn't yet appear as a "trusted time source" at that moment. > > How we can solve that problem: > > > > 1. 
We can start the bootstrap a long time after the master (when the master has converged its time) - this seems a bad idea, because the master node's convergence time depends on the upstream NTP servers and may be quite long; the user shouldn't have to wait that long just to start a bootstrap node. > > > > 2. We can use the master's local time as "trusted" forcibly - actually, we already do that for the case when the master is a bare-metal node. We can do it for a virtual node too; it is not as bad an idea as many might say, especially when the master node's stratum will be low (10-12). > > > > 3. We can mask the return value of the bootstrap node's ntpdate service in such a way that it always returns success - it's a dirty hack; it will calm down customers, but it doesn't solve the problem, as time will stay unsynced. > > > > As for me, the second option is best. What do you think about it?

> Second option looks best, although it's still against standards. I guess that if we provide the possibility to define an external NTP server as an alternative, we are safe here and can live with that. > > Regards, > -- > Tomasz 'Zen' Napierala > Sr. OpenStack Engineer > tnapierala at mirantis.com
>
> _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From flavio at redhat.com Fri Dec 19 12:17:00 2014 From: flavio at redhat.com (Flavio Percoco) Date: Fri, 19 Dec 2014 13:17:00 +0100 Subject: [openstack-dev] [all] Lets not assume everyone is using the global `CONF` object (zaqar broken by latest keystoneclient release 1.0) Message-ID: <20141219121700.GA13255@redhat.com>

Greetings,

DISCLAIMER: The following comments are neither finger-pointing at the author of this work nor at the keystone team.

RANT: We should really stop assuming everyone is using a global `CONF` object. 
Moreover, we should really stop using it, especially in libraries.

That said, here's a gentle note for all of us:

If I understood the flow of changes correctly, keystoneclient recently introduced an auth_section[0] option, which needs to be registered in order for it to work properly. In keystoneclient, a function[1] was correctly added to register this option in a conf object.

keystonemiddleware was then updated to support the above, and a call to the register function[1] was added to the `auth_token` module[2].

The above, unfortunately, broke Zaqar's auth because Zaqar is not using the global `CONF` object, which means it has to register keystonemiddleware's options itself. Since the option was registered in the global conf instead of the conf object passed to `AuthProtocol`, the new `auth_section` option is not being registered where keystoneclient expects it.

So, as a gentle reminder to everyone: please, let's not assume all projects are using the global `CONF` object, and make sure all libraries provide a good way to register the required options. I think either secretly registering options or exposing a function to let consumers do so is fine.

I hate complaining without helping to solve the problem, so here's[3] a workaround to provide a, hopefully, better way to do this. Note that this shouldn't be the definitive fix and that we also implemented a workaround in zaqar as well.

Cheers, Flavio

[0] https://github.com/openstack/python-keystoneclient/blob/41afe3c963fa01f61b67c44e572eee34b0972382/keystoneclient/auth/conf.py#L20 [1] https://github.com/openstack/python-keystoneclient/blob/41afe3c963fa01f61b67c44e572eee34b0972382/keystoneclient/auth/conf.py#L49 [2] https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token.py#L356 [3] https://review.openstack.org/143063

-- @flaper87 Flavio Percoco
-------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: 

From chhagarw at in.ibm.com Fri Dec 19 12:31:48 2014 From: chhagarw at in.ibm.com (Chhavi Agarwal) Date: Fri, 19 Dec 2014 18:01:48 +0530 Subject: [openstack-dev] [cinder] Grouping Multiple Backend provider In-Reply-To: References: Message-ID: 

To be more clear, let me explain it with the help of a use case: Consider I have 4 registered storage providers: SVC, LVM, EMC, HDS. I want to use 2 of them (SVC, LVM) for Task1 and the other 2 (EMC, HDS) for Task2. Is it possible to have a filter which can be used for a set of volume drivers? I tried the multi-backend providers below, but by default it calls the CapacityFilter and tries to process all the registered providers. Thanks & Regards, Chhavi Agarwal Cloud System Software Group. 

From: Chhavi Agarwal/India/IBM at IBMIN To: openstack-dev at lists.openstack.org Cc: Shyama Venugopal/India/IBM at IBMIN, Monica O Joshi/India/IBM at IBMIN, Imranuddin W Kazi/India/IBM at IBMIN Date: 12/18/2014 09:25 PM Subject: [openstack-dev] [cinder] Multiple Backend for different volume_types 

Hi All, As per the below link on multi-backend support: https://wiki.openstack.org/wiki/Cinder-multi-backend It's mentioned that we currently only support passing a multi-backend provider per volume_type. "There can be > 1 backend per volume_type, and the capacity scheduler kicks in and keeps the backends of a particular volume_type " Is there a way to support multi-backend providers across different volume_types? For example, if I want my volume_type to have both the SVC and LVM drivers passed as my backend providers. Thanks & Regards, Chhavi Agarwal Cloud System Software Group. 

_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

From j.rosenboom at x-ion.de Fri Dec 19 12:35:06 2014 From: j.rosenboom at x-ion.de (Dr. 
Jens Rosenboom) Date: Fri, 19 Dec 2014 13:35:06 +0100 Subject: [openstack-dev] Git client vulnerability Message-ID: <54941B7A.2020905@x-ion.de> As this may affect a reasonable percentage of the target audience, I would like to make everyone aware of https://github.com/blog/1938-vulnerability-announced-update-your-git-clients While github.com claim to have patched their servers, people using other repos may want to be extra cautious. From fungi at yuggoth.org Fri Dec 19 13:19:48 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 19 Dec 2014 13:19:48 +0000 Subject: [openstack-dev] Git client vulnerability In-Reply-To: <54941B7A.2020905@x-ion.de> References: <54941B7A.2020905@x-ion.de> Message-ID: <20141219131948.GH2497@yuggoth.org> On 2014-12-19 13:35:06 +0100 (+0100), Dr. Jens Rosenboom wrote: [...] > While github.com claim to have patched their servers, people using > other repos may want to be extra cautious. Please re-read that advisory[1]. GitHub's _servers_ were not affected as this is a client-side vulnerability. What GitHub did was release fixed versions of their "GitHub for Windows" and "GitHub for Mac" _client_ tools. That said, people using Git (and apparently Mercurial[2]?) clients on non-case-sensitive filesystems (that's mainly Windows and Mac, not typical Linux/BSD) are at risk if they haven't upgraded their client applications accordingly. [1] https://github.com/blog/1938-vulnerability-announced-update-your-git-clients [2] http://www.openwall.com/lists/oss-security/2014/12/19/1 -- Jeremy Stanley From kragniz at gmail.com Fri Dec 19 13:34:06 2014 From: kragniz at gmail.com (Louis Taylor) Date: Fri, 19 Dec 2014 13:34:06 +0000 Subject: [openstack-dev] Git client vulnerability In-Reply-To: <20141219131948.GH2497@yuggoth.org> References: <54941B7A.2020905@x-ion.de> <20141219131948.GH2497@yuggoth.org> Message-ID: <20141219133405.GA20977@gmail.com> On Fri, Dec 19, 2014 at 01:19:48PM +0000, Jeremy Stanley wrote: > Please re-read that advisory[1]. 
GitHub's _servers_ were not > affected as this is a client-side vulnerability. What GitHub did was > release fixed versions of their "GitHub for Windows" and "GitHub for > Mac" _client_ tools. 

GitHub's servers were patched so that it is now not possible to host a malicious repository on GitHub's servers, and attempts to push one will be rejected. This is mentioned in the advisory.
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: Digital signature URL: 

From bpavlovic at mirantis.com Fri Dec 19 13:51:54 2014 From: bpavlovic at mirantis.com (Boris Pavlovic) Date: Fri, 19 Dec 2014 17:51:54 +0400 Subject: [openstack-dev] [Mistral] Plans to load and performance testing In-Reply-To: References: Message-ID: 

Anastasia,

Nice work on this. But I believe that you should put a bigger load here: https://etherpad.openstack.org/p/mistral-rally-testing-results Also, concurrency should be at least 2-3 times bigger than "times"; otherwise it won't generate a proper load and you won't collect enough data for statistical analysis. As well, use the "rps" runner, which generates a more real-life load. Plus, it would be nice to also share the output of the "rally task report" command. By the way, what do you think about using the Rally scenarios (that you already wrote) for integration testing as well?

Best regards, Boris Pavlovic

On Fri, Dec 19, 2014 at 2:39 PM, Anastasia Kuznetsova < akuznetsova at mirantis.com> wrote: > > Hello everyone, > > I want to announce that the Mistral team has started work on load and > performance testing in this release cycle. > > Brief information about the scope of our work can be found here: > > https://wiki.openstack.org/wiki/Mistral/Testing#Load_and_Performance_Testing > > First results are published here: > https://etherpad.openstack.org/p/mistral-rally-testing-results > > Thanks, > Anastasia Kuznetsova > @ Mirantis Inc. 
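[For readers unfamiliar with Rally's task format, Boris's advice maps onto the runner section of a task file roughly like this. A sketch: the scenario is Rally's built-in Dummy.dummy placeholder and the numbers are made up; only the runner keys follow Rally's task format:]

```json
{
  "Dummy.dummy": [
    {
      "args": {},
      "runner": {
        "type": "rps",
        "times": 300,
        "rps": 10
      },
      "context": {}
    }
  ]
}
```

[Here "times" is the total number of iterations and "rps" is the steady request rate the runner injects, which is what makes it resemble real-life load better than a fixed concurrency.]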
> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkranz at redhat.com Fri Dec 19 14:07:53 2014 From: dkranz at redhat.com (David Kranz) Date: Fri, 19 Dec 2014 09:07:53 -0500 Subject: [openstack-dev] [qa] neutron "client returns one value" has finally merged Message-ID: <54943139.80700@redhat.com> Neutron patches can resume as normal. Thanks for the patience. -David From doug at doughellmann.com Fri Dec 19 14:07:59 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 19 Dec 2014 09:07:59 -0500 Subject: [openstack-dev] [all] Lets not assume everyone is using the global `CONF` object (zaqar broken by latest keystoneclient release 1.0) In-Reply-To: <20141219121700.GA13255@redhat.com> References: <20141219121700.GA13255@redhat.com> Message-ID: <5E2373BD-4156-4484-A8B7-6BAAFBF3B513@doughellmann.com> On Dec 19, 2014, at 7:17 AM, Flavio Percoco wrote: > Greetings, > > DISCLAIMER: The following comments are neither finger pointing the > author of this work nor the keystone team. > > RANT: We should really stop assuming everyone is using a global `CONF` > object. Moreover, we should really stop using it, especially in > libraries. > > That said, here's a gentle note for all of us: > > If I understood the flow of changes correctly, keystoneclient recently > introduced a auth_section[0] option, which needs to be registered in > order for it to work properly. In keystoneclient, it's been correctly > added a function[1] to register this option in a conf object. > > keystonemiddleware was then updated to support the above and a call to > the register function[1] was then added to the `auth_token` module[2]. 
> > The above, unfortunately, broke Zaqar's auth because Zaqar is not > using the global `CONF` object, which means it has to register > keystonemiddleware's options itself. Since the option was registered > in the global conf instead of the conf object passed to > `AuthProtocol`, the new `auth_section` option is not being registered > where keystoneclient expects it. > > So, as a gentle reminder to everyone: please, let's not assume all > projects are using the global `CONF` object, and make sure all libraries > provide a good way to register the required options. I think either > secretly registering options or exposing a function to let consumers > do so is fine. > > I hate complaining without helping to solve the problem, so here's[3] a > workaround to provide a, hopefully, better way to do this. Note that > this shouldn't be the definitive fix and that we also implemented a > workaround in zaqar as well. 

That change will fix the issue, but a better solution is to have the code in keystoneclient that wants the options handle the registration at runtime. It looks like keystoneclient/auth/conf.py:load_from_conf_options() is at least one place that's needed; there may be others. 
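[The pattern Flavio describes can be illustrated with a small self-contained sketch. These are plain-Python stand-ins, not oslo.config itself: a library that registers its options only on a process-global object breaks any consumer that keeps its own conf, while a library that exposes a registration hook taking the consumer's conf works for both.]

```python
class ConfigOpts:
    """Tiny stand-in for a config registry such as oslo.config's ConfigOpts."""

    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default=None):
        # Registration is idempotent: a second call keeps the first value.
        self._opts.setdefault(name, default)

    def __getitem__(self, name):
        # Looking up an option that was never registered is an error,
        # which is exactly what bit Zaqar.
        if name not in self._opts:
            raise KeyError("option %s was never registered" % name)
        return self._opts[name]


# The anti-pattern: a library registers its options on a process-global
# object, invisible to applications that keep their own conf.
GLOBAL_CONF = ConfigOpts()

def register_opts_globally():
    GLOBAL_CONF.register_opt("auth_section")

# The fix: expose a registration hook that takes the consumer's conf,
# the way the keystoneclient register function[1] is meant to be used.
def register_opts(conf):
    conf.register_opt("auth_section")


# A service like Zaqar that does not use the global object:
app_conf = ConfigOpts()
register_opts(app_conf)           # options land where the app looks them up
print(app_conf["auth_section"])   # -> None (registered, default value)
```

[With `register_opts_globally()` instead, `app_conf["auth_section"]` would raise, even though the option "exists" somewhere in the process.]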
Doug > > Cheers, > Flavio > > [0] https://github.com/openstack/python-keystoneclient/blob/41afe3c963fa01f61b67c44e572eee34b0972382/keystoneclient/auth/conf.py#L20 > [1] https://github.com/openstack/python-keystoneclient/blob/41afe3c963fa01f61b67c44e572eee34b0972382/keystoneclient/auth/conf.py#L49 > [2] https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token.py#L356 > [3] https://review.openstack.org/143063 > > -- > @flaper87 > Flavio Percoco > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dguryanov at parallels.com Fri Dec 19 14:11:57 2014 From: dguryanov at parallels.com (Dmitry Guryanov) Date: Fri, 19 Dec 2014 17:11:57 +0300 Subject: [openstack-dev] [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname) Message-ID: <6511590.S4nEjVVVbR@dblinov.sw.ru> Hello, If I understood correctly, there are 3 ways to provide guest OS with some data (SSH keys, for example): 1. mount guest root fs on host (with libguestfs) and copy data there. 2. config drive and cloud-init 3. nova metadata service and cloud-init All 3 methods do almost the same thing and can be enabled or disabled in nova config file. So which one is preferred? How do people usually configure their openstack clusters? I'm asking, because we are going to extend nova/libvirt driver to support our virtualization solution (parallels driver in libvirt) and it seems it will not work as is and requires some development. Which method is first-priority and used by most people? -- Dmitry Guryanov From berrange at redhat.com Fri Dec 19 14:17:34 2014 From: berrange at redhat.com (Daniel P. 
Berrange) Date: Fri, 19 Dec 2014 14:17:34 +0000 Subject: [openstack-dev] [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname) In-Reply-To: <6511590.S4nEjVVVbR@dblinov.sw.ru> References: <6511590.S4nEjVVVbR@dblinov.sw.ru> Message-ID: <20141219141734.GB9585@redhat.com> On Fri, Dec 19, 2014 at 05:11:57PM +0300, Dmitry Guryanov wrote: > Hello, > > If I understood correctly, there are 3 ways to provide guest OS with some data > (SSH keys, for example): > > 1. mount guest root fs on host (with libguestfs) and copy data there. > 2. config drive and cloud-init > 3. nova metadata service and cloud-init > > > All 3 methods do almost the same thing and can be enabled or disabled in nova > config file. So which one is preferred? How do people usually configure their > openstack clusters? > > I'm asking, because we are going to extend nova/libvirt driver to support our > virtualization solution (parallels driver in libvirt) and it seems it will not > work as is and requires some development. Which method is first-priority and > used by most people? I'd probably prioritize in this order: 1. config drive and cloud-init 2. nova metadata service and cloud-init 3. mount guest root fs on host (with libguestfs) and copy data there. but there's not much to choose between 1 & 2. NB, option 3 isn't actually hardcoded to use libguestfs - it falls back to using loop devices / local mounts, albeit less secure, so not really recommended. At some point option 3 may be removed from Nova entirely since the first two options are preferred & more reliable in general. 
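To make option 1 concrete, the sketch below builds the standard config-drive directory layout (openstack/latest/meta_data.json) on disk and reads it back roughly the way cloud-init would after mounting the drive. The field values are invented for illustration, and this is not Nova's or cloud-init's actual code.

```python
# Sketch of the config-drive layout; values are invented for illustration.
import json
import os
import tempfile

drive = tempfile.mkdtemp()  # stands in for the mounted vfat/iso9660 drive
meta_dir = os.path.join(drive, 'openstack', 'latest')
os.makedirs(meta_dir)

# Roughly the kind of data Nova writes onto the drive for the guest.
meta_data = {
    'hostname': 'demo-instance',
    'public_keys': {'default': 'ssh-rsa AAAA... user@example'},
}
with open(os.path.join(meta_dir, 'meta_data.json'), 'w') as f:
    json.dump(meta_data, f)

# Roughly what cloud-init does after mounting the drive in the guest.
with open(os.path.join(drive, 'openstack', 'latest', 'meta_data.json')) as f:
    loaded = json.load(f)

print(loaded['hostname'])  # demo-instance
```

The same JSON is what the metadata service (option 2) serves over HTTP; the delivery mechanism differs, not the payload shape.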
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From dguryanov at parallels.com Fri Dec 19 14:27:18 2014 From: dguryanov at parallels.com (Dmitry Guryanov) Date: Fri, 19 Dec 2014 17:27:18 +0300 Subject: [openstack-dev] Fwd: Re: [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname) Message-ID: <4395610.L4DGZyRQ6H@dblinov.sw.ru> ---------- Forwarded Message ---------- Subject: Re: [openstack-dev] [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname) Date: Friday 19 December 2014, 14:17:34 From: Daniel P. Berrange To: OpenStack Development Mailing List (not for usage questions) On Fri, Dec 19, 2014 at 05:11:57PM +0300, Dmitry Guryanov wrote: > Hello, > > If I understood correctly, there are 3 ways to provide guest OS with some data > (SSH keys, for example): > > 1. mount guest root fs on host (with libguestfs) and copy data there. > 2. config drive and cloud-init > 3. nova metadata service and cloud-init > > > All 3 methods do almost the same thing and can be enabled or disabled in nova > config file. So which one is preferred? How do people usually configure their > openstack clusters? > > I'm asking, because we are going to extend nova/libvirt driver to support our > virtualization solution (parallels driver in libvirt) and it seems it will not > work as is and requires some development. Which method is first-priority and > used by most people? I'd probably prioritize in this order: 1. config drive and cloud-init 2. nova metadata service and cloud-init 3. mount guest root fs on host (with libguestfs) and copy data there. but there's not much to choose between 1 & 2.
NB, option 3 isn't actually hardcoded to use libguestfs - it falls back to using loop devices / local mounts, albeit less secure, so not really recommended. At some point option 3 may be removed from Nova entirely since the first two options are preferred & more reliable in general. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ----------------------------------------- -- Dmitry Guryanov From dguryanov at parallels.com Fri Dec 19 14:34:19 2014 From: dguryanov at parallels.com (Dmitry Guryanov) Date: Fri, 19 Dec 2014 17:34:19 +0300 Subject: [openstack-dev] [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname) In-Reply-To: <20141219141734.GB9585@redhat.com> References: <6511590.S4nEjVVVbR@dblinov.sw.ru> <20141219141734.GB9585@redhat.com> Message-ID: <1946259.42zk8PVZa8@dblinov.sw.ru> On Friday 19 December 2014 14:17:34 Daniel P. Berrange wrote: > On Fri, Dec 19, 2014 at 05:11:57PM +0300, Dmitry Guryanov wrote: > > Hello, > > > > If I understood correctly, there are 3 ways to provide guest OS with some > > data (SSH keys, for example): > > > > 1. mount guest root fs on host (with libguestfs) and copy data there. > > 2. config drive and cloud-init > > 3. nova metadata service and cloud-init > > > > > > All 3 methods do almost the same thing and can be enabled or disabled in > > nova config file. So which one is preferred? How do people usually > > configure their openstack clusters? 
> > > > I'm asking, because we are going to extend nova/libvirt driver to support > > our virtualization solution (parallels driver in libvirt) and it seems it > > will not work as is and requires some development. Which method is > > first-priority and used by most people? > > I'd probably prioritize in this order: > > 1. config drive and cloud-init > 2. nova metadata service and cloud-init > 3. mount guest root fs on host (with libguestfs) and copy data there. > > but there's not much to choose between 1 & 2. Thanks! Config drive already works for VMs, need to check how it will work with containers, since we can't add cdrom there. > > NB, option 3 isn't actually hardcoded to use libguestfs - it falls back > to using loop devices / local mounts, albeit less secure, so not really > recommended. At some point option 3 may be removed from Nova entirely > since the first two options are preferred & more reliable in general. I see! I actually know that libguestfs is optional, just provided it as an example how nova mounts disks. BTW it will not reduce security level for containers, because we mount root fs on host to start it. > > Regards, > Daniel -- Dmitry Guryanov From dguryanov at parallels.com Fri Dec 19 14:34:54 2014 From: dguryanov at parallels.com (Dmitry Guryanov) Date: Fri, 19 Dec 2014 17:34:54 +0300 Subject: [openstack-dev] Fwd: Re: [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname) In-Reply-To: <4395610.L4DGZyRQ6H@dblinov.sw.ru> References: <4395610.L4DGZyRQ6H@dblinov.sw.ru> Message-ID: <3988378.nfudatojZV@dblinov.sw.ru> On Friday 19 December 2014 17:27:18 Dmitry Guryanov wrote: Sorry, forwarded to wrong list > ---------- Forwarded Message ---------- > > Subject: Re: [openstack-dev] [Nova] Providing instance's guest OS with data > (ssh keys, root password, hostname) > Date: Friday 19 December 2014, 14:17:34 > From: Daniel P. 
Berrange > To: OpenStack Development Mailing List (not for usage questions) dev at lists.openstack.org> > > On Fri, Dec 19, 2014 at 05:11:57PM +0300, Dmitry Guryanov wrote: > > Hello, > > > > If I understood correctly, there are 3 ways to provide guest OS with some > > data > > > (SSH keys, for example): > > > > 1. mount guest root fs on host (with libguestfs) and copy data there. > > 2. config drive and cloud-init > > 3. nova metadata service and cloud-init > > > > > > All 3 methods do almost the same thing and can be enabled or disabled in > > nova > > > config file. So which one is preferred? How do people usually configure > > their > > > openstack clusters? > > > > I'm asking, because we are going to extend nova/libvirt driver to support > > our > > > virtualization solution (parallels driver in libvirt) and it seems it will > > not > > > work as is and requires some development. Which method is first-priority > > and used by most people? > > I'd probably prioritize in this order: > > 1. config drive and cloud-init > > 2. nova metadata service and cloud-init > > 3. mount guest root fs on host (with libguestfs) and copy data there. > > > > but there's not much to choose between 1 & 2. > > NB, option 3 isn't actually hardcoded to use libguestfs - it falls back > to using loop devices / local mounts, albeit less secure, so not really > recommended. At some point option 3 may be removed from Nova entirely > since the first two options are preferred & more reliable in general. > > Regards, > Daniel -- Dmitry Guryanov From berrange at redhat.com Fri Dec 19 14:38:29 2014 From: berrange at redhat.com (Daniel P.
Berrange) Date: Fri, 19 Dec 2014 14:38:29 +0000 Subject: [openstack-dev] [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname) In-Reply-To: <1946259.42zk8PVZa8@dblinov.sw.ru> References: <6511590.S4nEjVVVbR@dblinov.sw.ru> <20141219141734.GB9585@redhat.com> <1946259.42zk8PVZa8@dblinov.sw.ru> Message-ID: <20141219143829.GC9585@redhat.com> On Fri, Dec 19, 2014 at 05:34:19PM +0300, Dmitry Guryanov wrote: > On Friday 19 December 2014 14:17:34 Daniel P. Berrange wrote: > > On Fri, Dec 19, 2014 at 05:11:57PM +0300, Dmitry Guryanov wrote: > > > Hello, > > > > > > If I understood correctly, there are 3 ways to provide guest OS with some > > > data (SSH keys, for example): > > > > > > 1. mount guest root fs on host (with libguestfs) and copy data there. > > > 2. config drive and cloud-init > > > 3. nova metadata service and cloud-init > > > > > > All 3 methods do almost the same thing and can be enabled or disabled in > > > nova config file. So which one is preferred? How do people usually > > > configure their openstack clusters? > > > > > > I'm asking, because we are going to extend nova/libvirt driver to support > > > our virtualization solution (parallels driver in libvirt) and it seems it > > > will not work as is and requires some development. Which method is > > > first-priority and used by most people? > > > > I'd probably prioritize in this order: > > > > 1. config drive and cloud-init > > 2. nova metadata service and cloud-init > > 3. mount guest root fs on host (with libguestfs) and copy data there. > > > > but there's not much to choose between 1 & 2. > > Thanks! Config drive already works for VMs, need to check how it will work > with containers, since we can't add cdrom there. There are currently two variables wrt config drive - device type - cdrom vs disk - filesystem - vfat vs iso9660 For your fully virt machines I'd probably just stick with the default that libvirt already supports.
When we discussed this for LXC, we came to the conclusion that for containers we shouldn't try to expose a block device at all. Instead just mount the contents of the config drive at the directory location that cloud-init wants the data (it was somewhere under /var/ but I can't remember where right now). I think the same makes sense for parallels' container based guests. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From baoli at cisco.com Fri Dec 19 14:53:55 2014 From: baoli at cisco.com (Robert Li (baoli)) Date: Fri, 19 Dec 2014 14:53:55 +0000 Subject: [openstack-dev] [nova][sriov] SRIOV related specs pending for approval In-Reply-To: Message-ID: Hi Joe, See this thread on the SR-IOV CI from Irena and Sandhya: http://lists.openstack.org/pipermail/openstack-dev/2014-November/050658.html http://lists.openstack.org/pipermail/openstack-dev/2014-November/050755.html I believe that Intel is building a CI system to test SR-IOV as well. 
Thanks, Robert On 12/18/14, 9:13 PM, "Joe Gordon" > wrote: On Thu, Dec 18, 2014 at 2:18 PM, Robert Li (baoli) > wrote: Hi, During the Kilo summit, the folks in the pci passthrough and SR-IOV groups discussed what we'd like to achieve in this cycle, and the result was documented in this Etherpad: https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough To get the work going, we've submitted a few design specs: Nova: Live migration with macvtap SR-IOV https://blueprints.launchpad.net/nova/+spec/sriov-live-migration Nova: sriov interface attach/detach https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach Nova: Api specify vnic_type https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type Neutron-Network settings support for vnic-type https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type Nova: SRIOV scheduling with stateless offloads https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads Now that the specs deadline is approaching, I'd like to bring them up here for exception consideration. A lot of work has been put into them, and we'd like to see them get through for Kilo. We haven't started the spec exception process yet. Regarding CI for PCI passthrough and SR-IOV, see the attached thread. Can you share this via a link to something on http://lists.openstack.org/pipermail/openstack-dev/ thanks, Robert _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From flavio at redhat.com Fri Dec 19 14:58:05 2014 From: flavio at redhat.com (Flavio Percoco) Date: Fri, 19 Dec 2014 15:58:05 +0100 Subject: [openstack-dev] [all] Lets not assume everyone is using the global `CONF` object (zaqar broken by latest keystoneclient release 1.0) In-Reply-To: <5E2373BD-4156-4484-A8B7-6BAAFBF3B513@doughellmann.com> References: <20141219121700.GA13255@redhat.com> <5E2373BD-4156-4484-A8B7-6BAAFBF3B513@doughellmann.com> Message-ID: <20141219145805.GB13255@redhat.com> On 19/12/14 09:07 -0500, Doug Hellmann wrote: > >On Dec 19, 2014, at 7:17 AM, Flavio Percoco wrote: > >> Greetings, >> >> DISCLAIMER: The following comments are neither finger pointing the >> author of this work nor the keystone team. >> >> RANT: We should really stop assuming everyone is using a global `CONF` >> object. Moreover, we should really stop using it, especially in >> libraries. >> >> That said, here's a gentle note for all of us: >> >> If I understood the flow of changes correctly, keystoneclient recently >> introduced an auth_section[0] option, which needs to be registered in >> order for it to work properly. In keystoneclient, a function[1] was >> correctly added to register this option in a conf object. >> >> keystonemiddleware was then updated to support the above and a call to >> the register function[1] was then added to the `auth_token` module[2]. >> >> The above, unfortunately, broke Zaqar's auth because Zaqar is not >> using the global `CONF` object, which means it has to register >> keystonemiddleware's options itself. Since the option was registered >> in the global conf instead of the conf object passed to >> `AuthProtocol`, the new `auth_section` option is not being registered >> as keystoneclient expects. >> >> So, as a gentle reminder to everyone, please, let's not assume all >> projects are using the global `CONF` object, and make sure all libraries >> provide a good way to register the required options.
I think either >> secretly registering options or exposing a function to let consumers >> do so is fine. >> >> I hate complaining without helping to solve the problem so, here's[3] a >> workaround to provide a, hopefully, better way to do this. Note that >> this shouldn't be the definitive fix and that we also implemented a >> workaround in zaqar as well. > >That change will fix the issue, but a better solution is to have the code in keystoneclient that wants the options handle the registration at runtime. It looks like keystoneclient/auth/conf.py:load_from_conf_options() is at least one place that's needed; there may be others. Fully agree! -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From eduard.matei at cloudfounders.com Fri Dec 19 15:00:23 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Fri, 19 Dec 2014 17:00:23 +0200 Subject: [openstack-dev] [OpenStack-dev][nova-net]Floating ip assigned as /32 from the start of the range Message-ID: Hi, I'm trying to create a vm and assign it an ip in range 10.100.130.0/16. On the host, the ip is assigned to br100 as inet 10.100.0.3/32 scope global br100 instead of 10.100.130.X/16, so it's not reachable from the outside. The localrc.conf: FLOATING_RANGE=10.100.130.0/16 Any idea what to change? Thanks, Eduard -- *Eduard Biceri Matei, Senior Software Developer* www.cloudfounders.com | eduard.matei at cloudfounders.com *CloudFounders, The Private Cloud Software Company* Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed.
If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dguryanov at parallels.com Fri Dec 19 15:36:32 2014 From: dguryanov at parallels.com (Dmitry Guryanov) Date: Fri, 19 Dec 2014 18:36:32 +0300 Subject: [openstack-dev] [Nova] Providing instance's guest OS with data (ssh keys, root password, hostname) In-Reply-To: <20141219143829.GC9585@redhat.com> References: <6511590.S4nEjVVVbR@dblinov.sw.ru> <1946259.42zk8PVZa8@dblinov.sw.ru> <20141219143829.GC9585@redhat.com> Message-ID: <2689930.3zaadKxLXW@dblinov.sw.ru> On Friday 19 December 2014 14:38:29 Daniel P. Berrange wrote: > On Fri, Dec 19, 2014 at 05:34:19PM +0300, Dmitry Guryanov wrote: > > On Friday 19 December 2014 14:17:34 Daniel P. Berrange wrote: > > > On Fri, Dec 19, 2014 at 05:11:57PM +0300, Dmitry Guryanov wrote: > > > > Hello, > > > > > > > > If I understood correctly, there are 3 ways to provide guest OS with > > > > some > > > > data (SSH keys, for example): > > > > > > > > 1. 
mount guest root fs on host (with libguestfs) and copy data there. > > > > 2. config drive and cloud-init > > > > 3. nova metadata service and cloud-init > > > > > > > > > > > > All 3 methods do almost the same thing and can be enabled or disabled > > > > in > > > > nova config file. So which one is preferred? How do people usually > > > > configure their openstack clusters? > > > > > > > > I'm asking, because we are going to extend nova/libvirt driver to > > > > support > > > > our virtualization solution (parallels driver in libvirt) and it seems > > > > it > > > > will not work as is and requires some development. Which method is > > > > first-priority and used by most people? > > > > > > I'd probably prioritize in this order: > > > 1. config drive and cloud-init > > > 2. nova metadata service and cloud-init > > > 3. mount guest root fs on host (with libguestfs) and copy data there. > > > > > > but there's not much to choose between 1 & 2. > > > > Thanks! Config drive already works for VMs, need to check how it will work > > with containers, since we can't add cdrom there. > > There are currently two variables wrt config drive > > - device type - cdrom vs disk > - filesystem - vfat vs iso9660 > > For your fully virt machines I'd probably just stick with the default > that ibvirt already supports. > > When we discussed this for LXC, we came to the conclusion that for > containers we shouldn't try to expose a block device at all. Instead > just mount the contents of the config drive at the directory location > that cloud-init wants the data (it was somewhere under /var/ but I > can't remember where right now). I think the same makes sense for > parallels' container based guests. That's good news, we have functions to mount host dir to container in PCS, so we just need to add it to libvirt driver. 
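The approach Daniel describes, exposing the config-drive contents to a container through a host directory rather than a block device, can be sketched roughly as below. This is only an illustration: the copy stands in for the bind mount a real driver would perform, and the seed path /var/lib/cloud/seed/config_drive is an assumption based on where cloud-init's ConfigDrive datasource can pick up seeded data.

```python
# Sketch: providing config-drive contents to a container guest via a
# host directory instead of a block device. The copytree below stands
# in for the bind mount a real driver would perform, and the seed path
# is an assumption; check your cloud-init version's datasource docs.
import json
import os
import shutil
import tempfile

host_drive = tempfile.mkdtemp()   # the config drive Nova generated
guest_root = tempfile.mkdtemp()   # stands in for the container rootfs

os.makedirs(os.path.join(host_drive, 'openstack', 'latest'))
with open(os.path.join(host_drive, 'openstack', 'latest',
                       'meta_data.json'), 'w') as f:
    json.dump({'hostname': 'ct-demo'}, f)

# A driver would bind-mount host_drive at this path before starting
# the container; here we copy the tree just to show the layout.
seed = os.path.join(guest_root, 'var/lib/cloud/seed/config_drive')
os.makedirs(os.path.dirname(seed))
shutil.copytree(host_drive, seed)

with open(os.path.join(seed, 'openstack', 'latest', 'meta_data.json')) as f:
    print(json.load(f)['hostname'])  # ct-demo
```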
> > Regards, > Daniel -- Dmitry Guryanov From akuznetsova at mirantis.com Fri Dec 19 15:44:00 2014 From: akuznetsova at mirantis.com (Anastasia Kuznetsova) Date: Fri, 19 Dec 2014 18:44:00 +0300 Subject: [openstack-dev] [Mistral] Plans to load and performance testing In-Reply-To: References: Message-ID: Boris, Thanks for the feedback! > But I believe that you should put bigger load here: https://etherpad.openstack.org/p/mistral-rally-testing-results As I said, it is only the beginning and I will increase the load and change its type. >As well, concurrency should be at least 2-3 times bigger than "times", otherwise it won't generate proper load and you won't collect >enough data for statistical analysis. > >As well, use the "rps" runner that generates a more real-life load. >Plus it will be nice to share as well the output of the "rally task report" command. Thanks for the advice, I will consider it in further testing and reporting. Answering your question about using Rally for integration testing, as I mentioned in our load testing plan published on the wiki page, one of our final goals is to have a Rally gate in one of the Mistral repositories, so we are interested in it and I have already prepared the first commits to Rally. Thanks, Anastasia Kuznetsova On Fri, Dec 19, 2014 at 4:51 PM, Boris Pavlovic wrote: > > Anastasia, > > Nice work on this. But I believe that you should put bigger load here: > https://etherpad.openstack.org/p/mistral-rally-testing-results > > As well, concurrency should be at least 2-3 times bigger than "times", > otherwise it won't generate proper load and you won't collect enough data > for statistical analysis. > > As well, use the "rps" runner that generates a more real-life load. > Plus it will be nice to share as well the output of the "rally task report" > command. > > > By the way, what do you think about using Rally scenarios (that you already > wrote) for integration testing as well?
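For readers unfamiliar with the runners being discussed: a Rally task file selects the load generator in its "runner" section. The fragment below is only a sketch to show where "rps" and "times" fit; the scenario name is invented for illustration, and the exact keys should be checked against the Rally documentation for your version.

```json
{
  "SomeMistralScenario.do_something": [
    {
      "runner": {
        "type": "rps",
        "rps": 20,
        "times": 500
      },
      "context": {
        "users": {
          "tenants": 2,
          "users_per_tenant": 3
        }
      }
    }
  ]
}
```

With the "rps" runner, iterations are started at a fixed rate (here a hypothetical 20 requests per second) rather than in a fixed-size batch, which is what makes it resemble real-life load.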
> > > Best regards, > Boris Pavlovic > > On Fri, Dec 19, 2014 at 2:39 PM, Anastasia Kuznetsova < > akuznetsova at mirantis.com> wrote: >> >> Hello everyone, >> >> I want to announce that Mistral team has started work on load and >> performance testing in this release cycle. >> >> Brief information about scope of our work can be found here: >> >> https://wiki.openstack.org/wiki/Mistral/Testing#Load_and_Performance_Testing >> >> First results are published here: >> https://etherpad.openstack.org/p/mistral-rally-testing-results >> >> Thanks, >> Anastasia Kuznetsova >> @ Mirantis Inc. >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Fri Dec 19 15:55:43 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Fri, 19 Dec 2014 16:55:43 +0100 Subject: [openstack-dev] Cross distribution talks on Friday In-Reply-To: <545533E1.1070202@debian.org> References: <5454DC71.7040300@debian.org> <20141101152935.GA12516@tesla> <545533E1.1070202@debian.org> Message-ID: <54944A7F.6010500@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 01/11/14 20:26, Thomas Goirand wrote: > On 11/01/2014 11:29 PM, Kashyap Chamarthy wrote: >> On Sat, Nov 01, 2014 at 09:13:21PM +0800, Thomas Goirand wrote: >>> Hi, >>> >>> I was wondering if some distribution OpenStack package >>> maintainers would be interested to have some cross-distribution >>> discussion on Friday, during the contributors sessions. >>> >>> I'll be happy to discuss with Ubuntu people, but not only. I'd >>> like to meet also guys working with RedHat and Gentoo. 
I'm sure >>> we may have some interesting things to share. >>> >>> >>> OpenStack release management team would also be welcome. >>> >>> If you are interested, please reply to this mail. >> >> You might also want to start an etherpad instance with some >> initial agenda notes and throw the URL here to gauge interest. > > Here's the etherpad: > https://etherpad.openstack.org/p/cross_distro_talks > >> On a related note, a bunch of Fedora/RHEL/CentOS/Scientific >> Linux packagers are planning[1] to meet up at the summit to >> discuss RDO project packaging aspects, etc. >> >> [1] https://etherpad.openstack.org/p/RDO_Meetup_Summit > > Well, if you guys are only talking about RPM packaging, as I'm > doing only Debian stuff [1], I'm only mildly interested. If we're > going to talk more about packaging in general, then maybe. > > On 11/02/2014 01:45 AM, Adam Young wrote: >> Getting the whole "PBR version" issues cleared up would be huge. > > I'm not sure what issue you are talking about, as I believe > there's none. This has been discussed for about a year and a half > before we finally had the OSLO_PACKAGE_VERSION to play with, when > this was designed a few years ago. I'm now very happy about the way > PBR does things, and wouldn't like it to change anything. > Currently, what I do in Debian is (from the debian/rules file > included from openstack-pkg-tools): > > DEBVERS ?= $(shell dpkg-parsechangelog | sed -n -e 's/^Version: > //p') VERSION ?= $(shell echo '$(DEBVERS)' | sed -e > 's/^[[:digit:]]*://' -e 's/[-].*//') export > OSLO_PACKAGE_VERSION=$(VERSION) Note that OSLO_PACKAGE_VERSION is not public. Instead, we should use PBR_VERSION: http://docs.openstack.org/developer/pbr/packagers.html#versioning > > You may not be familiar with dpkg-parsechangelog, so let me > explain.
Basically, if my debian/changelog has on top 2014.2~rc2, > the debian/rules file will export 2014.2~rc2 into the > OSLO_PACKAGE_VERSION shell variable before doing "python setup.py > install", and then PBR is smart enough to use that. > > I am not at all a RedHat packaging specialist, but from the old > time when I did some RPM porting from my Debian packages, I believe > that in your .spec file, this should translate into something like > this: > > %install export OSLO_PACKAGE_VERSION=%{version} %{__python} > setup.py install -O1 --skip-build --root %{buildroot} > > Then everything should be ok and PBR will become your friend. I > hope this helps and that I'm not writing any RPM packaging mistake! > :) > > Cheers, > > Thomas Goirand (zigo) > > [1] Here's the list of packages I maintain in Debian (only for > OpenStack): > https://qa.debian.org/developer.php?login=openstack-devel at lists.alioth.debian.org > > > > _______________________________________________ OpenStack-dev > mailing list OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUlEp/AAoJEC5aWaUY1u57f7cH+wVU4GQ324bfHp01kp8aG1fh wIN6NoXnn9zuxtu1DcU+x+UhkPa9/nW61EPMJlHva+HVtmW4dY5obNSnMSP1l1cY cTgY660nwxZByheGCv97FfzFQnetuNZpJ4E7k05EzrwsyOuW8IBPYyPhaRKj18Je E5wVh6LqMQEYdrz0dWQ2YmuEkHHeOL4aNi/LCmNWP1vc5uaRTuXIIFIOz7dcvm/x tW/s4fAlBBWEsjiat/WYzbKSNyVYcJYXwpPTBaNAMGygRJwRp5gAYBHqgD6FpuEN i36hLQ6+dkDEMg0h3uHMU/UJ4qARhlZmRC/Hj9GMdDD9YGLLsDo/lzllm/iODWs= =TGl4 -----END PGP SIGNATURE----- From amit.gandhi at RACKSPACE.COM Fri Dec 19 16:17:27 2014 From: amit.gandhi at RACKSPACE.COM (Amit Gandhi) Date: Fri, 19 Dec 2014 16:17:27 +0000 Subject: [openstack-dev] [api] Analysis of current API design In-Reply-To: <36742172-3C0F-4195-993F-D935000726EC@rackspace.com> References: <36742172-3C0F-4195-993F-D935000726EC@rackspace.com> Message-ID: How does the allocation of the service types
in the service catalog get created? For example, looking at the link provided below for service catalogs [1], you have with Rackspace a service type of rax:queues (which is running zaqar). However, in devstack, zaqar is listed as "messaging". FWIW I think the Rackspace entry came before the devstack entry, but there is now an inconsistency. How do new OpenStack-related projects (that are not incubated/graduated) appear in the service catalog with a consistent service type name that can be used across providers with the confidence that it refers to the same set of APIs? Is it just an assumption, or do we need a catalogue somewhere listing what each service type is associated with? Does adding it to Devstack pretty much stake claim to the service type? Thanks Amit. [1] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog > On Dec 18, 2014, at 11:19 PM, Everett Toews wrote: > > Hi All, > > At the recent API WG meeting [1] we discussed the need for more analysis of current API design. > > We need to get better at doing analysis of current API design as part of our guideline proposals. We are not creating these guidelines in a vacuum. The current design should be analyzed and taken into account. > > Naturally the type of analysis will vary from guideline to guideline but backing your proposals with some kind of analysis will only make them better. Let's take some examples.
I found that the representation was pretty consistent but how metadata was CRUDed wasn't as consistent. I hope that information can help the review along. > > 3. This Guideline for collection resources' representation structures [5] basically codifies in a guideline what was found in the analysis. Good stuff and it has definitely helped the review along. > > For more information about analysis of current API design see #1 of How to Contribute [6] > > Any thoughts or feedback on the above? > > Thanks, > Everett > > [1] http://eavesdrop.openstack.org/meetings/api_wg/2014/api_wg.2014-12-18-16.00.log.html > [2] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog > [3] https://review.openstack.org/#/c/141229/ > [4] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Metadata > [5] https://review.openstack.org/#/c/133660/ > [6] https://wiki.openstack.org/wiki/API_Working_Group#How_to_Contribute > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amit.gandhi at RACKSPACE.COM Fri Dec 19 16:19:46 2014 From: amit.gandhi at RACKSPACE.COM (Amit Gandhi) Date: Fri, 19 Dec 2014 16:19:46 +0000 Subject: [openstack-dev] [python-*client] py33 jobs seem to be failing In-Reply-To: <20141217203834.GD2497@yuggoth.org> References: <20141217203834.GD2497@yuggoth.org> Message-ID: Is there an ETA for this fix? Thanks Amit. > On Dec 17, 2014, at 3:38 PM, Jeremy Stanley wrote: > > On 2014-12-17 11:09:59 -0500 (-0500), Steve Martinelli wrote: > [...] >> The stack trace leads me to believe that docutils or sphinx is the >> culprit, but neither has released a new version in the time the >> bug has been around, so I'm not sure what the root cause of the >> problem is.
> > It's an unforeseen interaction between new PBR changes to support > Setuptools 8 and the way docutils supports Py3K by running 2to3 > during installation (entrypoint scanning causes pre-translated > docutils to be loaded into the execution space through the egg-info > writer PBR grew to be able to record Git SHA details outside of > version strings). A solution is currently being developed. > -- > Jeremy Stanley > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From me at not.mn Fri Dec 19 16:21:36 2014 From: me at not.mn (John Dickinson) Date: Fri, 19 Dec 2014 08:21:36 -0800 Subject: [openstack-dev] Swift 2.2.1 released Message-ID: <10FC818F-3E38-496B-9D70-F75F8B74D446@not.mn> I'm happy to announce the release of Swift 2.2.1. The work of 28 contributors (including 8 first-time contributors), this release is definitely operator-centric. I recommend that you upgrade; as always you can upgrade to this release with no customer downtime. Get the release: https://launchpad.net/swift/kilo/2.2.1 Full change log: https://github.com/openstack/swift/blob/master/CHANGELOG Below I've highlighted a few of the more significant updates in this release. * Swift now rejects object names with unicode surrogates. These unicode code points are not able to be encoded as UTF-8, so they are now formally rejected. * Storage node error limits now survive a ring reload. Each Swift proxy server tracks errors when talking to a storage node. If a storage node sends too many errors, no further requests are sent to that node for a time. However, previously this error tracking was cleared with a ring reload, and a ring reload could happen frequently if some servers were being gradually added to the cluster. Now, the error tracking is not lost on ring reload, and error tracking is aggregated across storage policies.
Basically, this means that the proxy server has a more accurate view of the health of the cluster and your cluster will be less stressed when you have failures and capacity adjustments at the same time. * Empty account and container partition directories are now cleaned up. This keeps the system healthy and prevents a large number of empty directories from (significantly) slowing down the replication process. * Swift now includes a full translation for Simplified Chinese (zh_CN locale). I'd like to thank all of the Swift contributors for helping with this release. I'd especially like to thank the first-time contributors listed below: Cedric Dos Santos Martin Geisler Filippo Giunchedi Gregory Haynes Daisuke Morita Hisashi Osanai Shilla Saebi Pearl Yajing Tan Thank you, and have a happy holiday season. John From davanum at gmail.com Fri Dec 19 16:27:54 2014 From: davanum at gmail.com (Davanum Srinivas) Date: Fri, 19 Dec 2014 11:27:54 -0500 Subject: [openstack-dev] [python-*client] py33 jobs seem to be failing In-Reply-To: References: <20141217203834.GD2497@yuggoth.org> Message-ID: Amit, from chatter on infra IRC, the fix should be almost done if not already done. If you see any jobs fail, recheck; you may want to hop onto IRC. thanks, dims On Fri, Dec 19, 2014 at 11:19 AM, Amit Gandhi wrote: > Is there an ETA for this fix? > > Thanks > Amit. > > >> On Dec 17, 2014, at 3:38 PM, Jeremy Stanley wrote: >> >> On 2014-12-17 11:09:59 -0500 (-0500), Steve Martinelli wrote: >> [...] >>> The stack trace leads me to believe that docutils or sphinx is the >>> culprit, but neither has released a new version in the time the >>> bug has been around, so I'm not sure what the root cause of the >>> problem is.
>> >> It's an unforeseen interaction between new PBR changes to support >> Setuptools 8 and the way docutils supports Py3K by running 2to3 >> during installation (entrypoint scanning causes pre-translated >> docutils to be loaded into the execution space through the egg-info >> writer PBR grew to be able to record Git SHA details outside of >> version strings). A solution is currently being developed. >> -- >> Jeremy Stanley >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From stevemar at ca.ibm.com Fri Dec 19 16:32:02 2014 From: stevemar at ca.ibm.com (Steve Martinelli) Date: Fri, 19 Dec 2014 11:32:02 -0500 Subject: [openstack-dev] [python-*client] py33 jobs seem to be failing In-Reply-To: References: <20141217203834.GD2497@yuggoth.org> Message-ID: Seems like https://review.openstack.org/#/c/142561/ fixed the issue: there haven't been any hits in logstash in roughly 24 hrs, and we're getting clean builds in python-keystoneclient and openstackclient. http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTmFtZUVycm9yOiBuYW1lICdTdGFuZGFyZEVycm9yJyBpcyBub3QgZGVmaW5lZFwiIGJ1aWxkX3N0YXR1czonRkFJTFVSRSciLCJmaWVsZHMiOlsibWVzc2FnZSIsImJ1aWxkX25hbWUiLCJidWlsZF9zdGF0dXMiXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDE5MDA2NjI4MjM5fQ== Steve Amit Gandhi wrote on 12/19/2014 11:19:46 AM: > From: Amit Gandhi > To: "OpenStack Development Mailing List (not for usage questions)" > > Date: 12/19/2014 11:28 AM > Subject: Re: [openstack-dev] [python-*client] py33 jobs seem to be failing > > Is there an ETA for this fix?
> > Thanks > Amit. > > > > On Dec 17, 2014, at 3:38 PM, Jeremy Stanley wrote: > > > > On 2014-12-17 11:09:59 -0500 (-0500), Steve Martinelli wrote: > > [...] > >> The stack trace leads me to believe that docutils or sphinx is the > >> culprit, but neither has released a new version in the time the > >> bug has been around, so I'm not sure what the root cause of the > >> problem is. > > > > It's an unforeseen interaction between new PBR changes to support > > Setuptools 8 and the way docutils supports Py3K by running 2to3 > > during installation (entrypoint scanning causes pre-translated > > docutils to be loaded into the execution space through the egg-info > > writer PBR grew to be able to record Git SHA details outside of > > version strings). A solution is currently being developed. > > -- > > Jeremy Stanley > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Fri Dec 19 16:46:17 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 19 Dec 2014 16:46:17 +0000 Subject: [openstack-dev] Git client vulnerability In-Reply-To: <20141219133405.GA20977@gmail.com> References: <54941B7A.2020905@x-ion.de> <20141219131948.GH2497@yuggoth.org> <20141219133405.GA20977@gmail.com> Message-ID: <20141219164616.GI2497@yuggoth.org> On 2014-12-19 13:34:06 +0000 (+0000), Louis Taylor wrote: > On Fri, Dec 19, 2014 at 01:19:48PM +0000, Jeremy Stanley wrote: > > Please re-read that advisory[1]. GitHub's _servers_ were not > > affected as this is a client-side vulnerability.
What GitHub did was > > release fixed versions of their "GitHub for Windows" and "GitHub for > Mac" _client_ tools. > > GitHub's servers were patched such that it is now not possible to host a malicious repository on GitHub servers, and attempts to push one will be rejected. This is mentioned in the advisory. Yes, thanks, I phrased that poorly. GitHub's servers were not vulnerable, but you are correct that they have added some mitigation within their service to help shield as-of-yet unpatched clients from the announced vulnerability. -- Jeremy Stanley From anteaya at anteaya.info Fri Dec 19 16:59:31 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Fri, 19 Dec 2014 11:59:31 -0500 Subject: [openstack-dev] The state of nova-network to neutron migration Message-ID: <54945973.1010904@anteaya.info> Rather than waste your time making excuses let me state where we are and where I would like to get to, also sharing my thoughts about how you can get involved if you want to see this happen as badly as I have been told you do.
Where we are: * a great deal of foundation work has been accomplished to achieve parity with nova-network and neutron to the extent that those involved are ready for migration plans to be formulated and be put in place * a summit session happened with notes and intentions[0] * people took responsibility and promptly got swamped with other responsibilities * spec deadlines arose and in neutron's case have passed * currently a neutron spec [1] is a work in progress (and it needs significant work still) and a nova spec is required and doesn't have a first draft or a champion Where I would like to get to: * I need people in addition to Oleg Bondarev to be available to help come up with ideas and words to describe them to create the specs in a very short amount of time (Oleg is doing great work and is a fabulous person, yay Oleg, he just can't do this alone) * specifically I need a contact on the nova side of this complex problem, similar to Oleg on the neutron side * we need to have a way for people involved with this effort to find each other, talk to each other and track progress * we need to have representation at both nova and neutron weekly meetings to communicate status and needs We are at K-2 and our current status is insufficient to expect this work will be accomplished by the end of K-3. I will be championing this work, in whatever state, so at least it doesn't fall off the map. If you would like to help this effort please get in contact. I will be thinking of ways to further this work and will be communicating to those who identify as affected by these decisions in the most effective methods of which I am capable. Thank you to all who have gotten us as far as we have gotten in this effort, it has been a long haul and you have all done great work. Let's keep going and finish this. Thank you, Anita.
[0] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron [1] https://review.openstack.org/#/c/142456/ From jp at jamezpolley.com Fri Dec 19 17:10:41 2014 From: jp at jamezpolley.com (James Polley) Date: Fri, 19 Dec 2014 18:10:41 +0100 Subject: [openstack-dev] [TripleO] CI/CD report - 2014-12-12 - 2014-12-19 Message-ID: Two major CI outages this week 2014-12-12 - 2014-12-15 - pip install MySQL-python failing on fedora - There was an updated mariadb-devel package, which caused pip install of the python bindings to fail as gcc could not build using the provided headers. - derekh put in a workaround on the 15th but we have to wait until upstream provides a fixed package for a permanent resolution 2014-12-17 - failures in many projects on py33 tests - Caused by an unexpected interaction between new features in pbr and the way docutils handles python3 compatibility - derekh resolved this by tweaking the build process to not build pbr - just download the latest pbr from upstream As always, more details can be found at https://etherpad.openstack.org/p/tripleo-ci-breakages The HP2 region is still struggling along trying to be built. I've created a trello board at https://trello.com/b/MXbIP2qe/tripleo-cd to track current roadblocks + the current outstanding patches we're using to build HP2. If you're a CD admin and would like to help get HP2 up and running, take a look at the board (and ping me when you hit something I've written in a way that only makes sense if you already understand the problem). If you're not a CD admin, a few of the patches need some simple tidyups. From randall.burt at RACKSPACE.COM Fri Dec 19 17:22:21 2014 From: randall.burt at RACKSPACE.COM (Randall Burt) Date: Fri, 19 Dec 2014 17:22:21 +0000 Subject: [openstack-dev] [Heat] How can I write at milestone section of blueprint?
In-Reply-To: <20141219101748.GD22503@t430slt.redhat.com> References: <20141219170200.0619.E1E9C6FF@jp.fujitsu.com> <20141219101748.GD22503@t430slt.redhat.com> Message-ID: <5CD1F2A3-597C-42FB-B5F6-F9AE026FEDFB@rackspace.com> There should already be blueprints in launchpad for very similar functionality. For example: https://blueprints.launchpad.net/heat/+spec/lifecycle-callbacks. While that specifies Heat sending notifications to the outside world, there has been discussion around debugging that would allow the receiver to send notifications back. I only point this out so you can see there should be similar blueprints and specs that you can reference and use as examples. On Dec 19, 2014, at 4:17 AM, Steven Hardy wrote: > On Fri, Dec 19, 2014 at 05:02:04PM +0900, Yasunori Goto wrote: >> >> Hello, >> >> This is my first mail to the OpenStack community, > > Welcome! :) > >> and I have a small question about how to write a blueprint for Heat. >> >> Currently our team would like to propose 2 interfaces >> for user operations in HOT. >> (One is "Event handler" which is to notify user's defined event to heat. >> Another is definitions of action when heat catches the above notification.) >> So, I'm preparing the blueprint for it. > > Please include details of the exact use-case, e.g. the problem you're trying > to solve (not just the proposed solution), as it's possible we can suggest > solutions based on existing interfaces. > >> However, I cannot find how I can write at the milestone section of blueprint. >> >> Heat blueprint template has a section for Milestones. >> "Milestones -- Target Milestone for completion:" >> >> But I don't think I can decide it by myself. >> In my understanding, it should be decided by PTL. > > Normally, it's decided by when the person submitting the spec expects to > finish writing the code by. The PTL doesn't really have much control over > that ;) > >> In addition, our request above will probably not finish
I suppose it will be "L" version or later. > > So to clarify, you want to propose the feature, but you're not planning on > working on it (e.g implementing it) yourself? > >> So, what should I write at this section? >> "Kilo-x", "L version", or empty? > > As has already been mentioned, it doesn't matter that much - I see it as a > statement of intent from developers. If you're just requesting a feature, > you can even leave it blank if you want and we'll update it when an > assignee is found (e.g during the spec review). > > Thanks, > > Steve > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mestery at mestery.com Fri Dec 19 17:28:28 2014 From: mestery at mestery.com (Kyle Mestery) Date: Fri, 19 Dec 2014 11:28:28 -0600 Subject: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration In-Reply-To: <54945973.1010904@anteaya.info> References: <54945973.1010904@anteaya.info> Message-ID: On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno wrote: > > Rather than waste your time making excuses let me state where we are and > where I would like to get to, also sharing my thoughts about how you can > get involved if you want to see this happen as badly as I have been told > you do. 
> > Where we are: > * a great deal of foundation work has been accomplished to achieve > parity with nova-network and neutron to the extent that those involved > are ready for migration plans to be formulated and be put in place > * a summit session happened with notes and intentions[0] > * people took responsibility and promptly got swamped with other > responsibilities > * spec deadlines arose and in neutron's case have passed > * currently a neutron spec [1] is a work in progress (and it needs > significant work still) and a nova spec is required and doesn't have a > first draft or a champion > > Where I would like to get to: > * I need people in addition to Oleg Bondarev to be available to help > come up with ideas and words to describe them to create the specs in a > very short amount of time (Oleg is doing great work and is a fabulous > person, yay Oleg, he just can't do this alone) > * specifically I need a contact on the nova side of this complex > problem, similar to Oleg on the neutron side > * we need to have a way for people involved with this effort to find > each other, talk to each other and track progress > * we need to have representation at both nova and neutron weekly > meetings to communicate status and needs > > We are at K-2 and our current status is insufficient to expect this work > will be accomplished by the end of K-3. I will be championing this work, > in whatever state, so at least it doesn't fall off the map. If you would > like to help this effort please get in contact. I will be thinking of > ways to further this work and will be communicating to those who > identify as affected by these decisions in the most effective methods of > which I am capable. > > Thank you to all who have gotten us as far as we have gotten in this > effort, it has been a long haul and you have all done great work. Let's > keep going and finish this. > > Thank you, > Anita.
> > Thank you for volunteering to drive this effort Anita, I am very happy about this. I support you 100%. I'd like to point out that we really need a point of contact on the nova side, similar to Oleg on the Neutron side. IMHO, this is step 1 here to continue moving this forward. Thanks, Kyle > [0] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron > [1] https://review.openstack.org/#/c/142456/ > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From fungi at yuggoth.org Fri Dec 19 17:41:21 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 19 Dec 2014 17:41:21 +0000 Subject: [openstack-dev] [python-*client] py33 jobs seem to be failing In-Reply-To: References: <20141217203834.GD2497@yuggoth.org> Message-ID: <20141219174121.GJ2497@yuggoth.org> On 2014-12-19 11:32:02 -0500 (-0500), Steve Martinelli wrote: > Seems like https://review.openstack.org/#/c/142561/ fixed the issue. > > As there haven't been any hits in logstash in roughly 24hrs, and > we're getting clean builds in python-keystoneclient and > openstackclient. [...] Apparently there are some PBR-based projects still using nose which is failing Python 3.x jobs for similar reasons as docutils was. We just discovered this a few minutes ago and are trying to figure out a similar solution. -- Jeremy Stanley From dougw at a10networks.com Fri Dec 19 18:33:14 2014 From: dougw at a10networks.com (Doug Wiegley) Date: Fri, 19 Dec 2014 18:33:14 +0000 Subject: [openstack-dev] [neutron][lbaas] meetings during holidays Message-ID: Hi all, Anyone have big agenda items for the 12/23 or 12/30 meeting? If not, I'd suggest we cancel those two meetings, and bring up anything small during the on-demand portion of the neutron meetings.
If I don't hear anything by Monday, we will cancel those two meetings. Thanks, Doug From everett.toews at RACKSPACE.COM Fri Dec 19 18:57:48 2014 From: everett.toews at RACKSPACE.COM (Everett Toews) Date: Fri, 19 Dec 2014 18:57:48 +0000 Subject: [openstack-dev] [api] Analysis of current API design In-Reply-To: References: <36742172-3C0F-4195-993F-D935000726EC@rackspace.com> Message-ID: <70E9E1F0-3607-4681-BD39-38D5776F8212@rackspace.com> I thought the analysis on service catalogs might attract some attention. ;) More inline On Dec 19, 2014, at 10:17 AM, Amit Gandhi wrote: > How do the allocation of the service types in the service catalog get created. AFAICT it's arbitrary. Provider picks the string used in the service type. > For example, looking at the link provided below for service catalogs [1], you have with Rackspace a service type of rax:queues (which is running zaqar). However in devstack, zaqar is listed as "messaging". FWIW I think the rackspace entry came before the devstack entry, but there is now an inconsistency. > > How do new openstack related projects (that are not incubated/graduated) appear in the service catalog with a consistent service type name that can be used across providers with the confidence it refers to the same set of api's? That's what we're hoping to achieve with guidelines around the service catalog. So when the provider goes to pick the strings used in the service catalog, there's consistency. > Is it just an assumption, or do we need a catalogue somewhere listing what each service type is associated with? Yes. This is what would be part of the guideline. > Does adding it to Devstack pretty much stake claim to the service type? To date, this has been the case. The DevStack version of the service catalog sort of became a de facto standard. But not de facto enough and hence the inconsistency. It'd be great to hear thoughts from Adam Y, Dolph M, and Dean T on the subject. I don't think I have the full picture.
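To make the type-string inconsistency discussed in this thread concrete, here is a minimal sketch of how the same Zaqar API can be published under two different catalog types. The "messaging" and "rax:queues" strings come from the thread; the entry names, endpoint URLs, and the `find_service` helper are invented purely for illustration (Keystone v2-style catalog fields):

```python
# Two hypothetical catalog entries for the same Zaqar API. Only the
# "messaging" / "rax:queues" type strings come from this thread; the
# names and endpoint URLs are made up for illustration.
devstack_entry = {
    "type": "messaging",   # DevStack's de facto choice
    "name": "zaqar",
    "endpoints": [{"publicURL": "http://203.0.113.10:8888/v1"}],
}
provider_entry = {
    "type": "rax:queues",  # provider-prefixed type, same API underneath
    "name": "cloudQueues",
    "endpoints": [{"publicURL": "https://queues.api.example.com/v1"}],
}

def find_service(catalog, service_type):
    # What most clients do: select catalog entries by the "type" string.
    return [s for s in catalog if s["type"] == service_type]

# The same lookup succeeds against one cloud and fails against the other,
# which is exactly the portability problem a guideline would address:
assert find_service([devstack_entry], "messaging")
assert not find_service([provider_entry], "messaging")
```

A client keying purely on "type" has no way to tell these two entries describe the same API, which is why a shared registry of type strings keeps coming up.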
Thanks, Everett From pcm at cisco.com Fri Dec 19 19:01:10 2014 From: pcm at cisco.com (Paul Michali (pcm)) Date: Fri, 19 Dec 2014 19:01:10 +0000 Subject: [openstack-dev] [neutron][vpnaas] Sub-team meetings on Dec 20th and 27th? Message-ID: <837EA5B8-4C8E-4B0C-98BC-A174122156E7@cisco.com> Does anyone have agenda items to discuss for the next two meetings during the holidays? If so, please let me know (and add them to the Wiki page), and we'll hold the meeting. Otherwise, we can continue on Jan 6th, and any pop-up items can be addressed on the mailing list or Neutron IRC. Please let me know by Monday, if you'd like us to meet. Regards, PCM (Paul Michali) MAIL ..... pcm at cisco.com IRC ..... pc_m (irc.freenode.com) TW ..... @pmichali GPG Key .. 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 From dtroyer at gmail.com Fri Dec 19 19:01:48 2014 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 19 Dec 2014 13:01:48 -0600 Subject: [openstack-dev] [api] Analysis of current API design In-Reply-To: References: <36742172-3C0F-4195-993F-D935000726EC@rackspace.com> Message-ID: On Fri, Dec 19, 2014 at 10:17 AM, Amit Gandhi wrote: > > > How do the allocation of the service types in the service catalog get > created. > Officially these are deployer-defined. And this is why some clients allow you to configure the service name used. The service types are supposed to be consistent, but are also not in practice. > How do new openstack related projects (that are not incubated/graduated) > appear in the service catalog with a consistent service type name that can > be used across providers with the confidence it refers to the same set of > api's?
Is it just an assumption, or do we need a catalogue somewhere > listing what each service type is associated with? Does adding it to > Devstack pretty much stake claim to the service type? DevStack has become a de facto source for defaults for a lot of things, and this is not a good thing. The DevStack defaults are chosen for developer and CI testing use and do not necessarily take actual deployment considerations into account. dt -- Dean Troyer dtroyer at gmail.com From thingee at gmail.com Fri Dec 19 19:07:52 2014 From: thingee at gmail.com (Mike Perez) Date: Fri, 19 Dec 2014 11:07:52 -0800 Subject: [openstack-dev] [cinder] [driver] DB operations In-Reply-To: References: Message-ID: <20141219190752.GA7641@gmail.com> On 01:20 Fri 19 Dec , Duncan Thomas wrote: > So our general advice has historically been 'drivers should not be accessing > the db directly'. I haven't had a chance to look at your driver code yet, > I've been on vacation, but my suggestion is that if you absolutely must > store something in the admin metadata rather than somewhere that is covered > by the model update (generally provider location and provider auth) then > writing some helper methods that wrap the context bump and db call would be > better than accessing it directly from the driver. > > Duncan Thomas > On Dec 18, 2014 11:41 PM, "Amit Das" wrote: > > > So what is the proper way to run these DB operations from within a driver? Drivers not doing db changes is also documented in the "How to contribute a driver" wiki page.
https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver -- Mike Perez From ramy.asselin at hp.com Fri Dec 19 19:43:50 2014 From: ramy.asselin at hp.com (Asselin, Ramy) Date: Fri, 19 Dec 2014 19:43:50 +0000 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4240C5@G4W3223.americas.hpqcorp.net> Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A424BC0@G4W3223.americas.hpqcorp.net> Eduard, If you run this command, you can see which jobs are registered: >telnet localhost 4730 >status There are 3 numbers per job: queued, running, and workers that can run the job. Make sure the job is listed & the last number ("workers") is non-zero. To run the job again without submitting a patch set, leave a "recheck" comment on the patch & make sure your zuul layout.yaml is configured to trigger off that comment. For example [1]. Be sure to use the sandbox repository. [2] I'm not aware of other ways. Ramy [1] https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L20 [2] https://github.com/openstack-dev/sandbox From: Eduard Matei [mailto:eduard.matei at cloudfounders.com] Sent: Friday, December 19, 2014 3:36 AM To: Asselin, Ramy Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi all, After a little struggle with the config scripts I managed to get a working setup that is able to process openstack-dev/sandbox and run noop-check-communication.
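The Gearman status check Ramy describes above can also be scripted; below is a minimal Python sketch speaking the same plain-text admin protocol as the telnet session (the host/port are the defaults from the thread; the job names used in the test are just examples):

```python
import socket

def parse_status(payload):
    """Parse the admin 'status' reply: one job per line with tab-separated
    fields (name, queued, running, available workers); a lone '.' ends it."""
    jobs = {}
    for line in payload.splitlines():
        if line == ".":
            break
        name, queued, running, workers = line.split("\t")
        jobs[name] = (int(queued), int(running), int(workers))
    return jobs

def gearman_status(host="localhost", port=4730):
    """Send 'status' over Gearman's admin protocol (the same data that
    'telnet localhost 4730' + 'status' shows interactively)."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(b"status\n")
        data = b""
        while not data.endswith(b".\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            data += chunk
    return parse_status(data.decode())

# A job can only run when the last number (available workers) is non-zero;
# a zero there is the classic NOT_REGISTERED symptom.
```

This makes it easy to poll the registered jobs from a script instead of telnetting in by hand.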
Then, I tried enabling the dsvm-tempest-full job but it keeps returning "NOT_REGISTERED" 2014-12-19 12:07:14,683 INFO zuul.IndependentPipelineManager: Change depends on changes [] 2014-12-19 12:07:14,683 INFO zuul.Gearman: Launch job noop-check-communication for change with dependent changes [] 2014-12-19 12:07:14,693 INFO zuul.Gearman: Launch job dsvm-tempest-full for change with dependent changes [] 2014-12-19 12:07:14,694 ERROR zuul.Gearman: Job is not registered with Gearman 2014-12-19 12:07:14,694 INFO zuul.Gearman: Build complete, result NOT_REGISTERED 2014-12-19 12:07:14,765 INFO zuul.Gearman: Build name: build:noop-check-communication unique: 333c6ea077324a788e3c37a313d872c5> started 2014-12-19 12:07:14,910 INFO zuul.Gearman: Build name: build:noop-check-communication unique: 333c6ea077324a788e3c37a313d872c5> complete, result SUCCESS 2014-12-19 12:07:14,916 INFO zuul.IndependentPipelineManager: Reporting change , actions: [, {'verified': -1}>] Nodepool's logs show no reference to dsvm-tempest-full, and neither do Jenkins' logs. Any idea how to enable this job? Also, I got the "Cloud provider" setup and I can access it from the jenkins master. Any idea how I can manually trigger the dsvm-tempest-full job to run and test the cloud provider without having to push a review to Gerrit? Thanks, Eduard On Thu, Dec 18, 2014 at 7:52 PM, Eduard Matei > wrote: Thanks for the input. I managed to get another master working (on Ubuntu 13.10), again with some issues since it was already set up. I'm now working towards setting up the slave. Will add comments to those reviews. Thanks, Eduard On Thu, Dec 18, 2014 at 7:42 PM, Asselin, Ramy > wrote: Yes, Ubuntu 12.04 is tested as mentioned in the readme [1]. Note that the referenced script is just a wrapper that pulls all the latest from various locations in openstack-infra, e.g. [2]. Ubuntu 14.04 support is WIP [3] FYI, there's a spec to get an in-tree 3rd party ci solution [4]. Please add your comments if this interests you.
[1] https://github.com/rasselin/os-ext-testing/blob/master/README.md [2] https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29 [3] https://review.openstack.org/#/c/141518/ [4] https://review.openstack.org/#/c/139745/ From: Punith S [mailto:punith.s at cloudbyte.com] Sent: Thursday, December 18, 2014 3:12 AM To: OpenStack Development Mailing List (not for usage questions); Eduard Matei Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi Eduard we tried running https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh on ubuntu master 12.04, and it appears to be working fine on 12.04. thanks On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei > wrote: Hi, Seems I can't install using puppet on the jenkins master using install_master.sh from https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh because it's running Ubuntu 11.10 and it appears unsupported. I managed to install puppet manually on master and everything else fails. So I'm trying to manually install zuul and nodepool and jenkins job builder, see where I end up. The slave looks complete, got some errors on running install_slave so I ran parts of the script manually, changing some params and it appears installed but no way to test it without the master. Any ideas welcome. Thanks, Eduard On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy > wrote: Manually running the script requires a few environment settings. Take a look at the README here: https://github.com/openstack-infra/devstack-gate Regarding cinder, I'm using this repo to run our cinder jobs (fork from jaypipes). https://github.com/rasselin/os-ext-testing Note that this solution doesn't use the Jenkins gerrit trigger plugin, but zuul. There's a sample job for cinder here. It's in Jenkins Job Builder format.
https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample You can ask more questions in IRC freenode #openstack-cinder. (irc# asselin) Ramy From: Eduard Matei [mailto:eduard.matei at cloudfounders.com] Sent: Tuesday, December 16, 2014 12:41 AM To: Bailey, Darragh Cc: OpenStack Development Mailing List (not for usage questions); OpenStack Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi, Can someone point me to some working documentation on how to set up third-party CI? (joinfu's instructions don't seem to work, and manually running devstack-gate scripts fails:
Running gate_hook
Job timeout set to: 163 minutes
timeout: failed to run command '/opt/stack/new/devstack-gate/devstack-vm-gate.sh': No such file or directory
ERROR: the main setup script run by this job failed - exit code: 127
please look at the relevant log files to determine the root cause
Cleaning up host ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)
Build step 'Execute shell' marked build as failure.)
I have a working Jenkins slave with devstack and our internal libraries, and I have the Gerrit Trigger Plugin working and triggering on patches created; I just need the actual job contents so that it can get to comment with the test results.
Thanks, Eduard On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei > wrote: Hi Darragh, thanks for your input. I double-checked the job settings and fixed it: - build triggers is set to Gerrit event - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin and tested separately) - Trigger on: Patchset Created - Gerrit Project: Type: Path, Pattern: openstack-dev/sandbox, Branches: Type: Path, Pattern: ** (was Type: Plain on both) Now the job is triggered by commits on openstack-dev/sandbox :) Regarding the Query and Trigger Gerrit Patches page, I found my patch using the query: status:open project:openstack-dev/sandbox change:139585 and I can trigger it manually and it executes the job. But I still have the problem: what should the job do? It doesn't actually do anything; it doesn't run tests or comment on the patch. Do you have an example of a job? Thanks, Eduard On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh > wrote: Hi Eduard, I would check the trigger settings in the job, particularly which "type" of pattern matching is being used for the branches. I've found it tends to be the spot that catches most people out when configuring jobs with the Gerrit Trigger plugin. If you're looking to trigger against all branches then you would want "Type: Path" and "Pattern: **" appearing in the UI. If you have sufficient access, using the 'Query and Trigger Gerrit Patches' page accessible from the main view will make it easier to confirm that your Jenkins instance can actually see changes in gerrit for the given project (which should mean that it can see the corresponding events as well). You can also use the same page to re-trigger for PatchsetCreated events to see if you've set the patterns on the job correctly.
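The "Type: Plain" vs "Type: Path" distinction Darragh describes is the usual trap: Plain compares the literal string, so a branch pattern of `**` matches nothing, while Path does Ant-style glob matching where `**` matches everything. A rough approximation of that matching (this is only an illustration of the idea, not the Gerrit Trigger plugin's actual implementation, whose exact rules may differ):

```python
import re


def path_pattern_to_regex(pattern):
    """Compile an Ant-style path pattern: '**' crosses '/', '*' does not."""
    parts = []
    i = 0
    while i < len(pattern):
        if pattern.startswith('**', i):
            parts.append('.*')       # '**' matches any characters, including '/'
            i += 2
        elif pattern[i] == '*':
            parts.append('[^/]*')    # '*' stays within one path segment
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    return re.compile('^%s$' % ''.join(parts))


def branch_matches(pattern, branch):
    return path_pattern_to_regex(pattern).match(branch) is not None


print(branch_matches('**', 'master'))             # True
print(branch_matches('**', 'stable/juno'))        # True
print(branch_matches('master', 'stable/juno'))    # False
print(branch_matches('stable/*', 'stable/juno'))  # True
```

Under Plain matching, by contrast, only a branch literally named `**` would match the pattern `**`, which is why the job never fired until the type was switched to Path.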
Regards, Darragh Bailey "Nothing is foolproof to a sufficiently talented fool" - Unknown On 08/12/14 14:33, Eduard Matei wrote: > Resending this to dev ML as it seems i get quicker response :) > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > Patchset Created", chose as server the configured Gerrit server that > was previously tested, then added the project openstack-dev/sandbox > and saved. > I made a change on dev sandbox repo but couldn't trigger my job. > > Any ideas? > > Thanks, > Eduard > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > >> wrote: > > Hello everyone, > > Thanks to the latest changes to the creation of service accounts > process we're one step closer to setting up our own CI platform > for Cinder. > > So far we've got: > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > our storage solution) > - Service account configured and tested (can manually connect to > review.openstack.org and get events > and publish comments) > > Next step would be to set up a job to do the actual testing, this > is where we're stuck. > Can someone please point us to a clear example on how a job should > look like (preferably for testing Cinder on Kilo)? Most links > we've found are broken, or tools/scripts are no longer working. > Also, we cannot change the Jenkins master too much (it's owned by > Ops team and they need a list of tools/scripts to review before > installing/running so we're not allowed to experiment). 
> > Thanks, > Eduard > > -- > > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > > > > > *CloudFounders, The Private Cloud Software Company* > > Disclaimer: > This email and any files transmitted with it are confidential and > intended solely for the use of the individual or entity to whom > they are addressed.If you are not the named addressee or an > employee or agent responsible for delivering this message to the > named addressee, you are hereby notified that you are not > authorized to read, print, retain, copy or disseminate this > message or any part of it. If you have received this email in > error we request you to notify us by reply e-mail and to delete > all electronic files of the message. If you are not the intended > recipient you are notified that disclosing, copying, distributing > or taking any action in reliance on the contents of this > information is strictly prohibited. E-mail transmission cannot be > guaranteed to be secure or error free as information could be > intercepted, corrupted, lost, destroyed, arrive late or > incomplete, or contain viruses. The sender therefore does not > accept liability for any errors or omissions in the content of > this message, and shall have no liability for any loss or damage > suffered by the user, which arise as a result of e-mail transmission. 
> -- > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > *CloudFounders, The Private Cloud Software Company* > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.matei at cloudfounders.com CloudFounders, The Private Cloud Software Company _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- regards, punith s cloudbyte.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From devananda.vdv at gmail.com Fri Dec 19 19:49:23 2014 From: devananda.vdv at gmail.com (Devananda van der Veen) Date: Fri, 19 Dec 2014 19:49:23 +0000 Subject: [openstack-dev] [Ironic] maintaining our stable branches Message-ID: Hi folks! We now have control over our own stable branch maintenance, which is good, because the stable maintenance team is not responsible for non-integrated releases of projects (e.g., both of our previous releases). Also, to note, with the Big Tent changes starting to occur, such responsibilities will be more distributed in the future. [0] For the remainder of this cycle, we'll reuse the ironic-milestone group for stable branch maintenance [1]; when Kilo is released, we'll need to create an ironic-stable-maint gerrit group and move to that, or generally do whatever the process looks like at that point.
In any case, for now I've seeded the group with Adam Gandelman and myself (since we were already tracking Ironic's stable branches). If any other cores would like to help with this, please ping me; I'm happy to add folks but don't want to assume that all cores want the responsibility. We should also decide and document, explicitly, what support we're giving to the Icehouse and Juno releases. My sense is that we should drop support for Icehouse, but I'll put that on the weekly meeting agenda for after the holidays. -Devananda [0] http://lists.openstack.org/pipermail/openstack-dev/2014-November/050390.html [1] https://review.openstack.org/#/c/143112/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nati.ueno at gmail.com Fri Dec 19 20:29:45 2014 From: nati.ueno at gmail.com (Nati Ueno) Date: Fri, 19 Dec 2014 12:29:45 -0800 Subject: [openstack-dev] [libvirt][vpnaas] IRC Meeting Clash In-Reply-To: <20141219101216.GD4410@redhat.com> References: <20141219101216.GD4410@redhat.com> Message-ID: Hi folks, I'm from the vpnaas team. Sorry, we didn't know that slot was already booked. We will use another IRC channel. > Paul let's find another available slot. 2014-12-19 2:12 GMT-08:00 Daniel P. Berrange : > On Fri, Dec 19, 2014 at 10:05:44AM +0000, Matthew Gilliard wrote: >> Hello, >> >> At the moment, both Libvirt [1] and VPNaaS [2] are down as having >> meetings in #openstack-meeting-3 at 1500UTC on Tuesdays. Of course, >> there can be only one - and it looks as if the VPN meeting is the one >> that actually takes place there. >> >> What's the status of the libvirt meetings? Have they moved, or are >> they not happening any more? > > They happen, but when there are no agenda items that need discussing > they're done pretty quickly. > > Given that the VPN meeting was only added to the wiki a week ago, I think > it should be the meeting that changes time or place. > > NB identifying free slots from the wiki is a total PITA.
It is easier to > install the iCal calendar feed, which gives you a visual representation > of what the free/busy slots are during the week. > > Regards, > Daniel > -- > |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| > |: http://libvirt.org -o- http://virt-manager.org :| > |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| > |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Nachi Ueno email:nati.ueno at gmail.com twitter:http://twitter.com/nati From greg at greghaynes.net Fri Dec 19 20:43:28 2014 From: greg at greghaynes.net (Gregory Haynes) Date: Fri, 19 Dec 2014 20:43:28 +0000 Subject: [openstack-dev] [TripleO] CI/CD report - 2014-12-12 - 2014-12-19 In-Reply-To: References: Message-ID: <1419021723-sup-6568@greghaynes0.opus.gah> Excerpts from James Polley's message of 2014-12-19 17:10:41 +0000: > Two major CI outages this week > > 2014-12-12 - 2014-12-15 - pip install MySQL-python failing on fedora > - There was an updated mariadb-devel package, which caused pip install of > the python bindings to fail as gcc could not build using the provided > headers. > - derekh put in a workaround on the 15th but we have to wait until > upstream provides a fixed package for a permanent resolution > > 2014-12-17 - failures in many projects on py33 tests > - Caused by an unexpected interaction between new features in pbr and the > way docutils handles python3 compatibility > - derekh resolved this by tweaking the build process to not build pbr - > just download the latest pbr from upstream I am a bad person and forgot to update our CI outage etherpad, but we had another outage that was caused by the setuptools PEP440 breakage: https://review.openstack.org/#/c/141659/ We might be able to revert this now if the world is fixed.... 
From dzimine at stackstorm.com Fri Dec 19 21:01:01 2014 From: dzimine at stackstorm.com (Dmitri Zimine) Date: Fri, 19 Dec 2014 13:01:01 -0800 Subject: [openstack-dev] [Mistral] For-each In-Reply-To: References: Message-ID: <31AB780F-4868-424C-A642-13D89C3D6AAA@stackstorm.com> Thanks Angus. One obvious thing is we either make it somewhat consistent, or name it differently. These look similar, at least on the surface. I wonder if the feedback we've got so far (for-each is confusing because it brings wrong expectations) is applicable to Heat, too. Another observation on naming consistency - Mistral uses a dash, like `for-each`. Heat uses _underscores when naming YAML keys. So does the TOSCA standard. We should have thought about this earlier, but it may not be too late to fix it while the v2 spec is still forming. DZ. On Dec 18, 2014, at 9:07 PM, Angus Salkeld wrote: > On Mon, Dec 15, 2014 at 8:00 PM, Nikolay Makhotkin wrote: > Hi, > > Here is the doc with suggestions on the specification for the for-each feature. > > You are free to comment and ask questions. > > https://docs.google.com/document/d/1iw0OgQcU0LV_i3Lnbax9NqAJ397zSYA3PMvl6F_uqm0/edit?usp=sharing > > Just as a drive-by comment, there is a Heat spec for a "for-each": https://review.openstack.org/#/c/140849/ > (there hasn't been a lot of feedback for it yet tho') > > Nice to have these somewhat consistent. > > -Angus > > -- > Best Regards, Nikolay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pcm at cisco.com Fri Dec 19 21:23:35 2014 From: pcm at cisco.com (Paul Michali (pcm)) Date: Fri, 19 Dec 2014 21:23:35 +0000 Subject: Re: [openstack-dev] [libvirt][vpnaas] IRC Meeting Clash In-Reply-To: References: <20141219101216.GD4410@redhat.com> Message-ID: Sorry, I had checked when I set up the VPN meeting and thought that meeting-3 was available on the wiki, but apparently not. I moved VPNaaS to meeting-4, which is not in use, and updated the wiki. Nachi, let me know your thoughts about skipping or holding meetings next week. Regards, PCM (Paul Michali) MAIL ..... pcm at cisco.com IRC ..... pc_m (irc.freenode.com) TW ..... @pmichali GPG Key ... 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 On Dec 19, 2014, at 3:29 PM, Nati Ueno wrote: > Hi folks > > I'm from the vpnaas team. > Sorry, we didn't know that slot was already booked. > We will use another IRC channel. > >> Paul > let's find another available slot. > > > 2014-12-19 2:12 GMT-08:00 Daniel P. Berrange : >> On Fri, Dec 19, 2014 at 10:05:44AM +0000, Matthew Gilliard wrote: >>> Hello, >>> >>> At the moment, both Libvirt [1] and VPNaaS [2] are down as having >>> meetings in #openstack-meeting-3 at 1500UTC on Tuesdays. Of course, >>> there can be only one - and it looks as if the VPN meeting is the one >>> that actually takes place there. >>> >>> What's the status of the libvirt meetings? Have they moved, or are >>> they not happening any more? >> >> They happen, but when there are no agenda items that need discussing >> they're done pretty quickly. >> >> Given that the VPN meeting was only added to the wiki a week ago, I think >> it should be the meeting that changes time or place. >> >> NB identifying free slots from the wiki is a total PITA. It is easier to >> install the iCal calendar feed, which gives you a visual representation >> of what the free/busy slots are during the week.
>> >> Regards, >> Daniel >> -- >> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| >> |: http://libvirt.org -o- http://virt-manager.org :| >> |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| >> |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Nachi Ueno > email:nati.ueno at gmail.com > twitter:http://twitter.com/nati -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pcm at cisco.com Fri Dec 19 22:16:09 2014 From: pcm at cisco.com (Paul Michali (pcm)) Date: Fri, 19 Dec 2014 22:16:09 +0000 Subject: [openstack-dev] [neutron][fwaas] neutron/agent/firewall.py Message-ID: Curious? This has a FirewallDriver and NoopFirewallDriver. Should this be moved into the neutron_fwaas repo? Regards, PCM (Paul Michali) MAIL ?..?. pcm at cisco.com IRC ??..? pc_m (irc.freenode.com) TW ???... @pmichali GPG Key ? 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From mgagne at iweb.com Fri Dec 19 22:23:05 2014 From: mgagne at iweb.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=) Date: Fri, 19 Dec 2014 17:23:05 -0500 Subject: Re: [openstack-dev] [neutron][fwaas] neutron/agent/firewall.py In-Reply-To: References: Message-ID: <5494A549.2090507@iweb.com> On 2014-12-19 5:16 PM, Paul Michali (pcm) wrote: > > This has a FirewallDriver and NoopFirewallDriver. Should this be moved > into the neutron_fwaas repo? > AFAIK, FirewallDriver is used to implement SecurityGroup: See: - https://github.com/openstack/neutron/blob/master/neutron/agent/firewall.py#L26-L29 - https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L45 - https://github.com/openstack/neutron/blob/master/neutron/plugins/hyperv/agent/security_groups_driver.py#L25 This class looks to not be used by neutron-fwaas -- Mathieu From nati.ueno at gmail.com Fri Dec 19 22:36:54 2014 From: nati.ueno at gmail.com (Nati Ueno) Date: Fri, 19 Dec 2014 14:36:54 -0800 Subject: [openstack-dev] [neutron][vpnaas] Sub-team meetings on Dec 20th and 27th? In-Reply-To: <837EA5B8-4C8E-4B0C-98BC-A174122156E7@cisco.com> References: <837EA5B8-4C8E-4B0C-98BC-A174122156E7@cisco.com> Message-ID: +1 for skip, let's have a vacation :) 2014-12-19 11:01 GMT-08:00 Paul Michali (pcm) : > Does anyone have agenda items to discuss for the next two meetings during > the holidays? > > If so, please let me know (and add them to the Wiki page), and we'll hold > the meeting. Otherwise, we can continue on Jan 6th, and any pop-up items can > be addressed on the mailing list or Neutron IRC. > > Please let me know by Monday, if you'd like us to meet. > > > Regards, > > PCM (Paul Michali) > > MAIL ..... pcm at cisco.com > IRC ..... pc_m (irc.freenode.com) > TW ..... @pmichali > GPG Key ... 4525ECC253E31A83 > Fingerprint ..
307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Nachi Ueno email:nati.ueno at gmail.com twitter:http://twitter.com/nati From skandasw at cisco.com Fri Dec 19 22:50:04 2014 From: skandasw at cisco.com (Sridar Kandaswamy (skandasw)) Date: Fri, 19 Dec 2014 22:50:04 +0000 Subject: Re: [openstack-dev] [neutron][fwaas] neutron/agent/firewall.py In-Reply-To: <5494A549.2090507@iweb.com> Message-ID: +1 Mathieu. Paul, this is not related to FWaaS. Thanks Sridar On 12/19/14, 2:23 PM, "Mathieu Gagné" wrote: >On 2014-12-19 5:16 PM, Paul Michali (pcm) wrote: >> This has a FirewallDriver and NoopFirewallDriver. Should this be moved >> into the neutron_fwaas repo? > >AFAIK, FirewallDriver is used to implement SecurityGroup: > >See: >- https://github.com/openstack/neutron/blob/master/neutron/agent/firewall.py#L26-L29 >- https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L45 >- https://github.com/openstack/neutron/blob/master/neutron/plugins/hyperv/agent/security_groups_driver.py#L25 > >This class looks to not be used by neutron-fwaas > >-- >Mathieu > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From anne at openstack.org Fri Dec 19 23:30:29 2014 From: anne at openstack.org (Anne Gentle) Date: Fri, 19 Dec 2014 17:30:29 -0600 Subject: [openstack-dev] End of year gratitude Message-ID: I don't gush much, but when I do, it's end-of-year reflection gushing. We have some of the most interesting docs in the world. I'm proud of what we've accomplished as a community, as reviewers, as coaches, as tool makers, and as writers.
To encourage and motivate nearly 200 separate contributors in six months is no small feat. I hope you can celebrate with friends, family, pets, and wildlife, taking a nice, well-earned break. Warmly, Anne .--._.--.--.__.--.--.__.--.--.__.--.--._.--. _(_ _Y_ _Y_ _Y_ _Y_ _)_ [___] [___] [___] [___] [___] [___] /:' \ /:' \ /:' \ /:' \ /:' \ /:' \ |:: | |:: | |:: | |:: | |:: | |:: | \::. / \::. / \::. / \::. / \::. / \::. / \::./ \::./ \::./ \::./ \::./ \::./ '=' '=' '=' '=' '=' '=' -------------- next part -------------- An HTML attachment was scrubbed... URL: From dzimine at stackstorm.com Sat Dec 20 00:54:23 2014 From: dzimine at stackstorm.com (Dmitri Zimine) Date: Fri, 19 Dec 2014 16:54:23 -0800 Subject: [openstack-dev] [Mistral] For-each In-Reply-To: <31AB780F-4868-424C-A642-13D89C3D6AAA@stackstorm.com> References: <31AB780F-4868-424C-A642-13D89C3D6AAA@stackstorm.com> Message-ID: <82E688DB-515D-46F7-9FCB-F077E3DFDF26@stackstorm.com> Folks, I appended some more ideas on making for-each loop more readable / less confusing in the document. It?s not rocking the boat (yet) - all the key agreements done that far, stay so far. It?s refinements. Please take a look, leave comments, +1 / -1 for various ideas, and leave your ideas there, too. https://docs.google.com/document/d/1iw0OgQcU0LV_i3Lnbax9NqAJ397zSYA3PMvl6F_uqm0/edit#heading=h.5hjdjqxsgfle DZ. On Dec 19, 2014, at 1:01 PM, Dmitri Zimine wrote: > Thanks Angus. > > One obvious thing is we either make it somewhat consistent, or name it differently. > These looks similar, at least on the surface. I wonder if the feedback we?ve got so far (for-each is confusing because it brings wrong expectations) is applicable to Heat, too. > > Another observation on naming consistency - mistral uses dash, like `for-each`. > Heat uses _underscores when naming YAML keys. > So does TOSCA standard. We should have thought about this earlier but it may be not late to fix it while v2 spec is still forming. > > DZ. 
> > > On Dec 18, 2014, at 9:07 PM, Angus Salkeld wrote: > >> On Mon, Dec 15, 2014 at 8:00 PM, Nikolay Makhotkin wrote: >> Hi, >> >> Here is the doc with suggestions on specification for for-each feature. >> >> You are free to comment and ask questions. >> >> https://docs.google.com/document/d/1iw0OgQcU0LV_i3Lnbax9NqAJ397zSYA3PMvl6F_uqm0/edit?usp=sharing >> >> >> >> Just as a drive by comment, there is a Heat spec for a "for-each": https://review.openstack.org/#/c/140849/ >> (there hasn't been a lot of feedback for it yet tho') >> >> Nice to have these somewhat consistent. >> >> -Angus >> >> >> -- >> Best Regards,Nikolay >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amit.das at cloudbyte.com Sat Dec 20 02:37:13 2014 From: amit.das at cloudbyte.com (Amit Das) Date: Sat, 20 Dec 2014 08:07:13 +0530 Subject: [openstack-dev] [cinder] [driver] DB operations In-Reply-To: References: Message-ID: Thanks Duncan. Do you mean hepler methods in the specific driver class? On 19 Dec 2014 14:51, "Duncan Thomas" wrote: > So our general advice has historical been 'drivers should not be accessing > the db directly'. I haven't had chance to look at your driver code yet, > I've been on vacation, but my suggestion is that if you absolutely must > store something in the admin metadata rather than somewhere that is covered > by the model update (generally provider location and provider auth) then > writing some helper methods that wrap the context bump and db call would be > better than accessing it directly from the driver. 
> > Duncan Thomas > On Dec 18, 2014 11:41 PM, "Amit Das" wrote: > >> Hi Stackers, >> >> I have been developing a Cinder driver for CloudByte storage and have >> come across some scenarios where the driver needs to do create, read & >> update operations on cinder database (volume_admin_metadata table). This is >> required to establish a mapping between OpenStack IDs with the backend >> storage IDs. >> >> Now, I have got some review comments w.r.t the usage of DB related >> operations esp. w.r.t raising the context to admin. >> >> In short, it has been advised not to use "*context.get_admin_context()*". >> >> >> https://review.openstack.org/#/c/102511/15/cinder/volume/drivers/cloudbyte/cloudbyte.py >> >> However, i get errors trying to use the default context as shown below: >> >> *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher File >> "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 103, in >> is_admin_context* >> *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher return >> context.is_admin* >> *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher >> AttributeError: 'module' object has no attribute 'is_admin'* >> >> So what is the proper way to run these DB operations from within a driver >> ? >> >> >> Regards, >> Amit >> *CloudByte Inc.* >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
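As a sketch of the helper-method approach Duncan suggests above - wrap the "context bump" and the DB call behind a narrow interface so the driver itself never touches the DB layer directly. All names here are hypothetical stand-ins (including the fake DB object used so the snippet runs on its own), not the real cinder.db API:

```python
class AdminMetadataHelper(object):
    """Narrow interface wrapping the admin-context bump plus the DB call,
    so a driver never calls the DB API or get_admin_context() directly."""

    def __init__(self, get_admin_context, db_api):
        # get_admin_context: callable returning an elevated context
        # (in Cinder this role would be played by context.get_admin_context)
        self._get_admin_context = get_admin_context
        self._db = db_api

    def update_backend_mapping(self, volume_id, backend_id):
        # The only write the driver is allowed: record the backend ID.
        ctxt = self._get_admin_context()
        self._db.volume_admin_metadata_update(
            ctxt, volume_id, {'backend_id': backend_id})

    def get_backend_mapping(self, volume_id):
        ctxt = self._get_admin_context()
        meta = self._db.volume_admin_metadata_get(ctxt, volume_id)
        return meta.get('backend_id')


class FakeDB(object):
    """In-memory stand-in for the volume_admin_metadata table."""

    def __init__(self):
        self._meta = {}

    def volume_admin_metadata_update(self, ctxt, volume_id, metadata):
        self._meta.setdefault(volume_id, {}).update(metadata)

    def volume_admin_metadata_get(self, ctxt, volume_id):
        return self._meta.get(volume_id, {})


helper = AdminMetadataHelper(lambda: object(), FakeDB())
helper.update_backend_mapping('vol-1', 'cb-42')
print(helper.get_backend_mapping('vol-1'))  # cb-42
```

The point of the wrapper is exactly what Duncan describes later in the thread: the driver sees only a small, sane set of operations, and the admin-context handling lives in one reviewable place.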
URL: From dzimine at stackstorm.com Sat Dec 20 05:35:54 2014 From: dzimine at stackstorm.com (Dmitri Zimine) Date: Fri, 19 Dec 2014 21:35:54 -0800 Subject: [openstack-dev] [Mistral] Plans to load and performance testing In-Reply-To: References: Message-ID: Anastasia, any start is a good start. > 1 api 1 engine 1 executor, list-workbooks what exactly does it mean: 1) is mistral deployed on 3 boxes with one component per box, or are all three processes on the same box? 2) is the list-workbooks test running while workflow executions are going on? How many? what's the character of the load 3) when it says 60% success what exactly does it mean, what kind of failures? 4) what is the durability criteria, how long do we expect Mistral to withstand the load. Let's discuss this in detail at the next IRC meeting? Thanks again for getting this started. DZ. On Dec 19, 2014, at 7:44 AM, Anastasia Kuznetsova wrote: > Boris, > > Thanks for the feedback! > > > But I believe that you should put bigger load here: https://etherpad.openstack.org/p/mistral-rally-testing-results > > As I said it is only the beginning and I will increase the load and change its type. > > >As well concurrency should be at least 2-3 times bigger than times otherwise it won't generate proper load and you won't collect >enough data for statistical analysis. > > > >As well use "rps" runner that generates more real life load. > >Plus it will be nice to share as well output of "rally task report" command. > > Thanks for the advice, I will consider it in further testing and reporting. > > Answering your question about using Rally for integration testing, as I mentioned in our load testing plan published on the wiki page, one of our final goals is to have a Rally gate in one of the Mistral repositories, so we are interested in it and I am already preparing first commits to Rally. > > Thanks, > Anastasia Kuznetsova > > On Fri, Dec 19, 2014 at 4:51 PM, Boris Pavlovic wrote: > Anastasia, > > Nice work on this.
But I believe that you should put a bigger load here: https://etherpad.openstack.org/p/mistral-rally-testing-results > > As well, concurrency should be at least 2-3 times bigger than times, otherwise it won't generate proper load and you won't collect enough data for statistical analysis. > > As well, use the "rps" runner that generates a more real-life load. > Plus it will be nice to share the output of the "rally task report" command as well. > > > By the way, what do you think about using the Rally scenarios (that you already wrote) for integration testing as well? > > > Best regards, > Boris Pavlovic > > On Fri, Dec 19, 2014 at 2:39 PM, Anastasia Kuznetsova wrote: > Hello everyone, > > I want to announce that the Mistral team has started work on load and performance testing in this release cycle. > > Brief information about the scope of our work can be found here: > https://wiki.openstack.org/wiki/Mistral/Testing#Load_and_Performance_Testing > > First results are published here: > https://etherpad.openstack.org/p/mistral-rally-testing-results > > Thanks, > Anastasia Kuznetsova > @ Mirantis Inc. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
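For reference, the difference between a fixed-concurrency runner and the "rps" runner Boris mentions might look like this in a Rally task file. The scenario name and all values are illustrative assumptions, not a tested configuration - check the Rally runner documentation for the exact schema:

```
{
  "MistralWorkbooks.list_workbooks": [
    {
      "runner": {
        "type": "rps",
        "rps": 5,
        "times": 500
      },
      "context": {
        "users": {"tenants": 2, "users_per_tenant": 3}
      }
    }
  ]
}
```

The "rps" runner issues requests at a steady rate (here, a guessed 5 requests per second until 500 iterations complete), which models real client traffic better than a small fixed-concurrency loop.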
URL: From duncan.thomas at gmail.com Sat Dec 20 14:05:22 2014 From: duncan.thomas at gmail.com (Duncan Thomas) Date: Sat, 20 Dec 2014 06:05:22 -0800 Subject: [openstack-dev] [cinder] [driver] DB operations In-Reply-To: References: Message-ID: No, I mean that if drivers are going to access database, then they should do it via a defined interface that limits what they can do to a sane set of operations. I'd still prefer that they didn't need extra access beyond the model update, but I don't know if that is possible. Duncan Thomas On Dec 19, 2014 6:43 PM, "Amit Das" wrote: > Thanks Duncan. > Do you mean hepler methods in the specific driver class? > On 19 Dec 2014 14:51, "Duncan Thomas" wrote: > >> So our general advice has historical been 'drivers should not be >> accessing the db directly'. I haven't had chance to look at your driver >> code yet, I've been on vacation, but my suggestion is that if you >> absolutely must store something in the admin metadata rather than somewhere >> that is covered by the model update (generally provider location and >> provider auth) then writing some helper methods that wrap the context bump >> and db call would be better than accessing it directly from the driver. >> >> Duncan Thomas >> On Dec 18, 2014 11:41 PM, "Amit Das" wrote: >> >>> Hi Stackers, >>> >>> I have been developing a Cinder driver for CloudByte storage and have >>> come across some scenarios where the driver needs to do create, read & >>> update operations on cinder database (volume_admin_metadata table). This is >>> required to establish a mapping between OpenStack IDs with the backend >>> storage IDs. >>> >>> Now, I have got some review comments w.r.t the usage of DB related >>> operations esp. w.r.t raising the context to admin. >>> >>> In short, it has been advised not to use "*context.get_admin_context()* >>> ". 
>>> >>> >>> https://review.openstack.org/#/c/102511/15/cinder/volume/drivers/cloudbyte/cloudbyte.py >>> >>> However, i get errors trying to use the default context as shown below: >>> >>> *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher File >>> "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 103, in >>> is_admin_context* >>> *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher return >>> context.is_admin* >>> *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher >>> AttributeError: 'module' object has no attribute 'is_admin'* >>> >>> So what is the proper way to run these DB operations from within a >>> driver ? >>> >>> >>> Regards, >>> Amit >>> *CloudByte Inc.* >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amit.das at cloudbyte.com Sat Dec 20 14:36:06 2014 From: amit.das at cloudbyte.com (Amit Das) Date: Sat, 20 Dec 2014 20:06:06 +0530 Subject: [openstack-dev] [cinder] [driver] DB operations In-Reply-To: References: Message-ID: Got it Duncan. I will re-check if I can arrive at any solution without accessing the database. Regards, Amit *CloudByte Inc.* On Sat, Dec 20, 2014 at 7:35 PM, Duncan Thomas wrote: > No, I mean that if drivers are going to access database, then they should > do it via a defined interface that limits what they can do to a sane set of > operations. 
I'd still prefer that they didn't need extra access beyond the > model update, but I don't know if that is possible. > > Duncan Thomas > On Dec 19, 2014 6:43 PM, "Amit Das" wrote: > >> Thanks Duncan. >> Do you mean hepler methods in the specific driver class? >> On 19 Dec 2014 14:51, "Duncan Thomas" wrote: >> >>> So our general advice has historical been 'drivers should not be >>> accessing the db directly'. I haven't had chance to look at your driver >>> code yet, I've been on vacation, but my suggestion is that if you >>> absolutely must store something in the admin metadata rather than somewhere >>> that is covered by the model update (generally provider location and >>> provider auth) then writing some helper methods that wrap the context bump >>> and db call would be better than accessing it directly from the driver. >>> >>> Duncan Thomas >>> On Dec 18, 2014 11:41 PM, "Amit Das" wrote: >>> >>>> Hi Stackers, >>>> >>>> I have been developing a Cinder driver for CloudByte storage and have >>>> come across some scenarios where the driver needs to do create, read & >>>> update operations on cinder database (volume_admin_metadata table). This is >>>> required to establish a mapping between OpenStack IDs with the backend >>>> storage IDs. >>>> >>>> Now, I have got some review comments w.r.t the usage of DB related >>>> operations esp. w.r.t raising the context to admin. >>>> >>>> In short, it has been advised not to use "*context.get_admin_context()* >>>> ". 
>>>> >>>> >>>> https://review.openstack.org/#/c/102511/15/cinder/volume/drivers/cloudbyte/cloudbyte.py >>>> >>>> However, i get errors trying to use the default context as shown below: >>>> >>>> *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher File >>>> "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 103, in >>>> is_admin_context* >>>> *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher return >>>> context.is_admin* >>>> *2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher >>>> AttributeError: 'module' object has no attribute 'is_admin'* >>>> >>>> So what is the proper way to run these DB operations from within a >>>> driver ? >>>> >>>> >>>> Regards, >>>> Amit >>>> *CloudByte Inc.* >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pawel.skowron at intel.com Sat Dec 20 19:27:12 2014 From: pawel.skowron at intel.com (Skowron, Pawel) Date: Sat, 20 Dec 2014 19:27:12 +0000 Subject: [openstack-dev] [Fuel] Relocation of freshly deployed OpenStack by Fuel Message-ID: -Need a little guidance with Mirantis version of OpenStack. 
We want move freshly deployed cloud, without running instances but with HA option to other physical location. The other location means different ranges of public network. And I really want move my installation without cloud redeployment. What I think is required to change is public network settings. The public network settings can be divided in two different areas: 1) Floating ip range for external access to running VM instances 2) Fuel reserved pool for service endpoints (virtual ips and staticly assigned ips) The first one 1) I believe but I haven't tested that _is not a problem_ but any insight will be invaluable. I think it would be possible change to floating network ranges, as an admin in OpenStack itself. I will just add another "network" as external network. But the second issue 2) is I am worried about. What I found the virtual ips (vip) are assigned to one of controller (primary role of HA) and written in haproxy/pacemaker configuration. To allow access from public network by this ips I would probably need to reconfigure all HA support services which have hardcoded vips in its configuration files, but it looks very complicated and fragile. I have even found that public_vip is used in nova.conf (to get access to glance). So the relocation will require reconfiguration of nova and maybe other openstack services. In the case of KeyStone it would be a real problem (ips are stored in database). Has someone any experience with this kind of scenario and would be kind to share it ? Please help. I have used Fuel 6.0 technical preview. Pawel Skowron pawel.skowron at intel.com -------------- next part -------------- An HTML attachment was scrubbed... 
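On the Keystone part of Pawel's problem: the endpoints stored in the database amount to a set of URLs that each need the old public VIP swapped for the new one, after which the endpoints would have to be recreated in Keystone. A rough, self-contained sketch of just the URL rewrite - the IPs and endpoint URLs are illustrative; on a real deployment the list would come from the Keystone endpoint table or client:

```python
# Hypothetical old-public-VIP -> new-public-VIP mapping.
OLD_TO_NEW = {'172.16.0.2': '10.20.30.2'}

def rewrite_endpoint(url, mapping):
    """Return the endpoint URL with every known old IP replaced."""
    for old_ip, new_ip in mapping.items():
        url = url.replace(old_ip, new_ip)
    return url

# Illustrative endpoint URLs as they might sit in the Keystone DB.
endpoints = [
    'http://172.16.0.2:5000/v2.0',   # identity, public
    'http://172.16.0.2:9292',        # image, public
]
for url in endpoints:
    print(rewrite_endpoint(url, OLD_TO_NEW))
```

This only covers the endpoint URLs; the haproxy/pacemaker VIP resources and any service config files carrying public_vip would still need their own pass.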
URL: From r1chardj0n3s at gmail.com Sat Dec 20 20:25:58 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Sat, 20 Dec 2014 20:25:58 +0000 Subject: [openstack-dev] [horizon] static files handling, bower/ References: <5492DD6F.2020604@sheep.art.pl> Message-ID: This is a good proposal, though I'm unclear on how the static_settings.py file is populated by a developer (as opposed to a packager, which you described). Richard On Fri Dec 19 2014 at 12:59:37 AM Radomir Dopieralski < openstack at sheep.art.pl> wrote: > Hello, > > revisiting the package management for the Horizon's static files again, > I would like to propose a particular solution. Hopefully it will allow > us to both simplify the whole setup, and use the popular tools for the > job, without losing too much of benefits of our current process. > > The changes we would need to make are as follows: > > * get rid of XStatic entirely; > * add to the repository a configuration file for Bower, with all the > required bower packages listed and their versions specified; > * add to the repository a static_settings.py file, with a single > variable defined, STATICFILES_DIRS. That variable would be initialized > to a list of pairs mapping filesystem directories to URLs within the > /static tree. By default it would only have a single mapping, pointing > to where Bower installs all the stuff by default; > * add a line "from static_settings import STATICFILES_DIRS" to the > settings.py file; > * add jobs both to run_tests.sh and any gate scripts, that would run Bower; > * add a check on the gate that makes sure that all direct and indirect > dependencies of all required Bower packages are listed in its > configuration files (pretty much what we have for requirements.txt now); > > That's all. Now, how that would be used. > > 1. The developers will just use Bower the way they would normally use > it, being able to install and test any of the libraries in any versions > they like. 
The only additional thing is that they would need to add any > additional libraries or changed versions to the Bower configuration file > before they push their patch for review and merge. > > 2. The packagers can read the list of all required packages from the > Bower configuration file, and make sure they have all the required > library packages in the required versions. > > Next, they replace the static_settings.py file with one they have > prepared manually or automatically. The file lists the locations of all > the library directories, and, in the case when the directory structure > differs from what Bower provides, even mappings between subdirectories > and individual files. > > 3. Security patches need to go into the Bower packages directly, which > is good for the whole community. > > 4. If we ever need a library that is not packaged for Bower, we will > package it just as we had with the XStatic packages, only for Bower, > which has a much larger user base and more chance of other projects also > using that package and helping with its testing. > > What do you think? Do you see any disastrous problems with this system? > -- > Radomir Dopieralski > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From majopela at redhat.com Sat Dec 20 22:16:40 2014 From: majopela at redhat.com (=?utf-8?Q?Miguel_=C3=81ngel_Ajo?=) Date: Sat, 20 Dec 2014 23:16:40 +0100 Subject: [openstack-dev] [neutron][fwaas] neutron/agent/firewall.py In-Reply-To: References: Message-ID: <8B026DF24916425299031EC18BFABEB3@redhat.com> Correct, this is for the security groups implementation Miguel Ángel Ajo On Friday, 19 December 2014 at 23:50, Sridar Kandaswamy (skandasw) wrote: > +1 Mathieu. Paul, this is not related to FWaaS.
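Going back to the Horizon static-files proposal a little further up: as described there, static_settings.py would be a plain Python module defining a single STATICFILES_DIRS variable as (URL prefix, filesystem directory) pairs, which settings.py then imports. A minimal sketch - the directory path is an illustrative assumption, not part of the proposal; a packager would substitute the distribution's own library locations:

```python
# static_settings.py -- illustrative sketch of the file described above.
# The path below is an assumed Bower default install location; a packager
# replaces this module with one pointing at system-packaged libraries.

STATICFILES_DIRS = [
    # (URL prefix under /static, filesystem directory)
    ('horizon/lib', '/opt/horizon/bower_components'),
]

# settings.py would then contain, per the proposal:
#   from static_settings import STATICFILES_DIRS
```

This is also a plausible answer to Richard's question: a developer never edits the file at all - the default mapping points at Bower's install directory, so `bower install` alone is enough; only a packager swaps the module out.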
> > Thanks > > Sridar > > On 12/19/14, 2:23 PM, "Mathieu Gagné" wrote: > > On 2014-12-19 5:16 PM, Paul Michali (pcm) wrote: > > > > > > This has a FirewallDriver and NoopFirewallDriver. Should this be moved > > > into the neutron_fwaas repo? > > > > > > > > > AFAIK, FirewallDriver is used to implement SecurityGroup: > > > > See: > > - https://github.com/openstack/neutron/blob/master/neutron/agent/firewall.py#L26-L29 > > - https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L45 > > - https://github.com/openstack/neutron/blob/master/neutron/plugins/hyperv/agent/security_groups_driver.py#L25 > > > > This class does not appear to be used by neutron-fwaas > > > > -- > > Mathieu > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dannchoi at cisco.com Sat Dec 20 22:32:14 2014 From: dannchoi at cisco.com (Danny Choi (dannchoi)) Date: Sat, 20 Dec 2014 22:32:14 +0000 Subject: [openstack-dev] [qa] Fail to launch VM due to maximum number of ports exceeded Message-ID: Hi, The default quota for port is 50.
+----------------------------------+--------------------+---------+ localadmin at qa4:~/devstack$ neutron quota-show --tenant-id 1b2e5efaeeeb46f2922849b483f09ec1 +---------------------+-------+ | Field | Value | +---------------------+-------+ | floatingip | 50 | | network | 10 | | port | 50 | <<<<< 50 | router | 10 | | security_group | 10 | | security_group_rule | 100 | | subnet | 10 | +---------------------+-------+ Total number of ports used so far is 40. localadmin at qa4:~/devstack$ nova list +--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------------------------------+ | 595940bd-3fb1-4ad3-8cc0-29329b464471 | VM-1 | ACTIVE | - | Running | private_net30=30.0.0.44 | | 192ce36d-bc76-427a-a374-1f8e8933938f | VM-2 | ACTIVE | - | Running | private_net30=30.0.0.45 | | 10ad850e-ed9d-42d9-8743-b8eda4107edc | cirros--10ad850e-ed9d-42d9-8743-b8eda4107edc | ACTIVE | - | Running | private_net20=20.0.0.38; private=10.0.0.52 | | 18209b40-09e7-4718-b04f-40a01a8e5993 | cirros--18209b40-09e7-4718-b04f-40a01a8e5993 | ACTIVE | - | Running | private_net20=20.0.0.40; private=10.0.0.54 | | 1ededa1e-c820-4915-adf2-4be8eedaf012 | cirros--1ededa1e-c820-4915-adf2-4be8eedaf012 | ACTIVE | - | Running | private_net20=20.0.0.41; private=10.0.0.55 | | 3688262e-d00f-4263-91a7-785c40f4ae0f | cirros--3688262e-d00f-4263-91a7-785c40f4ae0f | ACTIVE | - | Running | private_net20=20.0.0.34; private=10.0.0.49 | | 4620663f-e6e0-4af2-84c0-6108279cbbed | cirros--4620663f-e6e0-4af2-84c0-6108279cbbed | ACTIVE | - | Running | private_net20=20.0.0.37; private=10.0.0.51 | | 8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | 
cirros--8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | ACTIVE | - | Running | private_net20=20.0.0.39; private=10.0.0.53 | | a228f33b-0388-464e-af49-b55af9601f56 | cirros--a228f33b-0388-464e-af49-b55af9601f56 | ACTIVE | - | Running | private_net20=20.0.0.42; private=10.0.0.56 | | def5a255-0c9d-4df0-af02-3944bf5af2db | cirros--def5a255-0c9d-4df0-af02-3944bf5af2db | ACTIVE | - | Running | private_net20=20.0.0.36; private=10.0.0.50 | | e1470813-bf4c-4989-9a11-62da47a5c4b4 | cirros--e1470813-bf4c-4989-9a11-62da47a5c4b4 | ACTIVE | - | Running | private_net20=20.0.0.33; private=10.0.0.48 | | f63390fa-2169-45c0-bb02-e42633a08b8f | cirros--f63390fa-2169-45c0-bb02-e42633a08b8f | ACTIVE | - | Running | private_net20=20.0.0.35; private=10.0.0.47 | | 2c34956d-4bf9-45e5-a9de-84d3095ee719 | vm--2c34956d-4bf9-45e5-a9de-84d3095ee719 | ACTIVE | - | Running | private_net30=30.0.0.39; private_net50=50.0.0.29; private_net40=40.0.0.29 | | 680c55f5-527b-49e3-847c-7794e1f8e7a8 | vm--680c55f5-527b-49e3-847c-7794e1f8e7a8 | ACTIVE | - | Running | private_net30=30.0.0.41; private_net50=50.0.0.30; private_net40=40.0.0.31 | | ade4c14b-baf7-4e57-948e-095689f73ce3 | vm--ade4c14b-baf7-4e57-948e-095689f73ce3 | ACTIVE | - | Running | private_net30=30.0.0.43; private_net50=50.0.0.32; private_net40=40.0.0.33 | | c91e426a-ed68-4659-89f6-df6d1154bb16 | vm--c91e426a-ed68-4659-89f6-df6d1154bb16 | ACTIVE | - | Running | private_net30=30.0.0.42; private_net50=50.0.0.33; private_net40=40.0.0.32 | | cedd9984-79f0-46b3-897d-b301cfa74a1a | vm--cedd9984-79f0-46b3-897d-b301cfa74a1a | ACTIVE | - | Running | private_net30=30.0.0.40; private_net50=50.0.0.31; private_net40=40.0.0.30 | | ec83e53f-556f-4e66-ab85-15a9e1ba9d28 | vm--ec83e53f-556f-4e66-ab85-15a9e1ba9d28 | ACTIVE | - | Running | private_net30=30.0.0.38; private_net50=50.0.0.28; private_net40=40.0.0.28 | +--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------------------------------+
When attempting to launch another VM, I hit the max number of ports exceeded error: localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=6bb3cef1-52ea-4dcb-9962-95c5c39b03cb VM-3 ERROR (Forbidden): Maximum number of ports exceeded (HTTP 403) (Request-ID: req-4b0d0d7b-25fa-468b-8d2b-5aa48bb58853) I expected this to be successful since the number of ports used is only 40, below the limit of 50. Any idea? Thanks, Danny -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Sat Dec 20 23:56:11 2014 From: tpb at dyncloud.net (Tom Barron) Date: Sat, 20 Dec 2014 18:56:11 -0500 Subject: [openstack-dev] [cinder] ratio: created to attached Message-ID: <54960C9B.3000705@dyncloud.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Does anyone have real world experience, even data, to speak to the question: in an OpenStack cloud, what is the likely ratio of (created) cinder volumes to attached cinder volumes? Thanks, Tom Barron -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBAgAGBQJUlgybAAoJEGeKBzAeUxEHqKwIAJjL5TCP7s+Ev8RNr+5bWARF zy3I216qejKdlM+a9Vxkl6ZWHMklWEhpMmQiUDMvEitRSlHpIHyhh1RfZbl4W9Fe GVXn04sXIuoNPgbFkkPIwE/45CJC1kGIBDub/pr9PmNv9mzAf3asLCHje8n3voWh d30If5SlPiaVoc0QNrq0paK7Yl1hh5jLa2zeV4qu4teRts/GjySJI7bR0k/TW5n4 e2EKxf9MhbxzjQ6QsgvWzxmryVIKRSY9z8Eg/qt7AfXF4Kx++MNo8VbX3AuOu1XV cnHlmuGqVq71uMjWXCeqK8HyAP8nkn2cKnJXhRYli6qSwf9LxzjC+kMLn364IX4= =AZ0i -----END PGP SIGNATURE----- From sumitnaiksatam at gmail.com Sun Dec 21 03:13:41 2014 From: sumitnaiksatam at gmail.com (Sumit Naiksatam) Date: Sat, 20 Dec 2014 19:13:41 -0800 Subject: [openstack-dev] [neutron][FWaaS] No weekly IRC meetings on Dec 24th and 31st Message-ID: Hi, We will skip the meetings for the next two weeks since most team members are not available to meet. Please continue to keep the discussions going over the mailing lists and the IRC channel.
Check back on the wiki page for the next meeting and agenda [1]. Thanks, ~Sumit. [1] https://wiki.openstack.org/wiki/Meetings/FWaaS From tnurlygayanov at mirantis.com Sun Dec 21 04:02:05 2014 From: tnurlygayanov at mirantis.com (Timur Nurlygayanov) Date: Sun, 21 Dec 2014 07:02:05 +0300 Subject: [openstack-dev] [qa] Fail to launch VM due to maximum number of ports exceeded In-Reply-To: References: Message-ID: Hi Danny, what about the global ports count and quotas? On Sun, Dec 21, 2014 at 1:32 AM, Danny Choi (dannchoi) wrote: > Hi, > > The default quota for port is 50. > > +----------------------------------+--------------------+---------+ > > localadmin at qa4:~/devstack$ neutron quota-show --tenant-id > 1b2e5efaeeeb46f2922849b483f09ec1 > > +---------------------+-------+ > > | Field | Value | > > +---------------------+-------+ > > | floatingip | 50 | > > | network | 10 | > > | port | 50 | <<<<< 50 > > | router | 10 | > > | security_group | 10 | > > | security_group_rule | 100 | > > | subnet | 10 | > > +---------------------+-------+ > > Total number of ports used so far is 40. 
> > localadmin at qa4:~/devstack$ nova list > > > +--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------------------------------+ > > | ID | Name > | Status | Task State | Power State | Networks > | > > > +--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------------------------------+ > > | 595940bd-3fb1-4ad3-8cc0-29329b464471 | VM-1 > | ACTIVE | - | Running | private_net30=30.0.0.44 > | > > | 192ce36d-bc76-427a-a374-1f8e8933938f | VM-2 > | ACTIVE | - | Running | private_net30=30.0.0.45 > | > > | 10ad850e-ed9d-42d9-8743-b8eda4107edc | > cirros--10ad850e-ed9d-42d9-8743-b8eda4107edc | ACTIVE | - | > Running | private_net20=20.0.0.38; private=10.0.0.52 > | > > | 18209b40-09e7-4718-b04f-40a01a8e5993 | > cirros--18209b40-09e7-4718-b04f-40a01a8e5993 | ACTIVE | - | > Running | private_net20=20.0.0.40; private=10.0.0.54 > | > > | 1ededa1e-c820-4915-adf2-4be8eedaf012 | > cirros--1ededa1e-c820-4915-adf2-4be8eedaf012 | ACTIVE | - | > Running | private_net20=20.0.0.41; private=10.0.0.55 > | > > | 3688262e-d00f-4263-91a7-785c40f4ae0f | > cirros--3688262e-d00f-4263-91a7-785c40f4ae0f | ACTIVE | - | > Running | private_net20=20.0.0.34; private=10.0.0.49 > | > > | 4620663f-e6e0-4af2-84c0-6108279cbbed | > cirros--4620663f-e6e0-4af2-84c0-6108279cbbed | ACTIVE | - | > Running | private_net20=20.0.0.37; private=10.0.0.51 > | > > | 8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | > cirros--8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | ACTIVE | - | > Running | private_net20=20.0.0.39; private=10.0.0.53 > | > > | a228f33b-0388-464e-af49-b55af9601f56 | > cirros--a228f33b-0388-464e-af49-b55af9601f56 | ACTIVE | - | > Running | private_net20=20.0.0.42; private=10.0.0.56 > | > > | def5a255-0c9d-4df0-af02-3944bf5af2db | > cirros--def5a255-0c9d-4df0-af02-3944bf5af2db | ACTIVE | 
- | > Running | private_net20=20.0.0.36; private=10.0.0.50 > | > > | e1470813-bf4c-4989-9a11-62da47a5c4b4 | > cirros--e1470813-bf4c-4989-9a11-62da47a5c4b4 | ACTIVE | - | > Running | private_net20=20.0.0.33; private=10.0.0.48 > | > > | f63390fa-2169-45c0-bb02-e42633a08b8f | > cirros--f63390fa-2169-45c0-bb02-e42633a08b8f | ACTIVE | - | > Running | private_net20=20.0.0.35; private=10.0.0.47 > | > > | 2c34956d-4bf9-45e5-a9de-84d3095ee719 | > vm--2c34956d-4bf9-45e5-a9de-84d3095ee719 | ACTIVE | - | > Running | private_net30=30.0.0.39; private_net50=50.0.0.29; > private_net40=40.0.0.29 | > > | 680c55f5-527b-49e3-847c-7794e1f8e7a8 | > vm--680c55f5-527b-49e3-847c-7794e1f8e7a8 | ACTIVE | - | > Running | private_net30=30.0.0.41; private_net50=50.0.0.30; > private_net40=40.0.0.31 | > > | ade4c14b-baf7-4e57-948e-095689f73ce3 | > vm--ade4c14b-baf7-4e57-948e-095689f73ce3 | ACTIVE | - | > Running | private_net30=30.0.0.43; private_net50=50.0.0.32; > private_net40=40.0.0.33 | > > | c91e426a-ed68-4659-89f6-df6d1154bb16 | > vm--c91e426a-ed68-4659-89f6-df6d1154bb16 | ACTIVE | - | > Running | private_net30=30.0.0.42; private_net50=50.0.0.33; > private_net40=40.0.0.32 | > > | cedd9984-79f0-46b3-897d-b301cfa74a1a | > vm--cedd9984-79f0-46b3-897d-b301cfa74a1a | ACTIVE | - | > Running | private_net30=30.0.0.40; private_net50=50.0.0.31; > private_net40=40.0.0.30 | > > | ec83e53f-556f-4e66-ab85-15a9e1ba9d28 | > vm--ec83e53f-556f-4e66-ab85-15a9e1ba9d28 | ACTIVE | - | > Running | private_net30=30.0.0.38; private_net50=50.0.0.28; > private_net40=40.0.0.28 | > > > +--------------------------------------+----------------------------------------------+--------+------------+-------------+?????????????????????????????????????+ > > > When attempt to launch another VM, I hit the max number of ports > exceeded error: > > > localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec > --flavor 1 --nic net-id=6bb3cef1-52ea-4dcb-9962-95c5c39b03cb VM-3 > > *ERROR (Forbidden): Maximum number of 
ports exceeded (HTTP 403)* > (Request-ID: req-4b0d0d7b-25fa-468b-8d2b-5aa48bb58853) > > > I expected this to be successful since the number of ports used is only > 40, below the limit of 50. > > > Any idea? > > > Thanks, > > Danny > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Timur, Senior QA Engineer OpenStack Projects Mirantis Inc My OpenStack summit schedule: http://kilodesignsummit.sched.org/timur.nurlygayanov#.VFSrD8mhhOI -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkolesni at redhat.com Sun Dec 21 05:47:20 2014 From: mkolesni at redhat.com (Mike Kolesnik) Date: Sun, 21 Dec 2014 00:47:20 -0500 (EST) Subject: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution In-Reply-To: References: <1921367349.598949.1418923701155.JavaMail.zimbra@redhat.com> Message-ID: <222924753.117715.1419140840856.JavaMail.zimbra@redhat.com> Hi Mathieu, Comments inline Regards, Mike ----- Original Message ----- > Mike, > > I'm not even sure that your solution works without being able to bind > a router HA port to several hosts. > What's happening currently is that you : > > 1.create the router on two l3agent. > 2. those l3agent trigger the sync_router() on the l3plugin. > 3. l3plugin.sync_routers() will trigger l2plugin.update_port(host=l3agent). > 4. ML2 will bind the port to the host mentioned in the last update_port(). > > From a l2pop perspective, this will result in creating only one tunnel > to the host lastly specified. > I can't find any code that forces that only the master router binds > its router port. So we don't even know if the host which binds the > router port is hosting the master router or the slave one, and so if > l2pop is creating the tunnel to the master or to the slave. > > Can you confirm that the above sequence is correct? 
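Following up on Timur's question about the global port count: one likely explanation is that the port quota counts every Neutron port owned by the tenant, not just the VM NICs visible in `nova list` - DHCP ports and router interface ports count against it too, which can push the real total past 50 even when only 40 VM ports are listed. A quick way to see the breakdown; the sample records below are illustrative guesses (6 DHCP ports and 5 router interfaces would fit a deployment with several subnets and routers), and in practice the list would come from `neutron port-list` or the API:

```python
from collections import Counter

# Illustrative port records; a real list would come from the Neutron API,
# where each port dict carries a 'device_owner' field like those below.
ports = ([{'device_owner': 'compute:nova'}] * 40 +
         [{'device_owner': 'network:dhcp'}] * 6 +
         [{'device_owner': 'network:router_interface'}] * 5)

counts = Counter(p['device_owner'] for p in ports)
for owner, n in sorted(counts.items()):
    print(owner, n)
print('total ports counted against the quota:', sum(counts.values()))  # 51
```

With these assumed numbers the tenant is at 51 ports despite only 40 VM NICs, so the next `nova boot` would be rejected exactly as in Danny's log.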
or am I missing > something? Are you referring to the alternative solution? In that case it seems you're correct, so there would need to be awareness of the master router at some level there as well. I can't say for sure, as I've been thinking about the proposed solution with no FDBs, so there would be some issues with the alternative that need to be ironed out. > > Without the capacity to bind a port to several hosts, l2pop won't be > able to create tunnel correctly, that's the reason why I was saying > that a prerequisite for a smart solution would be to first fix the bug > : > https://bugs.launchpad.net/neutron/+bug/1367391 > > DVR Had the same issue. Their workaround was to create a new > port_binding tables, that manages the capacity for one DVR port to be > bound to several host. > As mentioned in the bug 1367391, this adding a technical debt in ML2, > which has to be tackle down in priority from my POV. I agree that this would simplify work, but even without this bug fixed we can achieve either solution. We already have knowledge of the agents hosting a router, so this is completely doable without waiting for a fix for bug 1367391. Also, from my understanding, bug 1367391 is targeted at DVR only, not at HA router ports. > > > On Thu, Dec 18, 2014 at 6:28 PM, Mike Kolesnik wrote: > > Hi Mathieu, > > > > Thanks for the quick reply, some comments inline.. > > > > Regards, > > Mike > > > > ----- Original Message ----- > >> Hi mike, > >> > >> thanks for working on this bug : > >> > >> On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton wrote: > >> > > >> > > >> > On 12/18/14, 2:06 PM, "Mike Kolesnik" wrote: > >> > > >> >>Hi Neutron community members. > >> >> > >> >>I wanted to query the community about a proposal of how to fix HA > >> >>routers > >> >>not > >> >>working with L2Population (bug 1365476[1]). > >> >>This bug is important to fix especially if we want to have HA routers > >> >>and > >> >>DVR > >> >>routers working together.
> >> >> > >> >>[1] https://bugs.launchpad.net/neutron/+bug/1365476 > >> >> > >> >>What's happening now? > >> >>* HA routers use distributed ports, i.e. the port with the same IP & MAC > >> >> details is applied on all nodes where an L3 agent is hosting this > >> >>router. > >> >>* Currently, the port details have a binding pointing to an arbitrary > >> >>node > >> >> and this is not updated. > >> >>* L2pop takes this "potentially stale" information and uses it to > >> >>create: > >> >> 1. A tunnel to the node. > >> >> 2. An FDB entry that directs traffic for that port to that node. > >> >> 3. If ARP responder is on, ARP requests will not traverse the network. > >> >>* Problem is, the master router wouldn't necessarily be running on the > >> >> reported agent. > >> >> This means that traffic would not reach the master node but some > >> >>arbitrary > >> >> node where the router master might be running, but might be in another > >> >> state (standby, fail). > >> >> > >> >>What is proposed? > >> >>Basically the idea is not to do L2Pop for HA router ports that reside on > >> >>the > >> >>tenant network. > >> >>Instead, we would create a tunnel to each node hosting the HA router so > >> >>that > >> >>the normal learning switch functionality would take care of switching > >> >>the > >> >>traffic to the master router. > >> > > >> > In Neutron we just ensure that the MAC address is unique per network. > >> > Could a duplicate MAC address cause problems here? > >> > >> gary, AFAIU, from a Neutron POV, there is only one port, which is the > >> router Port, which is plugged twice. One time per port. > >> I think that the capacity to bind a port to several host is also a > >> prerequisite for a clean solution here. This will be provided by > >> patches to this bug : > >> https://bugs.launchpad.net/neutron/+bug/1367391 > >> > >> > >> >>This way no matter where the master router is currently running, the > >> >>data > >> >>plane would know how to forward traffic to it. 
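[Editor's sketch: the decision the quoted proposal describes can be written as a small function. The names (`fdb_entries`, `HA_PORT`, the dict shapes) are invented for illustration and are not the actual Neutron l2pop driver API: a normal port gets the usual unicast FDB/ARP-responder entry toward its single bound host, while an HA router port only gets flood tunnels to every node hosting the router, leaving the learning switch to discover the master.]

```python
# Illustrative only -- data shapes and the HA_PORT marker are assumptions,
# not real Neutron code.
HA_PORT = 'network:router_ha_interface'  # hypothetical device_owner marker

def fdb_entries(port, hosting_agents):
    if port['device_owner'] == HA_PORT:
        # Proposed behaviour: a tunnel (flood entry) to each hosting node,
        # no unicast FDB or ARP-responder entry -- MAC learning will find
        # which tunnel leads to the master router.
        return {agent: {'flood': True, 'unicast': None}
                for agent in hosting_agents}
    # Plain old l2pop: one binding host, unicast FDB + ARP entry to it.
    host = port['binding_host']
    return {host: {'flood': True, 'unicast': (port['mac'], port['ip'])}}

normal = {'device_owner': 'compute:nova', 'binding_host': 'node-1',
          'mac': 'fa:16:3e:00:00:01', 'ip': '10.0.0.5'}
ha = {'device_owner': HA_PORT,
      'mac': 'fa:16:3e:00:00:02', 'ip': '10.0.0.1'}

print(fdb_entries(normal, []))
print(fdb_entries(ha, ['node-2', 'node-3']))
```

Note how the HA branch never consults the (potentially stale) port binding, which is exactly what makes the sketch immune to the bug being discussed.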
> >> >>This solution requires changes on the controller only. > >> >> > >> >>What's to gain? > >> >>* Data plane only solution, independent of the control plane. > >> >>* Lowest failover time (same as HA routers today). > >> >>* High backport potential: > >> >> * No APIs changed/added. > >> >> * No configuration changes. > >> >> * No DB changes. > >> >> * Changes localized to a single file and limited in scope. > >> >> > >> >>What's the alternative? > >> >>An alternative solution would be to have the controller update the port > >> >>binding > >> >>on the single port so that the plain old L2Pop happens and notifies > >> >>about > >> >>the > >> >>location of the master router. > >> >>This basically negates all the benefits of the proposed solution, but is > >> >>wider. > >> >>This solution depends on the report-ha-router-master spec which is > >> >>currently in > >> >>the implementation phase. > >> >> > >> >>It's important to note that these two solutions don't collide and could > >> >>be done > >> >>independently. The one I'm proposing just makes more sense from an HA > >> >>viewpoint > >> >>because of it's benefits which fit the HA methodology of being fast & > >> >>having as > >> >>little outside dependency as possible. > >> >>It could be done as an initial solution which solves the bug for > >> >>mechanism > >> >>drivers that support normal learning switch (OVS), and later kept as an > >> >>optimization to the more general, controller based, solution which will > >> >>solve > >> >>the issue for any mechanism driver working with L2Pop (Linux Bridge, > >> >>possibly > >> >>others). > >> >> > >> >>Would love to hear your thoughts on the subject. > >> > >> You will have to clearly update the doc to mention that deployment > >> with Linuxbridge+l2pop are not compatible with HA. > > > > Yes this should be added and this is already the situation right now. 
> > However if anyone would like to work on a LB fix (the general one or some > > specific one) I would gladly help with reviewing it. > > > >> > >> Moreover, this solution is downgrading the l2pop solution, by > >> disabling the ARP-responder when VMs want to talk to a HA router. > >> This means that ARP requests will be duplicated to every overlay > >> tunnel to feed the OVS Mac learning table. > >> This is something that we were trying to avoid with l2pop. But may be > >> this is acceptable. > > > > Yes basically you're correct, however this would be only limited to those > > tunnels that connect to the nodes where the HA router is hosted, so we > > would still limit the amount of traffic that is sent across the underlay. > > > > Also bear in mind that ARP is actually good (at least in OVS case) since > > it helps the VM locate on which tunnel the master is, so once it receives > > the ARP response it records a flow that directs the traffic to the correct > > tunnel, so we just get hit by the one ARP broadcast but it's sort of a > > necessary evil in order to locate the master.. > > > >> > >> I know that ofagent is also using l2pop, I would like to know if > >> ofagent deployment will be compatible with the workaround that you are > >> proposing. > > > > I would like to know that too, hopefully someone from OFagent can shed > > some light. > > > >> > >> My concern is that, with DVR, there are at least two major features > >> that are not compatible with Linuxbridge. > >> Linuxbridge is not running in the gate. I don't know if anybody is > >> running a 3rd party testing with Linuxbridge deployments. If anybody > >> does, it would be great to have it voting on gerrit! > >> > >> But I really wonder what is the future of linuxbridge compatibility? > >> should we keep on improving OVS solution without taking into account > >> the linuxbridge implementation? > > > > I don't know actually, but my capability is to fix it for OVS the best > > way possible. 
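[Editor's sketch: the ARP/MAC-learning behaviour discussed above -- flood the first ARP broadcast to all tunnels, pin traffic to the master's tunnel once a reply is seen, re-learn from a gratuitous ARP after failover -- can be modelled with a toy learning switch. Pure illustration; no Neutron or OVS code is involved.]

```python
class LearningSwitch:
    def __init__(self, tunnels):
        self.tunnels = tunnels   # tunnels to the nodes hosting the HA router
        self.mac_table = {}      # mac -> tunnel, learned from ingress frames

    def ingress(self, src_mac, tunnel):
        # Any frame from the router (ARP reply, GARP, data) teaches us
        # which tunnel the (current) master is behind.
        self.mac_table[src_mac] = tunnel

    def egress(self, dst_mac):
        # Known MAC -> unicast out one tunnel; unknown -> flood all tunnels.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return list(self.tunnels)

sw = LearningSwitch(['tun-node1', 'tun-node2'])
router_mac = 'fa:16:3e:aa:bb:cc'

flooded = sw.egress(router_mac)       # ARP request: flooded to both nodes
sw.ingress(router_mac, 'tun-node1')   # reply arrives from the master
unicast = sw.egress(router_mac)       # now pinned to the master's tunnel
sw.ingress(router_mac, 'tun-node2')   # failover: GARP from the new master
after = sw.egress(router_mac)         # traffic follows the new master

print(flooded, unicast, after)
```

This is why the one flooded ARP is the "necessary evil" mentioned above: it is the event that seeds the MAC table, and the GARP is what repairs it after a failover.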
> > As I said the situation for LB won't become worse than it already is, > > legacy routers would till function as always.. This fix also will not > > block fixing LB in any other way since it can be easily adjusted (if > > necessary) to work only for supporting mechanisms (OVS AFAIK). > > > > Also if anyone is willing to pick up the glove and implement > > the general controller based fix, or something more focused on LB I will > > happily help review what I can. > > > >> > >> Regards, > >> > >> Mathieu > >> > >> >> > >> >>Regards, > >> >>Mike > >> >> > >> >>_______________________________________________ > >> >>OpenStack-dev mailing list > >> >>OpenStack-dev at lists.openstack.org > >> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > >> > > >> > _______________________________________________ > >> > OpenStack-dev mailing list > >> > OpenStack-dev at lists.openstack.org > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mkolesni at redhat.com Sun Dec 21 06:08:44 2014 From: mkolesni at redhat.com (Mike Kolesnik) Date: Sun, 21 Dec 2014 01:08:44 -0500 (EST) Subject: [openstack-dev] Request for comments for a possible solution In-Reply-To: <289BD3B977EE7247AB06499D1B4AF1C426A988D8@G5W2725.americas.hpqcorp.net> References: <1921367349.598949.1418923701155.JavaMail.zimbra@redhat.com> 
<289BD3B977EE7247AB06499D1B4AF1C426A988D8@G5W2725.americas.hpqcorp.net> Message-ID: <331109643.118064.1419142124864.JavaMail.zimbra@redhat.com> Hi Vivek, Replies inline. Regards, Mike ----- Original Message ----- > Hi Mike, > > Few clarifications inline [Vivek] > > -----Original Message----- > From: Mike Kolesnik [mailto:mkolesni at redhat.com] > Sent: Thursday, December 18, 2014 10:58 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for > comments for a possible solution > > Hi Mathieu, > > Thanks for the quick reply, some comments inline.. > > Regards, > Mike > > ----- Original Message ----- > > Hi mike, > > > > thanks for working on this bug : > > > > On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton wrote: > > > > > > > > > On 12/18/14, 2:06 PM, "Mike Kolesnik" wrote: > > > > > >>Hi Neutron community members. > > >> > > >>I wanted to query the community about a proposal of how to fix HA > > >>routers not working with L2Population (bug 1365476[1]). > > >>This bug is important to fix especially if we want to have HA > > >>routers and DVR routers working together. > > >> > > >>[1] https://bugs.launchpad.net/neutron/+bug/1365476 > > >> > > >>What's happening now? > > >>* HA routers use distributed ports, i.e. the port with the same IP & > > >>MAC > > >> details is applied on all nodes where an L3 agent is hosting this > > >>router. > > >>* Currently, the port details have a binding pointing to an > > >>arbitrary node > > >> and this is not updated. > > >>* L2pop takes this "potentially stale" information and uses it to create: > > >> 1. A tunnel to the node. > > >> 2. An FDB entry that directs traffic for that port to that node. > > >> 3. If ARP responder is on, ARP requests will not traverse the network. > > >>* Problem is, the master router wouldn't necessarily be running on > > >>the > > >> reported agent. 
> > >> This means that traffic would not reach the master node but some > > >>arbitrary > > >> node where the router master might be running, but might be in > > >>another > > >> state (standby, fail). > > >> > > >>What is proposed? > > >>Basically the idea is not to do L2Pop for HA router ports that > > >>reside on the tenant network. > > >>Instead, we would create a tunnel to each node hosting the HA router > > >>so that the normal learning switch functionality would take care of > > >>switching the traffic to the master router. > > > > > > In Neutron we just ensure that the MAC address is unique per network. > > > Could a duplicate MAC address cause problems here? > > > > gary, AFAIU, from a Neutron POV, there is only one port, which is the > > router Port, which is plugged twice. One time per port. > > I think that the capacity to bind a port to several host is also a > > prerequisite for a clean solution here. This will be provided by > > patches to this bug : > > https://bugs.launchpad.net/neutron/+bug/1367391 > > > > > > >>This way no matter where the master router is currently running, the > > >>data plane would know how to forward traffic to it. > > >>This solution requires changes on the controller only. > > >> > > >>What's to gain? > > >>* Data plane only solution, independent of the control plane. > > >>* Lowest failover time (same as HA routers today). > > >>* High backport potential: > > >> * No APIs changed/added. > > >> * No configuration changes. > > >> * No DB changes. > > >> * Changes localized to a single file and limited in scope. > > >> > > >>What's the alternative? > > >>An alternative solution would be to have the controller update the > > >>port binding on the single port so that the plain old L2Pop happens > > >>and notifies about the location of the master router. > > >>This basically negates all the benefits of the proposed solution, > > >>but is wider. 
> > >>This solution depends on the report-ha-router-master spec which is > > >>currently in the implementation phase. > > >> > > >>It's important to note that these two solutions don't collide and > > >>could be done independently. The one I'm proposing just makes more > > >>sense from an HA viewpoint because of it's benefits which fit the HA > > >>methodology of being fast & having as little outside dependency as > > >>possible. > > >>It could be done as an initial solution which solves the bug for > > >>mechanism drivers that support normal learning switch (OVS), and > > >>later kept as an optimization to the more general, controller based, > > >>solution which will solve the issue for any mechanism driver working > > >>with L2Pop (Linux Bridge, possibly others). > > >> > > >>Would love to hear your thoughts on the subject. > > > > You will have to clearly update the doc to mention that deployment > > with Linuxbridge+l2pop are not compatible with HA. > > Yes this should be added and this is already the situation right now. > However if anyone would like to work on a LB fix (the general one or some > specific one) I would gladly help with reviewing it. > > > > > Moreover, this solution is downgrading the l2pop solution, by > > disabling the ARP-responder when VMs want to talk to a HA router. > > This means that ARP requests will be duplicated to every overlay > > tunnel to feed the OVS Mac learning table. > > This is something that we were trying to avoid with l2pop. But may be > > this is acceptable. > > Yes basically you're correct, however this would be only limited to those > tunnels that connect to the nodes where the HA router is hosted, so we would > still limit the amount of traffic that is sent across the underlay. 
> > Also bear in mind that ARP is actually good (at least in OVS case) since it > helps the VM locate on which tunnel the master is, so once it receives the > ARP response it records a flow that directs the traffic to the correct > tunnel, so we just get hit by the one ARP broadcast but it's sort of a > necessary evil in order to locate the master.. > > [Vivek] When the failover happens, the VMs would be actually sending traffic > to the old master node. > They won't be getting any response back. > > At this time does the VMs redo an ARP request for the HA Router? > And that again sets up the learned rules correctly again in br-tun, so that > the routed traffic > from VM continues on to the new master.. As Mathieu said the new master router sends a gratuitous ARP so all switches on the path are updated to its new presence. From an OVS perspective this means new/updated flows to send traffic to the router through the tunnel which the ARP came from. Also bear in mind that any kind of communication originating from the master router will cause the switch to learn its location. > > > > > I know that ofagent is also using l2pop, I would like to know if > > ofagent deployment will be compatible with the workaround that you are > > proposing. > > I would like to know that too, hopefully someone from OFagent can shed some > light. > > > > > My concern is that, with DVR, there are at least two major features > > that are not compatible with Linuxbridge. > > Linuxbridge is not running in the gate. I don't know if anybody is > > running a 3rd party testing with Linuxbridge deployments. If anybody > > does, it would be great to have it voting on gerrit! > > > > But I really wonder what is the future of linuxbridge compatibility? > > should we keep on improving OVS solution without taking into account > > the linuxbridge implementation? > > I don't know actually, but my capability is to fix it for OVS the best way > possible.
> As I said the situation for LB won't become worse than it already is, legacy > routers would till function as always.. This fix also will not block fixing > LB in any other way since it can be easily adjusted (if > necessary) to work only for supporting mechanisms (OVS AFAIK). > > Also if anyone is willing to pick up the glove and implement the general > controller based fix, or something more focused on LB I will happily help > review what I can. > > [Vivek] Also by this proposal, will the HA Router be able to co-operate with > DVR which actually mandates L2-Pop? Yes the plan is to enable HA routers and DVR to be deployed together with L2Pop on. This is achievable by both solutions. > > -- > Thanks, > > Vivek > > > > > Regards, > > > > Mathieu > > > > >> > > >>Regards, > > >>Mike > > >> > > >>_______________________________________________ > > >>OpenStack-dev mailing list > > >>OpenStack-dev at lists.openstack.org > > >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From EvgenyF at Radware.com Sun Dec 21 10:14:25 2014 From: EvgenyF at Radware.com (Evgeny Fedoruk) Date: Sun, 21 Dec 2014 10:14:25 +0000 Subject: [openstack-dev] [neutron][lbaas] Canceling lbaas meeting 
12/16 In-Reply-To: References: Message-ID: Hi Doug, How are you? I have a question regarding https://review.openstack.org/#/c/141247/ change set Extension changes are not part of this change. I also see the whole extension mechanism is out of the new repository. I may be missed something. Are we replacing the mechanism with something else? Or we will add it separately in other change set? Thanks, Evg -----Original Message----- From: Doug Wiegley [mailto:dougw at a10networks.com] Sent: Sunday, December 14, 2014 7:46 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [neutron][lbaas] Canceling lbaas meeting 12/16 Unless someone has an urgent agenda item, and due to the mid-cycle for Octavia, which has a bunch of overlap with the lbaas team, let?s cancel this week. If you have post-split lbaas v2 questions, please find me in #openstack-lbaas. The only announcement was going to be: If you are waiting to re-submit/submit lbaasv2 changes for the new repo, please monitor this review, or make your change dependent on it: https://review.openstack.org/#/c/141247/ Thanks, Doug _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From vivekanandan.narasimhan at hp.com Sun Dec 21 11:14:04 2014 From: vivekanandan.narasimhan at hp.com (Narasimhan, Vivekanandan) Date: Sun, 21 Dec 2014 11:14:04 +0000 Subject: [openstack-dev] Request for comments for a possible solution In-Reply-To: <222924753.117715.1419140840856.JavaMail.zimbra@redhat.com> References: <1921367349.598949.1418923701155.JavaMail.zimbra@redhat.com> <222924753.117715.1419140840856.JavaMail.zimbra@redhat.com> Message-ID: <289BD3B977EE7247AB06499D1B4AF1C426A99C01@G5W2725.americas.hpqcorp.net> Hi Mike, Just one comment [Vivek] -----Original Message----- From: Mike Kolesnik [mailto:mkolesni at redhat.com] Sent: Sunday, December 21, 2014 11:17 AM To: OpenStack 
Development Mailing List (not for usage questions) Cc: Robert Kukura Subject: Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution Hi Mathieu, Comments inline Regards, Mike ----- Original Message ----- > Mike, > > I'm not even sure that your solution works without being able to bind > a router HA port to several hosts. > What's happening currently is that you : > > 1.create the router on two l3agent. > 2. those l3agent trigger the sync_router() on the l3plugin. > 3. l3plugin.sync_routers() will trigger l2plugin.update_port(host=l3agent). > 4. ML2 will bind the port to the host mentioned in the last update_port(). > > From a l2pop perspective, this will result in creating only one tunnel > to the host lastly specified. > I can't find any code that forces that only the master router binds > its router port. So we don't even know if the host which binds the > router port is hosting the master router or the slave one, and so if > l2pop is creating the tunnel to the master or to the slave. > > Can you confirm that the above sequence is correct? or am I missing > something? Are you referring to the alternative solution? In that case it seems that you're correct so that there would need to be awareness of the master router at some level there as well. I can't say for sure as I've been thinking on the proposed solution with no FDBs so there would be some issues with the alternative that need to be ironed out. > > Without the capacity to bind a port to several hosts, l2pop won't be > able to create tunnel correctly, that's the reason why I was saying > that a prerequisite for a smart solution would be to first fix the bug > : > https://bugs.launchpad.net/neutron/+bug/1367391 > > DVR Had the same issue. Their workaround was to create a new > port_binding tables, that manages the capacity for one DVR port to be > bound to several host. 
> As mentioned in the bug 1367391, this adding a technical debt in ML2, > which has to be tackle down in priority from my POV. I agree that this would simplify work but even without this bug fixed we can achieve either solution. We have already knowledge of the agents hosting a router so this is completely doable without waiting for fix for bug 1367391. Also from my understanding the bug 1367391 is targeted at DVR only, not at HA router ports. [Vivek] Currently yes, but Bob's concept embraces all replicated ports and so HA router ports will play into it :) -- Thanks, Vivek > > > On Thu, Dec 18, 2014 at 6:28 PM, Mike Kolesnik wrote: > > Hi Mathieu, > > > > Thanks for the quick reply, some comments inline.. > > > > Regards, > > Mike > > > > ----- Original Message ----- > >> Hi mike, > >> > >> thanks for working on this bug : > >> > >> On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton wrote: > >> > > >> > > >> > On 12/18/14, 2:06 PM, "Mike Kolesnik" wrote: > >> > > >> >>Hi Neutron community members. > >> >> > >> >>I wanted to query the community about a proposal of how to fix HA > >> >>routers not working with L2Population (bug 1365476[1]). > >> >>This bug is important to fix especially if we want to have HA > >> >>routers and DVR routers working together. > >> >> > >> >>[1] https://bugs.launchpad.net/neutron/+bug/1365476 > >> >> > >> >>What's happening now? > >> >>* HA routers use distributed ports, i.e. the port with the same > >> >>IP & MAC > >> >> details is applied on all nodes where an L3 agent is hosting > >> >>this router. > >> >>* Currently, the port details have a binding pointing to an > >> >>arbitrary node > >> >> and this is not updated. > >> >>* L2pop takes this "potentially stale" information and uses it to > >> >>create: > >> >> 1. A tunnel to the node. > >> >> 2. An FDB entry that directs traffic for that port to that node. > >> >> 3. If ARP responder is on, ARP requests will not traverse the network. 
> >> >>* Problem is, the master router wouldn't necessarily be running > >> >>on the > >> >> reported agent. > >> >> This means that traffic would not reach the master node but > >> >>some arbitrary > >> >> node where the router master might be running, but might be in > >> >>another > >> >> state (standby, fail). > >> >> > >> >>What is proposed? > >> >>Basically the idea is not to do L2Pop for HA router ports that > >> >>reside on the tenant network. > >> >>Instead, we would create a tunnel to each node hosting the HA > >> >>router so that the normal learning switch functionality would > >> >>take care of switching the traffic to the master router. > >> > > >> > In Neutron we just ensure that the MAC address is unique per network. > >> > Could a duplicate MAC address cause problems here? > >> > >> gary, AFAIU, from a Neutron POV, there is only one port, which is > >> the router Port, which is plugged twice. One time per port. > >> I think that the capacity to bind a port to several host is also a > >> prerequisite for a clean solution here. This will be provided by > >> patches to this bug : > >> https://bugs.launchpad.net/neutron/+bug/1367391 > >> > >> > >> >>This way no matter where the master router is currently running, > >> >>the data plane would know how to forward traffic to it. > >> >>This solution requires changes on the controller only. > >> >> > >> >>What's to gain? > >> >>* Data plane only solution, independent of the control plane. > >> >>* Lowest failover time (same as HA routers today). > >> >>* High backport potential: > >> >> * No APIs changed/added. > >> >> * No configuration changes. > >> >> * No DB changes. > >> >> * Changes localized to a single file and limited in scope. > >> >> > >> >>What's the alternative? > >> >>An alternative solution would be to have the controller update > >> >>the port binding on the single port so that the plain old L2Pop > >> >>happens and notifies about the location of the master router. 
> >> >>This basically negates all the benefits of the proposed solution, > >> >>but is wider. > >> >>This solution depends on the report-ha-router-master spec which > >> >>is currently in the implementation phase. > >> >> > >> >>It's important to note that these two solutions don't collide and > >> >>could be done independently. The one I'm proposing just makes > >> >>more sense from an HA viewpoint because of it's benefits which > >> >>fit the HA methodology of being fast & having as little outside > >> >>dependency as possible. > >> >>It could be done as an initial solution which solves the bug for > >> >>mechanism drivers that support normal learning switch (OVS), and > >> >>later kept as an optimization to the more general, controller > >> >>based, solution which will solve the issue for any mechanism > >> >>driver working with L2Pop (Linux Bridge, possibly others). > >> >> > >> >>Would love to hear your thoughts on the subject. > >> > >> You will have to clearly update the doc to mention that deployment > >> with Linuxbridge+l2pop are not compatible with HA. > > > > Yes this should be added and this is already the situation right now. > > However if anyone would like to work on a LB fix (the general one or > > some specific one) I would gladly help with reviewing it. > > > >> > >> Moreover, this solution is downgrading the l2pop solution, by > >> disabling the ARP-responder when VMs want to talk to a HA router. > >> This means that ARP requests will be duplicated to every overlay > >> tunnel to feed the OVS Mac learning table. > >> This is something that we were trying to avoid with l2pop. But may > >> be this is acceptable. > > > > Yes basically you're correct, however this would be only limited to > > those tunnels that connect to the nodes where the HA router is > > hosted, so we would still limit the amount of traffic that is sent across the underlay. 
> > > > Also bear in mind that ARP is actually good (at least in OVS case) > > since it helps the VM locate on which tunnel the master is, so once > > it receives the ARP response it records a flow that directs the > > traffic to the correct tunnel, so we just get hit by the one ARP > > broadcast but it's sort of a necessary evil in order to locate the master.. > > > >> > >> I know that ofagent is also using l2pop, I would like to know if > >> ofagent deployment will be compatible with the workaround that you > >> are proposing. > > > > I would like to know that too, hopefully someone from OFagent can > > shed some light. > > > >> > >> My concern is that, with DVR, there are at least two major features > >> that are not compatible with Linuxbridge. > >> Linuxbridge is not running in the gate. I don't know if anybody is > >> running a 3rd party testing with Linuxbridge deployments. If > >> anybody does, it would be great to have it voting on gerrit! > >> > >> But I really wonder what is the future of linuxbridge compatibility? > >> should we keep on improving OVS solution without taking into > >> account the linuxbridge implementation? > > > > I don't know actually, but my capability is to fix it for OVS the > > best way possible. > > As I said the situation for LB won't become worse than it already > > is, legacy routers would till function as always.. This fix also > > will not block fixing LB in any other way since it can be easily > > adjusted (if > > necessary) to work only for supporting mechanisms (OVS AFAIK). > > > > Also if anyone is willing to pick up the glove and implement the > > general controller based fix, or something more focused on LB I will > > happily help review what I can. 
> > > >> Regards, > >> > >> Mathieu > >> > >> >> > >> >>Regards, > >> >>Mike > >> >> > >> >>_______________________________________________ > >> >>OpenStack-dev mailing list > >> >>OpenStack-dev at lists.openstack.org > >> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > >> > > >> > _______________________________________________ > >> > OpenStack-dev mailing list > >> > OpenStack-dev at lists.openstack.org > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zzelle at gmail.com Sun Dec 21 13:26:35 2014 From: zzelle at gmail.com (ZZelle) Date: Sun, 21 Dec 2014 14:26:35 +0100 Subject: [openstack-dev] [qa] Fail to launch VM due to maximum number of ports exceeded In-Reply-To: References: Message-ID: Hi Danny, Port quota includes compute ports (for vms) + network ports (for dhcps/routers). You seem to have 40 compute ports + 5 networks x 2 (1 port for the dhcp and 1 port for the router) ==> 50 ports. Cedric ZZelle at IRC On Sun, Dec 21, 2014 at 5:02 AM, Timur Nurlygayanov < tnurlygayanov at mirantis.com> wrote: > Hi Danny, > > what about the global ports count and quotas?
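[Editor's note: Cedric's accounting can be checked back-of-the-envelope against the nova list in the thread -- 2 VMs with 1 NIC, 10 cirros VMs with 2 NICs and 6 VMs with 3 NICs, plus one DHCP port and one router-interface port per network. The per-network overhead of 2 is taken from his reply; treat it as the assumption of this check, not a general rule.]

```python
quota = 50                        # neutron quota-show: port = 50
vm_ports = 2 * 1 + 10 * 2 + 6 * 3 # NICs on the 18 running VMs = 40
networks = 5                      # private, private_net20/30/40/50
overhead_per_net = 2              # one DHCP port + one router port
used = vm_ports + networks * overhead_per_net

# 40 VM ports + 10 infrastructure ports = 50 used, 0 left,
# so the next "nova boot" (which needs at least one port) gets HTTP 403.
print(vm_ports, used, quota - used)
```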
> > On Sun, Dec 21, 2014 at 1:32 AM, Danny Choi (dannchoi) > wrote: > >> Hi, >> >> The default quota for port is 50. >> >> +----------------------------------+--------------------+---------+ >> >> localadmin at qa4:~/devstack$ neutron quota-show --tenant-id >> 1b2e5efaeeeb46f2922849b483f09ec1 >> >> +---------------------+-------+ >> >> | Field | Value | >> >> +---------------------+-------+ >> >> | floatingip | 50 | >> >> | network | 10 | >> >> | port | 50 | <<<<< 50 >> >> | router | 10 | >> >> | security_group | 10 | >> >> | security_group_rule | 100 | >> >> | subnet | 10 | >> >> +---------------------+-------+ >> >> Total number of ports used so far is 40. >> >> localadmin at qa4:~/devstack$ nova list >> >> >> +--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------------------------------+ >> >> | ID | Name >> | Status | Task State | Power State | Networks >> | >> >> >> +--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------------------------------+ >> >> | 595940bd-3fb1-4ad3-8cc0-29329b464471 | VM-1 >> | ACTIVE | - | Running | private_net30=30.0.0.44 >> | >> >> | 192ce36d-bc76-427a-a374-1f8e8933938f | VM-2 >> | ACTIVE | - | Running | private_net30=30.0.0.45 >> | >> >> | 10ad850e-ed9d-42d9-8743-b8eda4107edc | >> cirros--10ad850e-ed9d-42d9-8743-b8eda4107edc | ACTIVE | - | >> Running | private_net20=20.0.0.38; private=10.0.0.52 >> | >> >> | 18209b40-09e7-4718-b04f-40a01a8e5993 | >> cirros--18209b40-09e7-4718-b04f-40a01a8e5993 | ACTIVE | - | >> Running | private_net20=20.0.0.40; private=10.0.0.54 >> | >> >> | 1ededa1e-c820-4915-adf2-4be8eedaf012 | >> cirros--1ededa1e-c820-4915-adf2-4be8eedaf012 | ACTIVE | - | >> Running | private_net20=20.0.0.41; private=10.0.0.55 >> | >> >> | 3688262e-d00f-4263-91a7-785c40f4ae0f | >> 
cirros--3688262e-d00f-4263-91a7-785c40f4ae0f | ACTIVE | - | >> Running | private_net20=20.0.0.34; private=10.0.0.49 >> | >> >> | 4620663f-e6e0-4af2-84c0-6108279cbbed | >> cirros--4620663f-e6e0-4af2-84c0-6108279cbbed | ACTIVE | - | >> Running | private_net20=20.0.0.37; private=10.0.0.51 >> | >> >> | 8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | >> cirros--8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | ACTIVE | - | >> Running | private_net20=20.0.0.39; private=10.0.0.53 >> | >> >> | a228f33b-0388-464e-af49-b55af9601f56 | >> cirros--a228f33b-0388-464e-af49-b55af9601f56 | ACTIVE | - | >> Running | private_net20=20.0.0.42; private=10.0.0.56 >> | >> >> | def5a255-0c9d-4df0-af02-3944bf5af2db | >> cirros--def5a255-0c9d-4df0-af02-3944bf5af2db | ACTIVE | - | >> Running | private_net20=20.0.0.36; private=10.0.0.50 >> | >> >> | e1470813-bf4c-4989-9a11-62da47a5c4b4 | >> cirros--e1470813-bf4c-4989-9a11-62da47a5c4b4 | ACTIVE | - | >> Running | private_net20=20.0.0.33; private=10.0.0.48 >> | >> >> | f63390fa-2169-45c0-bb02-e42633a08b8f | >> cirros--f63390fa-2169-45c0-bb02-e42633a08b8f | ACTIVE | - | >> Running | private_net20=20.0.0.35; private=10.0.0.47 >> | >> >> | 2c34956d-4bf9-45e5-a9de-84d3095ee719 | >> vm--2c34956d-4bf9-45e5-a9de-84d3095ee719 | ACTIVE | - | >> Running | private_net30=30.0.0.39; private_net50=50.0.0.29; >> private_net40=40.0.0.29 | >> >> | 680c55f5-527b-49e3-847c-7794e1f8e7a8 | >> vm--680c55f5-527b-49e3-847c-7794e1f8e7a8 | ACTIVE | - | >> Running | private_net30=30.0.0.41; private_net50=50.0.0.30; >> private_net40=40.0.0.31 | >> >> | ade4c14b-baf7-4e57-948e-095689f73ce3 | >> vm--ade4c14b-baf7-4e57-948e-095689f73ce3 | ACTIVE | - | >> Running | private_net30=30.0.0.43; private_net50=50.0.0.32; >> private_net40=40.0.0.33 | >> >> | c91e426a-ed68-4659-89f6-df6d1154bb16 | >> vm--c91e426a-ed68-4659-89f6-df6d1154bb16 | ACTIVE | - | >> Running | private_net30=30.0.0.42; private_net50=50.0.0.33; >> private_net40=40.0.0.32 | >> >> | cedd9984-79f0-46b3-897d-b301cfa74a1a | >> 
vm--cedd9984-79f0-46b3-897d-b301cfa74a1a | ACTIVE | - | >> Running | private_net30=30.0.0.40; private_net50=50.0.0.31; >> private_net40=40.0.0.30 | >> >> | ec83e53f-556f-4e66-ab85-15a9e1ba9d28 | >> vm--ec83e53f-556f-4e66-ab85-15a9e1ba9d28 | ACTIVE | - | >> Running | private_net30=30.0.0.38; private_net50=50.0.0.28; >> private_net40=40.0.0.28 | >> >> >> +--------------------------------------+----------------------------------------------+--------+------------+-------------+--------------------------------------------------------------------------+ >> >> >> When attempt to launch another VM, I hit the max number of ports >> exceeded error: >> >> >> localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec >> --flavor 1 --nic net-id=6bb3cef1-52ea-4dcb-9962-95c5c39b03cb VM-3 >> >> *ERROR (Forbidden): Maximum number of ports exceeded (HTTP 403)* >> (Request-ID: req-4b0d0d7b-25fa-468b-8d2b-5aa48bb58853) >> >> >> I expected this to be successful since the number of ports used is only >> 40, below the limit of 50. >> >> >> Any idea? >> >> >> Thanks, >> >> Danny >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > > Timur, > Senior QA Engineer > OpenStack Projects > Mirantis Inc > > My OpenStack summit schedule: > http://kilodesignsummit.sched.org/timur.nurlygayanov#.VFSrD8mhhOI > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dannchoi at cisco.com Sun Dec 21 14:02:24 2014 From: dannchoi at cisco.com (Danny Choi (dannchoi)) Date: Sun, 21 Dec 2014 14:02:24 +0000 Subject: [openstack-dev] [qa] Fail to launch VM due to maximum number of ports exceeded Message-ID: Hi Timur, When you said the ?global? counts, I assumed you refer to the admin tenant. BTW, I?m launching VMs in the demo tenant. localadmin at qa4:~/devstack$ keystone --os-tenant-name admin --os-username admin tenant-list +----------------------------------+--------------------+---------+ | id | name | enabled | +----------------------------------+--------------------+---------+ | 84827057a7444354b0bff11566ccb80b | admin | True | | 5977ba64a5734395a7dc1f8f1dbbac7c | alt_demo | True | | 1b2e5efaeeeb46f2922849b483f09ec1 | demo | True | | 7dbe65974f144993ad3fb165ced85a0e | invisible_to_admin | True | | eef9dee7066f4a30be32eaa67f2e40c9 | service | True | +----------------------------------+--------------------+????+ localadmin at qa4:~/devstack$ keystone --os-tenant-name admin --os-username admin user-list +----------------------------------+----------+---------+----------------------+ | id | name | enabled | email | +----------------------------------+----------+---------+----------------------+ | 9d5fd9947d154a2db396fce177f1f83c | admin | True | | | bf51d29350b04a00aef1e701f1f6bb81 | alt_demo | True | alt_demo at example.com | | 668cf3505aba4e45b965cf2963942df9 | cinder | True | | | 4ddc6d36192c4c34bea3865b4286c90d | demo | True | demo at example.com | | f37bf45d6d0e4168bb3c18d07dbb39fc | glance | True | | | 20376173b10147b6a2111f976bf4e397 | heat | True | | | cf8bf98325964d04a4a3708e36d5f09d | neutron | True | | | fec102e33eb64c9e8866a5bd0f718d37 | nova | True | | +----------------------------------+----------+---------+----------------------+ localadmin at qa4:~/devstack$ neutron --os-tenant-name admin --os-username admin quota-show --tenant-id 84827057a7444354b0bff11566ccb80b +---------------------+-------+ | Field | 
Value | +---------------------+-------+ | floatingip | 50 | | network | 10 | | port | 50 | | router | 10 | | security_group | 10 | | security_group_rule | 100 | | subnet | 10 | +---------------------+-------+ localadmin at qa4:~/devstack$ nova --os-tenant-name admin --os-username admin quota-show --tenant 84827057a7444354b0bff11566ccb80b --user 9d5fd9947d154a2db396fce177f1f83c +-----------------------------+-------+ | Quota | Limit | +-----------------------------+-------+ | instances | 10 | | cores | 20 | | ram | 51200 | | floating_ips | 10 | | fixed_ips | -1 | | metadata_items | 128 | | injected_files | 5 | | injected_file_content_bytes | 10240 | | injected_file_path_bytes | 255 | | key_pairs | 100 | | security_groups | 10 | | security_group_rules | 20 | | server_groups | 10 | | server_group_members | 10 | +-----------------------------+-------+ Danny ?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? Date: Sun, 21 Dec 2014 07:02:05 +0300 From: Timur Nurlygayanov > To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] [qa] Fail to launch VM due to maximum number of ports exceeded Message-ID: > Content-Type: text/plain; charset="utf-8" Hi Danny, what about the global ports count and quotas? On Sun, Dec 21, 2014 at 1:32 AM, Danny Choi (dannchoi) > wrote: Hi, The default quota for port is 50. +----------------------------------+--------------------+---------+ localadmin at qa4:~/devstack$ neutron quota-show --tenant-id 1b2e5efaeeeb46f2922849b483f09ec1 +---------------------+-------+ | Field | Value | +---------------------+-------+ | floatingip | 50 | | network | 10 | | port | 50 | <<<<< 50 | router | 10 | | security_group | 10 | | security_group_rule | 100 | | subnet | 10 | +---------------------+-------+ Total number of ports used so far is 40. 
localadmin at qa4:~/devstack$ nova list +--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------------------------------+ | 595940bd-3fb1-4ad3-8cc0-29329b464471 | VM-1 | ACTIVE | - | Running | private_net30=30.0.0.44 | | 192ce36d-bc76-427a-a374-1f8e8933938f | VM-2 | ACTIVE | - | Running | private_net30=30.0.0.45 | | 10ad850e-ed9d-42d9-8743-b8eda4107edc | cirros--10ad850e-ed9d-42d9-8743-b8eda4107edc | ACTIVE | - | Running | private_net20=20.0.0.38; private=10.0.0.52 | | 18209b40-09e7-4718-b04f-40a01a8e5993 | cirros--18209b40-09e7-4718-b04f-40a01a8e5993 | ACTIVE | - | Running | private_net20=20.0.0.40; private=10.0.0.54 | | 1ededa1e-c820-4915-adf2-4be8eedaf012 | cirros--1ededa1e-c820-4915-adf2-4be8eedaf012 | ACTIVE | - | Running | private_net20=20.0.0.41; private=10.0.0.55 | | 3688262e-d00f-4263-91a7-785c40f4ae0f | cirros--3688262e-d00f-4263-91a7-785c40f4ae0f | ACTIVE | - | Running | private_net20=20.0.0.34; private=10.0.0.49 | | 4620663f-e6e0-4af2-84c0-6108279cbbed | cirros--4620663f-e6e0-4af2-84c0-6108279cbbed | ACTIVE | - | Running | private_net20=20.0.0.37; private=10.0.0.51 | | 8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | cirros--8f8252a3-fa23-47fc-8b32-7f7328ecfba2 | ACTIVE | - | Running | private_net20=20.0.0.39; private=10.0.0.53 | | a228f33b-0388-464e-af49-b55af9601f56 | cirros--a228f33b-0388-464e-af49-b55af9601f56 | ACTIVE | - | Running | private_net20=20.0.0.42; private=10.0.0.56 | | def5a255-0c9d-4df0-af02-3944bf5af2db | cirros--def5a255-0c9d-4df0-af02-3944bf5af2db | ACTIVE | - | Running | private_net20=20.0.0.36; private=10.0.0.50 | | e1470813-bf4c-4989-9a11-62da47a5c4b4 | 
cirros--e1470813-bf4c-4989-9a11-62da47a5c4b4 | ACTIVE | - | Running | private_net20=20.0.0.33; private=10.0.0.48 | | f63390fa-2169-45c0-bb02-e42633a08b8f | cirros--f63390fa-2169-45c0-bb02-e42633a08b8f | ACTIVE | - | Running | private_net20=20.0.0.35; private=10.0.0.47 | | 2c34956d-4bf9-45e5-a9de-84d3095ee719 | vm--2c34956d-4bf9-45e5-a9de-84d3095ee719 | ACTIVE | - | Running | private_net30=30.0.0.39; private_net50=50.0.0.29; private_net40=40.0.0.29 | | 680c55f5-527b-49e3-847c-7794e1f8e7a8 | vm--680c55f5-527b-49e3-847c-7794e1f8e7a8 | ACTIVE | - | Running | private_net30=30.0.0.41; private_net50=50.0.0.30; private_net40=40.0.0.31 | | ade4c14b-baf7-4e57-948e-095689f73ce3 | vm--ade4c14b-baf7-4e57-948e-095689f73ce3 | ACTIVE | - | Running | private_net30=30.0.0.43; private_net50=50.0.0.32; private_net40=40.0.0.33 | | c91e426a-ed68-4659-89f6-df6d1154bb16 | vm--c91e426a-ed68-4659-89f6-df6d1154bb16 | ACTIVE | - | Running | private_net30=30.0.0.42; private_net50=50.0.0.33; private_net40=40.0.0.32 | | cedd9984-79f0-46b3-897d-b301cfa74a1a | vm--cedd9984-79f0-46b3-897d-b301cfa74a1a | ACTIVE | - | Running | private_net30=30.0.0.40; private_net50=50.0.0.31; private_net40=40.0.0.30 | | ec83e53f-556f-4e66-ab85-15a9e1ba9d28 | vm--ec83e53f-556f-4e66-ab85-15a9e1ba9d28 | ACTIVE | - | Running | private_net30=30.0.0.38; private_net50=50.0.0.28; private_net40=40.0.0.28 | +--------------------------------------+----------------------------------------------+--------+------------+-------------+?????????????????????????????????????+ When attempt to launch another VM, I hit the max number of ports exceeded error: localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=6bb3cef1-52ea-4dcb-9962-95c5c39b03cb VM-3 *ERROR (Forbidden): Maximum number of ports exceeded (HTTP 403)* (Request-ID: req-4b0d0d7b-25fa-468b-8d2b-5aa48bb58853) I expected this to be successful since the number of ports used is only 40, below the limit of 50. Any idea? 
Thanks, Danny _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From dannchoi at cisco.com Sun Dec 21 16:28:34 2014 From: dannchoi at cisco.com (Danny Choi (dannchoi)) Date: Sun, 21 Dec 2014 16:28:34 +0000 Subject: [openstack-dev] [qa] host aggregate's availability zone Message-ID: Hi, I have a multi-node setup with 2 compute hosts, qa5 and qa6. I created 2 host-aggregate, each with its own availability zone, and assigned one compute host: localadmin at qa4:~/devstack$ nova aggregate-details host-aggregate-zone-1 +----+-----------------------+-------------------+-------+--------------------------+ | Id | Name | Availability Zone | Hosts | Metadata | +----+-----------------------+-------------------+-------+--------------------------+ | 9 | host-aggregate-zone-1 | az-1 | 'qa5' | 'availability_zone=az-1' | +----+-----------------------+-------------------+-------+--------------------------+ localadmin at qa4:~/devstack$ nova aggregate-details host-aggregate-zone-2 +----+-----------------------+-------------------+-------+--------------------------+ | Id | Name | Availability Zone | Hosts | Metadata | +----+-----------------------+-------------------+-------+--------------------------+ | 10 | host-aggregate-zone-2 | az-2 | 'qa6' | 'availability_zone=az-2' | +----+-----------------------+-------------------+-------+?????????????+ My intent is to control at which compute host to launch a VM via the host-aggregate?s availability-zone parameter. 
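As Joe points out further down the thread, availability-zone placement is only enforced when the scheduler runs the AvailabilityZoneFilter under the filter scheduler. A nova.conf sketch of the relevant Juno-era options (the filter list here is illustrative, not exhaustive — check the scheduler section of your release's configuration reference):

```ini
[DEFAULT]
# Use the filter scheduler so that scheduler filters are applied at all.
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
# AvailabilityZoneFilter must be in the list, or the requested zone is
# recorded on the instance but ignored at placement time.
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
```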
To test, for vm-1, I specify --availiability-zone=az-1, and --availiability-zone=az-2 for vm-2: localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-1 vm-1 +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | instance-00000066 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | - | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | kxot3ZBZcBH6 | | config_drive | | | created | 2014-12-21T15:59:03Z | | flavor | m1.tiny (1) | | hostId | | | id | 854acae9-b718-4ea5-bc28-e0bc46378b60 | | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | | key_name | - | | metadata | {} | | name | vm-1 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | 84827057a7444354b0bff11566ccb80b | | updated | 2014-12-21T15:59:03Z | | user_id | 9d5fd9947d154a2db396fce177f1f83c | +--------------------------------------+----------------------------------------------------------------+ localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-2 vm-2 +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | 
OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | instance-00000067 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | 2kXQpV2u9TVv | | config_drive | | | created | 2014-12-21T15:59:55Z | | flavor | m1.tiny (1) | | hostId | | | id | ce1b5dca-a844-4c59-bb00-39a617646c59 | | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | | key_name | - | | metadata | {} | | name | vm-2 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | 84827057a7444354b0bff11566ccb80b | | updated | 2014-12-21T15:59:55Z | | user_id | 9d5fd9947d154a2db396fce177f1f83c | +--------------------------------------+????????????????????????????????+ However, both VMs ended up at compute host qa5: localadmin at qa4:~/devstack$ nova hypervisor-servers q +--------------------------------------+-------------------+---------------+---------------------+ | ID | Name | Hypervisor ID | Hypervisor Hostname | +--------------------------------------+-------------------+---------------+---------------------+ | 854acae9-b718-4ea5-bc28-e0bc46378b60 | instance-00000066 | 1 | qa5 | | ce1b5dca-a844-4c59-bb00-39a617646c59 | instance-00000067 | 1 | qa5 | +--------------------------------------+-------------------+---------------+---------------------+ localadmin at qa4:~/devstack$ nova show vm-1 +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | az-1 | | OS-EXT-SRV-ATTR:host | qa5 | | OS-EXT-SRV-ATTR:hypervisor_hostname | qa5 | | OS-EXT-SRV-ATTR:instance_name | 
instance-00000066 | | OS-EXT-STS:power_state | 1 | | OS-EXT-STS:task_state | - | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2014-12-21T16:03:15.000000 | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2014-12-21T15:59:03Z | | flavor | m1.tiny (1) | | hostId | 89119faac9345b51f185bd8b6c2e091644f1544cd523067ecce64613 | | id | 854acae9-b718-4ea5-bc28-e0bc46378b60 | | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | | key_name | - | | metadata | {} | | name | vm-1 | | os-extended-volumes:volumes_attached | [] | | private network | 10.0.0.70 | | progress | 0 | | security_groups | default | | status | ACTIVE | | tenant_id | 84827057a7444354b0bff11566ccb80b | | updated | 2014-12-21T15:59:11Z | | user_id | 9d5fd9947d154a2db396fce177f1f83c | +--------------------------------------+----------------------------------------------------------------+ localadmin at qa4:~/devstack$ nova show vm-2 +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | az-1 | | OS-EXT-SRV-ATTR:host | qa5 | | OS-EXT-SRV-ATTR:hypervisor_hostname | qa5 | | OS-EXT-SRV-ATTR:instance_name | instance-00000067 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | spawning | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2014-12-21T15:59:55Z | | flavor | m1.tiny (1) | | hostId | 89119faac9345b51f185bd8b6c2e091644f1544cd523067ecce64613 | | id | ce1b5dca-a844-4c59-bb00-39a617646c59 | | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | | key_name | - | | metadata | {} | | name | vm-2 | | os-extended-volumes:volumes_attached | [] | | private 
network | 10.0.0.71 | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | 84827057a7444354b0bff11566ccb80b | | updated | 2014-12-21T15:59:56Z | | user_id | 9d5fd9947d154a2db396fce177f1f83c | +--------------------------------------+----------------------------------------------------------------+ Is it supposed to work this way? Do I missed something here? Thanks, Danny -------------- next part -------------- An HTML attachment was scrubbed... URL: From cropper.joe at gmail.com Sun Dec 21 17:42:02 2014 From: cropper.joe at gmail.com (Joe Cropper) Date: Sun, 21 Dec 2014 11:42:02 -0600 Subject: [openstack-dev] [qa] host aggregate's availability zone In-Reply-To: References: Message-ID: Did you enable the AvailabilityZoneFilter in nova.conf that the scheduler uses? And enable the FilterScheduler? These are two common issues related to this. - Joe > On Dec 21, 2014, at 10:28 AM, Danny Choi (dannchoi) wrote: > > Hi, > > I have a multi-node setup with 2 compute hosts, qa5 and qa6. 
> > I created 2 host-aggregate, each with its own availability zone, and assigned one compute host: > > localadmin at qa4:~/devstack$ nova aggregate-details host-aggregate-zone-1 > +----+-----------------------+-------------------+-------+--------------------------+ > | Id | Name | Availability Zone | Hosts | Metadata | > +----+-----------------------+-------------------+-------+--------------------------+ > | 9 | host-aggregate-zone-1 | az-1 | 'qa5' | 'availability_zone=az-1' | > +----+-----------------------+-------------------+-------+--------------------------+ > localadmin at qa4:~/devstack$ nova aggregate-details host-aggregate-zone-2 > +----+-----------------------+-------------------+-------+--------------------------+ > | Id | Name | Availability Zone | Hosts | Metadata | > +----+-----------------------+-------------------+-------+--------------------------+ > | 10 | host-aggregate-zone-2 | az-2 | 'qa6' | 'availability_zone=az-2' | > +----+-----------------------+-------------------+-------+?????????????+ > > My intent is to control at which compute host to launch a VM via the host-aggregate?s availability-zone parameter. 
> > To test, for vm-1, I specify --availiability-zone=az-1, and --availiability-zone=az-2 for vm-2: > > localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-1 vm-1 > +--------------------------------------+----------------------------------------------------------------+ > | Property | Value | > +--------------------------------------+----------------------------------------------------------------+ > | OS-DCF:diskConfig | MANUAL | > | OS-EXT-AZ:availability_zone | nova | > | OS-EXT-SRV-ATTR:host | - | > | OS-EXT-SRV-ATTR:hypervisor_hostname | - | > | OS-EXT-SRV-ATTR:instance_name | instance-00000066 | > | OS-EXT-STS:power_state | 0 | > | OS-EXT-STS:task_state | - | > | OS-EXT-STS:vm_state | building | > | OS-SRV-USG:launched_at | - | > | OS-SRV-USG:terminated_at | - | > | accessIPv4 | | > | accessIPv6 | | > | adminPass | kxot3ZBZcBH6 | > | config_drive | | > | created | 2014-12-21T15:59:03Z | > | flavor | m1.tiny (1) | > | hostId | | > | id | 854acae9-b718-4ea5-bc28-e0bc46378b60 | > | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | > | key_name | - | > | metadata | {} | > | name | vm-1 | > | os-extended-volumes:volumes_attached | [] | > | progress | 0 | > | security_groups | default | > | status | BUILD | > | tenant_id | 84827057a7444354b0bff11566ccb80b | > | updated | 2014-12-21T15:59:03Z | > | user_id | 9d5fd9947d154a2db396fce177f1f83c | > +--------------------------------------+----------------------------------------------------------------+ > localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-2 vm-2 > +--------------------------------------+----------------------------------------------------------------+ > | Property | Value | > 
+--------------------------------------+----------------------------------------------------------------+ > | OS-DCF:diskConfig | MANUAL | > | OS-EXT-AZ:availability_zone | nova | > | OS-EXT-SRV-ATTR:host | - | > | OS-EXT-SRV-ATTR:hypervisor_hostname | - | > | OS-EXT-SRV-ATTR:instance_name | instance-00000067 | > | OS-EXT-STS:power_state | 0 | > | OS-EXT-STS:task_state | scheduling | > | OS-EXT-STS:vm_state | building | > | OS-SRV-USG:launched_at | - | > | OS-SRV-USG:terminated_at | - | > | accessIPv4 | | > | accessIPv6 | | > | adminPass | 2kXQpV2u9TVv | > | config_drive | | > | created | 2014-12-21T15:59:55Z | > | flavor | m1.tiny (1) | > | hostId | | > | id | ce1b5dca-a844-4c59-bb00-39a617646c59 | > | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | > | key_name | - | > | metadata | {} | > | name | vm-2 | > | os-extended-volumes:volumes_attached | [] | > | progress | 0 | > | security_groups | default | > | status | BUILD | > | tenant_id | 84827057a7444354b0bff11566ccb80b | > | updated | 2014-12-21T15:59:55Z | > | user_id | 9d5fd9947d154a2db396fce177f1f83c | > +--------------------------------------+????????????????????????????????+ > > However, both VMs ended up at compute host qa5: > > localadmin at qa4:~/devstack$ nova hypervisor-servers q > +--------------------------------------+-------------------+---------------+---------------------+ > | ID | Name | Hypervisor ID | Hypervisor Hostname | > +--------------------------------------+-------------------+---------------+---------------------+ > | 854acae9-b718-4ea5-bc28-e0bc46378b60 | instance-00000066 | 1 | qa5 | > | ce1b5dca-a844-4c59-bb00-39a617646c59 | instance-00000067 | 1 | qa5 | > +--------------------------------------+-------------------+---------------+---------------------+ > localadmin at qa4:~/devstack$ nova show vm-1 > +--------------------------------------+----------------------------------------------------------------+ > | Property | Value | > 
+--------------------------------------+----------------------------------------------------------------+ > | OS-DCF:diskConfig | MANUAL | > | OS-EXT-AZ:availability_zone | az-1 | > | OS-EXT-SRV-ATTR:host | qa5 | > | OS-EXT-SRV-ATTR:hypervisor_hostname | qa5 | > | OS-EXT-SRV-ATTR:instance_name | instance-00000066 | > | OS-EXT-STS:power_state | 1 | > | OS-EXT-STS:task_state | - | > | OS-EXT-STS:vm_state | active | > | OS-SRV-USG:launched_at | 2014-12-21T16:03:15.000000 | > | OS-SRV-USG:terminated_at | - | > | accessIPv4 | | > | accessIPv6 | | > | config_drive | | > | created | 2014-12-21T15:59:03Z | > | flavor | m1.tiny (1) | > | hostId | 89119faac9345b51f185bd8b6c2e091644f1544cd523067ecce64613 | > | id | 854acae9-b718-4ea5-bc28-e0bc46378b60 | > | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | > | key_name | - | > | metadata | {} | > | name | vm-1 | > | os-extended-volumes:volumes_attached | [] | > | private network | 10.0.0.70 | > | progress | 0 | > | security_groups | default | > | status | ACTIVE | > | tenant_id | 84827057a7444354b0bff11566ccb80b | > | updated | 2014-12-21T15:59:11Z | > | user_id | 9d5fd9947d154a2db396fce177f1f83c | > +--------------------------------------+----------------------------------------------------------------+ > localadmin at qa4:~/devstack$ nova show vm-2 > +--------------------------------------+----------------------------------------------------------------+ > | Property | Value | > +--------------------------------------+----------------------------------------------------------------+ > | OS-DCF:diskConfig | MANUAL | > | OS-EXT-AZ:availability_zone | az-1 | > | OS-EXT-SRV-ATTR:host | qa5 | > | OS-EXT-SRV-ATTR:hypervisor_hostname | qa5 | > | OS-EXT-SRV-ATTR:instance_name | instance-00000067 | > | OS-EXT-STS:power_state | 0 | > | OS-EXT-STS:task_state | spawning | > | OS-EXT-STS:vm_state | building | > | OS-SRV-USG:launched_at | - | > | OS-SRV-USG:terminated_at | - | > | accessIPv4 | | > | accessIPv6 | | 
> | config_drive | | > | created | 2014-12-21T15:59:55Z | > | flavor | m1.tiny (1) | > | hostId | 89119faac9345b51f185bd8b6c2e091644f1544cd523067ecce64613 | > | id | ce1b5dca-a844-4c59-bb00-39a617646c59 | > | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | > | key_name | - | > | metadata | {} | > | name | vm-2 | > | os-extended-volumes:volumes_attached | [] | > | private network | 10.0.0.71 | > | progress | 0 | > | security_groups | default | > | status | BUILD | > | tenant_id | 84827057a7444354b0bff11566ccb80b | > | updated | 2014-12-21T15:59:56Z | > | user_id | 9d5fd9947d154a2db396fce177f1f83c | > +--------------------------------------+----------------------------------------------------------------+ > > Is it supposed to work this way? Do I missed something here? > > Thanks, > Danny > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From brandon.logan at RACKSPACE.COM Sun Dec 21 17:49:58 2014 From: brandon.logan at RACKSPACE.COM (Brandon Logan) Date: Sun, 21 Dec 2014 17:49:58 +0000 Subject: [openstack-dev] [neutron][lbaas] Canceling lbaas meeting 12/16 In-Reply-To: References: Message-ID: <1419184201.13766.1.camel@localhost> The extensions are remaining in neutron until the Neutron WSGI Refactor is completed so it's easier for them to test all extensions and that nothing breaks. I do believe the plan is to move the extensions into the service repos once this is completed. Thanks, Brandon On Sun, 2014-12-21 at 10:14 +0000, Evgeny Fedoruk wrote: > Hi Doug, > How are you? > I have a question regarding https://review.openstack.org/#/c/141247/ change set > Extension changes are not part of this change. I also see the whole extension mechanism is out of the new repository. > I may be missed something. 
Are we replacing the mechanism with something else? Or will we add it separately in another change set? > > Thanks, > Evg > > -----Original Message----- > From: Doug Wiegley [mailto:dougw at a10networks.com] > Sent: Sunday, December 14, 2014 7:46 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: [openstack-dev] [neutron][lbaas] Canceling lbaas meeting 12/16 > > Unless someone has an urgent agenda item, and due to the mid-cycle for Octavia, which has a bunch of overlap with the lbaas team, let's cancel this week. If you have post-split lbaas v2 questions, please find me in #openstack-lbaas. > > The only announcement was going to be: If you are waiting to re-submit/submit lbaasv2 changes for the new repo, please monitor this review, or make your change dependent on it: > > https://review.openstack.org/#/c/141247/ > > > Thanks, > Doug > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From anlin.kong at gmail.com Mon Dec 22 01:01:57 2014 From: anlin.kong at gmail.com (Lingxian Kong) Date: Mon, 22 Dec 2014 09:01:57 +0800 Subject: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy In-Reply-To: References: Message-ID: 2014-12-19 17:44 GMT+08:00 Alex Xu : > Hi, > > There is a problem when evacuating an instance. If the instance is in a server > group with the affinity policy, the instance can't be evacuated off the failed > compute node. > > I know there is a soft affinity policy under development, but if an > instance in a server group with hard affinity has no way to get back when its > compute node fails, that's really confusing. 
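The tension in this thread reduces to a simple invariant: a hard-affinity group is satisfied only while all of its members sit on one host, so evacuating a member elsewhere leaves the group violated once the failed node comes back. A minimal illustrative check of that invariant (not actual Nova code):

```python
def affinity_satisfied(member_hosts):
    """True if all placed members of an 'affinity' server group share a
    single host (members not yet scheduled, i.e. None, are ignored)."""
    hosts = {h for h in member_hosts if h is not None}
    return len(hosts) <= 1

# All members on the failed node: the policy technically still holds...
print(affinity_satisfied(["node-1", "node-1"]))  # True
# ...but after evacuating one member to node-2 and node-1 recovering,
# the group is spread across two hosts:
print(affinity_satisfied(["node-1", "node-2"]))  # False
```

This is why the review discussion hinges on whether the scheduler should re-check this invariant (and how) when the original node returns.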
> > I guess there should be some people concern that will violate the affinity > policy. But I think the compute node already down, all the instance in that > server group are down also, so I think we needn't care about the policy > anymore. but what if the compute node is back to normal? There will be instances in the same server group with affinity policy, but located in different hosts. > > I wrote up a patch can fix this problem: > https://review.openstack.org/#/c/135607/ > > > We have some discussion on the gerrit (Thanks Sylvain for discuss with me), > but we still not sure we are on the right direction. So I bring this up at > here. > > Thanks > Alex > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards! ----------------------------------- Lingxian Kong From y-goto at jp.fujitsu.com Mon Dec 22 01:12:42 2014 From: y-goto at jp.fujitsu.com (Yasunori Goto) Date: Mon, 22 Dec 2014 10:12:42 +0900 Subject: [openstack-dev] [Heat] How can I write at milestone section of blueprint? In-Reply-To: <20141219101748.GD22503@t430slt.redhat.com> References: <20141219170200.0619.E1E9C6FF@jp.fujitsu.com> <20141219101748.GD22503@t430slt.redhat.com> Message-ID: <20141222101236.4EFD.E1E9C6FF@jp.fujitsu.com> > On Fri, Dec 19, 2014 at 05:02:04PM +0900, Yasunori Goto wrote: > > > > Hello, > > > > This is the first mail at Openstack community, > > Welcome! :) > > > and I have a small question about how to write blueprint for Heat. > > > > Currently our team would like to propose 2 interfaces > > for users operation in HOT. > > (One is "Event handler" which is to notify user's defined event to heat. > > Another is definitions of action when heat catches the above notification.) > > So, I'm preparing the blueprint for it. 
> > Please include details of the exact use-case, e.g the problem you're trying > to solve (not just the proposed solution), as it's possible we can suggest > solutions based on exiting interfaces. Ok, I'll try. > > However, I can not find how I can write at the milestone section of blueprint. > > > > Heat blueprint template has a section for Milestones. > > "Milestones -- Target Milestone for completeion:" > > > > But I don't think I can decide it by myself. > > In my understanding, it should be decided by PTL. > > Normally, it's decided by when the person submitting the spec expects to > finish writing the code by. The PTL doesn't really have much control over > that ;) I see. > > > In addition, probably the above our request will not finish > > by Kilo. I suppose it will be "L" version or later. > > So to clarify, you want to propose the feature, but you're not planning on > working on it (e.g implementing it) yourself? I want to develop it. > > > So, what should I write at this section? > > "Kilo-x", "L version", or empty? > > As has already been mentioned, it doesn't matter that much - I see it as a > statement of intent from developers. If you're just requesting a feature, > you can even leave it blank if you want and we'll update it when an > assignee is found (e.g during the spec review). Thanks for your comment. I'm very newbie of Openstack world. To be honest, I don't have confidence when I can finish it. (Though I have experience Linux kernel development, currently, I feel python/Openstack is more difficult than it for me yet.) But, anyway, I'll do my best, and I'll write something at Milestone section. Bye. 
> > Thanks, > > Steve > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Yasunori Goto From soulxu at gmail.com Mon Dec 22 01:21:54 2014 From: soulxu at gmail.com (Alex Xu) Date: Mon, 22 Dec 2014 09:21:54 +0800 Subject: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy In-Reply-To: References: Message-ID: 2014-12-22 9:01 GMT+08:00 Lingxian Kong : > 2014-12-19 17:44 GMT+08:00 Alex Xu : > > Hi, > > > > There is problem when evacuate instance. If the instance is in the server > > group with affinity policy, the instance can't evacuate out the failed > > compute node. > > > > I know there is soft affinity policy under development, but think of if > the > > instance in server group with hard affinity means no way to get it back > when > > compute node failed, it's really confuse. > > > > I guess there should be some people concern that will violate the > affinity > > policy. But I think the compute node already down, all the instance in > that > > server group are down also, so I think we needn't care about the policy > > anymore. > > but what if the compute node is back to normal? There will be > instances in the same server group with affinity policy, but located > in different hosts. > > If operator decide to evacuate the instance from the failed host, we should fence the failed host first. So the failed host shoudn't have chance to get back. > > > > I wrote up a patch can fix this problem: > > https://review.openstack.org/#/c/135607/ > > > > > > We have some discussion on the gerrit (Thanks Sylvain for discuss with > me), > > but we still not sure we are on the right direction. So I bring this up > at > > here. 
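For readers following the thread, the relaxation Alex's patch argues for (during an evacuation, ignore affinity group members whose compute service is down) can be sketched roughly as follows. This is illustrative stand-in code with hypothetical names, not Nova's actual ServerGroupAffinityFilter:

```python
def affinity_host_passes(candidate_host, group_hosts, host_is_up, evacuating=False):
    """Toy affinity check (hypothetical; not Nova's real scheduler filter).

    Normally a candidate host passes only if the group has no hosts yet or
    the candidate already hosts the group.  The proposed relaxation: during
    an evacuation, drop group hosts whose compute service is down (i.e. the
    failed, fenced host), so a live host can accept the instance.
    """
    if evacuating:
        group_hosts = {h for h in group_hosts if host_is_up(h)}
    return not group_hosts or candidate_host in group_hosts


up = {"node-1": False, "node-2": True, "node-3": True}  # node-1 has failed
is_up = lambda h: up[h]

# Strict affinity: the instance is pinned to the dead node-1, so no live
# host passes and the evacuation cannot be scheduled.
strict = affinity_host_passes("node-2", {"node-1"}, is_up)

# Relaxed: with node-1 down (and fenced), another host may take the instance.
relaxed = affinity_host_passes("node-2", {"node-1"}, is_up, evacuating=True)
print(strict, relaxed)
```

The sketch also makes Lingxian's objection concrete: if node-1 is ever un-fenced and comes back, the group ends up spread across hosts, which is why fencing the failed host first matters.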
> > > > Thanks > > Alex > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Regards! > ----------------------------------- > Lingxian Kong > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Mon Dec 22 02:36:35 2014 From: anlin.kong at gmail.com (Lingxian Kong) Date: Mon, 22 Dec 2014 10:36:35 +0800 Subject: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy In-Reply-To: References: Message-ID: 2014-12-22 9:21 GMT+08:00 Alex Xu : > > > 2014-12-22 9:01 GMT+08:00 Lingxian Kong : >> >> >> but what if the compute node is back to normal? There will be >> instances in the same server group with affinity policy, but located >> in different hosts. >> > > If operator decide to evacuate the instance from the failed host, we should > fence the failed host first. Yes, actually. I mean the recommandation or prerequisite should be emphasized somewhere, e.g. the Operation Guide, otherwise it'll make things more confused. But the issue you are working around is indeed a problem we should solve. -- Regards! ----------------------------------- Lingxian Kong From cropper.joe at gmail.com Mon Dec 22 03:29:45 2014 From: cropper.joe at gmail.com (Joe Cropper) Date: Sun, 21 Dec 2014 21:29:45 -0600 Subject: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy In-Reply-To: References: Message-ID: <062F530F-1199-4767-B5BF-71708DC1A4DA@gmail.com> This is another great example of a use case in which these blueprints [1, 2] would be handy. They didn?t make the clip line for Kilo, but we?ll try again for L. 
I personally don?t think the scheduler should have ?special case? rules about when/when not to apply affinity policies, as that could be confusing for administrators. It would be simple to just remove it from the group, thereby allowing the administrator to rebuild the VM anywhere s/he wants? and then re-add the VM to the group once the environment is operational once again. [1] https://review.openstack.org/#/c/136487/ [2] https://review.openstack.org/#/c/139272/ - Joe On Dec 21, 2014, at 8:36 PM, Lingxian Kong wrote: > 2014-12-22 9:21 GMT+08:00 Alex Xu : >> >> >> 2014-12-22 9:01 GMT+08:00 Lingxian Kong : >>> > >>> >>> but what if the compute node is back to normal? There will be >>> instances in the same server group with affinity policy, but located >>> in different hosts. >>> >> >> If operator decide to evacuate the instance from the failed host, we should >> fence the failed host first. > > Yes, actually. I mean the recommandation or prerequisite should be > emphasized somewhere, e.g. the Operation Guide, otherwise it'll make > things more confused. But the issue you are working around is indeed a > problem we should solve. > > -- > Regards! > ----------------------------------- > Lingxian Kong > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From taget at linux.vnet.ibm.com Mon Dec 22 03:31:36 2014 From: taget at linux.vnet.ibm.com (Eli Qiao) Date: Mon, 22 Dec 2014 11:31:36 +0800 Subject: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy In-Reply-To: References: Message-ID: <54979098.4010403@linux.vnet.ibm.com> ? 2014?12?22? 10:36, Lingxian Kong ??: > 2014-12-22 9:21 GMT+08:00 Alex Xu : >> >> 2014-12-22 9:01 GMT+08:00 Lingxian Kong : >>> but what if the compute node is back to normal? 
There will be >>> instances in the same server group with affinity policy, but located >>> in different hosts. >>> >> If operator decide to evacuate the instance from the failed host, we should >> fence the failed host first. > Yes, actually. I mean the recommandation or prerequisite should be > emphasized somewhere, e.g. the Operation Guide, otherwise it'll make > things more confused. But the issue you are working around is indeed a > problem we should solve. > hi Alex, how about ask this in openstack-op mailing list, that will be much help. -- Thanks, Eli (Li Yong) Qiao From taget at linux.vnet.ibm.com Mon Dec 22 03:36:09 2014 From: taget at linux.vnet.ibm.com (Eli Qiao) Date: Mon, 22 Dec 2014 11:36:09 +0800 Subject: [openstack-dev] [nova][resend with correct subject prefix] ask for usage of quota reserve In-Reply-To: References: <54929143.2070903@linux.vnet.ibm.com> Message-ID: <549791A9.3060800@linux.vnet.ibm.com> ? 2014?12?18? 17:33, Chen CH Ji ??: > > AFAIK, quota will expire in 24 hours > > cfg.IntOpt('reservation_expire', > default=86400, > help='Number of seconds until a reservation expires'), > > Best Regards! > hi Kevin, but that is not reliable, user/op can change the default value. shall we just leave as the quota reservation there can do not commit/rollback ? I don't think there will be much more we can do. > > > Kevin (Chen) Ji ? ? 
> > Engineer, zVM Development, CSTL > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > Phone: +86-10-82454158 > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > District, Beijing 100193, PRC > > Inactive hide details for "Eli Qiao(Li Yong Qiao)" ---12/18/2014 > 04:34:32 PM---hi all, can anyone tell if we call quotas.reserv"Eli > Qiao(Li Yong Qiao)" ---12/18/2014 04:34:32 PM---hi all, can anyone > tell if we call quotas.reserve() but never call > > From: "Eli Qiao(Li Yong Qiao)" > To: "OpenStack Development Mailing List (not for usage questions)" > > Date: 12/18/2014 04:34 PM > Subject: [openstack-dev] [nova][resend with correct subject prefix] > ask for usage of quota reserve > > ------------------------------------------------------------------------ > > > > hi all, > can anyone tell if we call quotas.reserve() but never call > quotas.commit() or quotas.rollback(). > what will happen? > > for example: > > 1. when doing resize, we call quotas.reserve() to reservea a delta > quota.(new_flavor - old_flavor) > 2. for some reasons, nova-compute crashed, and not chance to call > quotas.commit() or quotas.rollback() /(called by finish_resize in > nova/compute/manager.py)/ > 3. next time restart nova-compute server, is the delta quota still > reserved , or do we need any other operation on quotas? > > > Thanks in advance > -Eli. 
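A toy model may help picture what the `reservation_expire` option Kevin quotes (default 86400 seconds) does for a reservation that is never committed or rolled back. This is illustrative stand-in code, not Nova's actual quota driver; all names are hypothetical:

```python
import time
import uuid


class QuotaReservations:
    """Toy reserve/commit/rollback registry with expiry (not Nova's driver)."""

    def __init__(self, expire_seconds=86400):
        self.expire_seconds = expire_seconds
        self.used = 0
        self.reserved = {}  # reservation_id -> (delta, created_at)

    def reserve(self, delta, now=None):
        rid = str(uuid.uuid4())
        self.reserved[rid] = (delta, now if now is not None else time.time())
        return rid

    def commit(self, rid):
        delta, _created = self.reserved.pop(rid)
        self.used += delta  # the delta becomes real usage

    def rollback(self, rid):
        self.reserved.pop(rid)  # the delta is simply released

    def expire(self, now=None):
        """Periodic task: reclaim reservations older than expire_seconds."""
        now = now if now is not None else time.time()
        for rid, (_delta, created) in list(self.reserved.items()):
            if now - created > self.expire_seconds:
                del self.reserved[rid]


quotas = QuotaReservations(expire_seconds=86400)
start = time.time()
rid = quotas.reserve(2, now=start)   # e.g. new_flavor - old_flavor on resize
# ... nova-compute crashes here: neither commit() nor rollback() runs ...
quotas.expire(now=start + 90000)     # the periodic expiry task eventually runs
print(rid in quotas.reserved, quotas.used)
```

In this model an orphaned reservation never becomes committed usage; it just sits in `reserved` until the expiry task reclaims it, which matches the 24-hour behaviour described in the quoted reply.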
> > ps: this is related to the patch: Handle RESIZE_PREP status when nova-compute does init_instance (_https://review.openstack.org/#/c/132827/)_
> >
> > --
> > Thanks Eli Qiao (_qiaoly at cn.ibm.com_)
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Thanks,
Eli (Li Yong) Qiao
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/gif
Size: 105 bytes
Desc: not available
URL: 

From xnhp0320 at gmail.com Mon Dec 22 04:37:06 2014
From: xnhp0320 at gmail.com (=?UTF-8?B?6LS66bmP?=)
Date: Mon, 22 Dec 2014 12:37:06 +0800
Subject: [openstack-dev] Reply: [Nova] RemoteError: Remote error: OperationalError (OperationalError) (1048, "Column 'instance_uuid' cannot be null")
Message-ID: 

Hi,

I met the same problem three days ago. I found that my controller node had an older Nova version (Nova 2) while the compute node had a relatively new Nova version (2.1). I updated Nova on my controller node, and the problem was solved.

I also did some code investigation. It seems that the newer version of the Nova code changes the process of building a new instance on the compute node. Specifically, when the compute node invokes instance.save via RPC, it asks for more update operations in the database. I noticed that the `updates` variable in instance.save (nova/objects/instance.py) has a key 'numa_topology' whose value is None, and this key leads to the database update error in the old version of Nova. When I removed this key-value pair from `updates`, I did not see the error again.
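In other words, the workaround amounts to keeping None-valued fields out of the `updates` dict before the DB layer builds the UPDATE statement. A minimal sketch of that idea (stand-in code with a hypothetical function name, not the actual nova/objects layer):

```python
def sanitize_updates(updates):
    """Drop None-valued keys so they never reach the UPDATE statement.

    Stand-in illustration of the workaround described above; this is not
    the actual nova/objects/instance.py code.
    """
    return {key: value for key, value in updates.items() if value is not None}


# A None value here is what triggered the spurious instance_extra UPDATE
# against the older conductor in the report above.
updates = {'task_state': 'networking', 'numa_topology': None}
clean = sanitize_updates(updates)
print(clean)
```

Note that the real root cause in the report is version skew between controller and compute, so upgrading the controller (as the poster did) is the proper fix; dropping the key only sidesteps the symptom.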
I did search the bug list but I don't find the corresponding bug, so I post it there, hope it can help. ------------------- Hi folks, when i launch instance use cirros image in the new openstack environment(juno version & centos7 OS base), the following piece is error logs from compute node. anybody meet the same error? ---------------------------- 2014-12-12 17:16:52.481 12966 ERROR nova.compute.manager [-] [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Failed to allocate network(s) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Traceback (most recent call last): 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2190, in _build_resources 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] requested_networks, security_groups) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1683, in _build_networks_for_instance 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] requested_networks, macs, security_groups, dhcp_options) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1717, in _allocate_network 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] instance.save(expected_task_state=[None]) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 189, in wrapper 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] 
ctxt, self, fn.__name__, args, kwargs) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 351, in object_action 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] objmethod=objmethod, args=args, kwargs=kwargs) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 152, in call 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] retry=self.retry) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] timeout=timeout, retry=retry) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 408, in send 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] retry=retry) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 399, in _send 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] raise result 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] RemoteError: Remote error: OperationalError (OperationalError) (1048, "Column 'instance_uuid' cannot be null") 'UPDATE instance_extra SET updated_at=%s, instance_uuid=%s WHERE instance_extra.id = %s' 
(datetime.datetime(2014, 12, 12, 9, 16, 52, 434376), None, 5L) 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] [.u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 400, in _object_dispatch\n return getattr(target, method)(context, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 204, in wrapper\n return fn(self, ctxt, *args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 500, in save\n columns_to_join=_expected_cols(expected_attrs))\n', u' File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 746, in instance_update_and_get_original\n columns_to_join=columns_to_join)\n', u' File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 143, in wrapper\n return f(*args, **kwargs)\n', u' File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2289, in instance_update_and_get_original\n columns_to_join=columns_to_join)\n', u' File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2380, in _instance_update\n session.add(instance_ref)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 470, in __exit__\n self.rollback()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__\n compat.reraise(exc_type, exc_value, exc_tb)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 467, in __exit__\n self.commit()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 377, in commit\n self._prepare_impl()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 357, in _prepare_impl\n self.session.flush()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1919, in flush\n self._flush(objects)\n', u' File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2037, in _flush\n transaction.rollback(_capture_exception=True)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__\n compat.reraise(exc_type, exc_value, exc_tb)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2001, in _flush\n flush_context.execute()\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 372, in execute\n rec.execute(self)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 526, in execute\n uow\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 60, in save_obj\n mapper, table, update)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 518, in _emit_update_statements\n execute(statement, params)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute\n return meth(self, multiparams, params)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 321, in _execute_on_connection\n return connection._execute_clauseelement(self, multiparams, params)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement\n compiled_sql, distilled_params\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context\n context)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1156, in _handle_dbapi_exception\n util.raise_from_cause(newraise, exc_info)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause\n reraise(type(exception), exception, tb=exc_tb)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context\n context)\n', u' File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 
436, in do_execute\n cursor.execute(statement, parameters)\n', u' File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute\n self.errorhandler(self, exc, value)\n', u' File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler\n raise errorclass, errorvalue\n', u'OperationalError: (OperationalError) (1048, "Column \'instance_uuid\' cannot be null") \'UPDATE instance_extra SET updated_at=%s, instance_uuid=%s WHERE instance_extra.id = %s\' (datetime.datetime(2014, 12, 12, 9, 16, 52, 434376), None, 5L)\n']. 2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] 2014-12-12 17:16:52.515 12966 INFO nova.scheduler.client.report [-] Compute_service record updated for ('computenode.domain.com') 2014-12-12 17:16:52.517 12966 ERROR nova.compute.manager [-] [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Build of instance 67e215e0-2193-439d-89c4-be8c378df78d aborted: Failed to allocate the network(s), not rescheduling. 
2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Traceback (most recent call last): 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _do_build_and_run_instance 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] filter_properties) 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2129, in _build_and_run_instance 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] 'create.error', fault=e) 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__ 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] six.reraise(self.type_, self.value, self.tb) 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2102, in _build_and_run_instance 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] block_device_mapping) as resources: 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__ 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] return self.gen.next() 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 
2205, in _build_resources 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] reason=msg) 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] BuildAbortException: Build of instance 67e215e0-2193-439d-89c4-be8c378df78d aborted: Failed to allocate the network(s), not rescheduling. 2014-12-12 17:16:52.517 12966 TRACE nova.compute.manager [instance: 67e215e0-2193-439d-89c4-be8c378df78d] 2014-12-12 17:16:52.566 12966 INFO nova.network.neutronv2.api [-] [instance: 67e215e0-2193-439d-89c4-be8c378df78d] Unable to reset device ID for port None 2014-12-12 17:17:04.977 12966 WARNING nova.compute.manager [req-f9b96041-ff4c-4b3c-8a0e-bdedf79193d6 None] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor -- hepeng ICT From rakhmerov at mirantis.com Mon Dec 22 05:16:32 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Mon, 22 Dec 2014 11:16:32 +0600 Subject: [openstack-dev] [Mistral] For-each In-Reply-To: <31AB780F-4868-424C-A642-13D89C3D6AAA@stackstorm.com> References: <31AB780F-4868-424C-A642-13D89C3D6AAA@stackstorm.com> Message-ID: <5BEF97D8-3B3D-4708-8B3C-831D0F88D7A4@mirantis.com> > On 20 Dec 2014, at 03:01, Dmitri Zimine wrote: > > Another observation on naming consistency - mistral uses dash, like `for-each`. > Heat uses _underscores when naming YAML keys. > So does TOSCA standard. We should have thought about this earlier but it may be not late to fix it while v2 spec is still forming. Dmitri, We thought about it long ago. So my related comments/thoughts: We didn?t find any strict requirements about using snake case in YAML as well as in OpenStack We also didn?t find any technical problems with using ?dash case" One of the reasons to use ?dash case? was a suggested style of naming workflow variables using snake case. 
So not to confuse workflow language keywords with workflow variables we consciously made this decision. v2 is still forming but I?ve been totally against of introducing backwards incompatible changes into it since the beginning of October since we promised not to when we released 0.1 version. All the changes we?re now considering should be 100% backwards compatible with all existing syntax. Otherwise, it will be not easy to gain trusts from people who use it. Again, all changes must be additive within at least one major version of DSL. If we gather enough feedback that some of the things need to be changed in a non-backwards compatible way then we will probably create v3. Otherwise, why do we need versioning at all? Renat Akhmerov @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From swati.shukla1 at tcs.com Mon Dec 22 05:34:03 2014 From: swati.shukla1 at tcs.com (Swati Shukla1) Date: Mon, 22 Dec 2014 11:04:03 +0530 Subject: [openstack-dev] #PERSONAL# : Horizon -- CSS and JS Query for Blueprint submission In-Reply-To: References: , Message-ID: Hi All, I have 2 queries - 1) CSS : I have added a new template file. Where should I write its corresponding external css? should it be written in a .css file/.scss file? I am unaware of the .scss syntax so pls let me know how to proceed. 2) JS : I have included a new html2canvas.js file in my code. While committing the code, additional line feeds (carriage returns,etc) also get committed. Should I replace it with the .min.js file or is there any other way of commiting? Also, kindly let me know which all files have to go through xstatic? I am in the middle of a blueprint submission so an early help is appreciated. Thanks and Regards, Swati Shukla Tata Consultancy Services Mailto: swati.shukla1 at tcs.com Website: http://www.tcs.com ____________________________________________ Experience certainty. 
IT Services
Business Solutions
Consulting
____________________________________________
=====-----=====-----=====
Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rakhmerov at mirantis.com Mon Dec 22 05:45:58 2014
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Mon, 22 Dec 2014 11:45:58 +0600
Subject: [openstack-dev] [Mistral] For-each
In-Reply-To: <82E688DB-515D-46F7-9FCB-F077E3DFDF26@stackstorm.com>
References: <31AB780F-4868-424C-A642-13D89C3D6AAA@stackstorm.com> <82E688DB-515D-46F7-9FCB-F077E3DFDF26@stackstorm.com>
Message-ID: <9FD16F08-5D72-45A4-B3B6-7E44A6350C4E@mirantis.com>

> On 20 Dec 2014, at 06:54, Dmitri Zimine wrote:
>
> I appended some more ideas on making the for-each loop more readable / less confusing in the document.
>
> It's not rocking the boat (yet) - all the key agreements done that far stay so far. It's refinements.
>
> Please take a look, leave comments, +1 / -1 for various ideas, and leave your ideas there, too.
>
> https://docs.google.com/document/d/1iw0OgQcU0LV_i3Lnbax9NqAJ397zSYA3PMvl6F_uqm0/edit#heading=h.5hjdjqxsgfle

Dmitri, thanks. Looks like you guys have been really creative around this blueprint :) I left my comments in the document. In essence, I would suggest we stop at the following syntax:

1. Short one-line syntax in case we need to iterate through one array:

   task1:
     for: my_item in $.my_array
     action: my_action
     ...

2. Full syntax in case there is more than one array that we need to iterate through:

   task1:
     for:
       my_item1: $.my_array1
       my_item2: $.my_array2
       ...
     action: my_action
     ...

Option 3:

   task1:
     for:
       - my_item1 in $.my_array1
       - my_item2 in $.my_array2

also looks ok to me, but it doesn't seem quite the YAML way, because in YAML we can express key-value pairs naturally like in #2. Actually, I don't see any problems with supporting all three options.

As far as naming, let's comment on each of the options we have now:

- 'for-each' - I'd be ok with it, but it seems lots of people have been confused by it so far because their expectations were really different from the description we gave them, so I'm ok to pick a different name.
- 'map' - I'm totally against it; a first glance at 'map' would make very little sense to me even though I understand where this option comes from. I'm pretty sure it will be even more confusing than 'for-each'.
- 'with_items' - Ansible style, it's ok to me.
- 'for' - I think this is my favourite option so far. Semantically it may not be too different from 'for-each', but it's very concise and most general-purpose languages have the same construct with similar semantics.

After all, my suggestion would be not to spend a huge amount of time arguing about naming. Although it's pretty important, it's always subjective to a great extent.

Thanks

Renat Akhmerov
@ Mirantis Inc.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jamielennox at redhat.com Mon Dec 22 06:02:53 2014 From: jamielennox at redhat.com (Jamie Lennox) Date: Mon, 22 Dec 2014 01:02:53 -0500 (EST) Subject: [openstack-dev] [all] Lets not assume everyone is using the global `CONF` object (zaqar broken by latest keystoneclient release 1.0) In-Reply-To: <5E2373BD-4156-4484-A8B7-6BAAFBF3B513@doughellmann.com> References: <20141219121700.GA13255@redhat.com> <5E2373BD-4156-4484-A8B7-6BAAFBF3B513@doughellmann.com> Message-ID: <60479836.170485.1419228173092.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Doug Hellmann" > To: "OpenStack Development Mailing List (not for usage questions)" > Sent: Saturday, 20 December, 2014 12:07:59 AM > Subject: Re: [openstack-dev] [all] Lets not assume everyone is using the global `CONF` object (zaqar broken by latest > keystoneclient release 1.0) > > > On Dec 19, 2014, at 7:17 AM, Flavio Percoco wrote: > > > Greetings, > > > > DISCLAIMER: The following comments are neither finger pointing the > > author of this work nor the keystone team. That was me. > > RANT: We should really stop assuming everyone is using a global `CONF` > > object. Moreover, we should really stop using it, especially in > > libraries. > > > > That said, here's a gentle note for all of us: > > > > If I understood the flow of changes correctly, keystoneclient recently > > introduced a auth_section[0] option, which needs to be registered in > > order for it to work properly. In keystoneclient, it's been correctly > > added a function[1] to register this option in a conf object. > > > > keystonemiddleware was then updated to support the above and a call to > > the register function[1] was then added to the `auth_token` module[2]. > > > > The above, unfortunately, broke Zaqar's auth because Zaqar is not > > using the global `CONF` object which means it has to register > > keystonemiddleware's options itself. 
Since the option was registered > > in the global conf instead of the conf object passed to > > `AuthProtocol`, the new `auth_section` option is not being registered > > as keystoneclient expects. > > > > So, as a gentle reminder to everyone, please, let's not assume all > > projects are using the global `CONF` object and make sure all libraries > > provide a good way to register the required options. I think either > > secretly registering options or exposing a function to let consumers > > do so is fine. > > > > I hate complaining without helping to solve the problem so, here's[3] a > > workaround to provide a, hopefully, better way to do this. Note that > > this shouldn't be the definitive fix and that we also implemented a > > workaround in zaqar as well. That will fix the immediate problem - and i assume is fixing the issue that the oslo.config sample config generator must not be picking up those options if it's not there. > That change will fix the issue, but a better solution is to have the code in > keystoneclient that wants the options handle the registration at runtime. It > looks like keystoneclient/auth/conf.py:load_from_conf_options() is at least > one place that's needed, there may be others. So auth_token middleware was never designed to work this way - but we can fix it to do so. The reason AuthProtocol.__init__ takes a conf dict (it's not an oslo.config.Cfg object) is to work with options being included via the paste file; these are expected to be overrides of the global CONF object. Putting these options in paste.ini is something the keystone team has been advising against for a while now, and my understanding from that was that we were close to everyone using the global CONF object. Do you know if there are any other projects managing CONF this way? I too dislike the global CONF, however this is the only time i've seen a project work to not use it.
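The registration pattern being argued for in this thread (a library exposes a function that registers its options on whatever conf object the consumer passes in, falling back to the process-wide global only by default) can be sketched as a small self-contained model. All names below (ConfigOpts, GLOBAL_CONF, register_auth_opts) are illustrative stand-ins, not the real oslo.config or keystoneclient API:

```python
# Toy model of "don't assume the global CONF": the library registers its
# options on the conf object it is handed, so a consumer that builds its
# own conf (as Zaqar does) still gets the options it needs.

class ConfigOpts:
    """Minimal stand-in for a conf object that tracks registered options."""
    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default=None):
        # Registration is idempotent: re-registering keeps the first default.
        self._opts.setdefault(name, default)

    def __getattr__(self, name):
        try:
            return self._opts[name]
        except KeyError:
            raise AttributeError("option %s is not registered" % name)


GLOBAL_CONF = ConfigOpts()  # the process-wide default many projects share


def register_auth_opts(conf=None):
    """Library entry point: register options on the caller's conf.

    Defaulting to the global keeps existing callers working, while a
    consumer can pass its own ConfigOpts instance instead.
    """
    conf = conf if conf is not None else GLOBAL_CONF
    conf.register_opt("auth_section", default=None)
    return conf


# A consumer with its own conf object gets the option registered locally,
# without touching the global at all:
local_conf = register_auth_opts(ConfigOpts())
print(local_conf.auth_section)  # -> None (registered on the local conf)
```

With this shape, a project that manages its own conf object gets the option registered where it actually reads configuration, while callers relying on the process-wide default keep working unchanged.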
The problem with the proposed solution is that we are moving towards pluggable authentication in keystonemiddleware (and the clients). The auth_section option is the first called but the important option there is the auth_plugin option which specifies what sort of authentication to perform. The options that will be read/registered on CONF are then dependent on the plugin specified by auth_plugin. Handling this manually from Zaqar and having the correct options registered is going to be a pain. Given that there are users, I'll have a look into making auth_token middleware actually accept a CONF object rather than require people to hack things through in the override dictionary. Jamie > Doug > > > > > Cheers, > > Flavio > > > > [0] > > https://github.com/openstack/python-keystoneclient/blob/41afe3c963fa01f61b67c44e572eee34b0972382/keystoneclient/auth/conf.py#L20 > > [1] > > https://github.com/openstack/python-keystoneclient/blob/41afe3c963fa01f61b67c44e572eee34b0972382/keystoneclient/auth/conf.py#L49 > > [2] > > https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token.py#L356 > > [3] https://review.openstack.org/143063 > > > > -- > > @flaper87 > > Flavio Percoco > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From vikram.choudhary at huawei.com Mon Dec 22 06:18:50 2014 From: vikram.choudhary at huawei.com (Vikram Choudhary) Date: Mon, 22 Dec 2014 06:18:50 +0000 Subject: [openstack-dev] Our idea for SFC using OpenFlow. 
RE: [NFV][Telco] Service VM v/s its basic framework In-Reply-To: <45286C18B80CE54EAE4BDD29C033604E7FF6C96C5E@HE111647.EMEA1.CDS.T-INTERNAL.COM> References: <891761EAFA335D44AD1FFDB9B4A8C063D8F9EE@G4W3216.americas.hpqcorp.net> <45286C18B80CE54EAE4BDD29C033604E7FF6C96C5E@HE111647.EMEA1.CDS.T-INTERNAL.COM> Message-ID: <99F160A7D70E22438C8ECB52BDB54B70B20DC9@blreml503-mbx> Hi Murali, We have proposed a service function chaining idea using OpenFlow. https://blueprints.launchpad.net/neutron/+spec/service-function-chaining-using-openflow Will submit the same for review soon. Thanks Vikram From: Yuriy.Babenko at telekom.de [mailto:Yuriy.Babenko at telekom.de] Sent: 18 December 2014 19:35 To: openstack-dev at lists.openstack.org; stephen.kf.wong at gmail.com Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Hi, in the IRC meeting yesterday we agreed to work on the use-case for service function chaining as it seems to be important for a lot of participants [1]. We will prepare the first draft and share it in the TelcoWG Wiki for discussion. There is one blueprint in openstack on that in [2] [1] http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt [2] https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining Kind regards/Mit freundlichen Grüßen Yuriy Babenko Von: A, Keshava [mailto:keshava.a at hp.com] Gesendet: Mittwoch, 10. Dezember 2014 19:06 An: stephen.kf.wong at gmail.com; OpenStack Development Mailing List (not for usage questions) Betreff: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Hi Murali, There are many unknowns w.r.t "Service-VM" and how it should be from an NFV perspective. In my opinion it was not decided how the Service-VM framework should be. Depending on this we at OpenStack also will have impact for "Service Chaining". Please find the mail attached w.r.t that discussion with NFV for "Service-VM + Openstack OVS related discussion".
Regards, keshava From: Stephen Wong [mailto:stephen.kf.wong at gmail.com] Sent: Wednesday, December 10, 2014 10:03 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Hi Murali, There is already a ServiceVM project (Tacker), currently under development on stackforge: https://wiki.openstack.org/wiki/ServiceVM If you are interested in this topic, please take a look at the wiki page above and see if the project's goals align with yours. If so, you are certainly welcome to join the IRC meeting and start to contribute to the project's direction and design. Thanks, - Stephen On Wed, Dec 10, 2014 at 7:01 AM, Murali B > wrote: Hi keshava, We would like contribute towards service chain and NFV Could you please share the document if you have any related to service VM The service chain can be achieved if we able to redirect the traffic to service VM using ovs-flows in this case we no need to have routing enable on the service VM(traffic is redirected at L2). All the tenant VM's in cloud could use this service VM services by adding the ovs-rules in OVS Thanks -Murali _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From y-goto at jp.fujitsu.com Mon Dec 22 06:19:13 2014 From: y-goto at jp.fujitsu.com (Yasunori Goto) Date: Mon, 22 Dec 2014 15:19:13 +0900 Subject: [openstack-dev] [Heat] How can I write at milestone section of blueprint? In-Reply-To: <5CD1F2A3-597C-42FB-B5F6-F9AE026FEDFB@rackspace.com> References: <20141219101748.GD22503@t430slt.redhat.com> <5CD1F2A3-597C-42FB-B5F6-F9AE026FEDFB@rackspace.com> Message-ID: <20141222151911.4F09.E1E9C6FF@jp.fujitsu.com> Rundal-san, > There should already be blueprints in launchpad for very similar functionality. 
> For example: https://blueprints.launchpad.net/heat/+spec/lifecycle-callbacks. > While that specifies Heat sending notifications to the outside world, > there has been discussion around debugging that would allow the receiver to > send notifications back. I only point this out so you can see there should be > similar blueprints and specs that you can reference and use as examples. Thank you for pointing it out. But do you know current status about it? Though the above blueprint is not approved, and it seems to be discarded..... Bye, > > On Dec 19, 2014, at 4:17 AM, Steven Hardy > wrote: > > > On Fri, Dec 19, 2014 at 05:02:04PM +0900, Yasunori Goto wrote: > >> > >> Hello, > >> > >> This is the first mail at Openstack community, > > > > Welcome! :) > > > >> and I have a small question about how to write blueprint for Heat. > >> > >> Currently our team would like to propose 2 interfaces > >> for users operation in HOT. > >> (One is "Event handler" which is to notify user's defined event to heat. > >> Another is definitions of action when heat catches the above notification.) > >> So, I'm preparing the blueprint for it. > > > > Please include details of the exact use-case, e.g the problem you're trying > > to solve (not just the proposed solution), as it's possible we can suggest > > solutions based on exiting interfaces. > > > >> However, I can not find how I can write at the milestone section of blueprint. > >> > >> Heat blueprint template has a section for Milestones. > >> "Milestones -- Target Milestone for completeion:" > >> > >> But I don't think I can decide it by myself. > >> In my understanding, it should be decided by PTL. > > > > Normally, it's decided by when the person submitting the spec expects to > > finish writing the code by. The PTL doesn't really have much control over > > that ;) > > > >> In addition, probably the above our request will not finish > >> by Kilo. I suppose it will be "L" version or later. 
> > So to clarify, you want to propose the feature, but you're not planning on > > working on it (e.g implementing it) yourself? > > > >> So, what should I write at this section? > >> "Kilo-x", "L version", or empty? > > > > As has already been mentioned, it doesn't matter that much - I see it as a > > statement of intent from developers. If you're just requesting a feature, > > you can even leave it blank if you want and we'll update it when an > > assignee is found (e.g during the spec review). > > > > Thanks, > > > > Steve > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Yasunori Goto From vikram.choudhary at huawei.com Mon Dec 22 06:57:53 2014 From: vikram.choudhary at huawei.com (Vikram Choudhary) Date: Mon, 22 Dec 2014 06:57:53 +0000 Subject: [openstack-dev] Our idea for SFC using OpenFlow. RE: [NFV][Telco] Service VM v/s its basic framework In-Reply-To: References: <891761EAFA335D44AD1FFDB9B4A8C063D8F9EE@G4W3216.americas.hpqcorp.net> <45286C18B80CE54EAE4BDD29C033604E7FF6C96C5E@HE111647.EMEA1.CDS.T-INTERNAL.COM> <99F160A7D70E22438C8ECB52BDB54B70B20DC9@blreml503-mbx> Message-ID: <99F160A7D70E22438C8ECB52BDB54B70B210FE@blreml503-mbx> Sorry for the inconvenience. We will sort out the issue at the earliest. Please find the BP attached with the mail!!! From: Murali B [mailto:mbirru at gmail.com] Sent: 22 December 2014 12:20 To: Vikram Choudhary Cc: openstack-dev at lists.openstack.org; Yuriy.Babenko at telekom.de; keshava.a at hp.com; stephen.kf.wong at gmail.com; Dhruv Dhody; Dongfeng (C); Kalyankumar Asangi Subject: Re: Our idea for SFC using OpenFlow.
RE: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Thank you Vikram, Could you or somebody please provide access to the full specification document Thanks -Murali On Mon, Dec 22, 2014 at 11:48 AM, Vikram Choudhary > wrote: Hi Murali, We have proposed a service function chaining idea using OpenFlow. https://blueprints.launchpad.net/neutron/+spec/service-function-chaining-using-openflow Will submit the same for review soon. Thanks Vikram From: Yuriy.Babenko at telekom.de [mailto:Yuriy.Babenko at telekom.de] Sent: 18 December 2014 19:35 To: openstack-dev at lists.openstack.org; stephen.kf.wong at gmail.com Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Hi, in the IRC meeting yesterday we agreed to work on the use-case for service function chaining as it seems to be important for a lot of participants [1]. We will prepare the first draft and share it in the TelcoWG Wiki for discussion. There is one blueprint in openstack on that in [2] [1] http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt [2] https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining Kind regards/Mit freundlichen Grüßen Yuriy Babenko Von: A, Keshava [mailto:keshava.a at hp.com] Gesendet: Mittwoch, 10. Dezember 2014 19:06 An: stephen.kf.wong at gmail.com; OpenStack Development Mailing List (not for usage questions) Betreff: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Hi Murali, There are many unknowns w.r.t "Service-VM" and how it should be from an NFV perspective. In my opinion it was not decided how the Service-VM framework should be. Depending on this we at OpenStack also will have impact for "Service Chaining". Please find the mail attached w.r.t that discussion with NFV for "Service-VM + Openstack OVS related discussion".
Regards, keshava From: Stephen Wong [mailto:stephen.kf.wong at gmail.com] Sent: Wednesday, December 10, 2014 10:03 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework Hi Murali, There is already a ServiceVM project (Tacker), currently under development on stackforge: https://wiki.openstack.org/wiki/ServiceVM If you are interested in this topic, please take a look at the wiki page above and see if the project's goals align with yours. If so, you are certainly welcome to join the IRC meeting and start to contribute to the project's direction and design. Thanks, - Stephen On Wed, Dec 10, 2014 at 7:01 AM, Murali B > wrote: Hi keshava, We would like contribute towards service chain and NFV Could you please share the document if you have any related to service VM The service chain can be achieved if we able to redirect the traffic to service VM using ovs-flows in this case we no need to have routing enable on the service VM(traffic is redirected at L2). All the tenant VM's in cloud could use this service VM services by adding the ovs-rules in OVS Thanks -Murali _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: service-function-chaining-using-openflow.rst Type: application/octet-stream Size: 12666 bytes Desc: service-function-chaining-using-openflow.rst URL: From jichenjc at cn.ibm.com Mon Dec 22 07:05:56 2014 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Mon, 22 Dec 2014 15:05:56 +0800 Subject: [openstack-dev] [nova][resend with correct subject prefix] ask for usage of quota reserve In-Reply-To: <549791A9.3060800@linux.vnet.ibm.com> References: <54929143.2070903@linux.vnet.ibm.com> <549791A9.3060800@linux.vnet.ibm.com> Message-ID: I used to submit a patch and retrieve the reservation of quota, got a -2 because it can expire :) so, I guess the expiry does no harm unless the uncommitted / unrolled-back reservation may occupy the quota and lead to side effects on upcoming actions... Best Regards! Kevin (Chen) Ji Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82454158 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: Eli Qiao To: "OpenStack Development Mailing List (not for usage questions)" Date: 12/22/2014 11:37 AM Subject: Re: [openstack-dev] [nova][resend with correct subject prefix] ask for usage of quota reserve On 2014-12-18 17:33, Chen CH Ji wrote: AFAIK, quota will expire in 24 hours cfg.IntOpt('reservation_expire', default=86400, help='Number of seconds until a reservation expires'), Best Regards! hi Kevin, but that is not reliable, user/op can change the default value.
Internet: jichenjc at cn.ibm.com Phone: +86-10-82454158 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC Inactive hide details for "Eli Qiao(Li Yong Qiao)" ---12/18/2014 04:34:32 PM---hi all, can anyone tell if we call quotas.reserv"Eli Qiao(Li Yong Qiao)" ---12/18/2014 04:34:32 PM---hi all, can anyone tell if we call quotas.reserve() but never call From: "Eli Qiao(Li Yong Qiao)" To: "OpenStack Development Mailing List (not for usage questions)" Date: 12/18/2014 04:34 PM Subject: [openstack-dev] [nova][resend with correct subject prefix] ask for usage of quota reserve hi all, can anyone tell if we call quotas.reserve() but never call quotas.commit() or quotas.rollback(). what will happen? for example: 1. when doing resize, we call quotas.reserve() to reservea a delta quota.(new_flavor - old_flavor) 2. for some reasons, nova-compute crashed, and not chance to call quotas.commit() or quotas.rollback() (called by finish_resize in nova/compute/manager.py) 3. next time restart nova-compute server, is the delta quota still reserved , or do we need any other operation on quotas? Thanks in advance -Eli. ps: this is related to patch : Handle RESIZE_PREP status when nova compute do init_instance(https://review.openstack.org/#/c/132827/) -- Thanks Eli Qiao(qiaoly at cn.ibm.com) _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Thanks, Eli (Li Yong) Qiao_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From mbirru at gmail.com Mon Dec 22 06:50:11 2014 From: mbirru at gmail.com (Murali B) Date: Mon, 22 Dec 2014 12:20:11 +0530 Subject: [openstack-dev] Our idea for SFC using OpenFlow. RE: [NFV][Telco] Service VM v/s its basic framework In-Reply-To: <99F160A7D70E22438C8ECB52BDB54B70B20DC9@blreml503-mbx> References: <891761EAFA335D44AD1FFDB9B4A8C063D8F9EE@G4W3216.americas.hpqcorp.net> <45286C18B80CE54EAE4BDD29C033604E7FF6C96C5E@HE111647.EMEA1.CDS.T-INTERNAL.COM> <99F160A7D70E22438C8ECB52BDB54B70B20DC9@blreml503-mbx> Message-ID: Thank you Vikram, Could you or somebody please provide access to the full specification document Thanks -Murali On Mon, Dec 22, 2014 at 11:48 AM, Vikram Choudhary < vikram.choudhary at huawei.com> wrote: > > Hi Murali, > > > > We have proposed service function chaining idea using open flow. > > > https://blueprints.launchpad.net/neutron/+spec/service-function-chaining-using-openflow > > > > Will submit the same for review soon. > > > > Thanks > > Vikram > > > > *From:* Yuriy.Babenko at telekom.de [mailto:Yuriy.Babenko at telekom.de] > *Sent:* 18 December 2014 19:35 > *To:* openstack-dev at lists.openstack.org; stephen.kf.wong at gmail.com > *Subject:* Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic > framework > > > > Hi, > > in the IRC meeting yesterday we agreed to work on the use-case for service > function chaining as it seems to be important for a lot of participants > [1]. > > We will prepare the first draft and share it in the TelcoWG Wiki for > discussion.
> > There is one blueprint in openstack on that in [2] > > > > [1] > http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt > > [2] > https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining > > > > Kind regards/Mit freundlichen Grüßen > > Yuriy Babenko > > > > *Von:* A, Keshava [mailto:keshava.a at hp.com ] > *Gesendet:* Mittwoch, 10. Dezember 2014 19:06 > *An:* stephen.kf.wong at gmail.com; OpenStack Development Mailing List (not > for usage questions) > *Betreff:* Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic > framework > > > > Hi Murali, > > > > There are many unknowns w.r.t "Service-VM" and how it should be from an NFV > perspective. > In my opinion it was not decided how the Service-VM framework should be. > > Depending on this we at OpenStack also will have impact for "Service > Chaining". > > *Please find the mail attached w.r.t that discussion with NFV for > "Service-VM + Openstack OVS related discussion".* > > > > > > Regards, > > keshava > > > > *From:* Stephen Wong [mailto:stephen.kf.wong at gmail.com > ] > *Sent:* Wednesday, December 10, 2014 10:03 PM > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic > framework > > > > Hi Murali, > > > > There is already a ServiceVM project (Tacker), currently under > development on stackforge: > > > > https://wiki.openstack.org/wiki/ServiceVM > > > > If you are interested in this topic, please take a look at the wiki > page above and see if the project's goals align with yours. If so, you are > certainly welcome to join the IRC meeting and start to contribute to the > project's direction and design.
> > > > Thanks, > > - Stephen > > > > > > On Wed, Dec 10, 2014 at 7:01 AM, Murali B wrote: > > Hi keshava, > > > > We would like contribute towards service chain and NFV > > > > Could you please share the document if you have any related to service VM > > > > The service chain can be achieved if we able to redirect the traffic to > service VM using ovs-flows > > > > in this case we no need to have routing enable on the service VM(traffic > is redirected at L2). > > > > All the tenant VM's in cloud could use this service VM services by adding > the ovs-rules in OVS > > > > > > Thanks > > -Murali > > > > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eduard.matei at cloudfounders.com Mon Dec 22 07:00:34 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Mon, 22 Dec 2014 09:00:34 +0200 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A424BC0@G4W3223.americas.hpqcorp.net> References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4240C5@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A424BC0@G4W3223.americas.hpqcorp.net> Message-ID: Thanks Ramy, Unfortunately i don't see dsvm-tempest-full in the "status" output. Any idea how i can get it "registered"? Thanks, Eduard On Fri, Dec 19, 2014 at 9:43 PM, Asselin, Ramy wrote: > > Eduard, > > > > If you run this command, you can see which jobs are registered: > > >telnet localhost 4730 > > > > >status > > > > There are 3 numbers per job: queued, running and workers that can run job. > Make sure the job is listed & last ?workers? is non-zero. 
> > > > To run the job again without submitting a patch set, leave a ?recheck? > comment on the patch & make sure your zuul layout.yaml is configured to > trigger off that comment. For example [1]. > > Be sure to use the sandbox repository. [2] > > I?m not aware of other ways. > > > > Ramy > > > > [1] > https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L20 > > [2] https://github.com/openstack-dev/sandbox > > > > > > > > > > *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com] > *Sent:* Friday, December 19, 2014 3:36 AM > *To:* Asselin, Ramy > *Cc:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help > setting up CI > > > > Hi all, > > After a little struggle with the config scripts i managed to get a working > setup that is able to process openstack-dev/sandbox and run > noop-check-comunication. > > > > Then, i tried enabling dsvm-tempest-full job but it keeps returning > "NOT_REGISTERED" > > > > 2014-12-19 12:07:14,683 INFO zuul.IndependentPipelineManager: Change > depends on changes [] > > 2014-12-19 12:07:14,683 INFO zuul.Gearman: Launch job > noop-check-communication for change with > dependent changes [] > > 2014-12-19 12:07:14,693 INFO zuul.Gearman: Launch job dsvm-tempest-full > for change with dependent changes [] > > 2014-12-19 12:07:14,694 ERROR zuul.Gearman: Job handle: None name: build:dsvm-tempest-full unique: > a9199d304d1140a8bf4448dfb1ae42c1> is not registered with Gearman > > 2014-12-19 12:07:14,694 INFO zuul.Gearman: Build handle: None name: build:dsvm-tempest-full unique: > a9199d304d1140a8bf4448dfb1ae42c1> complete, result NOT_REGISTERED > > 2014-12-19 12:07:14,765 INFO zuul.Gearman: Build handle: H:127.0.0.1:2 name: build:noop-check-communication unique: > 333c6ea077324a788e3c37a313d872c5> started > > 2014-12-19 12:07:14,910 INFO zuul.Gearman: Build handle: H:127.0.0.1:2 name: build:noop-check-communication unique: > 
333c6ea077324a788e3c37a313d872c5> complete, result SUCCESS > > 2014-12-19 12:07:14,916 INFO zuul.IndependentPipelineManager: Reporting > change , actions: [ , {'verified': -1}>] > > > > Nodepoold's log show no reference to dsvm-tempest-full and neither > jenkins' logs. > > > > Any idea how to enable this job? > > > > Also, i got the "Cloud provider" setup and i can access it from the > jenkins master. > > Any idea how i can manually trigger dsvm-tempest-full job to run and test > the cloud provider without having to push a review to Gerrit? > > > > Thanks, > > Eduard > > > > On Thu, Dec 18, 2014 at 7:52 PM, Eduard Matei < > eduard.matei at cloudfounders.com> wrote: > > Thanks for the input. > > > > I managed to get another master working (on Ubuntu 13.10), again with some > issues since it was already setup. > > I'm now working towards setting up the slave. > > > > Will add comments to those reviews. > > > > Thanks, > > Eduard > > > > On Thu, Dec 18, 2014 at 7:42 PM, Asselin, Ramy > wrote: > > Yes, Ubuntu 12.04 is tested as mentioned in the readme [1]. Note that > the referenced script is just a wrapper that pulls all the latest from > various locations in openstack-infra, e.g. [2]. > > Ubuntu 14.04 support is WIP [3] > > FYI, there?s a spec to get an in-tree 3rd party ci solution [4]. Please > add your comments if this interests you. 
> > > > [1] https://github.com/rasselin/os-ext-testing/blob/master/README.md > > [2] > https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29 > > [3] https://review.openstack.org/#/c/141518/ > > [4] https://review.openstack.org/#/c/139745/ > > > > > > *From:* Punith S [mailto:punith.s at cloudbyte.com] > *Sent:* Thursday, December 18, 2014 3:12 AM > *To:* OpenStack Development Mailing List (not for usage questions); > Eduard Matei > > > *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help > setting up CI > > > > Hi Eduard > > > > we tried running > https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh > > on ubuntu master 12.04, and it appears to be working fine on 12.04. > > > > thanks > > > > On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei < > eduard.matei at cloudfounders.com> wrote: > > Hi, > > Seems i can't install using puppet on the jenkins master using > install_master.sh from > https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh > because it's running Ubuntu 11.10 and it appears unsupported. > > I managed to install puppet manually on master and everything else fails > > So i'm trying to manually install zuul and nodepool and jenkins job > builder, see where i end up. > > > > The slave looks complete, got some errors on running install_slave so i > ran parts of the script manually, changing some params and it appears > installed but no way to test it without the master. > > > > Any ideas welcome. > > > > Thanks, > > > > Eduard > > > > On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy > wrote: > > Manually running the script requires a few environment settings. Take a > look at the README here: > > https://github.com/openstack-infra/devstack-gate > > > > Regarding cinder, I?m using this repo to run our cinder jobs (fork from > jaypipes). 
> > https://github.com/rasselin/os-ext-testing > > > > Note that this solution doesn?t use the Jenkins gerrit trigger pluggin, > but zuul. > > > > There?s a sample job for cinder here. It?s in Jenkins Job Builder format. > > > https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample > > > > You can ask more questions in IRC freenode #openstack-cinder. (irc# > asselin) > > > > Ramy > > > > *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com] > *Sent:* Tuesday, December 16, 2014 12:41 AM > *To:* Bailey, Darragh > *Cc:* OpenStack Development Mailing List (not for usage questions); > OpenStack > *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help > setting up CI > > > > Hi, > > > > Can someone point me to some working documentation on how to setup third > party CI? (joinfu's instructions don't seem to work, and manually running > devstack-gate scripts fails: > > Running gate_hook > > Job timeout set to: 163 minutes > > timeout: failed to run command ?/opt/stack/new/devstack-gate/devstack-vm-gate.sh?: No such file or directory > > ERROR: the main setup script run by this job failed - exit code: 127 > > please look at the relevant log files to determine the root cause > > Cleaning up host > > ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz) > > Build step 'Execute shell' marked build as failure. > > > > I have a working Jenkins slave with devstack and our internal libraries, i > have Gerrit Trigger Plugin working and triggering on patches created, i > just need the actual job contents so that it can get to comment with the > test results. 
> > > > Thanks, > > > > Eduard > > > > On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei < > eduard.matei at cloudfounders.com> wrote: > > Hi Darragh, thanks for your input > > > > I double checked the job settings and fixed it: > > - build triggers is set to Gerrit event > > - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin > and tested separately) > > - Trigger on: Patchset Created > > - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: > Type: Path, Pattern: ** (was Type Plain on both) > > Now the job is triggered by commit on openstack-dev/sandbox :) > > > > Regarding the Query and Trigger Gerrit Patches, i found my patch using > query: status:open project:openstack-dev/sandbox change:139585 and i can > trigger it manually and it executes the job. > > > > But i still have the problem: what should the job do? It doesn't actually > do anything, it doesn't run tests or comment on the patch. > > Do you have an example of job? > > > > Thanks, > > Eduard > > > > On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh wrote: > > Hi Eduard, > > > I would check the trigger settings in the job, particularly which "type" > of pattern matching is being used for the branches. Found it tends to be > the spot that catches most people out when configuring jobs with the > Gerrit Trigger plugin. If you're looking to trigger against all branches > then you would want "Type: Path" and "Pattern: **" appearing in the UI. > > If you have sufficient access using the 'Query and Trigger Gerrit > Patches' page accessible from the main view will make it easier to > confirm that your Jenkins instance can actually see changes in gerrit > for the given project (which should mean that it can see the > corresponding events as well). Can also use the same page to re-trigger > for PatchsetCreated events to see if you've set the patterns on the job > correctly. 
> > Regards, > Darragh Bailey > > "Nothing is foolproof to a sufficiently talented fool" - Unknown > > On 08/12/14 14:33, Eduard Matei wrote: > > Resending this to dev ML as it seems i get quicker response :) > > > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > > Patchset Created", chose as server the configured Gerrit server that > > was previously tested, then added the project openstack-dev/sandbox > > and saved. > > I made a change on dev sandbox repo but couldn't trigger my job. > > > > Any ideas? > > > > Thanks, > > Eduard > > > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > > > wrote: > > > > Hello everyone, > > > > Thanks to the latest changes to the creation of service accounts > > process we're one step closer to setting up our own CI platform > > for Cinder. > > > > So far we've got: > > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > > our storage solution) > > - Service account configured and tested (can manually connect to > > review.openstack.org and get events > > and publish comments) > > > > Next step would be to set up a job to do the actual testing, this > > is where we're stuck. > > Can someone please point us to a clear example on how a job should > > look like (preferably for testing Cinder on Kilo)? Most links > > we've found are broken, or tools/scripts are no longer working. > > Also, we cannot change the Jenkins master too much (it's owned by > > Ops team and they need a list of tools/scripts to review before > > installing/running so we're not allowed to experiment). 
> > > > Thanks, > > Eduard > > > > -- > > > > *Eduard Biceri Matei, Senior Software Developer* > > www.cloudfounders.com > > | eduard.matei at cloudfounders.com > > > > > > > > > > *CloudFounders, The Private Cloud Software Company* > > > > Disclaimer: > > This email and any files transmitted with it are confidential and > > intended solely for the use of the individual or entity to whom > > they are addressed.If you are not the named addressee or an > > employee or agent responsible for delivering this message to the > > named addressee, you are hereby notified that you are not > > authorized to read, print, retain, copy or disseminate this > > message or any part of it. If you have received this email in > > error we request you to notify us by reply e-mail and to delete > > all electronic files of the message. If you are not the intended > > recipient you are notified that disclosing, copying, distributing > > or taking any action in reliance on the contents of this > > information is strictly prohibited. E-mail transmission cannot be > > guaranteed to be secure or error free as information could be > > intercepted, corrupted, lost, destroyed, arrive late or > > incomplete, or contain viruses. The sender therefore does not > > accept liability for any errors or omissions in the content of > > this message, and shall have no liability for any loss or damage > > suffered by the user, which arise as a result of e-mail transmission. 
> > --
> > *Eduard Biceri Matei, Senior Software Developer*
> > www.cloudfounders.com | eduard.matei at cloudfounders.com
> > *CloudFounders, The Private Cloud Software Company*
> >
> > _______________________________________________
> > OpenStack-Infra mailing list
> > OpenStack-Infra at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
> >
> > --
> > *Eduard Biceri Matei, Senior Software Developer*
> > www.cloudfounders.com | eduard.matei at cloudfounders.com
> > *CloudFounders, The Private Cloud Software Company*
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> regards,
> punith s
> cloudbyte.com
>
> --
> *Eduard Biceri Matei, Senior Software Developer*
> www.cloudfounders.com | eduard.matei at cloudfounders.com
> *CloudFounders, The Private Cloud Software Company*

-- 
*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com | eduard.matei at cloudfounders.com
*CloudFounders, The Private Cloud Software Company*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bpiotrowski at mirantis.com  Mon Dec 22 06:28:16 2014
From: bpiotrowski at mirantis.com (Bartlomiej Piotrowski)
Date: Mon, 22 Dec 2014 07:28:16 +0100
Subject: [openstack-dev] [Fuel] Change diagnostic snapshot compression algoritm
In-Reply-To: <50CAAE06-BBD9-4296-A49C-8C2F8AB1025E@mirantis.com>
References: <7A5052E0-C658-493B-BCE0-4158F7BF9BB6@mirantis.com> <50CAAE06-BBD9-4296-A49C-8C2F8AB1025E@mirantis.com>
Message-ID: 

FYI, xz with multithreading support (5.2 release) has been marked as stable yesterday.

Regards,
Bartłomiej Piotrowski

On Mon, Nov 24, 2014 at 12:32 PM, Bartłomiej Piotrowski < bpiotrowski at mirantis.com> wrote:
> On 24 Nov 2014, at 12:25, Matthew Mosesohn wrote:
> > I did this exercise over many iterations during Docker container
> > packing and found that as long as the data is under 1gb, it's going to
Over 1gb and lrzip looks more attractive > > (but only on high memory systems). In reality, we're looking at log > > footprints from OpenStack environments on the order of 500mb to 2gb. > > > > xz is very slow on single-core systems with 1.5gb of memory, but it's > > quite a bit faster if you run it on a more powerful system. I've found > > level 4 compression to be the best compromise that works well enough > > that it's still far better than gzip. If increasing compression time > > by 3-5x is too much for you guys, why not just go to bzip? You'll > > still improve compression but be able to cut back on time. > > > > Best Regards, > > Matthew Mosesohn > > Alpha release of xz supports multithreading via -T (or ?threads) parameter. > We could also use pbzip2 instead of regular bzip to cut some time on > multi-core > systems. > > Regards, > Bart?omiej Piotrowski -------------- next part -------------- An HTML attachment was scrubbed... URL: From rao.shweta at tcs.com Mon Dec 22 09:18:54 2014 From: rao.shweta at tcs.com (Rao Shweta) Date: Mon, 22 Dec 2014 14:48:54 +0530 Subject: [openstack-dev] Fw: [Heat] Multiple_Routers_Topoloy Message-ID: Hi All I am working on openstack Heat and i wanted to make below topolgy using heat template : For this i am using a template as given : AWSTemplateFormatVersion: '2010-09-09' Description: Sample Heat template that spins up multiple instances and a private network ? (JSON) Resources: ? heat_network_01: ??? Properties: {name: heat-network-01} ??? Type: OS::Neutron::Net ? heat_network_02: ??? Properties: {name: heat-network-02} ??? Type: OS::Neutron::Net ? heat_router_01: ??? Properties: {admin_state_up: 'True', name: heat-router-01} ??? Type: OS::Neutron::Router ? heat_router_02: ??? Properties: {admin_state_up: 'True', name: heat-router-02} ??? Type: OS::Neutron::Router ? heat_router_int0: ??? Properties: ????? router_id: {Ref: heat_router_01} ????? subnet_id: {Ref: heat_subnet_01} ??? Type: OS::Neutron::RouterInterface ? 
heat_router_int1: ??? Properties: ????? router_id: {Ref: heat_router_02} ????? subnet_id: {Ref: heat_subnet_02} ??? Type: OS::Neutron::RouterInterface ? heat_subnet_01: ??? Properties: ????? cidr: 10.10.10.0/24 ????? dns_nameservers: [172.16.1.11, 172.16.1.6] ????? enable_dhcp: 'True' ????? gateway_ip: 10.10.10.254 ????? name: heat-subnet-01 ????? network_id: {Ref: heat_network_01} ??? Type: OS::Neutron::Subnet ? heat_subnet_02: ??? Properties: ????? cidr: 10.10.11.0/24 ????? dns_nameservers: [172.16.1.11, 172.16.1.6] ????? enable_dhcp: 'True' ????? gateway_ip: 10.10.11.254 ????? name: heat-subnet-01 ????? network_id: {Ref: heat_network_02} ??? Type: OS::Neutron::Subnet ? instance0: ??? Properties: ????? flavor: m1.nano ????? image: cirros-0.3.2-x86_64-uec ????? name: heat-instance-01 ????? networks: ????? - port: {Ref: instance0_port0} ??? Type: OS::Nova::Server ? instance0_port0: ??? Properties: ????? admin_state_up: 'True' ????? network_id: {Ref: heat_network_01} ??? Type: OS::Neutron::Port ? instance1: ??? Properties: ????? flavor: m1.nano ????? image: cirros-0.3.2-x86_64-uec ????? name: heat-instance-02 ????? networks: ????? - port: {Ref: instance1_port0} ??? Type: OS::Nova::Server ? instance1_port0: ??? Properties: ????? admin_state_up: 'True' ????? network_id: {Ref: heat_network_01} ??? Type: OS::Neutron::Port ? instance11: ??? Properties: ????? flavor: m1.nano ????? image: cirros-0.3.2-x86_64-uec ????? name: heat-instance11-01 ????? networks: ????? - port: {Ref: instance11_port0} ??? Type: OS::Nova::Server ? instance11_port0: ??? Properties: ????? admin_state_up: 'True' ????? network_id: {Ref: heat_network_02} ??? Type: OS::Neutron::Port ? instance1: ??? Properties: ????? flavor: m1.nano ????? image: cirros-0.3.2-x86_64-uec ????? name: heat-instance12-02 ????? networks: ????? - port: {Ref: instance12_port0} ??? Type: OS::Nova::Server ? instance12_port0: ??? Properties: ????? admin_state_up: 'True' ????? network_id: {Ref: heat_network_02} ??? 
Type: OS::Neutron::Port I am able to create topology using the template but i am not able to connect two routers. Neither i can get a template example on internet through which i can connect two routers. Can you please help me with : 1.) Can we connect two routers? I tried with making a interface on router 1 and connecting it to the subnet2 which is showing error. ? heat_router_int0: ??? Properties: ????? router_id: {Ref: heat_router_01} ????? subnet_id: {Ref: heat_subnet_02} Can you please guide me how can we connect routers or have link between routers using template. 2.) Can you please forward a link or a example template from which i can refer and implement reqiured topology using heat template. Waiting for a response Thankyou Regards Shweta Rao Mailto: rao.shweta at tcs.com Website: http://www.tcs.com ____________________________________________ =====-----=====-----===== Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
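[Editor's note for readers of this thread: Neutron has no router-to-router link resource, so a commonly used workaround is to join the two routers with a small "transit" network, giving each router an interface on it. The sketch below follows the template style used above; the resource names and addresses are illustrative, not from the thread, and the exact property set may vary by release.]

```yaml
  transit_net:
    Properties: {name: transit-net}
    Type: OS::Neutron::Net
  transit_subnet:
    Properties:
      cidr: 192.168.100.0/24
      enable_dhcp: 'False'
      gateway_ip: 192.168.100.1
      network_id: {Ref: transit_net}
    Type: OS::Neutron::Subnet
  # Router 1 takes the subnet's gateway address (192.168.100.1).
  router1_transit_if:
    Properties:
      router_id: {Ref: heat_router_01}
      subnet_id: {Ref: transit_subnet}
    Type: OS::Neutron::RouterInterface
  # Router 2 attaches through an explicit port with a second fixed IP.
  router2_transit_port:
    Properties:
      fixed_ips:
      - {ip_address: 192.168.100.2, subnet_id: {Ref: transit_subnet}}
      network_id: {Ref: transit_net}
    Type: OS::Neutron::Port
  router2_transit_if:
    Properties:
      port_id: {Ref: router2_transit_port}
      router_id: {Ref: heat_router_02}
    Type: OS::Neutron::RouterInterface
```

With that in place, each router still needs a static route for the subnet behind the other router (where available, OS::Neutron::ExtraRoute can express this): on router 1, 10.10.11.0/24 via 192.168.100.2; on router 2, 10.10.10.0/24 via 192.168.100.1.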
Name: Image.1419238153591.png
Type: image/png
Size: 24165 bytes
Desc: not available
URL: 

From openstack at sheep.art.pl  Mon Dec 22 08:42:23 2014
From: openstack at sheep.art.pl (Radomir Dopieralski)
Date: Mon, 22 Dec 2014 09:42:23 +0100
Subject: [openstack-dev] [horizon] static files handling, bower/
In-Reply-To: References: <5492DD6F.2020604@sheep.art.pl>
Message-ID: <5497D96F.6020500@sheep.art.pl>

On 20/12/14 21:25, Richard Jones wrote:
> This is a good proposal, though I'm unclear on how the
> static_settings.py file is populated by a developer (as opposed to a
> packager, which you described).

It's not, the developer version is included in the repository, and simply points to where Bower is configured to put the files.

-- 
Radomir Dopieralski

From li-zheming at 163.com  Mon Dec 22 09:54:52 2014
From: li-zheming at 163.com (li-zheming)
Date: Mon, 22 Dec 2014 17:54:52 +0800 (CST)
Subject: [openstack-dev] [nova] How can I continue to complete an abandoned blueprint?
Message-ID: <79045ce6.1930.14a716bb72a.Coremail.li-zheming@163.com>

hi all:
Bp flavor-quota-memory (https://blueprints.launchpad.net/nova/+spec/flavor-quota-memory) was submitted by my partner in havana, but it was abandoned for some reason. I want to continue this blueprint. Based on the rules about BPs for kilo, a spec is not necessary for this bp, so I plan to submit the code directly and use the commit message to clear up the questions a spec would cover. Is that right? How should I proceed? thanks!

--
Name: Li zheming
Company: Hua Wei
Address: Shenzhen, China
Tel: 0086 18665391827
-------------- next part --------------
An HTML attachment was scrubbed... 
URL: 

From sragolu at mvista.com  Mon Dec 22 11:07:47 2014
From: sragolu at mvista.com (Srinivasa Rao Ragolu)
Date: Mon, 22 Dec 2014 16:37:47 +0530
Subject: [openstack-dev] 'module' object has no attribute 'HVSpec'
Message-ID: 

Hi All,

I have integrated the below CPU pinning patches into Nova:

https://review.openstack.org/#/c/132001/2
https://review.openstack.org/#/c/128738/12
https://review.openstack.org/#/c/129266/11
https://review.openstack.org/#/c/129326/11
https://review.openstack.org/#/c/129603/10
https://review.openstack.org/#/c/129626/11
https://review.openstack.org/#/c/130490/11
https://review.openstack.org/#/c/130491/11
https://review.openstack.org/#/c/130598/10
https://review.openstack.org/#/c/131069/9
https://review.openstack.org/#/c/131210/8
https://review.openstack.org/#/c/131830/5
https://review.openstack.org/#/c/131831/6
https://review.openstack.org/#/c/131070/
https://review.openstack.org/#/c/132086/
https://review.openstack.org/#/c/132295/
https://review.openstack.org/#/c/132296/
https://review.openstack.org/#/c/132297/
https://review.openstack.org/#/c/132557/
https://review.openstack.org/#/c/132655/

And now if I try to run nova-compute, I get the error below:

  File "/opt/stack/nova/nova/objects/compute_node.py", line 93, in _from_db_object
    for hv_spec in hv_specs]
AttributeError: 'module' object has no attribute 'HVSpec'

Please help me in resolving this issue.

Thanks,
Srinivas.
-------------- next part --------------
An HTML attachment was scrubbed... 
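[Editor's note: an AttributeError of this shape usually means the module that defines the missing class was never imported. In Kilo-era Nova, object classes are attached to the nova.objects namespace as a side effect of importing their submodules (the register_all() step in nova/objects/__init__.py), so a partially applied patch series that misses that hunk can produce exactly this error. A minimal, self-contained sketch of the failure mode; the package and module names below are illustrative, not Nova's actual layout:]

```python
import sys
import types

# Fake a package and a submodule that defines the object class.
pkg = types.ModuleType("fakeobjects")
sys.modules["fakeobjects"] = pkg

submodule = types.ModuleType("fakeobjects.hv_spec")

class HVSpec:
    """Stand-in for the object class a patch series would add."""

submodule.HVSpec = HVSpec
sys.modules["fakeobjects.hv_spec"] = submodule

# Before the "registration" step, the attribute is missing, which is
# exactly the reported error shape.
try:
    pkg.HVSpec
except AttributeError as exc:
    print("without registration:", exc)

# The register_all()-style step: publish the submodule's class on the
# package namespace (Nova's object registry does this on import).
pkg.HVSpec = sys.modules["fakeobjects.hv_spec"].HVSpec

print("after registration:", pkg.HVSpec.__name__)  # after registration: HVSpec
```

If the layout is as assumed above, the practical fix is to make sure the patch that adds the new object module also lands the corresponding import/registration hunk, rather than applying the series piecemeal.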
URL: From kchamart at redhat.com Mon Dec 22 11:48:32 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 22 Dec 2014 12:48:32 +0100 Subject: [openstack-dev] 'module' object has no attribute 'HVSpec' In-Reply-To: References: Message-ID: <20141222114832.GP13316@tesla.redhat.com> On Mon, Dec 22, 2014 at 04:37:47PM +0530, Srinivasa Rao Ragolu wrote: > Hi All, > > I have integrated below CPU pinning patches to Nova As of now, CPU pinning works directly from Nova git (as you can see, most of the patches below are merged), you don't have to manually apply any patches. > https://review.openstack.org/#/c/132001/2https://review.openstack.org/#/c/128738/12https://review.openstack.org/#/c/129266/11https://review.openstack.org/#/c/129326/11https://review.openstack.org/#/c/129603/10https://review.openstack.org/#/c/129626/11https://review.openstack.org/#/c/130490/11https://review.openstack.org/#/c/130491/11https://review.openstack.org/#/c/130598/10https://review.openstack.org/#/c/131069/9https://review.openstack.org/#/c/131210/8https://review.openstack.org/#/c/131830/5https://review.openstack.org/#/c/131831/6https://review.openstack.org/#/c/131070/https://review.openstack.org/#/c/132086/https://review.openstack.org/#/c/132295/https://review.openstack.org/#/c/132296/https://review.openstack.org/#/c/132297/https://review.openstack.org/#/c/132557/https://review.openstack.org/#/c/132655/ The links are all mangled due to the bad formatting. > And now if I try to run nova-compute, getting below error > > > File "/opt/stack/nova/nova/objects/compute_node.py", line 93, in _from_db_object > > for hv_spec in hv_specs] > > AttributeError: 'module' object has no attribute 'HVSpec' You can try directly from git and DevStack without applying manually patches. Also, these kind of usage questions are better suited for operator list or ask.openstack.org. 
-- 
/kashyap

From soulxu at gmail.com  Mon Dec 22 12:20:32 2014
From: soulxu at gmail.com (Alex Xu)
Date: Mon, 22 Dec 2014 20:20:32 +0800
Subject: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy
In-Reply-To: <062F530F-1199-4767-B5BF-71708DC1A4DA@gmail.com>
References: <062F530F-1199-4767-B5BF-71708DC1A4DA@gmail.com>
Message-ID: 

Joe, thanks, that's a useful feature. But I'm still not sure it's good for this case. If the user's server group is deleted by the administrator and a new server group is created for the user by the administrator, that sounds confusing for the user.

I'm thinking of the HA case: if a host fails, the infrastructure can evacuate instances off the failed host automatically, and the user shouldn't be affected by that (the user will still see that the instance is down, and the instance comes back later; at least we should reduce the effect).

I think the key question is whether evacuating an instance off a failed host that is in an affinity group counts as a policy violation or not. The host has already failed, so we can ignore the failed host in the server group when we evacuate the first instance to another host. After the first instance is evacuated, there is a new live host in the server group, and the other instances will then be evacuated to that new live host to comply with the affinity policy.

2014-12-22 11:29 GMT+08:00 Joe Cropper :

> This is another great example of a use case in which these blueprints [1,
> 2] would be handy. They didn't make the clip line for Kilo, but we'll try
> again for L. I personally don't think the scheduler should have 'special
> case' rules about when/when not to apply affinity policies, as that could
> be confusing for administrators. It would be simple to just remove it from
> the group, thereby allowing the administrator to rebuild the VM anywhere
> s/he wants, and then re-add the VM to the group once the environment is
> operational once again.
> > [1] https://review.openstack.org/#/c/136487/ > [2] https://review.openstack.org/#/c/139272/ > > - Joe > > On Dec 21, 2014, at 8:36 PM, Lingxian Kong wrote: > > 2014-12-22 9:21 GMT+08:00 Alex Xu : > >> > >> > >> 2014-12-22 9:01 GMT+08:00 Lingxian Kong : > >>> > > > >>> > >>> but what if the compute node is back to normal? There will be > >>> instances in the same server group with affinity policy, but located > >>> in different hosts. > >>> > >> > >> If the operator decides to evacuate the instance from the failed host, we should > >> fence the failed host first. > > > > Yes, actually. I mean the recommendation or prerequisite should be > > emphasized somewhere, e.g. the Operation Guide, otherwise it'll make > > things more confusing. But the issue you are working around is indeed a > > problem we should solve. > > > > -- > > Regards! > > ----------------------------------- > > Lingxian Kong > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akuznetsova at mirantis.com Mon Dec 22 12:25:22 2014 From: akuznetsova at mirantis.com (Anastasia Kuznetsova) Date: Mon, 22 Dec 2014 16:25:22 +0400 Subject: [openstack-dev] [Mistral] Plans to load and performance testing In-Reply-To: References: Message-ID: Dmitry, Now I see that my comments are not so informative, I will try to describe the environment and scenarios in more detail. 
1) *1 api 1 engine 1 executor* means that there were 3 Mistral processes running on the same box
2) the list-workbooks scenario was run when there were no workflow executions happening at the same time; I take note of your comment and will also measure the time while executions are running, but I guess it will take longer, the question is by how much
3) 60 % of success means that only 60 % of the runs of the 'list-workbooks' scenario were successful; at the moment I have observed only one type of error, a connection error to Rabbit: Error ConnectionError: ('Connection aborted.', error(104, 'Connection reset by peer'))
4) we don't know the durability criteria of Mistral and under what load Mistral will 'die'; we want to define the threshold.

P.S. Dmitry, if you have any ideas/scenarios which you want to test, please share them.

On Sat, Dec 20, 2014 at 9:35 AM, Dmitri Zimine wrote:
> Anastasia, any start is a good start.
>
> *> 1 api 1 engine 1 executor, list-workbooks*
>
> what exactly does it mean: 1) is mistral deployed on 3 boxes with one
> component per box, or are all three processes on the same box? 2) is the
> list-workbooks test running while workflow executions are going on? How many?
> what's the character of the load? 3) when it says 60% success what exactly
> does it mean, what kind of failures? 4) what is the durability criteria,
> how long do we expect Mistral to withstand the load?
>
> Let's discuss this in detail at the next IRC meeting?
>
> Thanks again for getting this started.
>
> DZ.
>
> On Dec 19, 2014, at 7:44 AM, Anastasia Kuznetsova < akuznetsova at mirantis.com> wrote:
>
> Boris,
>
> Thanks for feedback!
>
> > But I believe that you should put bigger load here: https://etherpad.openstack.org/p/mistral-rally-testing-results
>
> As I said it is only the beginning and I will increase the load and change
> its type. 
> > As well, concurrency should be at least 2-3 times bigger than times,
> > otherwise it won't generate proper load and you won't collect enough data
> > for statistical analysis.
> >
> > As well, use the "rps" runner that generates a more real-life load.
> > Plus it will be nice to share the output of the "rally task report"
> > command as well.
>
> Thanks for the advice, I will consider it in further testing and reporting.
>
> Answering your question about using Rally for integration testing, as I
> mentioned in our load testing plan published on the wiki page, one of our
> final goals is to have a Rally gate in one of the Mistral repositories, so we
> are interested in it and I have already prepared the first commits to Rally.
>
> Thanks,
> Anastasia Kuznetsova
>
> On Fri, Dec 19, 2014 at 4:51 PM, Boris Pavlovic
> wrote:
>>
>> Anastasia,
>>
>> Nice work on this. But I believe that you should put bigger load here:
>> https://etherpad.openstack.org/p/mistral-rally-testing-results
>>
>> As well, concurrency should be at least 2-3 times bigger than times,
>> otherwise it won't generate proper load and you won't collect enough data
>> for statistical analysis.
>>
>> As well, use the "rps" runner that generates a more real-life load.
>> Plus it will be nice to share the output of the "rally task report"
>> command as well.
>>
>> By the way, what do you think about using the Rally scenarios (that you
>> already wrote) for integration testing as well?
>>
>> Best regards,
>> Boris Pavlovic
>>
>> On Fri, Dec 19, 2014 at 2:39 PM, Anastasia Kuznetsova <
>> akuznetsova at mirantis.com> wrote:
>>>
>>> Hello everyone,
>>>
>>> I want to announce that the Mistral team has started work on load and
>>> performance testing in this release cycle.
>>> Brief information about the scope of our work can be found here:
>>> https://wiki.openstack.org/wiki/Mistral/Testing#Load_and_Performance_Testing
>>>
>>> First results are published here:
>>> https://etherpad.openstack.org/p/mistral-rally-testing-results
>>>
>>> Thanks,
>>> Anastasia Kuznetsova
>>> @ Mirantis Inc.
>>>
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From soulxu at gmail.com Mon Dec 22 12:37:33 2014
From: soulxu at gmail.com (Alex Xu)
Date: Mon, 22 Dec 2014 20:37:33 +0800
Subject: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy
In-Reply-To: References: Message-ID:

2014-12-22 10:36 GMT+08:00 Lingxian Kong :

> 2014-12-22 9:21 GMT+08:00 Alex Xu :
> >
> > 2014-12-22 9:01 GMT+08:00 Lingxian Kong :
> >>
> >> but what if the compute node is back to normal? There will be
> >> instances in the same server group with affinity policy, but located
> >> in different hosts.
> >
> > If the operator decides to evacuate the instance from the failed host, we
> > should fence the failed host first.
>
> Yes, actually. I mean the recommendation or prerequisite should be
> emphasized somewhere, e.g.
the Operation Guide, otherwise it'll make
> things more confusing. But the issue you are working around is indeed a
> problem we should solve.

Yea, you are right, we should doc it if we think this makes sense. Thanks!

> --
> Regards!
> -----------------------------------
> Lingxian Kong
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From sbauza at redhat.com Mon Dec 22 13:50:44 2014
From: sbauza at redhat.com (Sylvain Bauza)
Date: Mon, 22 Dec 2014 14:50:44 +0100
Subject: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy
In-Reply-To: References: Message-ID: <549821B4.6050805@redhat.com>

Le 22/12/2014 13:37, Alex Xu a écrit :
>
> 2014-12-22 10:36 GMT+08:00 Lingxian Kong >:
>
> 2014-12-22 9:21 GMT+08:00 Alex Xu >:
> >
> > 2014-12-22 9:01 GMT+08:00 Lingxian Kong >:
> >>
> >> but what if the compute node is back to normal? There will be
> >> instances in the same server group with affinity policy, but located
> >> in different hosts.
> >
> > If the operator decides to evacuate the instance from the failed host,
> > we should fence the failed host first.
>
> Yes, actually. I mean the recommendation or prerequisite should be
> emphasized somewhere, e.g. the Operation Guide, otherwise it'll make
> things more confusing. But the issue you are working around is indeed a
> problem we should solve.
>
> Yea, you are right, we should doc it if we think this makes sense. Thanks!

As I said, I'm not in favor of adding more complexity in the instance group setup that is done in the conductor, for basic race condition reasons.
If I understand correctly, the problem is this: when there is only one host for all the instances belonging to a group with the affinity policy and this host is down, then the filter will deny any other host, and consequently the request will fail while it should succeed.

Is this really a problem? I mean, it appears to me that this is normal behaviour, because a filter is by definition a *hard* policy. So, provided you would like to implement *soft* policies, it sounds more like a *weigher* is what you would like to have: i.e. make sure that hosts running existing instances in the group are weighted more than other ones so they'll be chosen every time, but in case they're down, allow the scheduler to pick other hosts.

HTH,
-Sylvain

> --
> Regards!
> -----------------------------------
> Lingxian Kong
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From jaypipes at gmail.com Mon Dec 22 14:32:52 2014
From: jaypipes at gmail.com (Jay Pipes)
Date: Mon, 22 Dec 2014 09:32:52 -0500
Subject: [openstack-dev] [nova] How can I continue to complete an abandoned blueprint?
In-Reply-To: <79045ce6.1930.14a716bb72a.Coremail.li-zheming@163.com>
References: <79045ce6.1930.14a716bb72a.Coremail.li-zheming@163.com>
Message-ID: <54982B94.5020604@gmail.com>

On 12/22/2014 04:54 AM, li-zheming wrote:
> hi all:
> The BP flavor-quota-memory (https://blueprints.launchpad.net/nova/+spec/flavor-quota-memory)
> was submitted by my partner in havana, but it has been abandoned because
> of some reason.
Some reason == the submitter failed to provide any details on how the work would be implemented, what the use cases were, and any alternatives that might be possible.

> I want to continue this blueprint. Based on the
> rules about BPs for kilo,
> for this bp a spec is not necessary, so I will submit the code directly and
> use the commit message to clear up the questions in the spec. Is that right? How
> should I do it? thanks!

Specs are no longer necessary for smallish features, no. A blueprint is still necessary on Launchpad, so you should be able to use the abandoned one you link above -- which, AFAICT, has enough implementation details about the proposed changes. Alternately, if you cannot get the original submitter to remove the link to the old spec review, you can always start a new blueprint and we can mark that one as obsolete.

I'd like Dan Berrange (cc'd) to review whichever blueprint on Launchpad you end up using. Please let us know what you do.

All the best,
-jay

From keshava.a at hp.com Mon Dec 22 14:39:45 2014
From: keshava.a at hp.com (A, Keshava)
Date: Mon, 22 Dec 2014 14:39:45 +0000
Subject: [openstack-dev] Our idea for SFC using OpenFlow. RE: [NFV][Telco] Service VM v/s its basic framework
In-Reply-To: <99F160A7D70E22438C8ECB52BDB54B70B210FE@blreml503-mbx>
References: <891761EAFA335D44AD1FFDB9B4A8C063D8F9EE@G4W3216.americas.hpqcorp.net> <45286C18B80CE54EAE4BDD29C033604E7FF6C96C5E@HE111647.EMEA1.CDS.T-INTERNAL.COM> <99F160A7D70E22438C8ECB52BDB54B70B20DC9@blreml503-mbx> <99F160A7D70E22438C8ECB52BDB54B70B210FE@blreml503-mbx>
Message-ID: <891761EAFA335D44AD1FFDB9B4A8C063DD95AF@G9W0762.americas.hpqcorp.net>

Vikram,

1. In this solution, is it assumed that all the OpenStack services are available/enabled on all the CNs?

2. Consider a scenario: for a particular Tenant, the flows are chained across a set of CNs. Then if one of the VMs (of that Tenant) migrates to a new CN, where that Tenant was not present earlier on that CN, what will be the impact?
How do we control the chaining of flows in this kind of scenario, so that packets will reach that Tenant VM on the new CN? Here this Tenant VM would be an NFV Service-VM (which should be transparent to OpenStack).

keshava

From: Vikram Choudhary [mailto:vikram.choudhary at huawei.com]
Sent: Monday, December 22, 2014 12:28 PM
To: Murali B
Cc: openstack-dev at lists.openstack.org; Yuriy.Babenko at telekom.de; A, Keshava; stephen.kf.wong at gmail.com; Dhruv Dhody; Dongfeng (C); Kalyankumar Asangi
Subject: RE: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Sorry for the inconvenience. We will sort out the issue at the earliest. Please find the BP attached with the mail!!!

From: Murali B [mailto:mbirru at gmail.com]
Sent: 22 December 2014 12:20
To: Vikram Choudhary
Cc: openstack-dev at lists.openstack.org; Yuriy.Babenko at telekom.de; keshava.a at hp.com; stephen.kf.wong at gmail.com; Dhruv Dhody; Dongfeng (C); Kalyankumar Asangi
Subject: Re: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Thank you Vikram,

Could you or somebody please provide access to the full specification document?

Thanks
-Murali

On Mon, Dec 22, 2014 at 11:48 AM, Vikram Choudhary > wrote:

Hi Murali,

We have proposed a service function chaining idea using OpenFlow:
https://blueprints.launchpad.net/neutron/+spec/service-function-chaining-using-openflow

We will submit the same for review soon.

Thanks
Vikram

From: Yuriy.Babenko at telekom.de [mailto:Yuriy.Babenko at telekom.de]
Sent: 18 December 2014 19:35
To: openstack-dev at lists.openstack.org; stephen.kf.wong at gmail.com
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi,

in the IRC meeting yesterday we agreed to work on the use-case for service function chaining, as it seems to be important for a lot of participants [1]. We will prepare the first draft and share it in the TelcoWG Wiki for discussion.
There is one blueprint in OpenStack on that in [2].

[1] http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt
[2] https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

Kind regards/Mit freundlichen Grüßen
Yuriy Babenko

Von: A, Keshava [mailto:keshava.a at hp.com]
Gesendet: Mittwoch, 10. Dezember 2014 19:06
An: stephen.kf.wong at gmail.com; OpenStack Development Mailing List (not for usage questions)
Betreff: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There are many unknowns w.r.t. 'Service-VM' and how it should look from an NFV perspective. In my opinion it has not yet been decided how the Service-VM framework should be. Depending on this, we at OpenStack will also see an impact on 'Service Chaining'. Please find attached the mail w.r.t. that discussion with NFV about 'Service-VM + OpenStack OVS related discussion'.

Regards,
keshava

From: Stephen Wong [mailto:stephen.kf.wong at gmail.com]
Sent: Wednesday, December 10, 2014 10:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There is already a ServiceVM project (Tacker), currently under development on stackforge: https://wiki.openstack.org/wiki/ServiceVM

If you are interested in this topic, please take a look at the wiki page above and see if the project's goals align with yours. If so, you are certainly welcome to join the IRC meeting and start to contribute to the project's direction and design.

Thanks,
- Stephen

On Wed, Dec 10, 2014 at 7:01 AM, Murali B > wrote:

Hi keshava,

We would like to contribute towards service chaining and NFV. Could you please share the document if you have any related to service VMs?

The service chain can be achieved if we are able to redirect the traffic to the service VM using ovs-flows; in this case we do not need to have routing enabled on the service VM (traffic is redirected at L2).
All the tenant VMs in the cloud could use this service VM's services by adding the ovs-rules in OVS.

Thanks
-Murali

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From dannchoi at cisco.com Mon Dec 22 14:53:03 2014
From: dannchoi at cisco.com (Danny Choi (dannchoi))
Date: Mon, 22 Dec 2014 14:53:03 +0000
Subject: [openstack-dev] [qa] host aggregate's availability zone
Message-ID:

Hi Joe,

No, I did not. I'm not aware of this. Can you tell me exactly what needs to be done?

Thanks,
Danny

------------------------------

Date: Sun, 21 Dec 2014 11:42:02 -0600
From: Joe Cropper >
To: "OpenStack Development Mailing List (not for usage questions)" >
Subject: Re: [openstack-dev] [qa] host aggregate's availability zone
Message-ID: >
Content-Type: text/plain; charset="utf-8"

Did you enable the AvailabilityZoneFilter in nova.conf that the scheduler uses? And enable the FilterScheduler? These are two common issues related to this.

- Joe

On Dec 21, 2014, at 10:28 AM, Danny Choi (dannchoi) > wrote:

Hi,

I have a multi-node setup with 2 compute hosts, qa5 and qa6.
I created 2 host-aggregates, each with its own availability zone, and assigned one compute host to each:

localadmin at qa4:~/devstack$ nova aggregate-details host-aggregate-zone-1
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 9  | host-aggregate-zone-1 | az-1              | 'qa5' | 'availability_zone=az-1' |
+----+-----------------------+-------------------+-------+--------------------------+

localadmin at qa4:~/devstack$ nova aggregate-details host-aggregate-zone-2
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 10 | host-aggregate-zone-2 | az-2              | 'qa6' | 'availability_zone=az-2' |
+----+-----------------------+-------------------+-------+--------------------------+

My intent is to control at which compute host to launch a VM via the host-aggregate's availability-zone parameter.
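(As an aside, the two scheduler settings Joe asks about earlier in this thread would normally be set along these lines in nova.conf; this is a sketch using the Juno-era option names, trimmed to the filters relevant here, and not copied from any verified deployment:)

```ini
[DEFAULT]
# FilterScheduler is the default driver, but it must not have been overridden
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
# AvailabilityZoneFilter has to be in the active filter list for
# --availability-zone to actually constrain placement
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
```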
To test, for vm-1, I specify --availiability-zone=az-1, and --availiability-zone=az-2 for vm-2: localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-1 vm-1 +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | instance-00000066 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | - | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | kxot3ZBZcBH6 | | config_drive | | | created | 2014-12-21T15:59:03Z | | flavor | m1.tiny (1) | | hostId | | | id | 854acae9-b718-4ea5-bc28-e0bc46378b60 | | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | | key_name | - | | metadata | {} | | name | vm-1 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | 84827057a7444354b0bff11566ccb80b | | updated | 2014-12-21T15:59:03Z | | user_id | 9d5fd9947d154a2db396fce177f1f83c | +--------------------------------------+----------------------------------------------------------------+ localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-2 vm-2 +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | 
OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | instance-00000067 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | 2kXQpV2u9TVv | | config_drive | | | created | 2014-12-21T15:59:55Z | | flavor | m1.tiny (1) | | hostId | | | id | ce1b5dca-a844-4c59-bb00-39a617646c59 | | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | | key_name | - | | metadata | {} | | name | vm-2 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | 84827057a7444354b0bff11566ccb80b | | updated | 2014-12-21T15:59:55Z | | user_id | 9d5fd9947d154a2db396fce177f1f83c | +--------------------------------------+????????????????????????????????+ However, both VMs ended up at compute host qa5: localadmin at qa4:~/devstack$ nova hypervisor-servers q +--------------------------------------+-------------------+---------------+---------------------+ | ID | Name | Hypervisor ID | Hypervisor Hostname | +--------------------------------------+-------------------+---------------+---------------------+ | 854acae9-b718-4ea5-bc28-e0bc46378b60 | instance-00000066 | 1 | qa5 | | ce1b5dca-a844-4c59-bb00-39a617646c59 | instance-00000067 | 1 | qa5 | +--------------------------------------+-------------------+---------------+---------------------+ localadmin at qa4:~/devstack$ nova show vm-1 +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | az-1 | | OS-EXT-SRV-ATTR:host | qa5 | | OS-EXT-SRV-ATTR:hypervisor_hostname | qa5 | | OS-EXT-SRV-ATTR:instance_name | 
instance-00000066 | | OS-EXT-STS:power_state | 1 | | OS-EXT-STS:task_state | - | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2014-12-21T16:03:15.000000 | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2014-12-21T15:59:03Z | | flavor | m1.tiny (1) | | hostId | 89119faac9345b51f185bd8b6c2e091644f1544cd523067ecce64613 | | id | 854acae9-b718-4ea5-bc28-e0bc46378b60 | | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | | key_name | - | | metadata | {} | | name | vm-1 | | os-extended-volumes:volumes_attached | [] | | private network | 10.0.0.70 | | progress | 0 | | security_groups | default | | status | ACTIVE | | tenant_id | 84827057a7444354b0bff11566ccb80b | | updated | 2014-12-21T15:59:11Z | | user_id | 9d5fd9947d154a2db396fce177f1f83c | +--------------------------------------+----------------------------------------------------------------+ localadmin at qa4:~/devstack$ nova show vm-2 +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | az-1 | | OS-EXT-SRV-ATTR:host | qa5 | | OS-EXT-SRV-ATTR:hypervisor_hostname | qa5 | | OS-EXT-SRV-ATTR:instance_name | instance-00000067 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | spawning | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2014-12-21T15:59:55Z | | flavor | m1.tiny (1) | | hostId | 89119faac9345b51f185bd8b6c2e091644f1544cd523067ecce64613 | | id | ce1b5dca-a844-4c59-bb00-39a617646c59 | | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | | key_name | - | | metadata | {} | | name | vm-2 | | os-extended-volumes:volumes_attached | [] | | private 
network | 10.0.0.71 | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | 84827057a7444354b0bff11566ccb80b | | updated | 2014-12-21T15:59:56Z | | user_id | 9d5fd9947d154a2db396fce177f1f83c | +--------------------------------------+----------------------------------------------------------------+ Is it supposed to work this way? Do I missed something here? Thanks, Danny _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Mon Dec 22 15:34:20 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Mon, 22 Dec 2014 21:34:20 +0600 Subject: [openstack-dev] [mistral] Team meeting - 12/22/2014 Message-ID: Hi, Reminding that we have a team meeting today at #openstack-meeting at 16.00 UTC Review action items Current status (progress, issues, roadblocks, further plans) "Kilo-1" scope and blueprints "for-each" Scoping (global, local etc.) Load testing Open discussion (see https://wiki.openstack.org/wiki/Meetings/MistralAgenda to find the agenda and the meeting archive) Renat Akhmerov @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.griffith8 at gmail.com Mon Dec 22 15:42:09 2014 From: john.griffith8 at gmail.com (John Griffith) Date: Mon, 22 Dec 2014 08:42:09 -0700 Subject: [openstack-dev] [OpenStack-Dev] Logging formats and i18n Message-ID: Lately (on the Cinder team at least) there's been a lot of disagreement in reviews regarding the proper way to do LOG messages correctly. Use of '%' vs ',' in the formatting of variables etc. We do have the oslo i18n guidelines page here [1], which helps a lot but there's some disagreement on a specific case here. 
Do we have a set answer on:

LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})

vs

LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2})

It's always fun when one person provides a -1 for the first usage; the submitter changes it and another reviewer gives a -1 and says, no, it should be the other way.

I'm hoping maybe somebody on the oslo team can provide an authoritative answer and we can then update the example page referenced in [1] to clarify this particular case.

Thanks,
John

[1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html

From gkotton at vmware.com Mon Dec 22 15:51:48 2014
From: gkotton at vmware.com (Gary Kotton)
Date: Mon, 22 Dec 2014 15:51:48 +0000
Subject: [openstack-dev] [nova][vmware] Canceling VMware meeting 12/24 and 12/31
Message-ID:

Hi,

I am not sure that we will have enough people around for the upcoming meetings. I suggest that we cancel them and resume in the New Year. Happy holidays to all!

A luta continua

Gary

From thierry at openstack.org Mon Dec 22 16:01:50 2014
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 22 Dec 2014 17:01:50 +0100
Subject: [openstack-dev] No Cross-project meeting nor 1:1 syncs for next two weeks
Message-ID: <5498406E.1040507@openstack.org>

PTLs and others,

As a reminder, we'll be skipping the cross-project meeting (normally held on Tuesdays at 21:00 UTC) for the next two weeks. The next meeting will be on January 6th.

We'll also skip the 1:1 syncs between release liaisons and release management (normally held on Tuesdays and Thursdays) for the next two weeks. If you have anything urgent to discuss, don't hesitate to ping me on #openstack-relmgr-office.

Enjoy the end-of-year holiday season!
--
Thierry Carrez (ttx)

From dougw at a10networks.com Mon Dec 22 16:47:24 2014
From: dougw at a10networks.com (Doug Wiegley)
Date: Mon, 22 Dec 2014 16:47:24 +0000
Subject: [openstack-dev] [neutron][lbaas] meetings during holidays
In-Reply-To: References: Message-ID:

Canceled. The next lbaas meeting will be 1/6. Happy holidays.

Thanks,
doug

On 12/19/14, 11:33 AM, "Doug Wiegley" wrote:

>Hi all,
>
>Anyone have big agenda items for the 12/23 or 12/30 meeting? If not, I'd
>suggest we cancel those two meetings, and bring up anything small during
>the on-demand portion of the neutron meetings.
>
>If I don't hear anything by Monday, we will cancel those two meetings.
>
>Thanks,
>Doug

From aurlapova at mirantis.com Mon Dec 22 16:49:03 2014
From: aurlapova at mirantis.com (Anastasia Urlapova)
Date: Mon, 22 Dec 2014 20:49:03 +0400
Subject: [openstack-dev] [Fuel] Feature delivery rules and automated tests
In-Reply-To: References: Message-ID:

Mike, Dmitry, team,
let me add my five cents: tests per feature have to run on CI before SCF, which means that the jobs configuration also has to be implemented.

On Wed, Dec 17, 2014 at 7:33 AM, Mike Scherbakov wrote:

> I fully support the idea.
>
> A Feature Lead has to know that his feature is under threat if it's not yet
> covered by system tests (unit/integration tests are not enough!!!), and
> should proactively work with QA engineers to get tests implemented and
> passing before SCF.
>
> On Fri, Dec 12, 2014 at 5:55 PM, Dmitry Pyzhov
> wrote:
>>
>> Guys,
>>
>> we've done a good job in 6.0. Most of the features were merged before
>> feature freeze. Our QA engineers were involved in testing even earlier. It was
>> much better than before.
>>
>> We had a discussion with Anastasia. There were several bug reports for
>> features yesterday, far beyond HCF. So we still have a long way to go to be
>> perfect. We should add one rule: we need to have automated tests before HCF.
>>
>> Actually, we should have results of these tests just after FF.
>> It is quite challenging because we have a short development cycle. So my
>> proposal is to require full deployment and a run of the automated tests for each
>> feature before soft code freeze. And it needs to be tracked in checklists
>> and on feature syncups.
>>
>> Your opinion?
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Mike Scherbakov
> #mihgen
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From lyz at princessleia.com Mon Dec 22 16:55:43 2014
From: lyz at princessleia.com (Elizabeth K. Joseph)
Date: Mon, 22 Dec 2014 08:55:43 -0800
Subject: [openstack-dev] [Infra] Meeting Tuesday December 23rd at 19:00 UTC
Message-ID:

Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly meeting on Tuesday December 23rd, at 19:00 UTC in #openstack-meeting.

Meeting agenda available here: https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is welcome to add agenda items)

Everyone interested in infrastructure and the process surrounding automated testing and deployment is encouraged to attend.
Meeting log and minutes from the last meeting are available here:

Minutes: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.html
Minutes (text): http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.txt
Log: http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.log.html

--
Elizabeth Krumbach Joseph || Lyz || pleia2

From randall.burt at RACKSPACE.COM Mon Dec 22 17:02:47 2014
From: randall.burt at RACKSPACE.COM (Randall Burt)
Date: Mon, 22 Dec 2014 17:02:47 +0000
Subject: [openstack-dev] [Heat] How can I write at milestone section of blueprint?
In-Reply-To: <20141222151911.4F09.E1E9C6FF@jp.fujitsu.com>
References: <20141219101748.GD22503@t430slt.redhat.com> <5CD1F2A3-597C-42FB-B5F6-F9AE026FEDFB@rackspace.com> <20141222151911.4F09.E1E9C6FF@jp.fujitsu.com>
Message-ID: <698A6975-A43C-47ED-B0D0-0DA4CC5C7E69@rackspace.com>

It's been discussed at several summits. We have settled on a general solution using Zaqar, but no work has been done that I know of. I was just pointing out that similar blueprints/specs exist and you may want to look through those to get some ideas about writing your own and/or basing your proposal off of one of them.

On Dec 22, 2014, at 12:19 AM, Yasunori Goto wrote:

> Randall-san,
>
>> There should already be blueprints in launchpad for very similar functionality.
>> For example: https://blueprints.launchpad.net/heat/+spec/lifecycle-callbacks.
>> While that specifies Heat sending notifications to the outside world,
>> there has been discussion around debugging that would allow the receiver to
>> send notifications back. I only point this out so you can see there should be
>> similar blueprints and specs that you can reference and use as examples.
>
> Thank you for pointing it out.
> But do you know the current status of it?
> Though the above blueprint is not approved, and it seems to have been discarded...
>
> Bye,
>
>> On Dec 19, 2014, at 4:17 AM, Steven Hardy
>> wrote:
>>
>>> On Fri, Dec 19, 2014 at 05:02:04PM +0900, Yasunori Goto wrote:
>>>>
>>>> Hello,
>>>>
>>>> This is my first mail to the OpenStack community,
>>>
>>> Welcome! :)
>>>
>>>> and I have a small question about how to write a blueprint for Heat.
>>>>
>>>> Currently our team would like to propose 2 interfaces
>>>> for user operations in HOT.
>>>> (One is an "Event handler", which is to notify a user-defined event to heat.
>>>> The other is the definition of the action taken when heat catches the above notification.)
>>>> So, I'm preparing the blueprint for it.
>>>
>>> Please include details of the exact use-case, e.g. the problem you're trying
>>> to solve (not just the proposed solution), as it's possible we can suggest
>>> solutions based on existing interfaces.
>>>
>>>> However, I can not find how I should fill in the milestone section of the blueprint.
>>>>
>>>> The Heat blueprint template has a section for Milestones:
>>>> "Milestones -- Target Milestone for completion:"
>>>>
>>>> But I don't think I can decide it by myself.
>>>> In my understanding, it should be decided by the PTL.
>>>
>>> Normally, it's decided by when the person submitting the spec expects to
>>> finish writing the code by. The PTL doesn't really have much control over
>>> that ;)
>>>
>>>> In addition, the above request of ours probably will not finish
>>>> by Kilo. I suppose it will be the "L" version or later.
>>>
>>> So to clarify, you want to propose the feature, but you're not planning on
>>> working on it (e.g. implementing it) yourself?
>>>
>>>> So, what should I write at this section?
>>>> "Kilo-x", "L version", or empty?
>>>
>>> As has already been mentioned, it doesn't matter that much - I see it as a
>>> statement of intent from developers. If you're just requesting a feature,
>>> you can even leave it blank if you want and we'll update it when an
>>> assignee is found (e.g. during the spec review).
>>> >>> Thanks, >>> >>> Steve >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Yasunori Goto > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Mon Dec 22 17:03:10 2014 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 22 Dec 2014 11:03:10 -0600 Subject: [openstack-dev] [OpenStack-Dev] Logging formats and i18n In-Reply-To: References: Message-ID: <54984ECE.4000707@nemebean.com> On 12/22/2014 09:42 AM, John Griffith wrote: > Lately (on the Cinder team at least) there's been a lot of > disagreement in reviews regarding the proper way to do LOG messages > correctly. Use of '%' vs ',' in the formatting of variables etc. > > We do have the oslo i18n guidelines page here [1], which helps a lot > but there's some disagreement on a specific case here. Do we have a > set answer on: > > LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2}) > > vs > > LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2}) This is the preferred way. Note that this is just a multi-variable variation on http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages and the reasoning discussed there applies. I'd be curious why some people prefer the % version because to my knowledge that's not recommended even for untranslated log messages. 
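The difference between the two call styles is easy to demonstrate with plain stdlib logging (no oslo involved; `Boom` here is a contrived stand-in for a value that is expensive, or unsafe, to format). The `%` form interpolates immediately, even when the record will be suppressed; the `,` form defers interpolation to the logging machinery, which skips it entirely for disabled levels:

```python
import logging

logging.basicConfig(level=logging.WARNING, format='%(message)s')
LOG = logging.getLogger(__name__)

class Boom:
    """Stand-in for a value whose string conversion is costly (or raises)."""
    def __str__(self):
        raise RuntimeError('should never be formatted')

# Eager '%' interpolation builds the message string up front, so this
# would raise RuntimeError even though INFO records are suppressed:
#   LOG.info('some message: v1=%(v1)s' % {'v1': Boom()})

# Lazy ',' form: logging interpolates only if the record is actually
# emitted, so Boom.__str__ is never called here.
LOG.info('some message: v1=%(v1)s', {'v1': Boom()})
print('lazy form skipped formatting entirely')
```

With oslo's `_LI()` marker the call shape is the same; the marker only wraps the format string, so the argument-passing question is independent of translation.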
> > It's always fun when one person provides a -1 for the first usage; the > submitter changes it and another reviewer gives a -1 and says, no it > should be the other way. > > I'm hoping maybe somebody on the oslo team can provide an > authoritative answer and we can then update the example page > referenced in [1] to clarify this particular case. > > Thanks, > John > > [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zbitter at redhat.com Mon Dec 22 17:05:17 2014 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 22 Dec 2014 12:05:17 -0500 Subject: [openstack-dev] Fw: [Heat] Multiple_Routers_Topoloy In-Reply-To: References: Message-ID: <54984F4D.40006@redhat.com> The -dev mailing list is not for usage questions. Please post your question to ask.openstack.org and include the text of the error message you get when trying to add a RouterInterface. cheers, Zane.
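For the archives: two Neutron routers cannot both claim a subnet's gateway IP, so the usual way to link them is a transit network with one explicit port per router, plus static routes on each router toward the other's subnets. A minimal, untested sketch in the same template format as the question below (all resource names and the 192.168.200.0/24 CIDR are illustrative, not from the thread):

```yaml
Resources:
  transit_net:
    Type: OS::Neutron::Net
  transit_subnet:
    Properties:
      cidr: 192.168.200.0/24
      enable_dhcp: 'False'
      network_id: {Ref: transit_net}
    Type: OS::Neutron::Subnet
  # One port per router, each with its own fixed IP on the transit subnet.
  router1_transit_port:
    Properties:
      network_id: {Ref: transit_net}
      fixed_ips:
      - {subnet_id: {Ref: transit_subnet}, ip_address: 192.168.200.1}
    Type: OS::Neutron::Port
  router2_transit_port:
    Properties:
      network_id: {Ref: transit_net}
      fixed_ips:
      - {subnet_id: {Ref: transit_subnet}, ip_address: 192.168.200.2}
    Type: OS::Neutron::Port
  # Attach by port_id rather than subnet_id, so neither router tries to
  # take the subnet's gateway IP.
  router1_transit_if:
    Properties:
      router_id: {Ref: heat_router_01}
      port_id: {Ref: router1_transit_port}
    Type: OS::Neutron::RouterInterface
  router2_transit_if:
    Properties:
      router_id: {Ref: heat_router_02}
      port_id: {Ref: router2_transit_port}
    Type: OS::Neutron::RouterInterface
```

Attaching the ports alone is not enough: each router still needs a route toward the other's tenant subnet via its peer's transit address (e.g. with `neutron router-update`), since RouterInterface only wires up the interfaces.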
On 22/12/14 04:18, Rao Shweta wrote: > > > Hi All > > I am working on openstack Heat and i wanted to make below topolgy using > heat template : > > > > For this i am using a template as given : > > AWSTemplateFormatVersion: '2010-09-09' > Description: Sample Heat template that spins up multiple instances and a > private network > (JSON) > Resources: > heat_network_01: > Properties: {name: heat-network-01} > Type: OS::Neutron::Net > heat_network_02: > Properties: {name: heat-network-02} > Type: OS::Neutron::Net > heat_router_01: > Properties: {admin_state_up: 'True', name: heat-router-01} > Type: OS::Neutron::Router > heat_router_02: > Properties: {admin_state_up: 'True', name: heat-router-02} > Type: OS::Neutron::Router > heat_router_int0: > Properties: > router_id: {Ref: heat_router_01} > subnet_id: {Ref: heat_subnet_01} > Type: OS::Neutron::RouterInterface > heat_router_int1: > Properties: > router_id: {Ref: heat_router_02} > subnet_id: {Ref: heat_subnet_02} > Type: OS::Neutron::RouterInterface > heat_subnet_01: > Properties: > cidr: 10.10.10.0/24 > dns_nameservers: [172.16.1.11, 172.16.1.6] > enable_dhcp: 'True' > gateway_ip: 10.10.10.254 > name: heat-subnet-01 > network_id: {Ref: heat_network_01} > Type: OS::Neutron::Subnet > heat_subnet_02: > Properties: > cidr: 10.10.11.0/24 > dns_nameservers: [172.16.1.11, 172.16.1.6] > enable_dhcp: 'True' > gateway_ip: 10.10.11.254 > name: heat-subnet-01 > network_id: {Ref: heat_network_02} > Type: OS::Neutron::Subnet > instance0: > Properties: > flavor: m1.nano > image: cirros-0.3.2-x86_64-uec > name: heat-instance-01 > networks: > - port: {Ref: instance0_port0} > Type: OS::Nova::Server > instance0_port0: > Properties: > admin_state_up: 'True' > network_id: {Ref: heat_network_01} > Type: OS::Neutron::Port > instance1: > Properties: > flavor: m1.nano > image: cirros-0.3.2-x86_64-uec > name: heat-instance-02 > networks: > - port: {Ref: instance1_port0} > Type: OS::Nova::Server > instance1_port0: > Properties: > 
admin_state_up: 'True' > network_id: {Ref: heat_network_01} > Type: OS::Neutron::Port > instance11: > Properties: > flavor: m1.nano > image: cirros-0.3.2-x86_64-uec > name: heat-instance11-01 > networks: > - port: {Ref: instance11_port0} > Type: OS::Nova::Server > instance11_port0: > Properties: > admin_state_up: 'True' > network_id: {Ref: heat_network_02} > Type: OS::Neutron::Port > instance12: > Properties: > flavor: m1.nano > image: cirros-0.3.2-x86_64-uec > name: heat-instance12-02 > networks: > - port: {Ref: instance12_port0} > Type: OS::Nova::Server > instance12_port0: > Properties: > admin_state_up: 'True' > network_id: {Ref: heat_network_02} > Type: OS::Neutron::Port > > I am able to create the topology using the template, but I am not able to > connect the two routers, nor can I find a template example on the internet > that connects two routers. Can you please help me with: > > 1.) Can we connect two routers? I tried making an interface on > router 1 and connecting it to subnet 2, which produces an error. > > heat_router_int0: > Properties: > router_id: {Ref: heat_router_01} > subnet_id: {Ref: heat_subnet_02} > > Can you please guide me on how we can connect routers, or have a link between > routers, using a template. > > 2.) Can you please forward a link or an example template that I can > refer to when implementing the required topology using a heat template. > > Waiting for a response > > > > Thank you > > Regards > Shweta Rao > Mailto: rao.shweta at tcs.com > Website: http://www.tcs.com > ____________________________________________ > > =====-----=====-----===== > Notice: The information contained in this e-mail > message and/or attachments to it may contain > confidential or privileged information. If you are > not the intended recipient, any dissemination, use, > review, distribution, printing or copying of the > information contained in this e-mail message > and/or attachments to it are strictly prohibited.
If > you have received this communication in error, > please notify us by reply e-mail or telephone and > immediately and permanently delete the message > and any attachments. Thank you > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From vishvananda at gmail.com Mon Dec 22 17:12:34 2014 From: vishvananda at gmail.com (Vishvananda Ishaya) Date: Mon, 22 Dec 2014 09:12:34 -0800 Subject: [openstack-dev] [nova] Setting MTU size for tap device In-Reply-To: References: Message-ID: <13D33D15-BA9A-4EE5-91CD-8B58250F0083@gmail.com> It makes sense to add it to me. Libvirt sets the mtu from the bridge when it creates the tap device, but if you are creating it manually you might need to set it to something else. Vish On Dec 17, 2014, at 10:29 PM, Ryu Ishimoto wrote: > Hi All, > > I noticed that in linux_net.py, the method to create a tap interface[1] does not let you set the MTU size. In other places, I see calls made to set the MTU of the device [2]. > > I'm wondering if there is any technical reasons to why we can't also set the MTU size when creating tap interfaces for general cases. In certain overlay solutions, this would come in handy. If there isn't any, I would love to submit a patch to accomplish this. > > Thanks in advance! > > Ryu > > [1] https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L1374 > [2] https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L1309 > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kurt.f.martin at hp.com Mon Dec 22 17:16:35 2014 From: kurt.f.martin at hp.com (Martin, Kurt Frederick (ESSN Storage MSDU)) Date: Mon, 22 Dec 2014 17:16:35 +0000 Subject: [openstack-dev] [Cinder] Listing of backends In-Reply-To: References: Message-ID: <297F9F867E2AE6468F458AFF0DD74EC104AE4032@G5W2742.americas.hpqcorp.net> You can set/unset key-value pairs on your volume type with the cinder type-key command. Or you can also set them in the Horizon Admin console under the Admin->Volumes->Volume Types tab, then select the "View Extra Specs" Action. $ cinder help type-key usage: cinder type-key <vtype> <action> [<key=value> ...] Sets or unsets extra_spec for a volume type. Positional arguments: <vtype> Name or ID of volume type. <action> The action. Valid values are 'set' or 'unset.' <key=value> The extra specs key and value pair to set or unset. For unset, specify only the key. e.g. cinder type-key GoldVolumeType set volume_backend_name=my_iscsi_backend ~Kurt From: Pradip Mukhopadhyay [mailto:pradip.interra at gmail.com] Sent: Sunday, December 07, 2014 4:36 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Cinder] Listing of backends Thanks! One more question. Is there any equivalent API to add keys to the volume-type? I understand we have APIs for creating a volume-type, but how about adding a key-value pair (say I want to add a key to the volume-type as backend-name="my_iscsi_backend")? Thanks, Pradip On Sun, Dec 7, 2014 at 4:25 PM, Duncan Thomas > wrote: See https://review.openstack.org/#/c/119938/ - now merged. I don't believe the python-cinderclient side work has been done yet, nor anything in Horizon, but the API itself is now there. On 7 December 2014 at 09:53, Pradip Mukhopadhyay > wrote: Hi, Is there a way to find out/list down the backends discovered for Cinder? There is, I guess, no API to get the list of backends.
Thanks, Pradip _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Duncan Thomas _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From vishvananda at gmail.com Mon Dec 22 17:30:35 2014 From: vishvananda at gmail.com (Vishvananda Ishaya) Date: Mon, 22 Dec 2014 09:30:35 -0800 Subject: [openstack-dev] [OpenStack-dev][nova-net]Floating ip assigned as /32 from the start of the range In-Reply-To: References: Message-ID: <5559D778-CCE4-416B-8327-8B0B17FDA82D@gmail.com> Floating ips are always added to the host as a /32. You will need one ip on the compute host from the floating range with the /16 prefix (which it will use for natting instances without floating ips as well). In other words you should manually assign an ip from 10.100.130.X/16 to each compute node and set that value as routing_source_ip=10.100.130.X (or my_ip) in nova.conf. Vish On Dec 19, 2014, at 7:00 AM, Eduard Matei wrote: > Hi, > I'm trying to create a vm and assign it an ip in range 10.100.130.0/16. > On the host, the ip is assigned to br100 as inet 10.100.0.3/32 scope global br100 > instead of 10.100.130.X/16, so it's not reachable from the outside. > > The localrc.conf : > FLOATING_RANGE=10.100.130.0/16 > > Any idea what to change? > > Thanks, > Eduard > > > -- > Eduard Biceri Matei, Senior Software Developer > www.cloudfounders.com | eduard.matei at cloudfounders.com > > > > CloudFounders, The Private Cloud Software Company > > Disclaimer: > This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. 
> If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. > E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramy.asselin at hp.com Mon Dec 22 17:40:20 2014 From: ramy.asselin at hp.com (Asselin, Ramy) Date: Mon, 22 Dec 2014 17:40:20 +0000 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4240C5@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A424BC0@G4W3223.americas.hpqcorp.net> Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A435949@G4W3223.americas.hpqcorp.net> Eduard, A few items you can try: 1. Double-check that the job is in Jenkins a. If not, then that's the issue 2. Check that the processes are running correctly a.
ps -ef | grep zuul i. Should have 2 zuul-server & 1 zuul-merger b. ps -ef | grep Jenkins i. Should have 1 /usr/bin/daemon --name=jenkins & 1 /usr/bin/java 3. In Jenkins, Manage Jenkins, Gearman Plugin Config, "Test Connection" 4. Stop and start Zuul & Jenkins a. service jenkins stop b. service zuul stop c. service zuul-merger stop d. service jenkins start e. service zuul start f. service zuul-merger start Otherwise, I suggest you ask in the #openstack-infra IRC channel. Ramy From: Eduard Matei [mailto:eduard.matei at cloudfounders.com] Sent: Sunday, December 21, 2014 11:01 PM To: Asselin, Ramy Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Thanks Ramy, Unfortunately i don't see dsvm-tempest-full in the "status" output. Any idea how i can get it "registered"? Thanks, Eduard On Fri, Dec 19, 2014 at 9:43 PM, Asselin, Ramy > wrote: Eduard, If you run this command, you can see which jobs are registered: >telnet localhost 4730 >status There are 3 numbers per job: queued, running and workers that can run the job. Make sure the job is listed & the last number ("workers") is non-zero. To run the job again without submitting a patch set, leave a "recheck" comment on the patch & make sure your zuul layout.yaml is configured to trigger off that comment. For example [1]. Be sure to use the sandbox repository. [2] I'm not aware of other ways.
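The gearman admin `status` reply is line-oriented — a function name plus three tab-separated counters (queued, running, registered workers), terminated by a line containing only `.` — so the "is my job registered" check can be scripted. A rough sketch (the helper name and the sample data are illustrative, not from the thread):

```python
def unregistered_jobs(status_text, wanted):
    """Return the jobs from `wanted` that have no gearman workers registered.

    `status_text` is the raw reply from sending "status" to the gearman
    server (port 4730); each line is: function<TAB>queued<TAB>running<TAB>workers.
    """
    workers = {}
    for line in status_text.splitlines():
        if line.strip() == '.':       # end-of-response marker
            break
        parts = line.split('\t')
        if len(parts) == 4:
            name, _queued, _running, avail = parts
            workers[name] = int(avail)
    # A job zuul can launch must have at least one available worker.
    return [job for job in wanted if workers.get(job, 0) == 0]

sample = ('build:noop-check-communication\t0\t0\t1\n'
          'build:dsvm-tempest-full\t0\t0\t0\n'
          '.\n')
print(unregistered_jobs(sample, ['build:dsvm-tempest-full',
                                 'build:noop-check-communication']))
```

Against the sample data this reports `build:dsvm-tempest-full` as unregistered, which matches the NOT_REGISTERED symptom zuul logs when Jenkins never advertised the job to gearman.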
Ramy [1] https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L20 [2] https://github.com/openstack-dev/sandbox From: Eduard Matei [mailto:eduard.matei at cloudfounders.com] Sent: Friday, December 19, 2014 3:36 AM To: Asselin, Ramy Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi all, After a little struggle with the config scripts i managed to get a working setup that is able to process openstack-dev/sandbox and run noop-check-comunication. Then, i tried enabling dsvm-tempest-full job but it keeps returning "NOT_REGISTERED" 2014-12-19 12:07:14,683 INFO zuul.IndependentPipelineManager: Change depends on changes [] 2014-12-19 12:07:14,683 INFO zuul.Gearman: Launch job noop-check-communication for change with dependent changes [] 2014-12-19 12:07:14,693 INFO zuul.Gearman: Launch job dsvm-tempest-full for change with dependent changes [] 2014-12-19 12:07:14,694 ERROR zuul.Gearman: Job is not registered with Gearman 2014-12-19 12:07:14,694 INFO zuul.Gearman: Build complete, result NOT_REGISTERED 2014-12-19 12:07:14,765 INFO zuul.Gearman: Build name: build:noop-check-communication unique: 333c6ea077324a788e3c37a313d872c5> started 2014-12-19 12:07:14,910 INFO zuul.Gearman: Build name: build:noop-check-communication unique: 333c6ea077324a788e3c37a313d872c5> complete, result SUCCESS 2014-12-19 12:07:14,916 INFO zuul.IndependentPipelineManager: Reporting change , actions: [, {'verified': -1}>] Nodepoold's log show no reference to dsvm-tempest-full and neither jenkins' logs. Any idea how to enable this job? Also, i got the "Cloud provider" setup and i can access it from the jenkins master. Any idea how i can manually trigger dsvm-tempest-full job to run and test the cloud provider without having to push a review to Gerrit? Thanks, Eduard On Thu, Dec 18, 2014 at 7:52 PM, Eduard Matei > wrote: Thanks for the input. 
I managed to get another master working (on Ubuntu 13.10), again with some issues since it was already setup. I'm now working towards setting up the slave. Will add comments to those reviews. Thanks, Eduard On Thu, Dec 18, 2014 at 7:42 PM, Asselin, Ramy > wrote: Yes, Ubuntu 12.04 is tested as mentioned in the readme [1]. Note that the referenced script is just a wrapper that pulls all the latest from various locations in openstack-infra, e.g. [2]. Ubuntu 14.04 support is WIP [3] FYI, there?s a spec to get an in-tree 3rd party ci solution [4]. Please add your comments if this interests you. [1] https://github.com/rasselin/os-ext-testing/blob/master/README.md [2] https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29 [3] https://review.openstack.org/#/c/141518/ [4] https://review.openstack.org/#/c/139745/ From: Punith S [mailto:punith.s at cloudbyte.com] Sent: Thursday, December 18, 2014 3:12 AM To: OpenStack Development Mailing List (not for usage questions); Eduard Matei Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi Eduard we tried running https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh on ubuntu master 12.04, and it appears to be working fine on 12.04. thanks On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei > wrote: Hi, Seems i can't install using puppet on the jenkins master using install_master.sh from https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh because it's running Ubuntu 11.10 and it appears unsupported. I managed to install puppet manually on master and everything else fails So i'm trying to manually install zuul and nodepool and jenkins job builder, see where i end up. The slave looks complete, got some errors on running install_slave so i ran parts of the script manually, changing some params and it appears installed but no way to test it without the master. Any ideas welcome. 
Thanks, Eduard On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy > wrote: Manually running the script requires a few environment settings. Take a look at the README here: https://github.com/openstack-infra/devstack-gate Regarding cinder, I?m using this repo to run our cinder jobs (fork from jaypipes). https://github.com/rasselin/os-ext-testing Note that this solution doesn?t use the Jenkins gerrit trigger pluggin, but zuul. There?s a sample job for cinder here. It?s in Jenkins Job Builder format. https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample You can ask more questions in IRC freenode #openstack-cinder. (irc# asselin) Ramy From: Eduard Matei [mailto:eduard.matei at cloudfounders.com] Sent: Tuesday, December 16, 2014 12:41 AM To: Bailey, Darragh Cc: OpenStack Development Mailing List (not for usage questions); OpenStack Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi, Can someone point me to some working documentation on how to setup third party CI? (joinfu's instructions don't seem to work, and manually running devstack-gate scripts fails: Running gate_hook Job timeout set to: 163 minutes timeout: failed to run command ?/opt/stack/new/devstack-gate/devstack-vm-gate.sh?: No such file or directory ERROR: the main setup script run by this job failed - exit code: 127 please look at the relevant log files to determine the root cause Cleaning up host ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz) Build step 'Execute shell' marked build as failure. I have a working Jenkins slave with devstack and our internal libraries, i have Gerrit Trigger Plugin working and triggering on patches created, i just need the actual job contents so that it can get to comment with the test results. 
Thanks, Eduard On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei > wrote: Hi Darragh, thanks for your input I double checked the job settings and fixed it: - build triggers is set to Gerrit event - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin and tested separately) - Trigger on: Patchset Created - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: Type: Path, Pattern: ** (was Type Plain on both) Now the job is triggered by commit on openstack-dev/sandbox :) Regarding the Query and Trigger Gerrit Patches, i found my patch using query: status:open project:openstack-dev/sandbox change:139585 and i can trigger it manually and it executes the job. But i still have the problem: what should the job do? It doesn't actually do anything, it doesn't run tests or comment on the patch. Do you have an example of job? Thanks, Eduard On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh > wrote: Hi Eduard, I would check the trigger settings in the job, particularly which "type" of pattern matching is being used for the branches. Found it tends to be the spot that catches most people out when configuring jobs with the Gerrit Trigger plugin. If you're looking to trigger against all branches then you would want "Type: Path" and "Pattern: **" appearing in the UI. If you have sufficient access using the 'Query and Trigger Gerrit Patches' page accessible from the main view will make it easier to confirm that your Jenkins instance can actually see changes in gerrit for the given project (which should mean that it can see the corresponding events as well). Can also use the same page to re-trigger for PatchsetCreated events to see if you've set the patterns on the job correctly. 
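On Eduard's recurring question of what the job itself should do: in the Jenkins Job Builder format used by the dsvm-cinder-driver sample referenced earlier in the thread, a third-party CI job is essentially a shell builder that exports the devstack-gate environment and runs the gate wrapper, plus a publisher that copies the logs somewhere public. A stripped-down, hypothetical sketch (job name, exported values and log server are placeholders, not a working configuration):

```yaml
- job:
    name: dsvm-tempest-my-driver
    node: devstack_slave
    builders:
      - shell: |
          #!/bin/bash -xe
          export ZUUL_PROJECT=openstack/cinder
          export DEVSTACK_GATE_TEMPEST=1
          # fetch devstack-gate, then hand control to its wrapper script
          git clone https://git.openstack.org/openstack-infra/devstack-gate \
              /opt/git/devstack-gate
          /opt/git/devstack-gate/devstack-vm-gate-wrap.sh
    publishers:
      - scp:
          site: 'my-logserver.example.com'
          files:
            - target: 'logs/$ZUUL_CHANGE/$ZUUL_PATCHSET'
              source: 'logs/**'
              keep-hierarchy: true
```

The actual sample linked upthread is the better starting point; this only shows the overall shape of builder plus log publisher.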
Regards, Darragh Bailey "Nothing is foolproof to a sufficiently talented fool" - Unknown On 08/12/14 14:33, Eduard Matei wrote: > Resending this to dev ML as it seems i get quicker response :) > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > Patchset Created", chose as server the configured Gerrit server that > was previously tested, then added the project openstack-dev/sandbox > and saved. > I made a change on dev sandbox repo but couldn't trigger my job. > > Any ideas? > > Thanks, > Eduard > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > >> wrote: > > Hello everyone, > > Thanks to the latest changes to the creation of service accounts > process we're one step closer to setting up our own CI platform > for Cinder. > > So far we've got: > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > our storage solution) > - Service account configured and tested (can manually connect to > review.openstack.org and get events > and publish comments) > > Next step would be to set up a job to do the actual testing, this > is where we're stuck. > Can someone please point us to a clear example on how a job should > look like (preferably for testing Cinder on Kilo)? Most links > we've found are broken, or tools/scripts are no longer working. > Also, we cannot change the Jenkins master too much (it's owned by > Ops team and they need a list of tools/scripts to review before > installing/running so we're not allowed to experiment). 
> > Thanks, > Eduard > > -- > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com | eduard.matei at cloudfounders.com
> > > > > -- > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > > > > > *CloudFounders, The Private Cloud Software Company* > > Disclaimer: > This email and any files transmitted with it are confidential and > intended solely for the use of the individual or entity to whom they > are addressed.If you are not the named addressee or an employee or > agent responsible for delivering this message to the named addressee, > you are hereby notified that you are not authorized to read, print, > retain, copy or disseminate this message or any part of it. If you > have received this email in error we request you to notify us by reply > e-mail and to delete all electronic files of the message. If you are > not the intended recipient you are notified that disclosing, copying, > distributing or taking any action in reliance on the contents of this > information is strictly prohibited. E-mail transmission cannot be > guaranteed to be secure or error free as information could be > intercepted, corrupted, lost, destroyed, arrive late or incomplete, or > contain viruses. The sender therefore does not accept liability for > any errors or omissions in the content of this message, and shall have > no liability for any loss or damage suffered by the user, which arise > as a result of e-mail transmission. > > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.matei at cloudfounders.com CloudFounders, The Private Cloud Software Company Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. 
If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. -- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.matei at cloudfounders.com CloudFounders, The Private Cloud Software Company Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. 
E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.matei at cloudfounders.com CloudFounders, The Private Cloud Software Company Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. 
_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- regards, punith s cloudbyte.com -- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.matei at cloudfounders.com CloudFounders, The Private Cloud Software Company Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. -- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.matei at cloudfounders.com CloudFounders, The Private Cloud Software Company Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From eduard.matei at cloudfounders.com Mon Dec 22 17:43:55 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Mon, 22 Dec 2014 19:43:55 +0200 Subject: [openstack-dev] [OpenStack-dev][nova-net]Floating ip assigned as /32 from the start of the range In-Reply-To: <5559D778-CCE4-416B-8327-8B0B17FDA82D@gmail.com> References: <5559D778-CCE4-416B-8327-8B0B17FDA82D@gmail.com> Message-ID: Thanks, I managed to get it working by deleting the public pool (which was the whole 10.100.X.X subnet) and creating a new pool 10.100.129.X. This gives me control over which ips are assignable to the vms. Eduard. On Mon, Dec 22, 2014 at 7:30 PM, Vishvananda Ishaya wrote: > > Floating ips are always added to the host as a /32. You will need one ip > on the > compute host from the floating range with the /16 prefix (which it will > use for > natting instances without floating ips as well). > > In other words you should manually assign an ip from 10.100.130.X/16 to > each > compute node and set that value as routing_source_ip=10.100.130.X (or > my_ip) in > nova.conf. > > Vish > On Dec 19, 2014, at 7:00 AM, Eduard Matei > wrote: > > Hi, > I'm trying to create a vm and assign it an ip in range 10.100.130.0/16. > On the host, the ip is assigned to br100 as inet 10.100.0.3/32 scope > global br100 > instead of 10.100.130.X/16, so it's not reachable from the outside. > > The localrc.conf : > FLOATING_RANGE=10.100.130.0/16 > > Any idea what to change?
> > Thanks, > Eduard > > > -- > > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > > > *CloudFounders, The Private Cloud Software Company*
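[Editor's note: Vish's point above — that a /32 on br100 covers only the single address, while reachability of the rest of the segment needs the /16 prefix — can be illustrated with Python's stdlib ipaddress module. This is an illustration only, not nova code, and the neighbour address is invented:]

```python
import ipaddress

# Invented example addresses matching the thread's 10.100.0.0/16 setup.
narrow = ipaddress.ip_interface("10.100.0.3/32")    # what landed on br100
wide = ipaddress.ip_interface("10.100.130.5/16")    # what was expected

# A /32 network covers exactly one address; a /16 covers the whole segment.
assert narrow.network.num_addresses == 1
assert wide.network.num_addresses == 65536

# A host in the floating range counts as on-link only under the /16 prefix:
neighbour = ipaddress.ip_address("10.100.130.77")
on_link_narrow = neighbour in narrow.network
on_link_wide = neighbour in wide.network
```

With the /32, the kernel has no on-link route to the rest of 10.100.0.0/16 via that interface, which is why the instance was unreachable from outside.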
> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- *Eduard Biceri Matei, Senior Software Developer* www.cloudfounders.com | eduard.matei at cloudfounders.com *CloudFounders, The Private Cloud Software Company* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From john.griffith8 at gmail.com Mon Dec 22 18:05:23 2014 From: john.griffith8 at gmail.com (John Griffith) Date: Mon, 22 Dec 2014 11:05:23 -0700 Subject: [openstack-dev] [OpenStack-Dev] Logging formats and i18n In-Reply-To: <54984ECE.4000707@nemebean.com> References: <54984ECE.4000707@nemebean.com> Message-ID: On Mon, Dec 22, 2014 at 10:03 AM, Ben Nemec wrote: > On 12/22/2014 09:42 AM, John Griffith wrote: >> Lately (on the Cinder team at least) there's been a lot of >> disagreement in reviews regarding the proper way to do LOG messages >> correctly. Use of '%' vs ',' in the formatting of variables etc. >> >> We do have the oslo i18n guidelines page here [1], which helps a lot >> but there's some disagreement on a specific case here. Do we have a >> set answer on: >> >> LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2}) >> >> vs >> >> LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2}) > > This is the preferred way. > > Note that this is just a multi-variable variation on > http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages > and the reasoning discussed there applies. > > I'd be curious why some people prefer the % version because to my > knowledge that's not recommended even for untranslated log messages. Not sure if it's that anybody has a preference as opposed to an interpretation, notice the recommendation for multi-vars in raise: # RIGHT raise ValueError(_('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2}) > >> >> >> It's always fun when one person provides a -1 for the first usage; the >> submitter changes it and another reviewer gives a -1 and says, no it >> should be the other way. >> >> I'm hoping maybe somebody on the olso team can provide an >> authoritative answer and we can then update the example page >> referenced in [1] to clarify this particular case. 
>> >> Thanks, >> John >> >> [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Mon Dec 22 18:05:36 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 22 Dec 2014 13:05:36 -0500 Subject: [openstack-dev] [OpenStack-Dev] Logging formats and i18n In-Reply-To: <54984ECE.4000707@nemebean.com> References: <54984ECE.4000707@nemebean.com> Message-ID: On Dec 22, 2014, at 12:03 PM, Ben Nemec wrote: > On 12/22/2014 09:42 AM, John Griffith wrote: >> Lately (on the Cinder team at least) there's been a lot of >> disagreement in reviews regarding the proper way to do LOG messages >> correctly. Use of '%' vs ',' in the formatting of variables etc. >> >> We do have the oslo i18n guidelines page here [1], which helps a lot >> but there's some disagreement on a specific case here. Do we have a >> set answer on: >> >> LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2}) >> >> vs >> >> LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2}) > > This is the preferred way. +1 > > Note that this is just a multi-variable variation on > http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages > and the reasoning discussed there applies. > > I'd be curious why some people prefer the % version because to my > knowledge that's not recommended even for untranslated log messages. > >> >> >> It's always fun when one person provides a -1 for the first usage; the >> submitter changes it and another reviewer gives a -1 and says, no it >> should be the other way. 
>> >> I'm hoping maybe somebody on the oslo team can provide an >> authoritative answer and we can then update the example page >> referenced in [1] to clarify this particular case. >> >> Thanks, >> John >> >> [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Mon Dec 22 18:13:27 2014 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 22 Dec 2014 13:13:27 -0500 Subject: [openstack-dev] [OpenStack-Dev] Logging formats and i18n In-Reply-To: References: <54984ECE.4000707@nemebean.com> Message-ID: <7CE2A17D-BB0D-4AC3-B411-7686384BBC8A@doughellmann.com> On Dec 22, 2014, at 1:05 PM, John Griffith wrote: > On Mon, Dec 22, 2014 at 10:03 AM, Ben Nemec wrote: >> On 12/22/2014 09:42 AM, John Griffith wrote: >>> Lately (on the Cinder team at least) there's been a lot of >>> disagreement in reviews regarding the proper way to do LOG messages >>> correctly. Use of '%' vs ',' in the formatting of variables etc. >>> >>> We do have the oslo i18n guidelines page here [1], which helps a lot >>> but there's some disagreement on a specific case here. Do we have a >>> set answer on: >>> >>> LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2}) >>> >>> vs >>> >>> LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2}) >> >> This is the preferred way. >> >> Note that this is just a multi-variable variation on >> http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages >> and the reasoning discussed there applies.
>> >> I'd be curious why some people prefer the % version because to my >> knowledge that's not recommended even for untranslated log messages. > > Not sure if it's that anybody has a preference as opposed to an > interpretation, notice the recommendation for multi-vars in raise: > > # RIGHT > raise ValueError(_('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2}) It's really not related to translation as much as the logging API itself. With the exception, you want to initialize the ValueError instance with a proper message as soon as you throw it because you don't know what the calling code might do with it. Therefore you use string interpolation inline. When you call into the logging subsystem, your call might be ignored based on the level of the message and the logging configuration. By letting the logging code do the string interpolation, you potentially skip the work of serializing variables to strings for messages that will be discarded, saving time and memory. These "rules" apply whether your messages are being translated or not, so even for debug log messages you should write: LOG.debug('some message: v1=%(v1)s v2=%(v2)s', {'v1': v1, 'v2': v2}) > >> >>> >>> It's always fun when one person provides a -1 for the first usage; the >>> submitter changes it and another reviewer gives a -1 and says, no it >>> should be the other way.
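[Editor's note: Doug's point about deferred interpolation is easy to demonstrate with the stdlib logging module alone. A minimal sketch — not oslo code; the CountingStr helper is invented purely for the demonstration:]

```python
import logging

class CountingStr:
    """Counts how many times it is serialized to a string."""
    def __init__(self):
        self.renders = 0

    def __str__(self):
        self.renders += 1
        return "value"

logging.basicConfig(level=logging.WARNING)  # DEBUG is disabled
log = logging.getLogger("demo")
obj = CountingStr()

# Eager '%' interpolation: the string is built even though the record
# is then discarded because DEBUG is not enabled.
log.debug("eager: v1=%s" % obj)
eager_renders = obj.renders

# Deferred ',' style: logging checks the level first and never formats,
# so no extra serialization happens for the discarded message.
log.debug("deferred: v1=%s", obj)
deferred_renders = obj.renders
```

The same reasoning applies to the dict form, LOG.debug('v1=%(v1)s', {'v1': v1}): the mapping is only applied to the format string when a handler actually emits the record.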
>>> >>> Thanks, >>> John >>> >>> [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From joe.gordon0 at gmail.com Mon Dec 22 18:20:35 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Mon, 22 Dec 2014 10:20:35 -0800 Subject: [openstack-dev] [nova][sriov] SRIOV related specs pending for approval In-Reply-To: References: Message-ID: On Fri, Dec 19, 2014 at 6:53 AM, Robert Li (baoli) wrote: > Hi Joe, > > See this thread on the SR-IOV CI from Irena and Sandhya: > > > http://lists.openstack.org/pipermail/openstack-dev/2014-November/050658.html > > > http://lists.openstack.org/pipermail/openstack-dev/2014-November/050755.html > > I believe that Intel is building a CI system to test SR-IOV as well. > Thanks for the clarification. 
> > Thanks, > Robert > > On 12/18/14, 9:13 PM, "Joe Gordon" wrote: > > On Thu, Dec 18, 2014 at 2:18 PM, Robert Li (baoli) > wrote: > >> Hi, >> >> During the Kilo summit, the folks in the pci passthrough and SR-IOV >> groups discussed what we'd like to achieve in this cycle, and the result >> was documented in this Etherpad: >> https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough >> >> To get the work going, we've submitted a few design specs: >> >> Nova: Live migration with macvtap SR-IOV >> https://blueprints.launchpad.net/nova/+spec/sriov-live-migration >> >> Nova: sriov interface attach/detach >> https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach >> >> Nova: Api specify vnic_type >> https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type >> >> Neutron-Network settings support for vnic-type >> >> https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type >> >> Nova: SRIOV scheduling with stateless offloads >> >> https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads >> >> Now that the specs deadline is approaching, I'd like to bring them up >> in here for exception considerations. A lot of work has been put into >> them. And we'd like to see them get through for Kilo. >> > > We haven't started the spec exception process yet. > > >> >> Regarding CI for PCI passthrough and SR-IOV, see the attached thread.
>> > > Can you share this via a link to something on > http://lists.openstack.org/pipermail/openstack-dev/ > > >> >> thanks, >> Robert >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shardy at redhat.com Mon Dec 22 18:21:24 2014 From: shardy at redhat.com (Steven Hardy) Date: Mon, 22 Dec 2014 18:21:24 +0000 Subject: [openstack-dev] [heat] Application level HA via Heat Message-ID: <20141222182123.GC14130@t430slt.redhat.com> Hi all,

So, lately I've been having various discussions around $subject, and I know it's something several folks in our community are interested in, so I wanted to get some ideas I've been pondering out there for discussion.

I'll start with a proposal of how we might replace HARestarter with AutoScaling group, then give some initial ideas of how we might evolve that into something capable of a sort-of active/active failover.

1. HARestarter replacement.

My position on HARestarter has long been that equivalent functionality should be available via AutoScalingGroups of size 1. Turns out that shouldn't be too hard to do:

resources:
  server_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 1
      resource:
        type: ha_server.yaml

  server_replacement_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      # FIXME: this adjustment_type doesn't exist yet
      adjustment_type: replace_oldest
      auto_scaling_group_id: {get_resource: server_group}
      scaling_adjustment: 1

So, currently our ScalingPolicy resource can only support three adjustment types, all of which change the group capacity.
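[Editor's note: reduced to plain Python, the member selection implied by the proposed "replace_oldest" adjustment type is simple. This is a hypothetical sketch, not Heat code — the Member model and its attributes are invented for illustration:]

```python
from datetime import datetime
from typing import List, NamedTuple

class Member(NamedTuple):
    """Invented stand-in for an AutoScalingGroup member."""
    name: str
    created_at: datetime

def pick_replace_oldest(members: List[Member]) -> Member:
    """Return the member a 'replace_oldest' adjustment would target:
    the one that has been in the group the longest."""
    return min(members, key=lambda m: m.created_at)

group = [
    Member("srv-1", datetime(2014, 12, 1)),
    Member("srv-2", datetime(2014, 11, 1)),   # longest-lived member
    Member("srv-3", datetime(2014, 12, 15)),
]
target = pick_replace_oldest(group)
```

In the size-1 group from the template above the choice is trivial (there is only one member), which is what makes the snippet behave like HARestarter.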
AutoScalingGroup already supports batched replacements for rolling updates, so if we modify the interface to allow a signal to trigger replacement of a group member, then the snippet above should be logically equivalent to HARestarter AFAICT. The steps to do this should be:

- Standardize the ScalingPolicy-AutoScalingGroup interface, so asynchronous adjustments (e.g. signals) between the two resources don't use the "adjust" method.
- Add an option to replace a member to the signal interface of AutoScalingGroup
- Add the new "replace" adjustment type to ScalingPolicy

I posted a patch which implements the first step, and the second will be required for TripleO, e.g. we should be doing it soon.

https://review.openstack.org/#/c/143496/
https://review.openstack.org/#/c/140781/

2. A possible next step towards active/active HA failover

The next part is the ability to notify before replacement that a scaling action is about to happen (just like we do for LoadBalancer resources already) and orchestrate some or all of the following:

- Attempt to quiesce the currently active node (may be impossible if it's in a bad state)
- Detach resources (e.g. volumes primarily?) from the current active node, and attach them to the new active node
- Run some config action to activate the new node (e.g. run some config script to fsck and mount a volume, then start some application).

The first step is possible by putting a SoftwareConfig/SoftwareDeployment resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the node is too bricked to respond and specifying DELETE action so it only runs when we replace the resource). The third step is possible either via a script inside the box which polls for the volume attachment, or possibly via an update-only software config. The second step is the missing piece AFAICS.
I've been wondering if we can do something inside a new heat resource, which knows what the current "active" member of an ASG is, and gets triggered on a "replace" signal to orchestrate e.g. deleting and creating a VolumeAttachment resource to move a volume between servers. Something like:

resources:
  server_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2
      max_size: 2
      resource:
        type: ha_server.yaml

  server_failover_policy:
    type: OS::Heat::FailoverPolicy
    properties:
      auto_scaling_group_id: {get_resource: server_group}
      resource:
        type: OS::Cinder::VolumeAttachment
        properties:
          # FIXME: "refs" is a ResourceGroup interface not currently
          # available in AutoScalingGroup
          instance_uuid: {get_attr: [server_group, refs, 1]}

  server_replacement_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      # FIXME: this adjustment_type doesn't exist yet
      adjustment_type: replace_oldest
      auto_scaling_policy_id: {get_resource: server_failover_policy}
      scaling_adjustment: 1

By chaining policies like this we could trigger an update on the attachment resource (or a nested template via a provider resource containing many attachments or other resources) every time the ScalingPolicy is triggered.

For the sake of clarity, I've not included the existing stuff like ceilometer alarm resources etc above, but hopefully it gets the idea across so we can discuss further; what are people's thoughts? I'm quite happy to iterate on the idea if folks have suggestions for a better interface etc :)

One problem I see with the above approach is you'd have to trigger a failover after stack create to get the initial volume attached, still pondering ideas on how best to solve that...
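[Editor's note: to make the proposed FailoverPolicy behaviour concrete, here is a hypothetical sketch in plain Python of the detach-then-attach orchestration it would perform on a replace signal. The detach/attach callables are injected stubs standing in for the real volume-attachment delete/create; none of this is actual Heat or Cinder API code:]

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class FailoverPolicy:
    """Toy model of the proposed OS::Heat::FailoverPolicy behaviour.

    detach/attach are injected callables standing in for the real
    volume-attachment operations; they are assumptions, not Heat APIs.
    """
    volume_id: str
    detach: Callable[[str, str], None]
    attach: Callable[[str, str], None]
    history: List[str] = field(default_factory=list)

    def on_replace(self, old_server: str, new_server: str) -> None:
        # "Step 2" from the email: move the resource from the old
        # active node to its replacement, in that order.
        self.detach(self.volume_id, old_server)
        self.attach(self.volume_id, new_server)
        self.history.append(f"{self.volume_id}: {old_server} -> {new_server}")

# Usage with stub callables recording what would be orchestrated:
events = []
policy = FailoverPolicy(
    volume_id="vol-1",
    detach=lambda vol, srv: events.append(("detach", vol, srv)),
    attach=lambda vol, srv: events.append(("attach", vol, srv)),
)
policy.on_replace("server-a", "server-b")
```

Modelling it this way also shows the ordering constraint the email raises: on initial stack create there is no old server to detach from, so the first attachment needs a separate path.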
Thanks, Steve From joe.gordon0 at gmail.com Mon Dec 22 18:32:51 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Mon, 22 Dec 2014 10:32:51 -0800 Subject: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration In-Reply-To: References: <54945973.1010904@anteaya.info> Message-ID: On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery wrote: > On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno wrote: >> >> Rather than waste your time making excuses let me state where we are and >> where I would like to get to, also sharing my thoughts about how you can >> get involved if you want to see this happen as badly as I have been told >> you do. >> >> Where we are: >> * a great deal of foundation work has been accomplished to achieve >> parity with nova-network and neutron to the extent that those involved >> are ready for migration plans to be formulated and be put in place >> * a summit session happened with notes and intentions[0] >> * people took responsibility and promptly got swamped with other >> responsibilities >> * spec deadlines arose and in neutron's case have passed >> * currently a neutron spec [1] is a work in progress (and it needs >> significant work still) and a nova spec is required and doesn't have a >> first draft or a champion >> >> Where I would like to get to: >> * I need people in addition to Oleg Bondarev to be available to help >> come up with ideas and words to describe them to create the specs in a >> very short amount of time (Oleg is doing great work and is a fabulous >> person, yay Oleg, he just can't do this alone) >> * specifically I need a contact on the nova side of this complex >> problem, similar to Oleg on the neutron side >> * we need to have a way for people involved with this effort to find >> each other, talk to each other and track progress >> * we need to have representation at both nova and neutron weekly >> meetings to communicate status and needs >> >> We are at K-2 and our current status is insufficient to 
expect this work >> will be accomplished by the end of K-3. I will be championing this work, >> in whatever state, so at least it doesn't fall off the map. If you would >> like to help this effort please get in contact. I will be thinking of >> ways to further this work and will be communicating to those who >> identify as affected by these decisions in the most effective methods of >> which I am capable. >> >> Thank you to all who have gotten us as far as we have gotten in this >> effort, it has been a long haul and you have all done great work. Let's >> keep going and finish this. >> >> Thank you, >> Anita. >> >> Thank you for volunteering to drive this effort Anita, I am very happy > about this. I support you 100%. > > I'd like to point out that we really need a point of contact on the nova > side, similar to Oleg on the Neutron side. IMHO, this is step 1 here to > continue moving this forward. > At the summit the nova team marked the nova-network to neutron migration as a priority [0], so we are collectively interested in seeing this happen and want to help in any way possible. With regard to a nova point of contact, anyone in nova-specs-core should work, that way we can cover more time zones. From what I can gather the first step is to finish fleshing out the first spec [1], and it sounds like it would be good to get a few nova-cores reviewing it as well.
[0] http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html [1] https://review.openstack.org/#/c/142456/ > > Thanks, > Kyle > > >> [0] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron >> [1] https://review.openstack.org/#/c/142456/ >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anteaya at anteaya.info Mon Dec 22 19:08:58 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Mon, 22 Dec 2014 14:08:58 -0500 Subject: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration In-Reply-To: References: <54945973.1010904@anteaya.info> Message-ID: <54986C4A.8020708@anteaya.info> On 12/22/2014 01:32 PM, Joe Gordon wrote: > On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery wrote: > >> On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno wrote: >>> >>> Rather than waste your time making excuses let me state where we are and >>> where I would like to get to, also sharing my thoughts about how you can >>> get involved if you want to see this happen as badly as I have been told >>> you do. 
>>> >>> Where we are: >>> * a great deal of foundation work has been accomplished to achieve >>> parity with nova-network and neutron to the extent that those involved >>> are ready for migration plans to be formulated and be put in place >>> * a summit session happened with notes and intentions[0] >>> * people took responsibility and promptly got swamped with other >>> responsibilities >>> * spec deadlines arose and in neutron's case have passed >>> * currently a neutron spec [1] is a work in progress (and it needs >>> significant work still) and a nova spec is required and doesn't have a >>> first draft or a champion >>> >>> Where I would like to get to: >>> * I need people in addition to Oleg Bondarev to be available to help >>> come up with ideas and words to describe them to create the specs in a >>> very short amount of time (Oleg is doing great work and is a fabulous >>> person, yay Oleg, he just can't do this alone) >>> * specifically I need a contact on the nova side of this complex >>> problem, similar to Oleg on the neutron side >>> * we need to have a way for people involved with this effort to find >>> each other, talk to each other and track progress >>> * we need to have representation at both nova and neutron weekly >>> meetings to communicate status and needs >>> >>> We are at K-2 and our current status is insufficient to expect this work >>> will be accomplished by the end of K-3. I will be championing this work, >>> in whatever state, so at least it doesn't fall off the map. If you would >>> like to help this effort please get in contact. I will be thinking of >>> ways to further this work and will be communicating to those who >>> identify as affected by these decisions in the most effective methods of >>> which I am capable. >>> >>> Thank you to all who have gotten us as far as well have gotten in this >>> effort, it has been a long haul and you have all done great work. Let's >>> keep going and finish this. >>> >>> Thank you, >>> Anita. 
>>> >>> Thank you for volunteering to drive this effort Anita, I am very happy >> about this. I support you 100%. >> >> I'd like to point out that we really need a point of contact on the nova >> side, similar to Oleg on the Neutron side. IMHO, this is step 1 here to >> continue moving this forward. >> > > At the summit the nova team marked the nova-network to neutron migration as > a priority [0], so we are collectively interested in seeing this happen and > want to help in any way possible. With regard to a nova point of contact, > anyone in nova-specs-core should work, that way we can cover more time > zones. > > From what I can gather the first step is to finish fleshing out the first > spec [1], and it sounds like it would be good to get a few nova-cores > reviewing it as well. > > > > > [0] > http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html > [1] https://review.openstack.org/#/c/142456/ > > Wonderful, thank you for the support Joe. It appears that we need to have a regular weekly meeting to track progress in an archived manner. I know there was one meeting in November but I don't know what it was called, so so far I can't find the logs for that. So if those affected by this issue can identify what time (UTC please, don't tell me what time zone you are in; it is too hard to guess what UTC time you are available) and day of the week you are available for a meeting I'll create one and we can start talking to each other. I need to avoid Monday 1500 and 2100 UTC, Tuesday 0800 UTC, 1400 UTC and 1900 - 2200 UTC, Wednesdays 1500 - 1700 UTC, Thursdays 1400 and 2100 UTC. Thanks, Anita.
>> >> Thanks, >> Kyle >> >> >>> [0] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron >>> [1] https://review.openstack.org/#/c/142456/ >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > From Sean_Collins2 at cable.comcast.com Mon Dec 22 19:36:44 2014 From: Sean_Collins2 at cable.comcast.com (Collins, Sean) Date: Mon, 22 Dec 2014 19:36:44 +0000 Subject: [openstack-dev] [Neutron][IPv6] No weekly meeting until Jan 6th 2015 Message-ID: <7EB180D009B1A6428D376906754127CB2E460BF7@PACDCEXMB22.cable.comcast.com> See everyone next year! Sean M. Collins -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsbryant at electronicjungle.net Mon Dec 22 20:05:29 2014 From: jsbryant at electronicjungle.net (Jay S. Bryant) Date: Mon, 22 Dec 2014 14:05:29 -0600 Subject: [openstack-dev] [OpenStack-Dev] Logging formats and i18n In-Reply-To: <7CE2A17D-BB0D-4AC3-B411-7686384BBC8A@doughellmann.com> References: <54984ECE.4000707@nemebean.com> <7CE2A17D-BB0D-4AC3-B411-7686384BBC8A@doughellmann.com> Message-ID: <54987989.2090006@electronicjungle.net> John, Thank you for starting this discussion and Doug, thank you for clarifying. Your explanation below helps a lot! Jay On 12/22/2014 12:13 PM, Doug Hellmann wrote: > On Dec 22, 2014, at 1:05 PM, John Griffith wrote: > >> On Mon, Dec 22, 2014 at 10:03 AM, Ben Nemec wrote: >>> On 12/22/2014 09:42 AM, John Griffith wrote: >>>> Lately (on the Cinder team at least) there's been a lot of >>>> disagreement in reviews regarding the proper way to do LOG messages >>>> correctly. Use of '%' vs ',' in the formatting of variables etc. 
>>>> >>>> We do have the oslo i18n guidelines page here [1], which helps a lot >>>> but there's some disagreement on a specific case here. Do we have a >>>> set answer on: >>>> >>>> LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2}) >>>> >>>> vs >>>> >>>> LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2}) >>> This is the preferred way. >>> >>> Note that this is just a multi-variable variation on >>> http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages >>> and the reasoning discussed there applies. >>> >>> I'd be curious why some people prefer the % version because to my >>> knowledge that's not recommended even for untranslated log messages. >> Not sure if it's that anybody has a preference as opposed to an >> interpretation, notice the recommendation for multi-vars in raise: >> >> # RIGHT >> raise ValueError(_('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2}) > It's really not related to translation as much as the logging API itself. > > With the exception, you want to initialize the ValueError instance with a proper message as soon as you throw it because you don't know what the calling code might do with it. Therefore you use string interpolation inline. > > When you call into the logging subsystem, your call might be ignored based on the level of the message and the logging configuration. By letting the logging code do the string interpolation, you potentially skip the work of serializing variables to strings for messages that will be discarded, saving time and memory. > > These "rules" apply whether your messages are being translated or not, so even for debug log messages you should write: > > LOG.debug('some message: v1=%(v1)s v2=%(v2)s', {'v1': v1, 'v2': v2}) > >>>> >>>> It's always fun when one person provides a -1 for the first usage; the >>>> submitter changes it and another reviewer gives a -1 and says, no it >>>> should be the other way. 
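To make the lazy-interpolation point above concrete, here is a minimal, self-contained sketch using only the standard library (the Expensive class is a made-up stand-in for a variable that is costly to serialize, and the oslo _LI marker is left out since translation is not what is being measured):

```python
import logging

logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger(__name__)

class Expensive:
    """Stand-in for a variable that is costly to turn into a string."""
    def __init__(self):
        self.rendered = 0
    def __str__(self):
        self.rendered += 1
        return 'expensive-value'

v = Expensive()

# Eager interpolation: the string is built before logging can decide
# to drop the message, so the work is wasted at INFO level.
LOG.debug('eager: v1=%(v1)s' % {'v1': v})
eager_renders = v.rendered

# Lazy interpolation: logging skips formatting entirely for messages
# below the configured level, so __str__ is never called.
LOG.debug('lazy: v1=%(v1)s', {'v1': v})
lazy_renders = v.rendered - eager_renders

print(eager_renders, lazy_renders)
```

Running it prints 1 0: the eager % form serialized the variable even though the DEBUG message was discarded, while the lazy form never touched it.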
>>>> >>>> I'm hoping maybe somebody on the oslo team can provide an >>>> authoritative answer and we can then update the example page >>>> referenced in [1] to clarify this particular case. >>>> >>>> Thanks, >>>> John >>>> >>>> [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mestery at mestery.com Mon Dec 22 20:12:14 2014 From: mestery at mestery.com (Kyle Mestery) Date: Mon, 22 Dec 2014 14:12:14 -0600 Subject: [openstack-dev] [neutron] Canceling the next two meetings Message-ID: Hi folks, given I expect low attendance today and next week, let's just cancel the next two Neutron meetings. We'll reconvene in the new year on Monday, January 5, 2015 at 2100 UTC. Happy holidays to all! Kyle [1] https://wiki.openstack.org/wiki/Network/Meetings -------------- next part -------------- An HTML attachment was scrubbed... URL: From majopela at redhat.com Mon Dec 22 20:16:03 2014 From: majopela at redhat.com (=?utf-8?Q?Miguel_=C3=81ngel_Ajo?=) Date: Mon, 22 Dec 2014 21:16:03 +0100 Subject: [openstack-dev] [neutron] Canceling the next two meetings In-Reply-To: References: Message-ID: <961D8E70C40747BBB5B8AE874E4E5C7A@redhat.com> Happy Holidays!, thank you Kyle. 
Miguel Ángel Ajo On Monday, 22 December 2014 at 21:12, Kyle Mestery wrote: > Hi folks, given I expect low attendance today and next week, let's just cancel the next two Neutron meetings. We'll reconvene in the new year on Monday, January 5, 2015 at 2100 UTC. > > Happy holidays to all! > > Kyle > > [1] https://wiki.openstack.org/wiki/Network/Meetings > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org (mailto:OpenStack-dev at lists.openstack.org) > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Dec 22 20:42:37 2014 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 22 Dec 2014 15:42:37 -0500 Subject: [openstack-dev] [heat] Application level HA via Heat In-Reply-To: <20141222182123.GC14130@t430slt.redhat.com> References: <20141222182123.GC14130@t430slt.redhat.com> Message-ID: <5498823D.3050005@redhat.com> On 22/12/14 13:21, Steven Hardy wrote: > Hi all, > > So, lately I've been having various discussions around $subject, and I know > it's something several folks in our community are interested in, so I > wanted to get some ideas I've been pondering out there for discussion. > > I'll start with a proposal of how we might replace HARestarter with > AutoScaling group, then give some initial ideas of how we might evolve that > into something capable of a sort-of active/active failover. > > 1. HARestarter replacement. > > My position on HARestarter has long been that equivalent functionality > should be available via AutoScalingGroups of size 1. 
Turns out that > shouldn't be too hard to do: > > resources: > server_group: > type: OS::Heat::AutoScalingGroup > properties: > min_size: 1 > max_size: 1 > resource: > type: ha_server.yaml > > server_replacement_policy: > type: OS::Heat::ScalingPolicy > properties: > # FIXME: this adjustment_type doesn't exist yet > adjustment_type: replace_oldest > auto_scaling_group_id: {get_resource: server_group} > scaling_adjustment: 1 One potential issue with this is that it is a little bit _too_ equivalent to HARestarter - it will replace your whole scaled unit (ha_server.yaml in this case) rather than just the failed resource inside. > So, currently our ScalingPolicy resource can only support three adjustment > types, all of which change the group capacity. AutoScalingGroup already > supports batched replacements for rolling updates, so if we modify the > interface to allow a signal to trigger replacement of a group member, then > the snippet above should be logically equivalent to HARestarter AFAICT. > > The steps to do this should be: > > - Standardize the ScalingPolicy-AutoScaling group interface, so > asynchronous adjustments (e.g. signals) between the two resources don't use > the "adjust" method. > > - Add an option to replace a member to the signal interface of > AutoScalingGroup > > - Add the new "replace" adjustment type to ScalingPolicy I think I am broadly in favour of this. > I posted a patch which implements the first step, and the second will be > required for TripleO, e.g. we should be doing it soon. > > https://review.openstack.org/#/c/143496/ > https://review.openstack.org/#/c/140781/ > > 2. 
A possible next step towards active/active HA failover > > The next part is the ability to notify before replacement that a scaling > action is about to happen (just like we do for LoadBalancer resources > already) and orchestrate some or all of the following: > > - Attempt to quiesce the currently active node (may be impossible if it's > in a bad state) > > - Detach resources (e.g. volumes primarily?) from the current active node, > and attach them to the new active node > > - Run some config action to activate the new node (e.g. run some config > script to fsck and mount a volume, then start some application). > > The first step is possible by putting a SoftwareConfig/SoftwareDeployment > resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the > node is too bricked to respond and specifying DELETE action so it only runs > when we replace the resource). > > The third step is possible either via a script inside the box which polls > for the volume attachment, or possibly via an update-only software config. > > The second step is the missing piece AFAICS. > > I've been wondering if we can do something inside a new heat resource, > which knows what the current "active" member of an ASG is, and gets > triggered on a "replace" signal to orchestrate e.g. deleting and creating a > VolumeAttachment resource to move a volume between servers. 
> > Something like: > > resources: > server_group: > type: OS::Heat::AutoScalingGroup > properties: > min_size: 2 > max_size: 2 > resource: > type: ha_server.yaml > > server_failover_policy: > type: OS::Heat::FailoverPolicy > properties: > auto_scaling_group_id: {get_resource: server_group} > resource: > type: OS::Cinder::VolumeAttachment > properties: > # FIXME: "refs" is a ResourceGroup interface not currently > # available in AutoScalingGroup > instance_uuid: {get_attr: [server_group, refs, 1]} > > server_replacement_policy: > type: OS::Heat::ScalingPolicy > properties: > # FIXME: this adjustment_type doesn't exist yet > adjustment_type: replace_oldest > auto_scaling_policy_id: {get_resource: server_failover_policy} > scaling_adjustment: 1 This actually fails because a VolumeAttachment needs to be updated in place; if you try to switch servers but keep the same Volume when replacing the attachment you'll get an error. TBH {get_attr: [server_group, refs, 1]} is doing most of the heavy lifting here, so in theory you could just have an OS::Cinder::VolumeAttachment instead of the FailoverPolicy and then all you need is a way of triggering a stack update with the same template & params. I know Ton added a PATCH method to update in Juno so that you don't have to pass parameters any more, and I believe it's planned to do the same with the template. > By chaining policies like this we could trigger an update on the attachment > resource (or a nested template via a provider resource containing many > attachments or other resources) every time the ScalingPolicy is triggered. > > For the sake of clarity, I've not included the existing stuff like > ceilometer alarm resources etc above, but hopefully it gets the idea > across so we can discuss further, what are people's thoughts? 
I'm quite > happy to iterate on the idea if folks have suggestions for a better > interface etc :) > > One problem I see with the above approach is you'd have to trigger a > failover after stack create to get the initial volume attached, still > pondering ideas on how best to solve that.. To me this is falling into the same old trap of "hey, we want to run this custom workflow, all we need to do is add a new resource type to hang some code on". That's pretty much how we got HARestarter. Also, like HARestarter, this cannot hope to cover the range of possible actions that might be needed by various applications. IMHO the "right" way to implement this is that the Ceilometer alarm triggers a workflow in Mistral that takes the appropriate action defined by the user, which may (or may not) include updating the Heat stack to a new template where the shared storage gets attached to a different server. cheers, Zane. From morgan.fainberg at gmail.com Mon Dec 22 20:45:44 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Mon, 22 Dec 2014 12:45:44 -0800 Subject: [openstack-dev] [Keystone] Keystone Middleware 1.3.1 release Message-ID: The Keystone development community would like to announce the 1.3.1 release of the Keystone Middleware package. This release can be installed from the following locations: * http://tarballs.openstack.org/keystonemiddleware * https://pypi.python.org/pypi/keystonemiddleware 1.3.1 ------- * auth_token middleware no longer contacts keystone when a request with no token is received. Detailed changes in this release beyond what is listed above: https://launchpad.net/keystonemiddleware/+milestone/1.3.1 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carl at ecbaldwin.net Mon Dec 22 20:46:48 2014 From: carl at ecbaldwin.net (Carl Baldwin) Date: Mon, 22 Dec 2014 13:46:48 -0700 Subject: [openstack-dev] No meetings on Christmas or New Year's Days Message-ID: The L3 sub team meeting [1] will not be held until the 8th of January, 2015. Enjoy your time off. I will try to move some of the refactoring patches along as I can but will be down to minimal hours. Carl [1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam From pcm at cisco.com Mon Dec 22 21:03:47 2014 From: pcm at cisco.com (Paul Michali (pcm)) Date: Mon, 22 Dec 2014 21:03:47 +0000 Subject: [openstack-dev] [neutron][vpnaas] Sub-team meetings on Dec 20th and 27th? In-Reply-To: <837EA5B8-4C8E-4B0C-98BC-A174122156E7@cisco.com> References: <837EA5B8-4C8E-4B0C-98BC-A174122156E7@cisco.com> Message-ID: <7D9ECE8C-408C-439B-A215-EDE4C78BA2B2@cisco.com> Will cancel the next two VPNaaS sub-team meetings. The next meeting will be Tuesday, January 6th at 1500 UTC on meeting-4 (<<< Note the channel change). Enjoy the holiday time! PCM (Paul Michali) MAIL ?..?. pcm at cisco.com IRC ??..? pc_m (irc.freenode.com) TW ???... @pmichali GPG Key ? 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 On Dec 19, 2014, at 2:01 PM, Paul Michali (pcm) wrote: > Does anyone have agenda items to discuss for the next two meetings during the holidays? > > If so, please let me know (and add them to the Wiki page), and we'll hold the meeting. Otherwise, we can continue on Jan 6th, and any pop-up items can be addressed on the mailing list or Neutron IRC. > > Please let me know by Monday, if you'd like us to meet. > > > Regards, > > PCM (Paul Michali) > > MAIL ?..?. pcm at cisco.com > IRC ??..? pc_m (irc.freenode.com) > TW ???... @pmichali > GPG Key ? 4525ECC253E31A83 > Fingerprint .. 
307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From john.griffith8 at gmail.com Mon Dec 22 21:48:46 2014 From: john.griffith8 at gmail.com (John Griffith) Date: Mon, 22 Dec 2014 14:48:46 -0700 Subject: [openstack-dev] [cinder] ratio: created to attached In-Reply-To: <54960C9B.3000705@dyncloud.net> References: <54960C9B.3000705@dyncloud.net> Message-ID: On Sat, Dec 20, 2014 at 4:56 PM, Tom Barron wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Does anyone have real world experience, even data, to speak to the > question: in an OpenStack cloud, what is the likely ratio of (created) > cinder volumes to attached cinder volumes? > > Thanks, > > Tom Barron > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.22 (Darwin) > > iQEcBAEBAgAGBQJUlgybAAoJEGeKBzAeUxEHqKwIAJjL5TCP7s+Ev8RNr+5bWARF > zy3I216qejKdlM+a9Vxkl6ZWHMklWEhpMmQiUDMvEitRSlHpIHyhh1RfZbl4W9Fe > GVXn04sXIuoNPgbFkkPIwE/45CJC1kGIBDub/pr9PmNv9mzAf3asLCHje8n3voWh > d30If5SlPiaVoc0QNrq0paK7Yl1hh5jLa2zeV4qu4teRts/GjySJI7bR0k/TW5n4 > e2EKxf9MhbxzjQ6QsgvWzxmryVIKRSY9z8Eg/qt7AfXF4Kx++MNo8VbX3AuOu1XV > cnHlmuGqVq71uMjWXCeqK8HyAP8nkn2cKnJXhRYli6qSwf9LxzjC+kMLn364IX4= > =AZ0i > -----END PGP SIGNATURE----- > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Honestly I think the assumption is and should be 1:1, perhaps not 100% duty-cycle, but certainly periods of time when there is a 100% attach rate. 
From raildom at gmail.com Mon Dec 22 21:49:36 2014 From: raildom at gmail.com (Raildo Mascena) Date: Mon, 22 Dec 2014 21:49:36 +0000 Subject: [openstack-dev] Hierarchical Multitenancy Message-ID: Hello folks, My team and I developed the Hierarchical Multitenancy concept for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What have we implemented? What are the next steps for Kilo? To answer these questions, I created a blog post: http://raildo.me/hierarchical-multitenancy-in-openstack/ Any questions? I'm available. -- Raildo Mascena Software Engineer. Bachelor of Computer Science. Distributed Systems Laboratory Federal University of Campina Grande Campina Grande, PB - Brazil -------------- next part -------------- An HTML attachment was scrubbed... URL: From asalkeld at mirantis.com Mon Dec 22 22:46:43 2014 From: asalkeld at mirantis.com (Angus Salkeld) Date: Tue, 23 Dec 2014 08:46:43 +1000 Subject: [openstack-dev] [heat] Application level HA via Heat In-Reply-To: <5498823D.3050005@redhat.com> References: <20141222182123.GC14130@t430slt.redhat.com> <5498823D.3050005@redhat.com> Message-ID: On Tue, Dec 23, 2014 at 6:42 AM, Zane Bitter wrote: > On 22/12/14 13:21, Steven Hardy wrote: > >> Hi all, >> >> So, lately I've been having various discussions around $subject, and I >> know >> it's something several folks in our community are interested in, so I >> wanted to get some ideas I've been pondering out there for discussion. >> >> I'll start with a proposal of how we might replace HARestarter with >> AutoScaling group, then give some initial ideas of how we might evolve >> that >> into something capable of a sort-of active/active failover. >> >> 1. HARestarter replacement. >> >> My position on HARestarter has long been that equivalent functionality >> should be available via AutoScalingGroups of size 1. 
Turns out that >> shouldn't be too hard to do: >> >> resources: >> server_group: >> type: OS::Heat::AutoScalingGroup >> properties: >> min_size: 1 >> max_size: 1 >> resource: >> type: ha_server.yaml >> >> server_replacement_policy: >> type: OS::Heat::ScalingPolicy >> properties: >> # FIXME: this adjustment_type doesn't exist yet >> adjustment_type: replace_oldest >> auto_scaling_group_id: {get_resource: server_group} >> scaling_adjustment: 1 >> > > One potential issue with this is that it is a little bit _too_ equivalent > to HARestarter - it will replace your whole scaled unit (ha_server.yaml in > this case) rather than just the failed resource inside. > > So, currently our ScalingPolicy resource can only support three adjustment >> types, all of which change the group capacity. AutoScalingGroup already >> supports batched replacements for rolling updates, so if we modify the >> interface to allow a signal to trigger replacement of a group member, then >> the snippet above should be logically equivalent to HARestarter AFAICT. >> >> The steps to do this should be: >> >> - Standardize the ScalingPolicy-AutoScaling group interface, so >> aynchronous adjustments (e.g signals) between the two resources don't use >> the "adjust" method. >> >> - Add an option to replace a member to the signal interface of >> AutoScalingGroup >> >> - Add the new "replace adjustment type to ScalingPolicy >> > > I think I am broadly in favour of this. > > > I posted a patch which implements the first step, and the second will be >> required for TripleO, e.g we should be doing it soon. >> >> https://review.openstack.org/#/c/143496/ >> https://review.openstack.org/#/c/140781/ >> >> 2. 
A possible next step towards active/active HA failover >> >> The next part is the ability to notify before replacement that a scaling >> action is about to happen (just like we do for LoadBalancer resources >> already) and orchestrate some or all of the following: >> >> - Attempt to quiesce the currently active node (may be impossible if it's >> in a bad state) >> >> - Detach resources (e.g volumes primarily?) from the current active node, >> and attach them to the new active node >> >> - Run some config action to activate the new node (e.g run some config >> script to fsck and mount a volume, then start some application). >> >> The first step is possible by putting a SofwareConfig/SoftwareDeployment >> resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the >> node is too bricked to respond and specifying DELETE action so it only >> runs >> when we replace the resource). >> >> The third step is possible either via a script inside the box which polls >> for the volume attachment, or possibly via an update-only software config. >> >> The second step is the missing piece AFAICS. >> >> I've been wondering if we can do something inside a new heat resource, >> which knows what the current "active" member of an ASG is, and gets >> triggered on a "replace" signal to orchestrate e.g deleting and creating a >> VolumeAttachment resource to move a volume between servers. 
>> >> Something like: >> >> resources: >> server_group: >> type: OS::Heat::AutoScalingGroup >> properties: >> min_size: 2 >> max_size: 2 >> resource: >> type: ha_server.yaml >> >> server_failover_policy: >> type: OS::Heat::FailoverPolicy >> properties: >> auto_scaling_group_id: {get_resource: server_group} >> resource: >> type: OS::Cinder::VolumeAttachment >> properties: >> # FIXME: "refs" is a ResourceGroup interface not currently >> # available in AutoScalingGroup >> instance_uuid: {get_attr: [server_group, refs, 1]} >> >> server_replacement_policy: >> type: OS::Heat::ScalingPolicy >> properties: >> # FIXME: this adjustment_type doesn't exist yet >> adjustment_type: replace_oldest >> auto_scaling_policy_id: {get_resource: server_failover_policy} >> scaling_adjustment: 1 >> > > This actually fails because a VolumeAttachment needs to be updated in > place; if you try to switch servers but keep the same Volume when replacing > the attachment you'll get an error. > > TBH {get_attr: [server_group, refs, 1]} is doing most of the heavy lifting > here, so in theory you could just have an OS::Cinder::VolumeAttachment > instead of the FailoverPolicy and then all you need is a way of triggering > a stack update with the same template & params. I know Ton added a PATCH > method to update in Juno so that you don't have to pass parameters any > more, and I believe it's planned to do the same with the template. > > By chaining policies like this we could trigger an update on the >> attachment >> resource (or a nested template via a provider resource containing many >> attachments or other resources) every time the ScalingPolicy is triggered. >> >> For the sake of clarity, I've not included the existing stuff like >> ceilometer alarm resources etc above, but hopefully it gets the idea >> accross so we can discuss further, what are peoples thoughts? 
I'm quite >> happy to iterate on the idea if folks have suggestions for a better >> interface etc :) >> >> One problem I see with the above approach is you'd have to trigger a >> failover after stack create to get the initial volume attached, still >> pondering ideas on how best to solve that.. >> > > To me this is falling into the same old trap of "hey, we want to run this > custom workflow, all we need to do is add a new resource type to hang some > code on". That's pretty much how we got HARestarter. > > Also, like HARestarter, this cannot hope to cover the range of possible > actions that might be needed by various applications. > > IMHO the "right" way to implement this is that the Ceilometer alarm > triggers a workflow in Mistral that takes the appropriate action defined by > the user, which may (or may not) include updating the Heat stack to a new > template where the shared storage gets attached to a different server. > > I agree, we should really be changing our policies to be implemented as Mistral workflows. A good first step would be to have a Mistral workflow Heat resource so that users can start getting more flexibility in what they do with alarm actions. -Angus > cheers, > Zane. > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Tue Dec 23 00:21:36 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Mon, 22 Dec 2014 16:21:36 -0800 Subject: [openstack-dev] Hierarchical Multitenancy In-Reply-To: References: Message-ID: <2964CB70-9ED7-42DE-9F17-498458A0031B@gmail.com> Hi Raildo, Thanks for putting this post together. I really appreciate all the work you guys have done (and continue to do) to get the Hierarchical Multitenancy code into Keystone. 
It's great to have the base implementation merged into Keystone for the K1 milestone. I look forward to seeing the rest of the development land during the rest of this cycle and what the other OpenStack projects build around the HMT functionality. Cheers, Morgan > On Dec 22, 2014, at 1:49 PM, Raildo Mascena wrote: > > Hello folks, My team and I developed the Hierarchical Multitenancy concept for Keystone in Kilo-1 but What is Hierarchical Multitenancy? What have we implemented? What are the next steps for kilo? > To answer these questions, I created a blog post http://raildo.me/hierarchical-multitenancy-in-openstack/ > > Any question, I'm available. > > -- > Raildo Mascena > Software Engineer. > Bachelor of Computer Science. > Distributed Systems Laboratory > Federal University of Campina Grande > Campina Grande, PB - Brazil > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From suro.patz at gmail.com Tue Dec 23 00:36:02 2014 From: suro.patz at gmail.com (Surojit Pathak) Date: Mon, 22 Dec 2014 16:36:02 -0800 Subject: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated In-Reply-To: <20141114100250.GA26744@redhat.com> References: <546528BA.4020605@gmail.com> <20141114100250.GA26744@redhat.com> Message-ID: <5498B8F2.60005@gmail.com> On 11/14/14 2:02 AM, Daniel P. Berrange wrote: > On Thu, Nov 13, 2014 at 01:55:06PM -0800, Surojit Pathak wrote: >> Hi all, >> >> [Issue observed] >> If we issue 'nova reboot ', we get to have the console output of the >> latest bootup of the server only. The console output of the previous boot >> for the same server vanishes due to truncation[1]. 
If we do reboot from >> within the VM instance [ #sudo reboot ], or reboot the instance with 'virsh >> reboot ' the behavior is not the same, where the console.log keeps >> increasing, with the new output being appended. >> This loss of history makes some debugging scenarios difficult due to lack of >> information being available. >> >> Please point me to any solution/blueprint for this issue, if already >> planned. Otherwise, please comment on my analysis and proposals as solution, >> below - >> >> [Analysis] >> Nova's libvirt driver on compute node tries to do a graceful restart of the >> server instance, by attempting a soft_reboot first. If soft_reboot fails, it >> attempts a hard_reboot. As part of soft_reboot, it brings down the instance >> by calling shutdown(), and then calls createWithFlags() to bring this up. >> Because of this, the qemu-kvm process for the instance gets terminated and a new >> process is launched. In QEMU, the chardev file is opened with O_TRUNC, and >> thus we lose the previous content of the console.log file. >> On the other hand, during 'virsh reboot ', the same qemu-kvm >> process continues, and libvirt actually does a qemuDomainSetFakeReboot(). >> Thus the same file continues capturing the new console output as a >> continuation into the same file. > Nova and libvirt have support for issuing a graceful reboot via the QEMU > guest agent. So if you make sure that is installed, and tell Nova to use > it, then Nova won't have to stop & recreate the QEMU process and thus > won't have the problem of overwriting the logs. Hi Daniel, Having GA to do graceful restart is a nice option. But if it were to just preserve the same console file, even 'virsh reboot' achieves the purpose. As I explained in my original analysis, Nova seems to have not taken the path, as it does not want to have a false positive, where the GA does not respond or 'virDomain.reboot' fails later and the domain is not really restarted. 
[ CC-ed vish, author of nova/virt/libvirt/driver.py ] IMHO, QEMU should preserve the console-log file for a given domain, if it exists, by not opening it with the O_TRUNC option, but instead with O_APPEND. I would like to draw a comparison of a real computer to which we might be connected over serial console, and the box gets powered down and up with external button press, and we do not lose the console history, if connected. And that's what is the experience console-log intends to provide. If you think this is agreeable, please let me know, I will send the patch to qemu-devel at . -- Regards, SURO -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Tue Dec 23 01:04:58 2014 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 23 Dec 2014 12:04:58 +1100 Subject: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated In-Reply-To: <5498B8F2.60005@gmail.com> References: <546528BA.4020605@gmail.com> <20141114100250.GA26744@redhat.com> <5498B8F2.60005@gmail.com> Message-ID: <20141223010458.GB68184@thor.bakeyournoodle.com> On Mon, Dec 22, 2014 at 04:36:02PM -0800, Surojit Pathak wrote: > Hi Daniel, > Having GA to do graceful restart is nice option. But if it were to just > preserve the same console file, even 'virsh reboot' achieves the purpose. As > I explained in my original analysis, Nova seems to have not taken the path, > as it does not want to have a false positive, where the GA does not respond > or 'virDomain.reboot' fails later and the domain is not really restarted. [ > CC-ed vish, author of nova > /virt /libvirt /driver.py > ] > > IMHO, QEMU should preserve the console-log file for a given domain, if it > exists, by not opening with O_TRUNC option, instead opening with O_APPEND. 
I > would like to draw a comparison of a real computer to which we might be > connected over serial console, and the box gets powered down and up with > external button press, and we do not lose the console history, if connected. > And that's what is the experience console-log intends to provide. If you > think, this is agreeable, please let me know, I will send the patch to > qemu-devel at . The issue is more complex than just removing the O_TRUNC from the open() flags. I havd a proposal that will (almost by accident) fix this in qemu by allowing console log files to be "rotated". I'm also waorking on a similar feature in libvirt. I think the tl;dr: is that this /shoudl/ be fixed in kilo with a 'modern' libvirt. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From dzimine at stackstorm.com Tue Dec 23 01:26:06 2014 From: dzimine at stackstorm.com (Dmitri Zimine) Date: Mon, 22 Dec 2014 17:26:06 -0800 Subject: [openstack-dev] [Mistral] Access to worklfow/task results without implicit publish Message-ID: <06A45DEA-69CF-47CA-92A7-F744D526E25D@stackstorm.com> The problem: Refer to workflow / action output without explicitly re-publishing the output values. Why we want it: to reduce repetition, and to make modifications in the place values are used, not where they are obtained (and not in multiple places). E.g., as an editor of a workflow, when I just realized that I need a value of some task down the line, I want to make change right here in the tasks that consumes the data (and only those which need this data), without finding and modifying the task that supplies the data. Reasons: We don't have a concept of workflow or action ?results': it's the task which produces and publishes results. Different tasks call same actions/workflows, produce same output variables with diff values. 
We don't want to publish this output with the output name as a key into the
global context: the names would conflict and mess things up. Instead, we can
namespace them by task (the specific values are attributes of the tasks, and
we want to refer to tasks, not actions/workflows).

Solution: to refer to the output of a particular task (aka the raw result of
the action execution invoked by this task), use the $_task. prefix:

    $_task.my_task.my_task_result.foo.bar

Expanded example:

    my_subflow:
      output:
        - foo          # << declare output here
        - bar
      tasks:
        my_task:
          action: get_foo
          publish:
            foo: $foo  # << define output in a task
            bar: $bar
      ...

    main_flow_with_explicit_publishing:
      tasks:
        t1:
          workflow: my_subflow
          publish:
            # Today, you must explicitly publish to make data
            # from the action available to other tasks
            foo: $foo  # << re-publish, else you can't use it
            bar: $bar
        t2:
          action: echo output="$foo and $bar"  # << use it from task t1

    main_flow_with_implicit_publishing:
      tasks:
        t1:
          workflow: my_subflow
        t2:
          action: echo output="$_task.t1.foo and $_task.t1.bar"

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From suro.patz at gmail.com  Tue Dec 23 03:16:27 2014
From: suro.patz at gmail.com (Surojit Pathak)
Date: Mon, 22 Dec 2014 19:16:27 -0800
Subject: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated
In-Reply-To: <20141223010458.GB68184@thor.bakeyournoodle.com>
References: <546528BA.4020605@gmail.com> <20141114100250.GA26744@redhat.com> <5498B8F2.60005@gmail.com> <20141223010458.GB68184@thor.bakeyournoodle.com>
Message-ID: <5498DE8B.90908@gmail.com>

On 12/22/14 5:04 PM, Tony Breeds wrote:
> On Mon, Dec 22, 2014 at 04:36:02PM -0800, Surojit Pathak wrote:
>
>> Hi Daniel,
>> Having GA do a graceful restart is a nice option. But if it were to just
>> preserve the same console file, even 'virsh reboot' achieves the purpose.
>> As I explained in my original analysis, Nova seems to have not taken that
>> path, as it does not want to have a false positive, where the GA does not
>> respond or 'virDomain.reboot' fails later and the domain is not really
>> restarted.
>> [ CC-ed vish, author of nova
>>
>> IMHO, QEMU should preserve the console-log file for a given domain, if it
>> exists, by not opening it with the O_TRUNC option but with O_APPEND
>> instead. I would like to draw a comparison with a real computer to which
>> we might be connected over a serial console: when the box is powered down
>> and up with an external button press, we do not lose the console history
>> if we stay connected. That is the experience the console log intends to
>> provide. If you think this is agreeable, please let me know and I will
>> send the patch to qemu-devel at .
> The issue is more complex than just removing O_TRUNC from the open() flags.
>
> I have a proposal that will (almost by accident) fix this in qemu by allowing
> console log files to be "rotated". I'm also working on a similar feature in
> libvirt.
>
> I think the tl;dr: is that this /should/ be fixed in kilo with a 'modern' libvirt.

Hi Tony,

Can you please share some details of the effort, in terms of references?

> Yours Tony.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Regards,
SURO
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tony at bakeyournoodle.com  Tue Dec 23 03:32:23 2014
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Tue, 23 Dec 2014 14:32:23 +1100
Subject: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated
In-Reply-To: <5498DE8B.90908@gmail.com>
References: <546528BA.4020605@gmail.com> <20141114100250.GA26744@redhat.com> <5498B8F2.60005@gmail.com> <20141223010458.GB68184@thor.bakeyournoodle.com> <5498DE8B.90908@gmail.com>
Message-ID: <20141223033223.GC68184@thor.bakeyournoodle.com>

On Mon, Dec 22, 2014 at 07:16:27PM -0800, Surojit Pathak wrote:
> Hi Tony,
>
> Can you please share some details of the effort, in terms of reference?

Well the initial discussions started with qemu at:
http://lists.nongnu.org/archive/html/qemu-devel/2014-12/msg00765.html
and then here:
http://lists.openstack.org/pipermail/openstack-dev/2014-December/052356.html

You'll note the focus of the discussion is rotating the log files, but I'm
very much aware of the issue covered in this thread and it will be covered in
my fixes. Which is why I said 'almost' by accident ;P

I have a partial implementation for the log rotation in qemu (you can issue a
command from the monitor but I haven't looked at the HUP yet). I started
looking at doing something in libvirt as well but I haven't made much progress
there due to conflicting priorities.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: 

From donald.d.dugger at intel.com  Tue Dec 23 05:24:11 2014
From: donald.d.dugger at intel.com (Dugger, Donald D)
Date: Tue, 23 Dec 2014 05:24:11 +0000
Subject: [openstack-dev] [gantt] Scheduler sub-group meeting agenda 12/23
Message-ID: <6AF484C0160C61439DE06F17668F3BCB53465795@ORSMSX114.amr.corp.intel.com>

I'll be hanging out on the IRC channel in case anyone wants to talk but, given
the holidays, I don't expect much attendance and we'll keep it short no matter
what.

Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)

1) Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Abhishek.Kekane at nttdata.com  Tue Dec 23 06:36:58 2014
From: Abhishek.Kekane at nttdata.com (Kekane, Abhishek)
Date: Tue, 23 Dec 2014 06:36:58 +0000
Subject: [openstack-dev] [Nova] shelved_offload_time configuration
Message-ID: 

Hi All,

AFAIK, for the shelve API the parameter shelved_offload_time needs to be
configured on the compute node. Can we configure this parameter on the
controller node as well? Please suggest.

Thank You,

Abhishek Kekane

______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From steven.tran2 at hp.com Tue Dec 23 06:38:32 2014 From: steven.tran2 at hp.com (Tran, Steven) Date: Tue, 23 Dec 2014 06:38:32 +0000 Subject: [openstack-dev] [Congress] simulate examples Message-ID: <928760976E4ACC46B58B36866F60B0DD080A8C3B@G1W3642.americas.hpqcorp.net> Hi, Does anyone have an example on how to use 'simulate' according to the following command line usage? usage: openstack congress policy simulate [-h] [--delta] [--trace] What are the query and sequence? The example under /opt/stack/congress/examples doesn't mention about query and sequence. It seems like all 4 parameters are required. Thanks, -Steven -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Dec 23 07:13:11 2014 From: zigo at debian.org (Thomas Goirand) Date: Tue, 23 Dec 2014 15:13:11 +0800 Subject: [openstack-dev] Cross distribution talks on Friday In-Reply-To: <98F22BC6-238B-4E4E-A29D-8C74EB844782@stufft.io> References: <5454DC71.7040300@debian.org> <20141101152935.GA12516@tesla> <545533E1.1070202@debian.org> <5460EB19.4000500@redhat.com> <98F22BC6-238B-4E4E-A29D-8C74EB844782@stufft.io> Message-ID: <54991607.7060409@debian.org> On 11/11/2014 12:46 AM, Donald Stufft wrote: > >> On Nov 10, 2014, at 11:43 AM, Adam Young wrote: >> >> On 11/01/2014 06:51 PM, Alan Pevec wrote: >>>> %install >>>> export OSLO_PACKAGE_VERSION=%{version} >>>> %{__python} setup.py install -O1 --skip-build --root %{buildroot} >>>> >>>> Then everything should be ok and PBR will become your friend. >>> Still not my friend because I don't want a _build_ tool as runtime dependency :) >>> e.g. you don't ship make(1) to run C programs, do you? >>> For runtime, only pbr.version part is required but unfortunately >>> oslo.version was abandoned. >>> >>> Cheers, >>> Alan >>> >> Perhaps we need a top level Python Version library, not Oslo? Is there such a thing? Seems like it should not be something specific to OpenStack > > What does pbr.version do? 
Basically, the same as pkg_resources. Therefore I don't really understand the
need for it... Am I missing something?

Thomas

From zigo at debian.org  Tue Dec 23 07:17:00 2014
From: zigo at debian.org (Thomas Goirand)
Date: Tue, 23 Dec 2014 15:17:00 +0800
Subject: [openstack-dev] Cross distribution talks on Friday
In-Reply-To: <54944A7F.6010500@redhat.com>
References: <5454DC71.7040300@debian.org> <20141101152935.GA12516@tesla> <545533E1.1070202@debian.org> <54944A7F.6010500@redhat.com>
Message-ID: <549916EC.3040002@debian.org>

On 12/19/2014 11:55 PM, Ihar Hrachyshka wrote:
> Note that OSLO_PACKAGE_VERSION is not public.

Well, it used to be public; it was added and discussed a few years ago because
of issues I had with packaging.

> Instead, we should use PBR_VERSION:
>
> http://docs.openstack.org/developer/pbr/packagers.html#versioning

I don't mind switching, though it's going to be a slow process (because I'm
using OSLO_PACKAGE_VERSION in all packages). Are we at least *sure* that using
OSLO_PACKAGE_VERSION is now deprecated?

Thomas

From li-zheming at 163.com  Tue Dec 23 07:17:26 2014
From: li-zheming at 163.com (li-zheming)
Date: Tue, 23 Dec 2014 15:17:26 +0800 (CST)
Subject: [openstack-dev] [nova] How can I continue to complete a abandoned blueprint?
In-Reply-To: <54982B94.5020604@gmail.com>
References: <79045ce6.1930.14a716bb72a.Coremail.li-zheming@163.com> <54982B94.5020604@gmail.com>
Message-ID: <7c6df584.b0b.14a7601f0e8.Coremail.li-zheming@163.com>

Thanks! I have submitted a new blueprint (quota-instance-memory); the link is:
https://blueprints.launchpad.net/nova/+spec/quota-instance-memory

Merry Christmas!^_^

--
Name: Li zheming
Company: Hua Wei
Address: Shenzhen, China
Tel: 0086 18665391827

At 2014-12-22 22:32:52, "Jay Pipes" wrote:
>On 12/22/2014 04:54 AM, li-zheming wrote:
>> hi all: Bp
>> flavor-quota-memory(https://blueprints.launchpad.net/nova/+spec/flavor-quota-memory)
>> was submitted by my partner in havana.
>> but it has been abandoned for some reason.
>
>Some reason == the submitter failed to provide any details on how the
>work would be implemented, what the use cases were, and any alternatives
>that might be possible.
>
>> I want to continue this blueprint. Based on the rules about BPs for
>> kilo, for this bp a spec is not necessary, so I submit the code directly
>> and give a commit message to clear up the questions in the spec. Is that
>> right? How can I do it? thanks!
>
>Specs are no longer necessary for smallish features, no. A blueprint is
>still necessary on Launchpad, so you should be able to use the abandoned
>one you link above -- which, AFAICT, has enough implementation details
>about the proposed changes.
>
>Alternately, if you cannot get the original submitter to remove the spec
>link to the old spec review, you can always start a new blueprint and we
>can mark that one as obsolete.
>
>I'd like Dan Berrange (cc'd) to review whichever blueprint on Launchpad
>you end up using. Please let us know what you do.
>
>All the best,
>-jay
>
>_______________________________________________
>OpenStack-dev mailing list
>OpenStack-dev at lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From akshik at outlook.com  Tue Dec 23 07:26:58 2014
From: akshik at outlook.com (Akshik DBK)
Date: Tue, 23 Dec 2014 12:56:58 +0530
Subject: [openstack-dev] copy paste for spice
Message-ID: 

Going by the documentation, the Spice console supports copy/paste and other
features. I would like to know how we enable them: how and where do we enable
it? Should we do something w.r.t. the image, or some config in OpenStack?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mhanif at brocade.com Tue Dec 23 07:33:06 2014 From: mhanif at brocade.com (Mohammad Hanif) Date: Tue, 23 Dec 2014 07:33:06 +0000 Subject: [openstack-dev] [neutron][vpnaas] Sub-team meetings on Dec 20th and 27th? In-Reply-To: <7D9ECE8C-408C-439B-A215-EDE4C78BA2B2@cisco.com> References: <837EA5B8-4C8E-4B0C-98BC-A174122156E7@cisco.com>, <7D9ECE8C-408C-439B-A215-EDE4C78BA2B2@cisco.com> Message-ID: Thanks Paul. Happy holidays everyone! On Dec 22, 2014, at 1:06 PM, Paul Michali (pcm) > wrote: Will cancel the next two VPNaaS sub-team meetings. The next meeting will be Tuesday, January 6th at 1500 UTC on meeting-4 (<<< Note the channel change). Enjoy the holiday time! PCM (Paul Michali) MAIL ......... pcm at cisco.com IRC ........... pc_m (irc.freenode.com) TW ............ @pmichali GPG Key ... 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 On Dec 19, 2014, at 2:01 PM, Paul Michali (pcm) > wrote: Does anyone have agenda items to discuss for the next two meetings during the holidays? If so, please let me know (and add them to the Wiki page), and we'll hold the meeting. Otherwise, we can continue on Jan 6th, and any pop-up items can be addressed on the mailing list or Neutron IRC. Please let me know by Monday, if you'd like us to meet. Regards, PCM (Paul Michali) MAIL ......... pcm at cisco.com IRC ........... pc_m (irc.freenode.com) TW ............ @pmichali GPG Key ... 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From punith.s at cloudbyte.com  Tue Dec 23 07:36:43 2014
From: punith.s at cloudbyte.com (Punith S)
Date: Tue, 23 Dec 2014 13:06:43 +0530
Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI
In-Reply-To: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A435949@G4W3223.americas.hpqcorp.net>
References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4240C5@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A424BC0@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A435949@G4W3223.americas.hpqcorp.net>
Message-ID: 

Hi Asselin,

I'm following your readme https://github.com/rasselin/os-ext-testing for
setting up our CloudByte CI on two Ubuntu 12.04 VMs (master and slave). So far
the scripts and setup went fine as described in the document, and the master
and slave are now connected successfully. But in order to run the tempest
integration test against our proposed CloudByte cinder driver for kilo, we
need to have devstack installed on the slave (in my understanding). On
installing the master devstack I'm getting permission issues executing
./stack.sh on 12.04, since master devstack suggests Ubuntu 14.04 or 13.10; and
on the contrary, running install_slave.sh is failing on 13.10 due to a "puppet
modules not found" error.

Is there a way to get this to work? Thanks in advance.

On Mon, Dec 22, 2014 at 11:10 PM, Asselin, Ramy wrote:

> Eduard,
>
> A few items you can try:
>
> 1. Double-check that the job is in Jenkins
>    a. If not, then that's the issue
> 2. Check that the processes are running correctly
>    a. ps -ef | grep zuul
>       i. Should have 2 zuul-server & 1 zuul-merger
>    b. ps -ef | grep jenkins
>       i. Should have 1 /usr/bin/daemon --name=jenkins & 1 /usr/bin/java
> 3. In Jenkins, Manage Jenkins, Gearman Plugin Config, "Test Connection"
> 4. Stop Zuul & Jenkins. Start Zuul & Jenkins
>    a. service jenkins stop
>    b. service zuul stop
>    c. service zuul-merger stop
>    d. service jenkins start
>    e. service zuul start
>    f. service zuul-merger start
>
> Otherwise, I suggest you ask in the #openstack-infra IRC channel.
>
> Ramy
>
> *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com]
> *Sent:* Sunday, December 21, 2014 11:01 PM
> *To:* Asselin, Ramy
> *Cc:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI
>
> Thanks Ramy,
>
> Unfortunately I don't see dsvm-tempest-full in the "status" output.
> Any idea how I can get it "registered"?
>
> Thanks,
> Eduard
>
> On Fri, Dec 19, 2014 at 9:43 PM, Asselin, Ramy wrote:
>
> Eduard,
>
> If you run this command, you can see which jobs are registered:
>
> >telnet localhost 4730
> >status
>
> There are 3 numbers per job: queued, running, and workers that can run the
> job. Make sure the job is listed & the last number ('workers') is non-zero.
>
> To run the job again without submitting a patch set, leave a 'recheck'
> comment on the patch & make sure your zuul layout.yaml is configured to
> trigger off that comment. For example [1].
> Be sure to use the sandbox repository. [2]
> I'm not aware of other ways.
>
> Ramy
>
> [1] https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L20
> [2] https://github.com/openstack-dev/sandbox
>
> *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com]
> *Sent:* Friday, December 19, 2014 3:36 AM
> *To:* Asselin, Ramy
> *Cc:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI
>
> Hi all,
>
> After a little struggle with the config scripts I managed to get a working
> setup that is able to process openstack-dev/sandbox and run
> noop-check-communication.
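The `status` check Ramy describes (telnet to port 4730) can also be scripted.
A minimal sketch, assuming a gearman server is reachable on localhost:4730;
the parsing follows gearman's text admin protocol (tab-separated
name/queued/running/workers lines, terminated by a lone `.`):

```python
import socket

def gearman_status(host="localhost", port=4730):
    """Scripted equivalent of `telnet <host> 4730` followed by `status`.

    Gearman's text admin protocol answers with one line per job,
    tab-separated: name, queued, running, available workers,
    terminated by a line containing a single '.'.
    Returns {job_name: (queued, running, workers)}.
    """
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(b"status\n")
        data = b""
        while not data.endswith(b".\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            data += chunk
    jobs = {}
    for line in data.decode().splitlines():
        if line == ".":
            break
        name, queued, running, workers = line.split("\t")
        jobs[name] = (int(queued), int(running), int(workers))
    return jobs

# A job such as build:dsvm-tempest-full can only run if its workers
# count (the last of the three numbers) is non-zero.
```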
> > Then, I tried enabling the dsvm-tempest-full job but it keeps returning
> "NOT_REGISTERED":
>
> 2014-12-19 12:07:14,683 INFO zuul.IndependentPipelineManager: Change depends on changes []
> 2014-12-19 12:07:14,683 INFO zuul.Gearman: Launch job noop-check-communication for change with dependent changes []
> 2014-12-19 12:07:14,693 INFO zuul.Gearman: Launch job dsvm-tempest-full for change with dependent changes []
> 2014-12-19 12:07:14,694 ERROR zuul.Gearman: Job handle: None name: build:dsvm-tempest-full unique: a9199d304d1140a8bf4448dfb1ae42c1> is not registered with Gearman
> 2014-12-19 12:07:14,694 INFO zuul.Gearman: Build handle: None name: build:dsvm-tempest-full unique: a9199d304d1140a8bf4448dfb1ae42c1> complete, result NOT_REGISTERED
> 2014-12-19 12:07:14,765 INFO zuul.Gearman: Build handle: H:127.0.0.1:2 name: build:noop-check-communication unique: 333c6ea077324a788e3c37a313d872c5> started
> 2014-12-19 12:07:14,910 INFO zuul.Gearman: Build handle: H:127.0.0.1:2 name: build:noop-check-communication unique: 333c6ea077324a788e3c37a313d872c5> complete, result SUCCESS
> 2014-12-19 12:07:14,916 INFO zuul.IndependentPipelineManager: Reporting change , actions: [ , {'verified': -1}>]
>
> Neither nodepool's logs nor jenkins' logs show any reference to
> dsvm-tempest-full.
>
> Any idea how to enable this job?
>
> Also, I got the "Cloud provider" set up and I can access it from the
> jenkins master.
> Any idea how I can manually trigger the dsvm-tempest-full job to run and
> test the cloud provider without having to push a review to Gerrit?
>
> Thanks,
> Eduard
>
> On Thu, Dec 18, 2014 at 7:52 PM, Eduard Matei <
> eduard.matei at cloudfounders.com> wrote:
>
> Thanks for the input.
>
> I managed to get another master working (on Ubuntu 13.10), again with some
> issues since it was already set up.
> I'm now working towards setting up the slave.
>
> Will add comments to those reviews.
> > Thanks,
> > Eduard
>
> On Thu, Dec 18, 2014 at 7:42 PM, Asselin, Ramy wrote:
>
> Yes, Ubuntu 12.04 is tested, as mentioned in the readme [1]. Note that
> the referenced script is just a wrapper that pulls all the latest from
> various locations in openstack-infra, e.g. [2].
> Ubuntu 14.04 support is WIP [3].
> FYI, there's a spec to get an in-tree 3rd party CI solution [4]. Please
> add your comments if this interests you.
>
> [1] https://github.com/rasselin/os-ext-testing/blob/master/README.md
> [2] https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29
> [3] https://review.openstack.org/#/c/141518/
> [4] https://review.openstack.org/#/c/139745/
>
> *From:* Punith S [mailto:punith.s at cloudbyte.com]
> *Sent:* Thursday, December 18, 2014 3:12 AM
> *To:* OpenStack Development Mailing List (not for usage questions); Eduard Matei
> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI
>
> Hi Eduard,
>
> We tried running
> https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
> on an Ubuntu 12.04 master, and it appears to be working fine on 12.04.
>
> Thanks
>
> On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei wrote:
>
> Hi,
>
> Seems I can't install using puppet on the jenkins master using
> install_master.sh from
> https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
> because it's running Ubuntu 11.10 and it appears unsupported.
> I managed to install puppet manually on the master, and everything else fails.
> So I'm trying to manually install zuul, nodepool and jenkins job builder,
> and see where I end up.
>
> The slave looks complete; I got some errors running install_slave, so I
> ran parts of the script manually, changing some params, and it appears
> installed, but there is no way to test it without the master.
>
> Any ideas welcome.
> > Thanks,
> > Eduard
>
> On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy wrote:
>
> Manually running the script requires a few environment settings. Take a
> look at the README here:
> https://github.com/openstack-infra/devstack-gate
>
> Regarding cinder, I'm using this repo to run our cinder jobs (fork from
> jaypipes):
> https://github.com/rasselin/os-ext-testing
>
> Note that this solution doesn't use the Jenkins gerrit trigger plugin,
> but zuul.
>
> There's a sample job for cinder here. It's in Jenkins Job Builder format.
> https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample
>
> You can ask more questions in IRC freenode #openstack-cinder. (irc# asselin)
>
> Ramy
>
> *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com]
> *Sent:* Tuesday, December 16, 2014 12:41 AM
> *To:* Bailey, Darragh
> *Cc:* OpenStack Development Mailing List (not for usage questions); OpenStack
> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI
>
> Hi,
>
> Can someone point me to some working documentation on how to set up third
> party CI? (joinfu's instructions don't seem to work, and manually running
> devstack-gate scripts fails:
>
> Running gate_hook
> Job timeout set to: 163 minutes
> timeout: failed to run command '/opt/stack/new/devstack-gate/devstack-vm-gate.sh': No such file or directory
> ERROR: the main setup script run by this job failed - exit code: 127
> please look at the relevant log files to determine the root cause
> Cleaning up host
> ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)
> Build step 'Execute shell' marked build as failure.
> > > > I have a working Jenkins slave with devstack and our internal libraries, i > have Gerrit Trigger Plugin working and triggering on patches created, i > just need the actual job contents so that it can get to comment with the > test results. > > > > Thanks, > > > > Eduard > > > > On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei < > eduard.matei at cloudfounders.com> wrote: > > Hi Darragh, thanks for your input > > > > I double checked the job settings and fixed it: > > - build triggers is set to Gerrit event > > - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin > and tested separately) > > - Trigger on: Patchset Created > > - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: > Type: Path, Pattern: ** (was Type Plain on both) > > Now the job is triggered by commit on openstack-dev/sandbox :) > > > > Regarding the Query and Trigger Gerrit Patches, i found my patch using > query: status:open project:openstack-dev/sandbox change:139585 and i can > trigger it manually and it executes the job. > > > > But i still have the problem: what should the job do? It doesn't actually > do anything, it doesn't run tests or comment on the patch. > > Do you have an example of job? > > > > Thanks, > > Eduard > > > > On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh wrote: > > Hi Eduard, > > > I would check the trigger settings in the job, particularly which "type" > of pattern matching is being used for the branches. Found it tends to be > the spot that catches most people out when configuring jobs with the > Gerrit Trigger plugin. If you're looking to trigger against all branches > then you would want "Type: Path" and "Pattern: **" appearing in the UI. 
> > If you have sufficient access using the 'Query and Trigger Gerrit > Patches' page accessible from the main view will make it easier to > confirm that your Jenkins instance can actually see changes in gerrit > for the given project (which should mean that it can see the > corresponding events as well). Can also use the same page to re-trigger > for PatchsetCreated events to see if you've set the patterns on the job > correctly. > > Regards, > Darragh Bailey > > "Nothing is foolproof to a sufficiently talented fool" - Unknown > > On 08/12/14 14:33, Eduard Matei wrote: > > Resending this to dev ML as it seems i get quicker response :) > > > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > > Patchset Created", chose as server the configured Gerrit server that > > was previously tested, then added the project openstack-dev/sandbox > > and saved. > > I made a change on dev sandbox repo but couldn't trigger my job. > > > > Any ideas? > > > > Thanks, > > Eduard > > > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > > > wrote: > > > > Hello everyone, > > > > Thanks to the latest changes to the creation of service accounts > > process we're one step closer to setting up our own CI platform > > for Cinder. > > > > So far we've got: > > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > > our storage solution) > > - Service account configured and tested (can manually connect to > > review.openstack.org and get events > > and publish comments) > > > > Next step would be to set up a job to do the actual testing, this > > is where we're stuck. > > Can someone please point us to a clear example on how a job should > > look like (preferably for testing Cinder on Kilo)? Most links > > we've found are broken, or tools/scripts are no longer working. 
> > Also, we cannot change the Jenkins master too much (it's owned by > > Ops team and they need a list of tools/scripts to review before > > installing/running so we're not allowed to experiment). > > > > Thanks, > > Eduard > > > > -- > > > > *Eduard Biceri Matei, Senior Software Developer* > > www.cloudfounders.com > > | eduard.matei at cloudfounders.com > > > > > > > > > > *CloudFounders, The Private Cloud Software Company* > > > > Disclaimer: > > This email and any files transmitted with it are confidential and > > intended solely for the use of the individual or entity to whom > > they are addressed.If you are not the named addressee or an > > employee or agent responsible for delivering this message to the > > named addressee, you are hereby notified that you are not > > authorized to read, print, retain, copy or disseminate this > > message or any part of it. If you have received this email in > > error we request you to notify us by reply e-mail and to delete > > all electronic files of the message. If you are not the intended > > recipient you are notified that disclosing, copying, distributing > > or taking any action in reliance on the contents of this > > information is strictly prohibited. E-mail transmission cannot be > > guaranteed to be secure or error free as information could be > > intercepted, corrupted, lost, destroyed, arrive late or > > incomplete, or contain viruses. The sender therefore does not > > accept liability for any errors or omissions in the content of > > this message, and shall have no liability for any loss or damage > > suffered by the user, which arise as a result of e-mail transmission. 
> > > > > > > > > > -- > > *Eduard Biceri Matei, Senior Software Developer* > > www.cloudfounders.com > > | eduard.matei at cloudfounders.com > > > > > > > > > > *CloudFounders, The Private Cloud Software Company* > > > > Disclaimer: > > This email and any files transmitted with it are confidential and > > intended solely for the use of the individual or entity to whom they > > are addressed.If you are not the named addressee or an employee or > > agent responsible for delivering this message to the named addressee, > > you are hereby notified that you are not authorized to read, print, > > retain, copy or disseminate this message or any part of it. If you > > have received this email in error we request you to notify us by reply > > e-mail and to delete all electronic files of the message. If you are > > not the intended recipient you are notified that disclosing, copying, > > distributing or taking any action in reliance on the contents of this > > information is strictly prohibited. E-mail transmission cannot be > > guaranteed to be secure or error free as information could be > > intercepted, corrupted, lost, destroyed, arrive late or incomplete, or > > contain viruses. The sender therefore does not accept liability for > > any errors or omissions in the content of this message, and shall have > > no liability for any loss or damage suffered by the user, which arise > > as a result of e-mail transmission. 
> _______________________________________________
> OpenStack-Infra mailing list
> OpenStack-Infra at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>
> --
> Eduard Biceri Matei, Senior Software Developer
> www.cloudfounders.com | eduard.matei at cloudfounders.com
> CloudFounders, The Private Cloud Software Company
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> regards,
> punith s
> cloudbyte.com
>
> --
> Eduard Biceri Matei, Senior Software Developer
> www.cloudfounders.com | eduard.matei at cloudfounders.com
> CloudFounders, The Private Cloud Software Company
>
> Disclaimer:
> This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed.
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
regards,
punith s
cloudbyte.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Tim.Bell at cern.ch Tue Dec 23 08:22:48 2014
From: Tim.Bell at cern.ch (Tim Bell)
Date: Tue, 23 Dec 2014 08:22:48 +0000
Subject: [openstack-dev] Hierarchical Multitenancy
In-Reply-To: <2964CB70-9ED7-42DE-9F17-498458A0031B@gmail.com>
References: <2964CB70-9ED7-42DE-9F17-498458A0031B@gmail.com>
Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E501025E032F@CERNXCHG43.cern.ch>

It would be great if we could get approval for the Hierarchical Quota handling in Nova too (https://review.openstack.org/#/c/129420/).
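As a rough illustration of what hierarchical quota enforcement means in practice (a sketch only, not the code under the review above; the project tree, limits, and usage numbers are all made up):

```python
# Illustrative sketch of hierarchical quota checking: with hierarchical
# multitenancy, a child project's resource usage must fit within its own
# quota and within the remaining quota of every ancestor project.

parents = {"child": "dept", "dept": "root", "root": None}  # hypothetical tree
quotas = {"root": 100, "dept": 40, "child": 20}            # instance limits
usage = {"root": 50, "dept": 10, "child": 5}               # current usage

def can_allocate(project, count):
    """True if `count` more instances fit under every level of the tree.

    Usage is kept flat here for brevity: each level's number already
    includes everything charged against it.
    """
    node = project
    while node is not None:
        if usage[node] + count > quotas[node]:
            return False
        node = parents[node]
    return True

print(can_allocate("child", 10))  # True: 15 <= 20, 20 <= 40, 60 <= 100
print(can_allocate("child", 20))  # False: would exceed the child's own quota
```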
Tim

From: Morgan Fainberg [mailto:morgan.fainberg at gmail.com]
Sent: 23 December 2014 01:22
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Hierarchical Multitenancy

Hi Raildo,

Thanks for putting this post together. I really appreciate all the work you guys have done (and continue to do) to get the Hierarchical Multitenancy code into Keystone. It's great to have the base implementation merged into Keystone for the K1 milestone. I look forward to seeing the rest of the development land during the rest of this cycle, and what the other OpenStack projects build around the HMT functionality.

Cheers,
Morgan

On Dec 22, 2014, at 1:49 PM, Raildo Mascena > wrote:

Hello folks,

My team and I developed the Hierarchical Multitenancy concept for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What have we implemented? What are the next steps for Kilo? To answer these questions, I created a blog post: http://raildo.me/hierarchical-multitenancy-in-openstack/

Any questions, I'm available.

--
Raildo Mascena
Software Engineer. Bachelor of Computer Science.
Distributed Systems Laboratory
Federal University of Campina Grande
Campina Grande, PB - Brazil

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From EvgenyF at Radware.com Tue Dec 23 08:57:34 2014
From: EvgenyF at Radware.com (Evgeny Fedoruk)
Date: Tue, 23 Dec 2014 08:57:34 +0000
Subject: [openstack-dev] [neutron][lbaas] Canceling lbaas meeting 12/16
In-Reply-To: <1419184201.13766.1.camel@localhost>
References: <1419184201.13766.1.camel@localhost>
Message-ID: 

Thanks Brandon

-----Original Message-----
From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM]
Sent: Sunday, December 21, 2014 7:50 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [neutron][lbaas] Canceling lbaas meeting 12/16

The extensions are remaining in neutron until the Neutron WSGI Refactor is completed, so it's easier for them to test all extensions and verify that nothing breaks. I do believe the plan is to move the extensions into the service repos once this is completed.

Thanks,
Brandon

On Sun, 2014-12-21 at 10:14 +0000, Evgeny Fedoruk wrote:
> Hi Doug,
> How are you?
> I have a question regarding the https://review.openstack.org/#/c/141247/ change set. Extension changes are not part of this change. I also see the whole extension mechanism is out of the new repository.
> I may have missed something. Are we replacing the mechanism with something else? Or will we add it separately in another change set?
>
> Thanks,
> Evg
>
> -----Original Message-----
> From: Doug Wiegley [mailto:dougw at a10networks.com]
> Sent: Sunday, December 14, 2014 7:46 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [neutron][lbaas] Canceling lbaas meeting 12/16
>
> Unless someone has an urgent agenda item, and due to the mid-cycle for Octavia, which has a bunch of overlap with the lbaas team, let's cancel this week. If you have post-split lbaas v2 questions, please find me in #openstack-lbaas.
> The only announcement was going to be: If you are waiting to re-submit/submit lbaasv2 changes for the new repo, please monitor this review, or make your change dependent on it:
>
> https://review.openstack.org/#/c/141247/
>
> Thanks,
> Doug

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From cloudbeyond at gmail.com Tue Dec 23 09:27:20 2014
From: cloudbeyond at gmail.com (CloudBeyond)
Date: Tue, 23 Dec 2014 17:27:20 +0800
Subject: [openstack-dev] [nova][libvirt] How to customize cpu features in nova
Message-ID: 

Dear Developers,

Sorry for interrupting if I sent this to the wrong email group, but I have a problem running Solaris 10 on Icehouse OpenStack. I found it is necessary to disable the CPU feature x2apic so that the Solaris 10 NIC can work in KVM, via the <cpu> definition (SandyBridge model, Intel vendor) in libvirt.xml; the XML snippet itself was stripped from the HTML mail. Without the line disabling x2apic, the NIC in Solaris does not work well.

I then tried to carry the KVM libvirt XML over to Nova, and found only two options to control the result. First I used the default setting cpu_mode = None in nova.conf; Solaris 10 would keep rebooting before entering the desktop environment. Then I set cpu_mode = custom, cpu_model = SandyBridge: Solaris 10 could start up, but the NIC did not work. I also set cpu_mode = host-model, cpu_model = None: Solaris 10 could work, but again the NIC did not.

I read the code located in /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py. Is it possible to do some hacking to customize the CPU features?

Thank you, and I am really looking forward to your reply.
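The libvirt.xml snippet referred to above was scrubbed from the HTML mail; based on the description (SandyBridge model, Intel vendor, x2apic disabled), it presumably looked something like the following reconstruction, where the match and fallback attributes are assumptions:

```xml
<!-- Reconstructed guest CPU definition; only the <feature> line is
     explicitly described in the mail, the rest is inferred. -->
<cpu mode='custom' match='exact'>
  <model fallback='allow'>SandyBridge</model>
  <vendor>Intel</vendor>
  <feature policy='disable' name='x2apic'/>
</cpu>
```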
Have a nice day and Merry Christmas!

Best regards,
Elbert Wang

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yongli.he at intel.com Tue Dec 23 09:27:34 2014
From: yongli.he at intel.com (yongli he)
Date: Tue, 23 Dec 2014 17:27:34 +0800
Subject: [openstack-dev] [nova][ThirdPartyCI][PCI CI] comments to Nova
Message-ID: <54993586.7010800@intel.com>

Hi, Joe Gordon and all,

Intel is setting up a hardware-based Third Party CI. It has already been running a set of basic PCI test cases for several weeks, but it does not send out comments; it just logs the results. The log server and these test cases seem stable. Here is one sample log: http://192.55.68.190/138795/6/

For now, the test cases live on GitHub: https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases

To begin posting comments to the nova repository, what other necessary work needs to be addressed?

Some notes:
* The test cases just cover basic PCI passthrough testing.
* After it begins working, more test cases will be added, including basic SR-IOV.

Thanks
Yongli He

More logs:
http://192.55.68.190/138795/6
http://192.55.68.190/74423/6
http://192.55.68.190/141115/6
http://192.55.68.190/142565/2
http://192.55.68.190/142835/3
http://192.55.68.190/74423/5
http://192.55.68.190/142835/2
http://192.55.68.190/140739/3
.....

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vikram.choudhary at huawei.com Tue Dec 23 10:06:56 2014
From: vikram.choudhary at huawei.com (Vikram Choudhary)
Date: Tue, 23 Dec 2014 10:06:56 +0000
Subject: [openstack-dev] Our idea for SFC using OpenFlow.
RE: [NFV][Telco] Service VM v/s its basic framework
In-Reply-To: <891761EAFA335D44AD1FFDB9B4A8C063DD95AF@G9W0762.americas.hpqcorp.net>
References: <891761EAFA335D44AD1FFDB9B4A8C063D8F9EE@G4W3216.americas.hpqcorp.net> <45286C18B80CE54EAE4BDD29C033604E7FF6C96C5E@HE111647.EMEA1.CDS.T-INTERNAL.COM> <99F160A7D70E22438C8ECB52BDB54B70B20DC9@blreml503-mbx> <99F160A7D70E22438C8ECB52BDB54B70B210FE@blreml503-mbx> <891761EAFA335D44AD1FFDB9B4A8C063DD95AF@G9W0762.americas.hpqcorp.net>
Message-ID: <99F160A7D70E22438C8ECB52BDB54B70B21E84@blreml503-mbx>

Hi Keshava,

Please find my answers inline:

From: A, Keshava [mailto:keshava.a at hp.com]
Sent: 22 December 2014 20:10
To: Vikram Choudhary; Murali B
Cc: openstack-dev at lists.openstack.org; Yuriy.Babenko at telekom.de; stephen.kf.wong at gmail.com; Dhruv Dhody; Dongfeng (C); Kalyankumar Asangi; A, Keshava
Subject: RE: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Vikram,

1. In this solution, is it assumed that all the OpenStack services are available/enabled on all the CNs?

Vikram: The SFC NB API should be independent of where the advanced services are deployed; the API should in fact hide this information by design. In the BP, we have proposed the idea of a service pool, which will be populated by the user with their respective advanced service instance(s). Our solution will just try to find the best service instance to use and dictate the flow path which the data traffic needs to follow to accomplish SFC. Let's say the service pool contains 2 LB and 2 FW services and the user wants an SFC as LB->FW. In such a scenario, a flow rule mentioning the details of the LB and FW instances, with flow path details, will be downloaded to the OVS. Please note that the details about the advanced services (like IP details, i.e. where the service is running, etc.) will be fetched from the neutron db.

2. Consider a scenario: for a particular Tenant, the flows are chained across a set of CNs.
Then if one of the VMs (of that Tenant) migrates to a new CN, where that Tenant was not present earlier, what will be the impact? How do we control the chaining of flows in this kind of scenario, so that packets will reach that Tenant VM on the new CN?

Vikram: If the deployment of advanced services changes, the neutron db will be updated and the corresponding actions (selection of an advanced service instance and the corresponding change in the OVS dataflow) will be taken. This is hidden from the user.

Here this Tenant VM would be an NFV Service-VM (which should be transparent to OpenStack).

keshava

From: Vikram Choudhary [mailto:vikram.choudhary at huawei.com]
Sent: Monday, December 22, 2014 12:28 PM
To: Murali B
Cc: openstack-dev at lists.openstack.org; Yuriy.Babenko at telekom.de; A, Keshava; stephen.kf.wong at gmail.com; Dhruv Dhody; Dongfeng (C); Kalyankumar Asangi
Subject: RE: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Sorry for the inconvenience. We will sort out the issue at the earliest. Please find the BP attached with the mail!

From: Murali B [mailto:mbirru at gmail.com]
Sent: 22 December 2014 12:20
To: Vikram Choudhary
Cc: openstack-dev at lists.openstack.org; Yuriy.Babenko at telekom.de; keshava.a at hp.com; stephen.kf.wong at gmail.com; Dhruv Dhody; Dongfeng (C); Kalyankumar Asangi
Subject: Re: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Thank you Vikram,

Could you or somebody please provide access to the full specification document?

Thanks
-Murali

On Mon, Dec 22, 2014 at 11:48 AM, Vikram Choudhary > wrote:

Hi Murali,

We have proposed a service function chaining idea using OpenFlow:
https://blueprints.launchpad.net/neutron/+spec/service-function-chaining-using-openflow

Will submit the same for review soon.
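The service-pool selection described in the answers above can be sketched as follows (illustrative only, not code from the blueprint; the pool contents, instance names, and load function are invented):

```python
# Sketch of the proposed service-pool idea: for each service type in the
# requested chain, pick one instance from the pool (here, the least
# loaded); the chosen hops would then be turned into OVS flow rules.

service_pool = {
    "LB": ["lb-1", "lb-2"],   # hypothetical load-balancer instances
    "FW": ["fw-1", "fw-2"],   # hypothetical firewall instances
}

def build_chain(chain_spec, load=lambda inst: 0):
    """chain_spec is like ['LB', 'FW']; returns one instance per hop,
    choosing the least-loaded instance of each type from the pool."""
    return [min(service_pool[svc], key=load) for svc in chain_spec]

print(build_chain(["LB", "FW"]))  # -> ['lb-1', 'fw-1']
```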
Thanks
Vikram

From: Yuriy.Babenko at telekom.de [mailto:Yuriy.Babenko at telekom.de]
Sent: 18 December 2014 19:35
To: openstack-dev at lists.openstack.org; stephen.kf.wong at gmail.com
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi,

In the IRC meeting yesterday we agreed to work on the use case for service function chaining, as it seems to be important for a lot of participants [1]. We will prepare the first draft and share it in the TelcoWG Wiki for discussion. There is one blueprint in OpenStack on that, in [2].

[1] http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt
[2] https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

Kind regards/Mit freundlichen Grüßen
Yuriy Babenko

From: A, Keshava [mailto:keshava.a at hp.com]
Sent: Wednesday, 10 December 2014 19:06
To: stephen.kf.wong at gmail.com; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There are many unknowns w.r.t. the 'Service-VM' and how it should be from an NFV perspective. In my opinion, it has not been decided how the Service-VM framework should look. Depending on this, we at OpenStack will also see an impact on 'Service Chaining'. Please find attached the mail w.r.t. that discussion with NFV for the 'Service-VM + OpenStack OVS related discussion'.

Regards,
keshava

From: Stephen Wong [mailto:stephen.kf.wong at gmail.com]
Sent: Wednesday, December 10, 2014 10:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There is already a ServiceVM project (Tacker), currently under development on stackforge: https://wiki.openstack.org/wiki/ServiceVM

If you are interested in this topic, please take a look at the wiki page above and see if the project's goals align with yours.
If so, you are certainly welcome to join the IRC meeting and start to contribute to the project's direction and design.

Thanks,
- Stephen

On Wed, Dec 10, 2014 at 7:01 AM, Murali B > wrote:

Hi Keshava,

We would like to contribute towards service chaining and NFV. Could you please share any document you have related to the service VM?

The service chain can be achieved if we are able to redirect the traffic to the service VM using OVS flows; in this case we do not need routing enabled on the service VM (the traffic is redirected at L2). All the tenant VMs in the cloud could use this service VM's services by adding the OVS rules in OVS.

Thanks
-Murali

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From swati.shukla1 at tcs.com Tue Dec 23 10:24:02 2014
From: swati.shukla1 at tcs.com (Swati Shukla1)
Date: Tue, 23 Dec 2014 15:54:02 +0530
Subject: [openstack-dev] #PERSONAL# : Horizon -- File permission error in Horizon
Message-ID: 

Hi All,

I am getting this error when I run Horizon: "horizon.utils.secret_key.FilePermissionError: Insecure key file permissions!"

Can you please guide me to debug this?

Thanks and Regards,
Swati Shukla
Tata Consultancy Services
Mailto: swati.shukla1 at tcs.com
Website: http://www.tcs.com
____________________________________________
Experience certainty. IT Services
Business Solutions
Consulting
____________________________________________

=====-----=====-----=====
Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited.
If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From asalkeld at mirantis.com Tue Dec 23 11:37:49 2014
From: asalkeld at mirantis.com (Angus Salkeld)
Date: Tue, 23 Dec 2014 21:37:49 +1000
Subject: [openstack-dev] [Heat] cancel the next 2 weekly meetings
Message-ID: 

Hi

Let's cancel the next 2 weekly meetings, as they neatly fall on Christmas Eve and New Year's Day.

Happy holidays!

Regards
Angus

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ihrachys at redhat.com Tue Dec 23 11:47:36 2014
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Tue, 23 Dec 2014 12:47:36 +0100
Subject: [openstack-dev] Cross distribution talks on Friday
In-Reply-To: <549916EC.3040002@debian.org>
References: <5454DC71.7040300@debian.org> <20141101152935.GA12516@tesla> <545533E1.1070202@debian.org> <54944A7F.6010500@redhat.com> <549916EC.3040002@debian.org>
Message-ID: <54995658.1060705@redhat.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On 23/12/14 08:17, Thomas Goirand wrote:
> On 12/19/2014 11:55 PM, Ihar Hrachyshka wrote:
>> Note that OSLO_PACKAGE_VERSION is not public.
>
> Well, it used to be public; it was added and discussed a few years ago because of issues I had with packaging.
>
>> Instead, we should use PBR_VERSION:
>>
>> http://docs.openstack.org/developer/pbr/packagers.html#versioning
>
> I don't mind switching, though it's going to be a slow process (because I'm using OSLO_PACKAGE_VERSION in all packages).
>
> Are we at least *sure* that using OSLO_PACKAGE_VERSION is now deprecated?

I haven't said anyone should go forward and switch all existing build manifests. ;) I think Doug Hellmann should be able to answer your question more formally. Adding him to CC.
> Thomas
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUmVZYAAoJEC5aWaUY1u57E8UH/1lekBJpSRFQsRCvudG2dIyr
lBk+W9ZsfzsOPbdOEf1xtZprU2qz6SwV6zZ6vD/qy7pXoFQe9Z8DP1K2VjPc1JGC
G+maz8HWYxCjgc7UL8nKWqh1gIzSvYN0KkNJYBAHmn39bV7EjSnHJ2y7o2vG57bE
nJA6DlRw3oDdfagWwZr3E2A1+WDDkoAImkj9XZeYQjzal5EMsHyMrWMWlcMvt3Sg
x4SGtxxRmYOkzARpZrtCfrsm5JZAC21mX8aJJdoRVwOwCPUHZi9mG1X821NJdvh6
fLVnpu6dCFTo+oKZyESRoPu6BUZOKGxElV2pp2UrIJJEJ3t3mHrPGKXOde28KPg=
=ydXr
-----END PGP SIGNATURE-----

From baoli at cisco.com Tue Dec 23 14:42:26 2014
From: baoli at cisco.com (Robert Li (baoli))
Date: Tue, 23 Dec 2014 14:42:26 +0000
Subject: [openstack-dev] [qa] host aggregate's availability zone
In-Reply-To: 
Message-ID: 

Hi Danny,

Check this link out: https://wiki.openstack.org/wiki/Scheduler_Filters

Add the following to your /etc/nova/nova.conf before starting the nova service:

scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter

Or you can do so in your local.conf:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_alias={"name":"cisco","vendor_id":"8086","product_id":"10ed"}
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter

- Robert

On 12/22/14, 9:53 AM, "Danny Choi (dannchoi)" > wrote:

Hi Joe,

No, I did not. I'm not aware of this. Can you tell me exactly what needs to be done?
Thanks,
Danny

------------------------------

Date: Sun, 21 Dec 2014 11:42:02 -0600
From: Joe Cropper >
To: "OpenStack Development Mailing List (not for usage questions)" >
Subject: Re: [openstack-dev] [qa] host aggregate's availability zone
Message-ID: >
Content-Type: text/plain; charset="utf-8"

Did you enable the AvailabilityZoneFilter in nova.conf that the scheduler uses? And enable the FilterScheduler? These are two common issues related to this.

- Joe

On Dec 21, 2014, at 10:28 AM, Danny Choi (dannchoi) > wrote:

Hi,

I have a multi-node setup with 2 compute hosts, qa5 and qa6. I created 2 host aggregates, each with its own availability zone, and assigned one compute host to each:

localadmin at qa4:~/devstack$ nova aggregate-details host-aggregate-zone-1
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 9  | host-aggregate-zone-1 | az-1              | 'qa5' | 'availability_zone=az-1' |
+----+-----------------------+-------------------+-------+--------------------------+

localadmin at qa4:~/devstack$ nova aggregate-details host-aggregate-zone-2
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 10 | host-aggregate-zone-2 | az-2              | 'qa6' | 'availability_zone=az-2' |
+----+-----------------------+-------------------+-------+--------------------------+

My intent is to control which compute host a VM is launched on via the host aggregate's availability-zone parameter.
To test, for vm-1, I specify --availiability-zone=az-1, and --availiability-zone=az-2 for vm-2: localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-1 vm-1 +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | instance-00000066 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | - | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | kxot3ZBZcBH6 | | config_drive | | | created | 2014-12-21T15:59:03Z | | flavor | m1.tiny (1) | | hostId | | | id | 854acae9-b718-4ea5-bc28-e0bc46378b60 | | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | | key_name | - | | metadata | {} | | name | vm-1 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | 84827057a7444354b0bff11566ccb80b | | updated | 2014-12-21T15:59:03Z | | user_id | 9d5fd9947d154a2db396fce177f1f83c | +--------------------------------------+----------------------------------------------------------------+ localadmin at qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-2 vm-2 +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | 
OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | instance-00000067 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | 2kXQpV2u9TVv | | config_drive | | | created | 2014-12-21T15:59:55Z | | flavor | m1.tiny (1) | | hostId | | | id | ce1b5dca-a844-4c59-bb00-39a617646c59 | | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | | key_name | - | | metadata | {} | | name | vm-2 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | 84827057a7444354b0bff11566ccb80b | | updated | 2014-12-21T15:59:55Z | | user_id | 9d5fd9947d154a2db396fce177f1f83c | +--------------------------------------+????????????????????????????????+ However, both VMs ended up at compute host qa5: localadmin at qa4:~/devstack$ nova hypervisor-servers q +--------------------------------------+-------------------+---------------+---------------------+ | ID | Name | Hypervisor ID | Hypervisor Hostname | +--------------------------------------+-------------------+---------------+---------------------+ | 854acae9-b718-4ea5-bc28-e0bc46378b60 | instance-00000066 | 1 | qa5 | | ce1b5dca-a844-4c59-bb00-39a617646c59 | instance-00000067 | 1 | qa5 | +--------------------------------------+-------------------+---------------+---------------------+ localadmin at qa4:~/devstack$ nova show vm-1 +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | az-1 | | OS-EXT-SRV-ATTR:host | qa5 | | OS-EXT-SRV-ATTR:hypervisor_hostname | qa5 | | OS-EXT-SRV-ATTR:instance_name | 
instance-00000066 | | OS-EXT-STS:power_state | 1 | | OS-EXT-STS:task_state | - | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2014-12-21T16:03:15.000000 | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2014-12-21T15:59:03Z | | flavor | m1.tiny (1) | | hostId | 89119faac9345b51f185bd8b6c2e091644f1544cd523067ecce64613 | | id | 854acae9-b718-4ea5-bc28-e0bc46378b60 | | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | | key_name | - | | metadata | {} | | name | vm-1 | | os-extended-volumes:volumes_attached | [] | | private network | 10.0.0.70 | | progress | 0 | | security_groups | default | | status | ACTIVE | | tenant_id | 84827057a7444354b0bff11566ccb80b | | updated | 2014-12-21T15:59:11Z | | user_id | 9d5fd9947d154a2db396fce177f1f83c | +--------------------------------------+----------------------------------------------------------------+ localadmin at qa4:~/devstack$ nova show vm-2 +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | az-1 | | OS-EXT-SRV-ATTR:host | qa5 | | OS-EXT-SRV-ATTR:hypervisor_hostname | qa5 | | OS-EXT-SRV-ATTR:instance_name | instance-00000067 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | spawning | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2014-12-21T15:59:55Z | | flavor | m1.tiny (1) | | hostId | 89119faac9345b51f185bd8b6c2e091644f1544cd523067ecce64613 | | id | ce1b5dca-a844-4c59-bb00-39a617646c59 | | image | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | | key_name | - | | metadata | {} | | name | vm-2 | | os-extended-volumes:volumes_attached | [] | | private 
network | 10.0.0.71 | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | 84827057a7444354b0bff11566ccb80b | | updated | 2014-12-21T15:59:56Z | | user_id | 9d5fd9947d154a2db396fce177f1f83c | +--------------------------------------+----------------------------------------------------------------+

Is it supposed to work this way? Did I miss something here?

Thanks, Danny

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From doug at doughellmann.com Tue Dec 23 14:53:47 2014
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 23 Dec 2014 09:53:47 -0500
Subject: [openstack-dev] Cross distribution talks on Friday
In-Reply-To: <549916EC.3040002@debian.org>
References: <5454DC71.7040300@debian.org> <20141101152935.GA12516@tesla> <545533E1.1070202@debian.org> <54944A7F.6010500@redhat.com> <549916EC.3040002@debian.org>
Message-ID:

On Dec 23, 2014, at 2:17 AM, Thomas Goirand wrote:

> On 12/19/2014 11:55 PM, Ihar Hrachyshka wrote:
>> Note that OSLO_PACKAGE_VERSION is not public.
>
> Well, it used to be public, it has been added and discussed a few years
> ago because of issues I had with packaging.
>
>> Instead, we should use
>> PBR_VERSION:
>>
>> http://docs.openstack.org/developer/pbr/packagers.html#versioning
>
> I don't mind switching, though it's going to be a slow process (because
> I'm using OSLO_PACKAGE_VERSION in all packages).
>
> Are we at least *sure* that using OSLO_PACKAGE_VERSION is now deprecated?

It's not marked as deprecated [1], but I think we added PBR_VERSION because the name OSLO_PACKAGE_VERSION made less sense to someone outside of OpenStack who isn't familiar with the fact that the Oslo program manages pbr.
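[Editor's note] The override behaviour being discussed can be sketched as follows. This is a simplified model of how pbr lets packagers pin a version through an environment variable, not pbr's actual code (see the packaging.py link in the thread), and the precedence order shown is an assumption:

```python
import os

def package_version(fallback="0.0.0"):
    """Simplified sketch: an explicit environment variable wins over the
    version pbr would otherwise derive from git metadata.
    (Illustrative only -- see pbr/packaging.py for the real logic;
    the PBR_VERSION-before-OSLO_PACKAGE_VERSION order is an assumption.)"""
    for var in ("PBR_VERSION", "OSLO_PACKAGE_VERSION"):
        value = os.environ.get(var)
        if value:
            return value
    return fallback  # in real pbr this comes from git tags / sdist metadata

os.environ["PBR_VERSION"] = "2014.2.1"
print(package_version())  # -> 2014.2.1
```

This is why a packaging workflow can migrate gradually: exporting both variables during the transition keeps old and new packaging scripts working.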
Doug

[1] http://git.openstack.org/cgit/openstack-dev/pbr/tree/pbr/packaging.py#n641

> > Thomas > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From sbauza at redhat.com Tue Dec 23 14:55:51 2014
From: sbauza at redhat.com (Sylvain Bauza)
Date: Tue, 23 Dec 2014 15:55:51 +0100
Subject: [openstack-dev] [qa] host aggregate's availability zone
In-Reply-To: References: Message-ID: <54998277.9070701@redhat.com>

On 23/12/2014 15:42, Robert Li (baoli) wrote:
> Hi Danny,
>
> check this link out.
> https://wiki.openstack.org/wiki/Scheduler_Filters
>
> Add the following into your /etc/nova/nova.conf before starting the
> nova service.
>
> scheduler_default_filters = RetryFilter, AvailabilityZoneFilter,
> RamFilter, ComputeFilter, ComputeCapabilitiesFilter,
> ImagePropertiesFilter, ServerGroupAntiAffinityFilter,
> ServerGroupAffinityFilter, AvailabilityZoneFilter
>
> Or, you can do so in your local.conf
> [[post-config|$NOVA_CONF]]
> [DEFAULT]
> pci_alias={"name":"cisco","vendor_id":"8086","product_id":"10ed"}
> scheduler_default_filters = RetryFilter, AvailabilityZoneFilter,
> RamFilter, ComputeFilter, ComputeCapabilitiesFilter,
> ImagePropertiesFilter, ServerGroupAntiAffinityFilter,
> ServerGroupAffinityFilter, AvailabilityZoneFilter
>
>

That's weird, because the default value for scheduler_default_filters is: cfg.ListOpt('scheduler_default_filters', default=[ 'RetryFilter', 'AvailabilityZoneFilter', 'RamFilter', 'ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter', ], The AZ filter is present, so I suspect something is wrong elsewhere. Could you maybe paste your log files for the nova-scheduler log? Also, please stop posting to the -dev ML, I think it's more appropriate to the openstack@ ML. We need more details before creating a bug.
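[Editor's note] The check the AvailabilityZoneFilter performs boils down to comparing the zone requested at boot against the zone advertised by each host's aggregate metadata. A minimal sketch with plain dicts (not Nova's real HostState/request-spec objects):

```python
DEFAULT_AZ = "nova"  # hosts in no zoned aggregate fall back to the default zone

def hosts_for_zone(requested_az, host_aggregates):
    """host_aggregates maps host name -> availability_zone taken from its
    aggregate metadata (None if the host belongs to no zoned aggregate).
    Returns the hosts the AZ filter would let through."""
    return [host for host, az in host_aggregates.items()
            if (az or DEFAULT_AZ) == requested_az]

# Mirrors the thread's setup: qa5 in az-1, qa6 in az-2.
aggregates = {"qa5": "az-1", "qa6": "az-2"}
print(hosts_for_zone("az-2", aggregates))  # -> ['qa6']
```

With the filter active, a boot request for az-2 can only land on qa6; both VMs ending up on qa5 therefore points at the filter not running, or the requested zone never reaching the scheduler.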
-Sylvain > ?Robert > > On 12/22/14, 9:53 AM, "Danny Choi (dannchoi)" > wrote: > > Hi Joe, > > No, I did not. I?m not aware of this. > > Can you tell me exactly what needs to be done? > > Thanks, > Danny > > ------------------------------ > > Date: Sun, 21 Dec 2014 11:42:02 -0600 > From: Joe Cropper > > To: "OpenStack Development Mailing List (not for usage questions)" > > > Subject: Re: [openstack-dev] [qa] host aggregate's availability zone > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Did you enable the AvailabilityZoneFilter in nova.conf that the > scheduler uses? And enable the FilterScheduler? These are two > common issues related to this. > > - Joe > > On Dec 21, 2014, at 10:28 AM, Danny Choi (dannchoi) > > wrote: > Hi, > I have a multi-node setup with 2 compute hosts, qa5 and qa6. > I created 2 host-aggregate, each with its own availability > zone, and assigned one compute host: > localadmin at qa4:~/devstack$ nova aggregate-details > host-aggregate-zone-1 > +----+-----------------------+-------------------+-------+--------------------------+ > | Id | Name | Availability Zone | Hosts | > Metadata | > +----+-----------------------+-------------------+-------+--------------------------+ > | 9 | host-aggregate-zone-1 | az-1 | 'qa5' | > 'availability_zone=az-1' | > +----+-----------------------+-------------------+-------+--------------------------+ > localadmin at qa4:~/devstack$ nova aggregate-details > host-aggregate-zone-2 > +----+-----------------------+-------------------+-------+--------------------------+ > | Id | Name | Availability Zone | Hosts | > Metadata | > +----+-----------------------+-------------------+-------+--------------------------+ > | 10 | host-aggregate-zone-2 | az-2 | 'qa6' | > 'availability_zone=az-2' | > +----+-----------------------+-------------------+-------+?????????????+ > My intent is to control at which compute host to launch a VM > via the host-aggregate?s availability-zone parameter. 
> To test, for vm-1, I specify --availiability-zone=az-1, and > --availiability-zone=az-2 for vm-2: > localadmin at qa4:~/devstack$ nova boot --image > cirros-0.3.2-x86_64-uec --flavor 1 --nic > net-id=5da9d715-19fd-47c7-9710-e395b5b90442 > --availability-zone az-1 vm-1 > +--------------------------------------+----------------------------------------------------------------+ > | Property | > Value | > +--------------------------------------+----------------------------------------------------------------+ > | OS-DCF:diskConfig | MANUAL | > | OS-EXT-AZ:availability_zone | nova | > | OS-EXT-SRV-ATTR:host | > - | > | OS-EXT-SRV-ATTR:hypervisor_hostname | > - | > | OS-EXT-SRV-ATTR:instance_name | > instance-00000066 | > | OS-EXT-STS:power_state | > 0 | > | OS-EXT-STS:task_state | > - | > | OS-EXT-STS:vm_state | building | > | OS-SRV-USG:launched_at | > - | > | OS-SRV-USG:terminated_at | > - | > | accessIPv4 > | | > | accessIPv6 > | | > | adminPass | kxot3ZBZcBH6 | > | config_drive > | | > | created | 2014-12-21T15:59:03Z | > | flavor | m1.tiny > (1) | > | hostId > | | > | id | > 854acae9-b718-4ea5-bc28-e0bc46378b60 | > | image | > cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | > | key_name | > - | > | metadata | {} | > | name | vm-1 | > | os-extended-volumes:volumes_attached | [] | > | progress | > 0 | > | security_groups | > default | > | status | > BUILD | > | tenant_id | > 84827057a7444354b0bff11566ccb80b | > | updated | 2014-12-21T15:59:03Z | > | user_id | > 9d5fd9947d154a2db396fce177f1f83c | > +--------------------------------------+----------------------------------------------------------------+ > localadmin at qa4:~/devstack$ nova boot --image > cirros-0.3.2-x86_64-uec --flavor 1 --nic > net-id=5da9d715-19fd-47c7-9710-e395b5b90442 > --availability-zone az-2 vm-2 > +--------------------------------------+----------------------------------------------------------------+ > | Property | > Value | > 
+--------------------------------------+----------------------------------------------------------------+ > | OS-DCF:diskConfig | MANUAL | > | OS-EXT-AZ:availability_zone | nova | > | OS-EXT-SRV-ATTR:host | > - | > | OS-EXT-SRV-ATTR:hypervisor_hostname | > - | > | OS-EXT-SRV-ATTR:instance_name | > instance-00000067 | > | OS-EXT-STS:power_state | > 0 | > | OS-EXT-STS:task_state | scheduling | > | OS-EXT-STS:vm_state | building | > | OS-SRV-USG:launched_at | > - | > | OS-SRV-USG:terminated_at | > - | > | accessIPv4 > | | > | accessIPv6 > | | > | adminPass | 2kXQpV2u9TVv | > | config_drive > | | > | created | 2014-12-21T15:59:55Z | > | flavor | m1.tiny > (1) | > | hostId > | | > | id | > ce1b5dca-a844-4c59-bb00-39a617646c59 | > | image | > cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | > | key_name | > - | > | metadata | {} | > | name | vm-2 | > | os-extended-volumes:volumes_attached | [] | > | progress | > 0 | > | security_groups | > default | > | status | > BUILD | > | tenant_id | > 84827057a7444354b0bff11566ccb80b | > | updated | 2014-12-21T15:59:55Z | > | user_id | > 9d5fd9947d154a2db396fce177f1f83c | > +--------------------------------------+????????????????????????????????+ > However, both VMs ended up at compute host qa5: > localadmin at qa4:~/devstack$ nova hypervisor-servers q > +--------------------------------------+-------------------+---------------+---------------------+ > | ID | Name | > Hypervisor ID | Hypervisor Hostname | > +--------------------------------------+-------------------+---------------+---------------------+ > | 854acae9-b718-4ea5-bc28-e0bc46378b60 | instance-00000066 | > 1 | qa5 | > | ce1b5dca-a844-4c59-bb00-39a617646c59 | instance-00000067 | > 1 | qa5 | > +--------------------------------------+-------------------+---------------+---------------------+ > localadmin at qa4:~/devstack$ nova show vm-1 > +--------------------------------------+----------------------------------------------------------------+ > | Property 
| > Value | > +--------------------------------------+----------------------------------------------------------------+ > | OS-DCF:diskConfig | MANUAL | > | OS-EXT-AZ:availability_zone | az-1 | > | OS-EXT-SRV-ATTR:host | > qa5 | > | OS-EXT-SRV-ATTR:hypervisor_hostname | > qa5 | > | OS-EXT-SRV-ATTR:instance_name | > instance-00000066 | > | OS-EXT-STS:power_state | > 1 | > | OS-EXT-STS:task_state | > - | > | OS-EXT-STS:vm_state | active | > | OS-SRV-USG:launched_at | > 2014-12-21T16:03:15.000000 | > | OS-SRV-USG:terminated_at | > - | > | accessIPv4 > | | > | accessIPv6 > | | > | config_drive > | | > | created | 2014-12-21T15:59:03Z | > | flavor | m1.tiny > (1) | > | hostId | > 89119faac9345b51f185bd8b6c2e091644f1544cd523067ecce64613 | > | id | > 854acae9-b718-4ea5-bc28-e0bc46378b60 | > | image | > cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | > | key_name | > - | > | metadata | {} | > | name | vm-1 | > | os-extended-volumes:volumes_attached | [] | > | private network | > 10.0.0.70 | > | progress | > 0 | > | security_groups | > default | > | status | ACTIVE | > | tenant_id | > 84827057a7444354b0bff11566ccb80b | > | updated | 2014-12-21T15:59:11Z | > | user_id | > 9d5fd9947d154a2db396fce177f1f83c | > +--------------------------------------+----------------------------------------------------------------+ > localadmin at qa4:~/devstack$ nova show vm-2 > +--------------------------------------+----------------------------------------------------------------+ > | Property | > Value | > +--------------------------------------+----------------------------------------------------------------+ > | OS-DCF:diskConfig | MANUAL | > | OS-EXT-AZ:availability_zone | az-1 | > | OS-EXT-SRV-ATTR:host | > qa5 | > | OS-EXT-SRV-ATTR:hypervisor_hostname | > qa5 | > | OS-EXT-SRV-ATTR:instance_name | > instance-00000067 | > | OS-EXT-STS:power_state | > 0 | > | OS-EXT-STS:task_state | spawning | > | OS-EXT-STS:vm_state | building | > | OS-SRV-USG:launched_at | > - | > | 
OS-SRV-USG:terminated_at | > - | > | accessIPv4 > | | > | accessIPv6 > | | > | config_drive > | | > | created | 2014-12-21T15:59:55Z | > | flavor | m1.tiny > (1) | > | hostId | > 89119faac9345b51f185bd8b6c2e091644f1544cd523067ecce64613 | > | id | > ce1b5dca-a844-4c59-bb00-39a617646c59 | > | image | > cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) | > | key_name | > - | > | metadata | {} | > | name | vm-2 | > | os-extended-volumes:volumes_attached | [] | > | private network | > 10.0.0.71 | > | progress | > 0 | > | security_groups | > default | > | status | > BUILD | > | tenant_id | > 84827057a7444354b0bff11566ccb80b | > | updated | 2014-12-21T15:59:56Z | > | user_id | > 9d5fd9947d154a2db396fce177f1f83c | > +--------------------------------------+----------------------------------------------------------------+ > Is it supposed to work this way? Do I missed something here? > Thanks, > Danny > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sgordon at redhat.com Tue Dec 23 14:58:45 2014 From: sgordon at redhat.com (Steve Gordon) Date: Tue, 23 Dec 2014 09:58:45 -0500 (EST) Subject: [openstack-dev] [nova][libvirt]How to customize cpu features in nova In-Reply-To: References: Message-ID: <129589601.1629659.1419346725212.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "CloudBeyond" > To: openstack-dev at lists.openstack.org > > Dear Developers, > > Sorry for interrupting if i sent to wrong email group, but i got a problem on > running Solaris 10 on icehouse openstack. > I found it is need to disable CPU feature x2apic so that solaris 10 NIC > could work in KVM as following code in libvirt.xml > > > SandyBridge > Intel > > > > if without line > > the NIC in Solaris could not work well. > > And I try to migrate the KVM libvirt xml to Nova. I found only two options > to control the result. > > First I used default setting cpu_mdoe = None in nova.conf , the Solaris 10 > would keep rebooting before enter desktop env. > > And then I set cpu_mode = custom, cpu_model = SandyBridge. Solaris 10 could > start up but NIC not work. > > I also set cpu_mode = host-model, cpu_model = None. Solaris 10 could work > but NIC not. > > I read the code located > in /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py. Is that > possible to do some hacking to customize the cpu feature? It's possible though as you note requires modification of the driver, if you want to try and do this in a way that is compatible with other efforts to handle guest OS specific customizations you might want to review this proposal: https://review.openstack.org/#/c/133945/4 -Steve From gkotton at vmware.com Tue Dec 23 15:15:25 2014 From: gkotton at vmware.com (Gary Kotton) Date: Tue, 23 Dec 2014 15:15:25 +0000 Subject: [openstack-dev] No meetings on Christmas or New Year's Days In-Reply-To: References: Message-ID: Two reasons to celebrate. Its Elvis?s birthday! 
On 12/22/14, 10:46 PM, "Carl Baldwin" wrote: >The L3 sub team meeting [1] will not be held until the 8th of January, >2015. Enjoy your time off. I will try to move some of the >refactoring patches along as I can but will be down to minimal hours. > >Carl > >[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam > >_______________________________________________ >OpenStack-dev mailing list >OpenStack-dev at lists.openstack.org >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ramy.asselin at hp.com Tue Dec 23 15:16:34 2014 From: ramy.asselin at hp.com (Asselin, Ramy) Date: Tue, 23 Dec 2014 15:16:34 +0000 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4240C5@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A424BC0@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A435949@G4W3223.americas.hpqcorp.net> Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4361EE@G4W3223.americas.hpqcorp.net> You should use 14.04 for the slave. The limitation for using 12.04 is only for the master since zuul?s apache configuration is WIP on 14.04 [1], and zuul does not run on the slave. Ramy [1] https://review.openstack.org/#/c/141518/ From: Punith S [mailto:punith.s at cloudbyte.com] Sent: Monday, December 22, 2014 11:37 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi Asselin, i'm following your readme https://github.com/rasselin/os-ext-testing for setting up our cloudbyte CI on two ubuntu 12.04 VM's(master and slave) currently the scripts and setup went fine as followed with the document. 
now both master and slave have been connected successfully, but in order to run the tempest integration test against our proposed cloudbyte cinder driver for kilo, we need to have devstack installed on the slave (in my understanding). but on installing the master devstack i'm getting permission issues on 12.04 when executing ./stack.sh, since master devstack suggests 14.04 or 13.10 ubuntu. and on the contrary, running install_slave.sh is failing on 13.10 due to a "puppet modules not found" error. is there a way to get this to work? thanks in advance

On Mon, Dec 22, 2014 at 11:10 PM, Asselin, Ramy wrote:

Eduard, A few items you can try:
1. Double-check that the job is in Jenkins
   a. If not, then that's the issue
2. Check that the processes are running correctly
   a. ps -ef | grep zuul
      i. Should have 2 zuul-server & 1 zuul-merger
   b. ps -ef | grep jenkins
      i. Should have 1 /usr/bin/daemon --name=jenkins & 1 /usr/bin/java
3. In Jenkins, Manage Jenkins, Gearman Plugin Config, "Test Connection"
4. Stop and start Zuul & Jenkins:
   a. service jenkins stop
   b. service zuul stop
   c. service zuul-merger stop
   d. service jenkins start
   e. service zuul start
   f. service zuul-merger start
Otherwise, I suggest you ask in the #openstack-infra irc channel. Ramy

From: Eduard Matei [mailto:eduard.matei at cloudfounders.com]
Sent: Sunday, December 21, 2014 11:01 PM
To: Asselin, Ramy
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

Thanks Ramy, Unfortunately i don't see dsvm-tempest-full in the "status" output. Any idea how i can get it "registered"? Thanks, Eduard

On Fri, Dec 19, 2014 at 9:43 PM, Asselin, Ramy wrote:

Eduard, If you run this command, you can see which jobs are registered:
>telnet localhost 4730
>status
There are 3 numbers per job: queued, running and workers that can run the job. Make sure the job is listed & the last number ("workers") is non-zero.
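[Editor's note] The gearman `status` output described above is tab-separated: function name, then the three per-job numbers. A short script can flag the problem jobs; this is a sketch assuming that standard format, and the sample data below is invented:

```python
SAMPLE_STATUS = """\
build:noop-check-communication\t0\t0\t1
build:dsvm-tempest-full\t0\t0\t0
"""

def jobs_without_workers(status_text):
    """Return gearman functions whose last column (workers able to run the
    job) is zero. A function missing from the list entirely is the case
    zuul reports as NOT_REGISTERED; a listed function with zero workers
    has no Jenkins executor that can actually run it."""
    missing = []
    for line in status_text.splitlines():
        if not line.strip():
            continue
        name, _queued, _running, workers = line.split("\t")
        if int(workers) == 0:
            missing.append(name)
    return missing

print(jobs_without_workers(SAMPLE_STATUS))  # -> ['build:dsvm-tempest-full']
```

Either way the fix is on the Jenkins side: the job definition (or its gearman registration) has to exist before zuul can dispatch it.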
To run the job again without submitting a patch set, leave a "recheck" comment on the patch & make sure your zuul layout.yaml is configured to trigger off that comment. For example [1]. Be sure to use the sandbox repository. [2] I'm not aware of other ways. Ramy

[1] https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L20
[2] https://github.com/openstack-dev/sandbox

From: Eduard Matei [mailto:eduard.matei at cloudfounders.com]
Sent: Friday, December 19, 2014 3:36 AM
To: Asselin, Ramy
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

Hi all, After a little struggle with the config scripts i managed to get a working setup that is able to process openstack-dev/sandbox and run noop-check-communication. Then, i tried enabling the dsvm-tempest-full job but it keeps returning "NOT_REGISTERED":

2014-12-19 12:07:14,683 INFO zuul.IndependentPipelineManager: Change depends on changes []
2014-12-19 12:07:14,683 INFO zuul.Gearman: Launch job noop-check-communication for change with dependent changes []
2014-12-19 12:07:14,693 INFO zuul.Gearman: Launch job dsvm-tempest-full for change with dependent changes []
2014-12-19 12:07:14,694 ERROR zuul.Gearman: Job is not registered with Gearman
2014-12-19 12:07:14,694 INFO zuul.Gearman: Build complete, result NOT_REGISTERED
2014-12-19 12:07:14,765 INFO zuul.Gearman: Build name: build:noop-check-communication unique: 333c6ea077324a788e3c37a313d872c5> started
2014-12-19 12:07:14,910 INFO zuul.Gearman: Build name: build:noop-check-communication unique: 333c6ea077324a788e3c37a313d872c5> complete, result SUCCESS
2014-12-19 12:07:14,916 INFO zuul.IndependentPipelineManager: Reporting change , actions: [, {'verified': -1}>]

Nodepool's log shows no reference to dsvm-tempest-full, and neither do jenkins' logs. Any idea how to enable this job? Also, i got the "Cloud provider" setup and i can access it from the jenkins master.
Any idea how i can manually trigger dsvm-tempest-full job to run and test the cloud provider without having to push a review to Gerrit? Thanks, Eduard On Thu, Dec 18, 2014 at 7:52 PM, Eduard Matei > wrote: Thanks for the input. I managed to get another master working (on Ubuntu 13.10), again with some issues since it was already setup. I'm now working towards setting up the slave. Will add comments to those reviews. Thanks, Eduard On Thu, Dec 18, 2014 at 7:42 PM, Asselin, Ramy > wrote: Yes, Ubuntu 12.04 is tested as mentioned in the readme [1]. Note that the referenced script is just a wrapper that pulls all the latest from various locations in openstack-infra, e.g. [2]. Ubuntu 14.04 support is WIP [3] FYI, there?s a spec to get an in-tree 3rd party ci solution [4]. Please add your comments if this interests you. [1] https://github.com/rasselin/os-ext-testing/blob/master/README.md [2] https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29 [3] https://review.openstack.org/#/c/141518/ [4] https://review.openstack.org/#/c/139745/ From: Punith S [mailto:punith.s at cloudbyte.com] Sent: Thursday, December 18, 2014 3:12 AM To: OpenStack Development Mailing List (not for usage questions); Eduard Matei Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI Hi Eduard we tried running https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh on ubuntu master 12.04, and it appears to be working fine on 12.04. thanks On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei > wrote: Hi, Seems i can't install using puppet on the jenkins master using install_master.sh from https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh because it's running Ubuntu 11.10 and it appears unsupported. I managed to install puppet manually on master and everything else fails So i'm trying to manually install zuul and nodepool and jenkins job builder, see where i end up. 
The slave looks complete, got some errors on running install_slave so i ran parts of the script manually, changing some params, and it appears installed, but no way to test it without the master. Any ideas welcome. Thanks, Eduard

On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy wrote:

Manually running the script requires a few environment settings. Take a look at the README here: https://github.com/openstack-infra/devstack-gate

Regarding cinder, I'm using this repo to run our cinder jobs (fork from jaypipes). https://github.com/rasselin/os-ext-testing

Note that this solution doesn't use the Jenkins gerrit trigger plugin, but zuul. There's a sample job for cinder here. It's in Jenkins Job Builder format. https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample

You can ask more questions in IRC freenode #openstack-cinder. (irc# asselin) Ramy

From: Eduard Matei [mailto:eduard.matei at cloudfounders.com]
Sent: Tuesday, December 16, 2014 12:41 AM
To: Bailey, Darragh
Cc: OpenStack Development Mailing List (not for usage questions); OpenStack
Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

Hi, Can someone point me to some working documentation on how to setup third party CI? (joinfu's instructions don't seem to work, and manually running devstack-gate scripts fails:

Running gate_hook
Job timeout set to: 163 minutes
timeout: failed to run command '/opt/stack/new/devstack-gate/devstack-vm-gate.sh': No such file or directory
ERROR: the main setup script run by this job failed - exit code: 127
please look at the relevant log files to determine the root cause
Cleaning up host ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)
Build step 'Execute shell' marked build as failure.
I have a working Jenkins slave with devstack and our internal libraries, i have Gerrit Trigger Plugin working and triggering on patches created, i just need the actual job contents so that it can get to comment with the test results. Thanks, Eduard On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei > wrote: Hi Darragh, thanks for your input I double checked the job settings and fixed it: - build triggers is set to Gerrit event - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin and tested separately) - Trigger on: Patchset Created - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: Type: Path, Pattern: ** (was Type Plain on both) Now the job is triggered by commit on openstack-dev/sandbox :) Regarding the Query and Trigger Gerrit Patches, i found my patch using query: status:open project:openstack-dev/sandbox change:139585 and i can trigger it manually and it executes the job. But i still have the problem: what should the job do? It doesn't actually do anything, it doesn't run tests or comment on the patch. Do you have an example of job? Thanks, Eduard On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh > wrote: Hi Eduard, I would check the trigger settings in the job, particularly which "type" of pattern matching is being used for the branches. Found it tends to be the spot that catches most people out when configuring jobs with the Gerrit Trigger plugin. If you're looking to trigger against all branches then you would want "Type: Path" and "Pattern: **" appearing in the UI. If you have sufficient access using the 'Query and Trigger Gerrit Patches' page accessible from the main view will make it easier to confirm that your Jenkins instance can actually see changes in gerrit for the given project (which should mean that it can see the corresponding events as well). Can also use the same page to re-trigger for PatchsetCreated events to see if you've set the patterns on the job correctly. 
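[Editor's note] The "Type" setting Darragh refers to is the usual trap: "Plain" compares the branch name as an exact string, while "Path" applies wildcard matching, so a literal pattern of ** only matches everything in Path mode. A rough model of the two compare types (not the plugin's actual implementation; `**` is approximated here with a single glob star):

```python
from fnmatch import fnmatch

def branch_matches(branch, pattern, match_type):
    """Sketch of the Gerrit Trigger compare types: 'plain' requires an
    exact match of the branch name; 'path' applies glob-style wildcards."""
    if match_type == "plain":
        return branch == pattern
    if match_type == "path":
        # Approximation: treat Ant-style '**' as a plain glob star.
        return fnmatch(branch, pattern.replace("**", "*"))
    raise ValueError("unknown compare type: %s" % match_type)

print(branch_matches("master", "**", "plain"))  # -> False: why the job never fired
print(branch_matches("master", "**", "path"))   # -> True
```

So a job configured with Plain/`**` silently matches nothing, which looks exactly like "the trigger never fires".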
Regards, Darragh Bailey "Nothing is foolproof to a sufficiently talented fool" - Unknown On 08/12/14 14:33, Eduard Matei wrote: > Resending this to dev ML as it seems i get quicker response :) > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > Patchset Created", chose as server the configured Gerrit server that > was previously tested, then added the project openstack-dev/sandbox > and saved. > I made a change on dev sandbox repo but couldn't trigger my job. > > Any ideas? > > Thanks, > Eduard > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > >> wrote: > > Hello everyone, > > Thanks to the latest changes to the creation of service accounts > process we're one step closer to setting up our own CI platform > for Cinder. > > So far we've got: > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > our storage solution) > - Service account configured and tested (can manually connect to > review.openstack.org and get events > and publish comments) > > Next step would be to set up a job to do the actual testing, this > is where we're stuck. > Can someone please point us to a clear example on how a job should > look like (preferably for testing Cinder on Kilo)? Most links > we've found are broken, or tools/scripts are no longer working. > Also, we cannot change the Jenkins master too much (it's owned by > Ops team and they need a list of tools/scripts to review before > installing/running so we're not allowed to experiment). 
> > Thanks, > Eduard > > -- > > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > > > > > *CloudFounders, The Private Cloud Software Company* > > Disclaimer: > This email and any files transmitted with it are confidential and > intended solely for the use of the individual or entity to whom > they are addressed.If you are not the named addressee or an > employee or agent responsible for delivering this message to the > named addressee, you are hereby notified that you are not > authorized to read, print, retain, copy or disseminate this > message or any part of it. If you have received this email in > error we request you to notify us by reply e-mail and to delete > all electronic files of the message. If you are not the intended > recipient you are notified that disclosing, copying, distributing > or taking any action in reliance on the contents of this > information is strictly prohibited. E-mail transmission cannot be > guaranteed to be secure or error free as information could be > intercepted, corrupted, lost, destroyed, arrive late or > incomplete, or contain viruses. The sender therefore does not > accept liability for any errors or omissions in the content of > this message, and shall have no liability for any loss or damage > suffered by the user, which arise as a result of e-mail transmission. 
> > > > > -- > *Eduard Biceri Matei, Senior Software Developer* > www.cloudfounders.com > | eduard.matei at cloudfounders.com > > > > > > *CloudFounders, The Private Cloud Software Company* > > Disclaimer: > This email and any files transmitted with it are confidential and > intended solely for the use of the individual or entity to whom they > are addressed.If you are not the named addressee or an employee or > agent responsible for delivering this message to the named addressee, > you are hereby notified that you are not authorized to read, print, > retain, copy or disseminate this message or any part of it. If you > have received this email in error we request you to notify us by reply > e-mail and to delete all electronic files of the message. If you are > not the intended recipient you are notified that disclosing, copying, > distributing or taking any action in reliance on the contents of this > information is strictly prohibited. E-mail transmission cannot be > guaranteed to be secure or error free as information could be > intercepted, corrupted, lost, destroyed, arrive late or incomplete, or > contain viruses. The sender therefore does not accept liability for > any errors or omissions in the content of this message, and shall have > no liability for any loss or damage suffered by the user, which arise > as a result of e-mail transmission. > > > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.matei at cloudfounders.com CloudFounders, The Private Cloud Software Company Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. 
If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. E-mail transmission cannot be guaranteed to be secure or error free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the content of this message, and shall have no liability for any loss or damage suffered by the user, which arise as a result of e-mail transmission. -- Eduard Biceri Matei, Senior Software Developer www.cloudfounders.com | eduard.matei at cloudfounders.com CloudFounders, The Private Cloud Software Company Disclaimer: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you are not the named addressee or an employee or agent responsible for delivering this message to the named addressee, you are hereby notified that you are not authorized to read, print, retain, copy or disseminate this message or any part of it. If you have received this email in error we request you to notify us by reply e-mail and to delete all electronic files of the message. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. 
_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- regards, punith s cloudbyte.com
_______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- regards, punith s cloudbyte.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Tue Dec 23 15:26:18 2014 From: soulxu at gmail.com (Alex Xu) Date: Tue, 23 Dec 2014 23:26:18 +0800 Subject: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy In-Reply-To: <549821B4.6050805@redhat.com> References: <549821B4.6050805@redhat.com> Message-ID: 2014-12-22 21:50 GMT+08:00 Sylvain Bauza : > > Le 22/12/2014 13:37, Alex Xu a écrit : > > > > 2014-12-22 10:36 GMT+08:00 Lingxian Kong : > >> 2014-12-22 9:21 GMT+08:00 Alex Xu : >> > >> > >> > 2014-12-22 9:01 GMT+08:00 Lingxian Kong : >> >> >> >> >> >> >> but what if the compute node comes back to normal? There will be >> >> instances in the same server group with affinity policy, but located >> >> on different hosts. >> >> >> > >> > If the operator decides to evacuate the instance from the failed host, we >> should >> > fence the failed host first. >> >> Yes, exactly. I mean the recommendation or prerequisite should be >> emphasized somewhere, e.g. the Operations Guide, otherwise it'll make >> things more confusing. But the issue you are working around is indeed a >> problem we should solve. >> >> > Yeah, you are right, we should document it if we think this makes sense. Thanks!
> > > > As I said, I'm not in favor of adding more complexity in the instance > group setup that is done in the conductor, for basic race condition reasons. > Emm... is there any way we can resolve it for now? > > If I understand correctly, the problem is when there is only one host for > all the instances belonging to a group with the affinity filter and this host > is down: the filter will deny any other host, and consequently the > request will fail while it should succeed. > > Yes, you understand correctly. Thanks for explaining that; it helps other people understand what we are talking about. > Is this really a problem? I mean, it appears to me that's normal > behaviour because a filter is by definition a *hard* policy. > Yeah, it isn't a problem in the normal case, but it is a problem for VM HA. So I want to ask whether we should tell users that using a *hard* policy means losing VM HA. If we choose that, maybe we should document it somewhere to notify users. But if users can have a *hard* policy and VM HA at the same time, and we don't break anything (except slightly more complex code), that sounds good for users. > > So, provided you would like to implement *soft* policies, that sounds more > like a *weigher* that you would like to have: i.e. make sure that hosts > running existing instances in the group are weighted more than other ones > so they'll be chosen every time, but in case they're down, allow the > scheduler to pick other hosts. > yes, a soft policy doesn't have this problem. > > HTH, > -Sylvain > > > > -- >> Regards!
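The *soft* policy described here maps naturally onto a weigher. Below is a minimal standalone sketch of the idea, not nova's actual scheduler plumbing: the plain lists and sets stand in for the scheduler's real HostState objects, and `weigh_hosts`/`group_hosts` are hypothetical names.

```python
# Sketch of a soft-affinity weigher: hosts already running members of the
# server group score highest, but no host is excluded outright, so if the
# preferred host is down the scheduler can still pick another one.
# The data model (strings and sets) is a stand-in for nova's real
# HostState objects, not the actual scheduler API.

def weigh_hosts(candidate_hosts, group_hosts):
    """Return candidates sorted best-first for soft affinity.

    candidate_hosts: hosts that passed all (hard) filters and are up.
    group_hosts: hosts currently running instances of the server group.
    """
    def weight(host):
        # 1.0 if the host already runs a group member, else 0.0.
        return 1.0 if host in group_hosts else 0.0

    return sorted(candidate_hosts, key=weight, reverse=True)


candidates = ["compute1", "compute2", "compute3"]
# The group's instances live on compute2, which is still up:
print(weigh_hosts(candidates, group_hosts={"compute2"})[0])  # compute2
# compute2 is down, so it never reaches the weigher; a live host wins:
print(weigh_hosts(["compute1", "compute3"], group_hosts={"compute2"})[0])  # compute1
```

Contrast with the *hard* filter: a filter would drop every host not in `group_hosts` and the request would fail when compute2 is down, which is exactly the VM HA problem raised above.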
>> ----------------------------------- >> Lingxian Kong >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > _______________________________________________ > OpenStack-dev mailing listOpenStack-dev at lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppouliot at microsoft.com Tue Dec 23 15:46:49 2014 From: ppouliot at microsoft.com (Peter Pouliot) Date: Tue, 23 Dec 2014 15:46:49 +0000 Subject: [openstack-dev] Hyper-V meeting Message-ID: Hi All, With the pending holidays we have a lack of quorum for today. We'll resume meetings after the New Year. p Peter J. Pouliot CISSP Microsoft Enterprise Cloud Solutions C:\OpenStack New England Research & Development Center 1 Memorial Drive Cambridge, MA 02142 P: 1.(857).4536436 E: ppouliot at microsoft.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From devananda.vdv at gmail.com Tue Dec 23 16:31:09 2014 From: devananda.vdv at gmail.com (Devananda van der Veen) Date: Tue, 23 Dec 2014 16:31:09 +0000 Subject: [openstack-dev] [Ironic] Cancelling next week's meeting Message-ID: With the winter break coming up (or already here, for some folks) I am cancelling next week's meeting on Dec 29. I had not cancelled last night's meeting ahead of time, but very few people attended, and with so few core reviewers present there wasn't much we could get done. We did not have a formal meeting, and just hung out in channel for about 15 minutes. This means our next meeting will be Jan 6th at 0500 UTC (Jan 5th at 9pm US west coast). 
See you all again after the break! Best, Devananda -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.w.chadwick at kent.ac.uk Tue Dec 23 16:34:41 2014 From: d.w.chadwick at kent.ac.uk (David Chadwick) Date: Tue, 23 Dec 2014 16:34:41 +0000 Subject: [openstack-dev] [Keystone] Bug in federation Message-ID: <549999A1.2070003@kent.ac.uk> Hi guys we now have the ABFAB federation protocol working with Keystone, using a modified mod_auth_kerb plugin for Apache (available from the project Moonshot web site). However, we did not change Keystone configuration from its original SAML federation configuration, when it was talking to SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code (which I believe had to be done for OpenID connect.) We simply replaced mod_shibboleth with mod_auth_kerb and talked to a completely different IDP with a different protocol. And everything worked just fine. Consequently Keystone is broken, since you can configure it to trust a particular IDP, talking a particular protocol, but Apache will happily talk to another IDP, using a different protocol, and Keystone cannot tell the difference and will happily accept the authenticated user. Keystone should reject any authenticated user who does not come from the trusted IDP talking the correct protocol. Otherwise there is no point in configuring Keystone with this information, if it is ignored by Keystone. BTW, we are using the Juno release. We should fix this bug in Kilo. As I have been saying for many months, Keystone does not know anything about SAML or ABFAB or OpenID Connect protocols, so there is currently no point in configuring this information into Keystone. Keystone is only aware of environmental parameters coming from Apache. So this is the protocol that Keystone recognises. 
If you want Keystone to try to control the federation protocol and IDPs used by Apache, then you will need the Apache plugins to pass the name of the IDP and the protocol being used as environmental parameters to Keystone, and then Keystone can check that the ones it has been configured to trust are actually being used by Apache. regards David From joe.gordon0 at gmail.com Tue Dec 23 16:34:31 2014 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Tue, 23 Dec 2014 08:34:31 -0800 Subject: [openstack-dev] Hierarchical Multitenancy In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E501025E032F@CERNXCHG43.cern.ch> References: <2964CB70-9ED7-42DE-9F17-498458A0031B@gmail.com> <5D7F9996EA547448BC6C54C8C5AAF4E501025E032F@CERNXCHG43.cern.ch> Message-ID: On Dec 23, 2014 12:26 AM, "Tim Bell" wrote: > > > > It would be great if we can get approval for the Hierarchical Quota handling in Nova too (https://review.openstack.org/#/c/129420/). Nova's spec deadline has passed, but I think this is a good candidate for an exception. We will announce the process for asking for a formal spec exception shortly after New Year's. > > > > Tim > > > > From: Morgan Fainberg [mailto:morgan.fainberg at gmail.com] > Sent: 23 December 2014 01:22 > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] Hierarchical Multitenancy > > > > Hi Raildo, > > > > Thanks for putting this post together. I really appreciate all the work you guys have done (and continue to do) to get the Hierarchical Multitenancy code into Keystone. It's great to have the base implementation merged into Keystone for the K1 milestone. I look forward to seeing the rest of the development land during the rest of this cycle and what the other OpenStack projects build around the HMT functionality.
> > > > Cheers, > > Morgan > > > > > > >> >> On Dec 22, 2014, at 1:49 PM, Raildo Mascena wrote: >> >> >> >> Hello folks, My team and I developed the Hierarchical Multitenancy concept for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What have we implemented? What are the next steps for Kilo? >> >> To answer these questions, I created a blog post http://raildo.me/hierarchical-multitenancy-in-openstack/ >> >> >> >> If you have any questions, I'm available. >> >> >> >> -- >> >> Raildo Mascena >> >> Software Engineer. >> >> Bachelor of Computer Science. >> >> Distributed Systems Laboratory >> Federal University of Campina Grande >> Campina Grande, PB - Brazil >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tnurlygayanov at mirantis.com Tue Dec 23 16:35:51 2014 From: tnurlygayanov at mirantis.com (Timur Nurlygayanov) Date: Tue, 23 Dec 2014 20:35:51 +0400 Subject: [openstack-dev] [Murano] Murano Agent In-Reply-To: <70bff7c448b84794a05a2383daba26d5@BY2PR42MB101.048d.mgd.msft.net> References: <70bff7c448b84794a05a2383daba26d5@BY2PR42MB101.048d.mgd.msft.net> Message-ID: Hi, the Murano python client allows you to work with the Murano API from the console interface (instead of the Web UI).
To install the Murano python client on Ubuntu, you can run the following commands: apt-get update apt-get install python-pip pip install -U pip setuptools pip install python-muranoclient On Tue, Dec 16, 2014 at 1:39 PM, wrote: > > > > Hi Team, > > > > I am installing Murano on the Ubuntu 14.04 Juno setup and would like to > know what components need to be installed in a separate VM for the Murano > agent. > > Please let me know why Murano-agent is required and the components that > need to be installed in it. > > > > Warm Regards, > > *Raghavendra Lad* > > > > > > ------------------------------ > > This message is for the designated recipient only and may contain > privileged, proprietary, or otherwise confidential information. If you have > received it in error, please notify the sender immediately and delete the > original. Any other use of the e-mail by you is prohibited. Where allowed > by local law, electronic communications with Accenture and its affiliates, > including e-mail and instant messaging (including content), may be scanned > by our systems for the purposes of information security and assessment of > internal compliance with Accenture policy. > > ______________________________________________________________________________________ > > www.accenture.com > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Timur, Senior QA Engineer OpenStack Projects Mirantis Inc My OpenStack summit schedule: http://kilodesignsummit.sched.org/timur.nurlygayanov#.VFSrD8mhhOI -------------- next part -------------- An HTML attachment was scrubbed...
URL: From afedorova at mirantis.com Tue Dec 23 17:20:45 2014 From: afedorova at mirantis.com (Aleksandra Fedorova) Date: Tue, 23 Dec 2014 21:20:45 +0400 Subject: [openstack-dev] [Fuel][Docs] Move fuel-web/docs to fuel-docs Message-ID: Blueprint https://blueprints.launchpad.net/fuel/+spec/fuel-dev-docs-merge-fuel-docs suggests that we move all documentation from fuel-web to the fuel-docs repository. While I agree that moving the Developer Guide to fuel-docs is a good idea, there is an issue with autodocs which currently blocks the whole process. If we move dev docs to fuel-docs as suggested by Christopher in [1], we will make it impossible to build fuel-docs without cloning the fuel-web repository and installing all nailgun dependencies into the current environment. And this is bad from both a CI and a user point of view. I think we should keep the fuel-docs repository self-contained, i.e. one should be able to build the docs without any external code. We can add a switch or a separate make target to build 'addons' to this documentation when explicitly requested, but it shouldn't be the default behaviour. Thus I think we need to split the documentation in the fuel-web/ repository and move the "static" part to fuel-docs, but keep the "dynamic" auto-generated part in the fuel-web repo. See patch [2]. Then to move docs from fuel-web to fuel-docs we need to perform the following steps: 1) Merge/abandon all docs-related patches to fuel-web, see full list [3] 2) Merge updated patch [2] which removes docs from the fuel-web repo, leaving autogenerated api docs only. 3) Disable docs CI for fuel-web 4) Add building of api docs to fuel-web/run_tests.sh. 5) Update the fuel-docs repository with new data as in patch [4] but excluding anything related to autodocs. 6) Implement an additional make target in fuel-docs to download and build autodocs from the fuel-web repo as a separate chapter. 7) Add this make target in fuel-docs CI.
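Step 6 above (an opt-in autodocs build that pulls in fuel-web only when explicitly requested) could be driven by a small helper like the sketch below. The repository URL, paths, and sphinx invocation are illustrative assumptions, not the actual Fuel tooling; the point is only that the default build path stays self-contained.

```python
# Sketch of step 6: an *optional* autodocs build that clones fuel-web and
# builds the auto-generated API chapter, so the default fuel-docs build
# stays self-contained. URL, paths and the sphinx-build call are
# illustrative assumptions, not the real Fuel build scripts.
import subprocess

FUEL_WEB_REPO = "https://github.com/stackforge/fuel-web.git"  # assumed URL

def autodocs_commands(workdir="/tmp/fuel-web", with_autodocs=False):
    """Return the commands the optional 'make autodocs' target would run.

    Returning the command list (rather than executing it directly) keeps
    the default build free of network and external-code dependencies.
    """
    if not with_autodocs:
        return []  # default build: static docs only, no external code
    return [
        ["git", "clone", "--depth", "1", FUEL_WEB_REPO, workdir],
        ["pip", "install", "-r", workdir + "/nailgun/requirements.txt"],
        ["sphinx-build", workdir + "/docs", "_build/html/api"],
    ]

def run_autodocs(**kwargs):
    # Only executed when the operator explicitly asks for the autodocs chapter.
    for cmd in autodocs_commands(**kwargs):
        subprocess.check_call(cmd)

print(len(autodocs_commands()))                     # 0: default is static-only
print(autodocs_commands(with_autodocs=True)[0][0])  # git
```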
[1] https://review.openstack.org/#/c/124551/ [2] https://review.openstack.org/#/c/143679/ [3] https://review.openstack.org/#/q/project:stackforge/fuel-web+status:open+file:%255Edoc.*,n,z [4] https://review.openstack.org/#/c/125234/ -- Aleksandra Fedorova Fuel Devops Engineer bookwar From ayoung at redhat.com Tue Dec 23 17:34:29 2014 From: ayoung at redhat.com (Adam Young) Date: Tue, 23 Dec 2014 12:34:29 -0500 Subject: [openstack-dev] [Keystone] Bug in federation In-Reply-To: <549999A1.2070003@kent.ac.uk> References: <549999A1.2070003@kent.ac.uk> Message-ID: <5499A7A5.2000403@redhat.com> On 12/23/2014 11:34 AM, David Chadwick wrote: > Hi guys > > we now have the ABFAB federation protocol working with Keystone, using a > modified mod_auth_kerb plugin for Apache (available from the project > Moonshot web site). However, we did not change Keystone configuration > from its original SAML federation configuration, when it was talking to > SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code > (which I believe had to be done for OpenID connect.) We simply replaced > mod_shibboleth with mod_auth_kerb and talked to a completely different > IDP with a different protocol. And everything worked just fine. > > Consequently Keystone is broken, since you can configure it to trust a > particular IDP, talking a particular protocol, but Apache will happily > talk to another IDP, using a different protocol, and Keystone cannot > tell the difference and will happily accept the authenticated user. > Keystone should reject any authenticated user who does not come from the > trusted IDP talking the correct protocol. Otherwise there is no point in > configuring Keystone with this information, if it is ignored by Keystone. The IDP and the Protocol should be passed from HTTPD in env vars. Can you confirm/deny that this is the case now? On the Apache side we are looking to expand the set of variables set. 
http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables mod_shib does support Shib-Identity-Provider : https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAttributeAccess#NativeSPAttributeAccess-CustomSPVariables Which should be sufficient: if the user is coming in via mod_shib, they are using SAML. > > BTW, we are using the Juno release. We should fix this bug in Kilo. > > As I have been saying for many months, Keystone does not know anything > about SAML or ABFAB or OpenID Connect protocols, so there is currently > no point in configuring this information into Keystone. Keystone is only > aware of environmental parameters coming from Apache. So this is the > protocol that Keystone recognises. If you want Keystone to try to > control the federation protocol and IDPs used by Apache, then you will > need the Apache plugins to pass the name of the IDP and the protocol > being used as environmental parameters to Keystone, and then Keystone > can check that the ones that it has been configured to trust, are > actually being used by Apache. > > regards > > David > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Tim.Bell at cern.ch Tue Dec 23 18:01:52 2014 From: Tim.Bell at cern.ch (Tim Bell) Date: Tue, 23 Dec 2014 18:01:52 +0000 Subject: [openstack-dev] Hierarchical Multitenancy In-Reply-To: References: <2964CB70-9ED7-42DE-9F17-498458A0031B@gmail.com> <5D7F9996EA547448BC6C54C8C5AAF4E501025E032F@CERNXCHG43.cern.ch> Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E501025E4E00@CERNXCHG43.cern.ch> Joe, Thanks, there seems to be good agreement on the spec and the matching implementation is well advanced with BARC so the risk is not too high. Launching HMT with quota in Nova in the same release cycle would also provide a more complete end user experience.
For CERN, this functionality is very interesting as it allows the central cloud providers to delegate the allocation of quotas to the LHC experiments. Thus, from a central perspective, we are able to allocate N thousand cores to an experiment and delegate their resource co-ordinator to prioritise the work within the experiment. Currently, we have many manual helpdesk tickets with significant latency to adjust the quotas. Tim From: Joe Gordon [mailto:joe.gordon0 at gmail.com] Sent: 23 December 2014 17:35 To: OpenStack Development Mailing List Subject: Re: [openstack-dev] Hierarchical Multitenancy On Dec 23, 2014 12:26 AM, "Tim Bell" > wrote: > > > > It would be great if we can get approval for the Hierarchical Quota handling in Nova too (https://review.openstack.org/#/c/129420/). Nova's spec deadline has passed, but I think this is a good candidate for an exception. We will announce the process for asking for a formal spec exception shortly after New Year's. > > > > Tim > > > > From: Morgan Fainberg [mailto:morgan.fainberg at gmail.com] > Sent: 23 December 2014 01:22 > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] Hierarchical Multitenancy > > > > Hi Raildo, > > > > Thanks for putting this post together. I really appreciate all the work you guys have done (and continue to do) to get the Hierarchical Multitenancy code into Keystone. It's great to have the base implementation merged into Keystone for the K1 milestone. I look forward to seeing the rest of the development land during the rest of this cycle and what the other OpenStack projects build around the HMT functionality. > > > > Cheers, > > Morgan > > > > > > >> >> On Dec 22, 2014, at 1:49 PM, Raildo Mascena > wrote: >> >> >> >> Hello folks, My team and I developed the Hierarchical Multitenancy concept for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What have we implemented? What are the next steps for Kilo?
>> >> To answer these questions, I created a blog post http://raildo.me/hierarchical-multitenancy-in-openstack/ >> >> >> >> If you have any questions, I'm available. >> >> >> >> -- >> >> Raildo Mascena >> >> Software Engineer. >> >> Bachelor of Computer Science. >> >> Distributed Systems Laboratory >> Federal University of Campina Grande >> Campina Grande, PB - Brazil >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ALONMA at il.ibm.com Tue Dec 23 19:07:01 2014 From: ALONMA at il.ibm.com (Alon Marx) Date: Tue, 23 Dec 2014 21:07:01 +0200 Subject: [openstack-dev] [3rd-party-ci] Cinder CI and CI accounts Message-ID: Hi All, In IBM we have several cinder drivers, with a number of CI accounts. In order to improve the CI management and maintenance, we decided to build a single Jenkins master that will run several jobs for the drivers we own. Adding the jobs to the Jenkins master went OK, but we encountered a problem with the CI accounts. We have several drivers and several accounts, but in the Jenkins master, the Zuul configuration has only one gerrit account that reports. So there are several questions: 1. Was this problem encountered by others? How did they solve it? 2. Is there a way to configure Zuul on the Jenkins master to report different jobs with different CI accounts? 3. If there is no way to configure the master to use several CI accounts, should we build a Jenkins master per driver? 4. Or another alternative, should we use a single CI account for all drivers we own, and report all results under that account? We'll appreciate any input.
Thanks, Alon -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.w.chadwick at kent.ac.uk Tue Dec 23 19:33:25 2014 From: d.w.chadwick at kent.ac.uk (David Chadwick) Date: Tue, 23 Dec 2014 19:33:25 +0000 Subject: [openstack-dev] [Keystone] Bug in federation In-Reply-To: <5499A7A5.2000403@redhat.com> References: <549999A1.2070003@kent.ac.uk> <5499A7A5.2000403@redhat.com> Message-ID: <5499C385.6040301@kent.ac.uk> Hi Adam On 23/12/2014 17:34, Adam Young wrote: > On 12/23/2014 11:34 AM, David Chadwick wrote: >> Hi guys >> >> we now have the ABFAB federation protocol working with Keystone, using a >> modified mod_auth_kerb plugin for Apache (available from the project >> Moonshot web site). However, we did not change Keystone configuration >> from its original SAML federation configuration, when it was talking to >> SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code >> (which I believe had to be done for OpenID connect.) We simply replaced >> mod_shibboleth with mod_auth_kerb and talked to a completely different >> IDP with a different protocol. And everything worked just fine. >> >> Consequently Keystone is broken, since you can configure it to trust a >> particular IDP, talking a particular protocol, but Apache will happily >> talk to another IDP, using a different protocol, and Keystone cannot >> tell the difference and will happily accept the authenticated user. >> Keystone should reject any authenticated user who does not come from the >> trusted IDP talking the correct protocol. Otherwise there is no point in >> configuring Keystone with this information, if it is ignored by Keystone. > The IDP and the Protocol should be passed from HTTPD in env vars. Can > you confirm/deny that this is the case now? 
What is passed from Apache is the 'PATH_INFO' variable, and it is set to the URL of Keystone that is being protected, which in our case is /OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth There are also the following arguments passed to Keystone 'wsgiorg.routing_args': (, {'identity_provider': u'KentProxy', 'protocol': u'saml2'}) and 'PATH_TRANSLATED': '/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth' So Apache is telling Keystone that it has protected the URL that Keystone has configured to be protected. However, Apache has been configured to protect this URL with the ABFAB protocol and the local Radius server, rather than the KentProxy IdP and the SAML2 protocol. So we could say that Apache is lying to Keystone, and because Keystone trusts Apache, then Keystone trusts Apache's lies and wrongly thinks that the correct IDP and protocol were used. The only sure way to protect Keystone from a wrongly or mal-configured Apache is to have end to end security, where Keystone gets a token from the IDP that it can validate, to prove that it is the trusted IDP that it is talking to. In other words, if Keystone is given the original signed SAML assertion from the IDP, it will know for definite that the user was authenticated by the trusted IDP using the trusted protocol regards David > > On the Apache side we are looking to expand the set of variables set. > http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables > > The original SAML assertion > > mod_shib does support Shib-Identity-Provider : > > > https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAttributeAccess#NativeSPAttributeAccess-CustomSPVariables > > > Which should be sufficient: if the user is coming in via mod_shib, they > are using SAML. > > > >> >> BTW, we are using the Juno release. We should fix this bug in Kilo. 
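The check being discussed in this thread, having Apache export the IdP and protocol it actually used so Keystone can compare them against its trusted configuration, could be sketched roughly as follows. The `IDP_ID` and `PROTOCOL_ID` environment variable names are hypothetical placeholders for whatever the patched Apache plugins would set; this is an illustration of the proposed check, not Keystone's actual federation code.

```python
# Sketch of the proposed trust check: reject any request where the
# IdP/protocol reported by Apache differs from the pair Keystone was
# configured to trust for this endpoint. IDP_ID / PROTOCOL_ID are
# hypothetical variables the Apache auth plugin would have to export.

class UntrustedFederation(Exception):
    pass

def check_federated_auth(environ, trusted_idp, trusted_protocol):
    idp = environ.get("IDP_ID")
    protocol = environ.get("PROTOCOL_ID")
    if idp != trusted_idp or protocol != trusted_protocol:
        raise UntrustedFederation(
            "got idp=%r protocol=%r, expected idp=%r protocol=%r"
            % (idp, protocol, trusted_idp, trusted_protocol))
    return True

# Apache actually used ABFAB against a local RADIUS server, but this
# Keystone endpoint was configured to trust KentProxy over saml2:
env = {"IDP_ID": "local-radius", "PROTOCOL_ID": "abfab"}
try:
    check_federated_auth(env, "KentProxy", "saml2")
except UntrustedFederation as exc:
    print("rejected:", exc)
```

Note this only closes the gap for a misconfigured Apache that reports honestly; as argued above, end-to-end validation of the signed assertion is the only defence against Apache reporting the wrong values.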
>> >> As I have been saying for many months, Keystone does not know anything >> about SAML or ABFAB or OpenID Connect protocols, so there is currently >> no point in configuring this information into Keystone. Keystone is only >> aware of environmental parameters coming from Apache. So this is the >> protocol that Keystone recognises. If you want Keystone to try to >> control the federation protocol and IDPs used by Apache, then you will >> need the Apache plugins to pass the name of the IDP and the protocol >> being used as environmental parameters to Keystone, and then Keystone >> can check that the ones that it has been configured to trust, are >> actually being used by Apache. >> >> regards >> >> David >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thinrichs at vmware.com Tue Dec 23 20:40:55 2014 From: thinrichs at vmware.com (Tim Hinrichs) Date: Tue, 23 Dec 2014 20:40:55 +0000 Subject: [openstack-dev] [Congress] simulate examples In-Reply-To: <928760976E4ACC46B58B36866F60B0DD080A8C3B@G1W3642.americas.hpqcorp.net> References: <928760976E4ACC46B58B36866F60B0DD080A8C3B@G1W3642.americas.hpqcorp.net> Message-ID: Here's a description. We need to get this added to the docs. Below is a full description of how you might utilize the Action-centric version of simulate. The idea is that if you describe the effects that an action/API call will have on the basic tables of nova/neutron/etc. (below called an Action Description policy), then you can ask Congress to simulate the execution of that action and answer a query in the resulting state. The only downside to the action-centric application of simulate is writing the Action Policy for all of the actions you care about. The other way to utilize simulate is to give it the changes in nova/neutron/etc. directly that you'd like to make. That is, instead of an action, you'll tell simulate what rows should be inserted and which ones should be deleted. An insertion is denoted with a plus (+) and a deletion is denoted with a minus (-). For example, to compute all the errors after 1. inserting a row into the nova:servers table with ID uuid1, 2TB of disk, and 10GB of memory (this isn't the actual schema, BTW) and 2. deleting the row from neutron:security_groups with the ID "uuid2" and name "alice_default_group" (again, not the real schema), you'd write something like the following. openstack congress policy simulate classification 'error(x)' 'nova:servers+("uuid1", "2TB", "10 GB") neutron:security_groups-("uuid2", "alice_default_group")' action But I'd suggest reading the following to see some of the options. ===================================== 1. CREATE ACTION DESCRIPTION POLICY ===================================== Suppose the table 'p' is a collection of key-value pairs: p(key, value). Suppose we have a single action 'set(key, newvalue)' that changes the existing value of 'key' to 'newvalue', or sets the value of 'key' to 'newvalue' if 'key' was not already assigned. We can describe the effects of 'set' using the following 3 Datalog rules.
The only downside to the action-centric application of simulate is writing the Action Policy for all of the actions you care about. The other way to utilize simulate is to give it the changes in nova/neutron/etc. directly that you'd like to make. That is, instead of an action, you'll tell simulate what rows should be inserted and which ones should be deleted. An insertion is denoted with a plus (+) and a deletion is denoted with a minus (-). For example, to compute all the errors after 1. inserting a row into the nova:servers table with ID uuid1, 2TB of disk, and 10GB of memory (this isn't the actual schema BTW) and 2. deleting the row from neutron:security_groups with the ID "uuid2" and name "alice_default_group" (again not the real schema), you'd write something like the following. openstack congress policy simulate classification 'error(x)' 'nova:servers+("uuid1", "2TB", "10 GB") neutron:security_groups-("uuid2", "alice_default_group")' action But I'd suggest reading the following to see some of the options. ===================================== 1. CREATE ACTION DESCRIPTION POLICY ===================================== Suppose the table 'p' is a collection of key-value pairs: p(key, value). Suppose we have a single action 'set(key, newvalue)' that changes the existing value of 'key' to 'newvalue' or sets the value of 'key' to 'newvalue' if 'key' was not already assigned. We can describe the effects of 'set' using the following 3 Datalog rules. p+(x,y) :- set(x,y) p-(x,oldy) :- set(x,y), p(x,oldy) action("set") The first thing we do is add each of these 3 rules to the policy named 'action'. $ openstack congress policy rule create action 'p+(x,y) :- set(x,y)' $ openstack congress policy rule create action 'p-(x,oldy) :- set(x,y), p(x,oldy)' $ openstack congress policy rule create action 'action("set")' ========================================= 2. ADD SOME KEY/VALUE PAIRS FOR TESTING ========================================= Here we'll populate the "classification"
policy with a few key/value pairs. $ openstack congress policy rule create classification 'p(101, 0)' $ openstack congress policy rule create classification 'p(202, "abc")' $ openstack congress policy rule create classification 'p(302, 9)' ================== 3. DEFINE POLICY ================== There's an error if a key's value is 9. $ openstack congress policy rule create classification 'error(x) :- p(x, 9)' =========================== 4. RUN SIMULATION QUERIES =========================== Each of the following is an example of a simulation query you might want to run. a) Simulate changing the value of key 101 to 5 and query the contents of p. $ openstack congress policy simulate classification 'p(x,y)' 'set(101, 5)' action p(101, 5) p(202, "abc") p(302, 9) b) Simulate changing the value of key 101 to 5 and query the error table $ openstack congress policy simulate classification 'error(x)' 'set(101, 5)' action error(302) c) Simulate changing the value of key 101 to 9 and query the error table. $ openstack congress policy simulate classification 'error(x)' 'set(101, 9)' action error(302) error(101) d) Simulate changing the value of key 101 to 9 and query the *change* in the error table. $ openstack congress policy simulate classification 'error(x)' 'set(101, 9)' action --delta error+(101) e) Simulate changing 101:9, 202:9, 302:1 and query the *change* in the error table. $ openstack congress policy simulate classification 'error(x)' 'set(101, 9) set(202, 9) set(302, 1)' action --delta error+(202) error+(101) error-(302) f) Simulate changing 101:9, 202:9, 302:1, and finally 101:15 (in that order). Then query the *change* in the error table. $ openstack congress policy simulate classification 'error(x)' 'set(101, 9) set(202, 9) set(302, 1) set(101, 15)' action --delta error+(202) error-(302) g) Simulate changing 101:9 and query the *change* in the error table, while asking for a debug trace of the computation. 
$ openstack congress policy simulate classification 'error(x)' 'set(101, 9)' action --delta --trace error+(101) RT : ** Simulate: Querying error(x) Clas : Call: error(x) Clas : | Call: p(x, 9) Clas : | Exit: p(302, 9) Clas : Exit: error(302) Clas : Redo: error(302) Clas : | Redo: p(302, 9) Clas : | Fail: p(x, 9) Clas : Fail: error(x) Clas : Found answer [error(302)] RT : Original result of error(x) is [error(302)] RT : ** Simulate: Applying sequence [set(101, 9)] Action: Call: action(x) ... Tim ________________________________ From: Tran, Steven Sent: Monday, December 22, 2014 10:38 PM To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [Congress] simulate examples Hi, Does anyone have an example on how to use 'simulate' according to the following command line usage? usage: openstack congress policy simulate [-h] [--delta] [--trace] What are the query and sequence? The example under /opt/stack/congress/examples doesn't mention query and sequence. It seems like all 4 parameters are required. Thanks, -Steven -------------- next part -------------- An HTML attachment was scrubbed... URL: From dolph.mathews at gmail.com Tue Dec 23 21:08:41 2014 From: dolph.mathews at gmail.com (Dolph Mathews) Date: Tue, 23 Dec 2014 15:08:41 -0600 Subject: [openstack-dev] [Keystone] Bug in federation In-Reply-To: <5499C385.6040301@kent.ac.uk> References: <549999A1.2070003@kent.ac.uk> <5499A7A5.2000403@redhat.com> <5499C385.6040301@kent.ac.uk> Message-ID: On Tue, Dec 23, 2014 at 1:33 PM, David Chadwick wrote: > Hi Adam > > On 23/12/2014 17:34, Adam Young wrote: > > On 12/23/2014 11:34 AM, David Chadwick wrote: > >> Hi guys > >> > >> we now have the ABFAB federation protocol working with Keystone, using a > >> modified mod_auth_kerb plugin for Apache (available from the project > >> Moonshot web site). However, we did not change Keystone configuration > >> from its original SAML federation configuration, when it was talking to > >> SAML IDPs, using mod_shibboleth.
Neither did we modify the Keystone code > >> (which I believe had to be done for OpenID connect.) We simply replaced > >> mod_shibboleth with mod_auth_kerb and talked to a completely different > >> IDP with a different protocol. And everything worked just fine. > >> > >> Consequently Keystone is broken, since you can configure it to trust a > >> particular IDP, talking a particular protocol, but Apache will happily > >> talk to another IDP, using a different protocol, and Keystone cannot > >> tell the difference and will happily accept the authenticated user. > >> Keystone should reject any authenticated user who does not come from the > >> trusted IDP talking the correct protocol. Otherwise there is no point in > >> configuring Keystone with this information, if it is ignored by > Keystone. > > The IDP and the Protocol should be passed from HTTPD in env vars. Can > > you confirm/deny that this is the case now? > > What is passed from Apache is the 'PATH_INFO' variable, and it is set to > the URL of Keystone that is being protected, which in our case is > /OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth > > There are also the following arguments passed to Keystone > 'wsgiorg.routing_args': ( 0x7ffaba339190>, {'identity_provider': u'KentProxy', 'protocol': u'saml2'}) > > and > > 'PATH_TRANSLATED': > > '/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth' > > So Apache is telling Keystone that it has protected the URL that > Keystone has configured to be protected. > > However, Apache has been configured to protect this URL with the ABFAB > protocol and the local Radius server, rather than the KentProxy IdP and > the SAML2 protocol. So we could say that Apache is lying to Keystone, > and because Keystone trusts Apache, then Keystone trusts Apache's lies > and wrongly thinks that the correct IDP and protocol were used. 
> > The only sure way to protect Keystone from a wrongly or mal-configured > Apache is to have end to end security, where Keystone gets a token from > the IDP that it can validate, to prove that it is the trusted IDP that > it is talking to. In other words, if Keystone is given the original > signed SAML assertion from the IDP, it will know for definite that the > user was authenticated by the trusted IDP using the trusted protocol > So the "bug" is a misconfiguration, not an actual bug. The goal was to trust and leverage httpd, not reimplement it and all its extensions. > > regards > > David > > > > > On the Apache side we are looking to expand the set of variables set. > > > http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables > > > > > > The original SAML assertion > > > > mod_shib does support Shib-Identity-Provider : > > > > > > > https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAttributeAccess#NativeSPAttributeAccess-CustomSPVariables > > > > > > Which should be sufficient: if the user is coming in via mod_shib, they > > are using SAML. > > > > > > > >> > >> BTW, we are using the Juno release. We should fix this bug in Kilo. > >> > >> As I have been saying for many months, Keystone does not know anything > >> about SAML or ABFAB or OpenID Connect protocols, so there is currently > >> no point in configuring this information into Keystone. Keystone is only > >> aware of environmental parameters coming from Apache. So this is the > >> protocol that Keystone recognises. If you want Keystone to try to > >> control the federation protocol and IDPs used by Apache, then you will > >> need the Apache plugins to pass the name of the IDP and the protocol > >> being used as environmental parameters to Keystone, and then Keystone > >> can check that the ones that it has been configured to trust, are > >> actually being used by Apache.
> >> > >> regards > >> David > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nikhil.komawar at RACKSPACE.COM Tue Dec 23 21:14:21 2014 From: nikhil.komawar at RACKSPACE.COM (Nikhil Komawar) Date: Tue, 23 Dec 2014 21:14:21 +0000 Subject: [openstack-dev] [Glance] No meetings for two weeks. Message-ID: <0FBF5631AB7B504D89C7E6929695B62493031D52@ORD1EXD02.RACKSPACE.CORP> Hi all, In the spirit of the holiday season, the next two meetings for Glance have been cancelled i.e. the ones on Dec 25th and Jan 1st [0]. Let's meet back on the 7th. Of course, please feel free to ping me on IRC/email if you've any questions, concerns or suggestions. Happy holidays! [0] https://etherpad.openstack.org/p/glance-team-meeting-agenda Cheers! -Nikhil -------------- next part -------------- An HTML attachment was scrubbed...
URL: From morgan.fainberg at gmail.com Tue Dec 23 21:56:00 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Tue, 23 Dec 2014 13:56:00 -0800 Subject: [openstack-dev] [Keystone] Bug in federation In-Reply-To: References: <549999A1.2070003@kent.ac.uk> <5499A7A5.2000403@redhat.com> <5499C385.6040301@kent.ac.uk> Message-ID: <26D9ECC1-E258-401A-A545-C3C874D9C68E@gmail.com> > On Dec 23, 2014, at 1:08 PM, Dolph Mathews wrote: > > > On Tue, Dec 23, 2014 at 1:33 PM, David Chadwick > wrote: > Hi Adam > > On 23/12/2014 17:34, Adam Young wrote: > > On 12/23/2014 11:34 AM, David Chadwick wrote: > >> Hi guys > >> > >> we now have the ABFAB federation protocol working with Keystone, using a > >> modified mod_auth_kerb plugin for Apache (available from the project > >> Moonshot web site). However, we did not change Keystone configuration > >> from its original SAML federation configuration, when it was talking to > >> SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code > >> (which I believe had to be done for OpenID connect.) We simply replaced > >> mod_shibboleth with mod_auth_kerb and talked to a completely different > >> IDP with a different protocol. And everything worked just fine. > >> > >> Consequently Keystone is broken, since you can configure it to trust a > >> particular IDP, talking a particular protocol, but Apache will happily > >> talk to another IDP, using a different protocol, and Keystone cannot > >> tell the difference and will happily accept the authenticated user. > >> Keystone should reject any authenticated user who does not come from the > >> trusted IDP talking the correct protocol. Otherwise there is no point in > >> configuring Keystone with this information, if it is ignored by Keystone. > > The IDP and the Protocol should be passed from HTTPD in env vars. Can > > you confirm/deny that this is the case now? 
> > What is passed from Apache is the 'PATH_INFO' variable, and it is set to > the URL of Keystone that is being protected, which in our case is > /OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth > > There are also the following arguments passed to Keystone > 'wsgiorg.routing_args': ( 0x7ffaba339190>, {'identity_provider': u'KentProxy', 'protocol': u'saml2'}) > > and > > 'PATH_TRANSLATED': > '/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth' > > So Apache is telling Keystone that it has protected the URL that > Keystone has configured to be protected. > > However, Apache has been configured to protect this URL with the ABFAB > protocol and the local Radius server, rather than the KentProxy IdP and > the SAML2 protocol. So we could say that Apache is lying to Keystone, > and because Keystone trusts Apache, then Keystone trusts Apache's lies > and wrongly thinks that the correct IDP and protocol were used. > > The only sure way to protect Keystone from a wrongly or mal-configured > Apache is to have end to end security, where Keystone gets a token from > the IDP that it can validate, to prove that it is the trusted IDP that > it is talking to. In other words, if Keystone is given the original > signed SAML assertion from the IDP, it will know for definite that the > user was authenticated by the trusted IDP using the trusted protocol > > So the "bug" is a misconfiguration, not an actual bug. The goal was to trust and leverage httpd, not reimplement it and all its extensions. Fixing this "bug" would be moving towards Keystone needing to implement all of the various protocols to avoid "misconfigurations". There are probably some more values that can be passed down from the Apache layer to help provide more confidence in the IDP that is being used. I don't see a real tangible benefit to moving away from leveraging HTTPD for handling the heavy lifting when handling federated Identity.
--Morgan > > regards > > David > > > > > On the Apache side we are looking to expand the set of variables set. > > http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables > > > > > > The original SAML assertion > > > > mod_shib does support Shib-Identity-Provider : > > > > > > https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAttributeAccess#NativeSPAttributeAccess-CustomSPVariables > > > > > > Which should be sufficient: if the user is coming in via mod_shib, they > > are using SAML. > > > > > > > >> > >> BTW, we are using the Juno release. We should fix this bug in Kilo. > >> > >> As I have been saying for many months, Keystone does not know anything > >> about SAML or ABFAB or OpenID Connect protocols, so there is currently > >> no point in configuring this information into Keystone. Keystone is only > >> aware of environmental parameters coming from Apache. So this is the > >> protocol that Keystone recognises. If you want Keystone to try to > >> control the federation protocol and IDPs used by Apache, then you will > >> need the Apache plugins to pass the name of the IDP and the protocol > >> being used as environmental parameters to Keystone, and then Keystone > >> can check that the ones that it has been configured to trust, are > >> actually being used by Apache.
> >> > >> regards > >> > >> David > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Tue Dec 23 22:23:30 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Tue, 23 Dec 2014 14:23:30 -0800 Subject: [openstack-dev] [Keystone] Next IRC Meeting January 6th Message-ID: <3D1578EF-0840-433E-9898-15B66C4DB6C2@gmail.com> The Keystone IRC meetings will be on hiatus over the holidays. They will resume as per normal on January 6th. Have a good end of the year! Cheers, Morgan From m4d.coder at gmail.com Tue Dec 23 22:32:09 2014 From: m4d.coder at gmail.com (W Chan) Date: Tue, 23 Dec 2014 14:32:09 -0800 Subject: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs Message-ID: After some online discussions with Renat, the following is a revision of the proposal to address the following related blueprints. 
* https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment * https://blueprints.launchpad.net/mistral/+spec/mistral-global-context * https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values * https://blueprints.launchpad.net/mistral/+spec/mistral-runtime-context Please refer to the following threads for backgrounds. * http://lists.openstack.org/pipermail/openstack-dev/2014-December/052643.html * http://lists.openstack.org/pipermail/openstack-dev/2014-December/052960.html * http://lists.openstack.org/pipermail/openstack-dev/2014-December/052824.html *Workflow Context Scope* 1. context to workflow is passed to all its subflows and subtasks/actions (aka children) only explicitly via inputs 2. context are passed by value (copy.deepcopy) to children 3. change to context is passed to parent only when it's explicitly published at the end of the child execution 4. change to context at the parent (after a publish from a child) is passed to subsequent children *Environment Variables* Solves the problem for quickly passing pre-defined inputs to a WF execution. In the WF spec, environment variables are referenced as $.env.var1, $.env.var2, etc. We should implement an API and DB model where users can pre-defined different environments with their own set of variables. An environment can be passed either by name from the DB or adhoc by dict in start_workflow. On workflow execution, a copy of the environment is saved with the execution object. Action inputs are still declared explicitly in the WF spec. This does not solve the problem where common inputs are specified over and over again. So if there are multiple SQL tasks in the WF, the WF author still needs to supply the conn_str explicitly for each task. In the example below, let's say we have a SQL Query Action that takes a connection string and a query statement as inputs. The WF author can specify that the conn_str input is supplied from the $.env.conn_str. 
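The four context-scoping rules above (pass down by value via copy.deepcopy, propagate up only what is explicitly published) can be sketched with a toy model. This is illustrative only — run_child and the task shape are hypothetical, not Mistral's engine code.

```python
import copy

# Toy model of the proposed scoping: a child task gets a deep copy of
# the parent context, and only explicitly published keys flow back up.
def run_child(parent_ctx, task, publish_keys):
    child_ctx = copy.deepcopy(parent_ctx)   # rule 2: pass by value
    task(child_ctx)                         # the child mutates its own copy
    for key in publish_keys:                # rule 3: only published keys
        parent_ctx[key] = child_ctx[key]    # propagate to the parent
    return parent_ctx

ctx = {"query": "SELECT 1", "records": None}

def query_task(c):
    c["records"] = ["row1"]      # published below, so it reaches the parent
    c["scratch"] = "temporary"   # never published: stays in the child copy

run_child(ctx, query_task, publish_keys=["records"])
# ctx now carries records == ["row1"] but no "scratch" key (rule 3);
# any later child would see the updated ctx (rule 4).
```

The deepcopy is what makes rule 1's "explicit inputs only" meaningful: a child can never reach back into the parent's dict by reference.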
*Example:* # Assume this SqlAction is registered as std.sql in Mistral's Action table. class SqlAction(object): def __init__(self, conn_str, query): ... ... version: "2.0" workflows: demo: type: direct input: - query output: - records tasks: query: action: std.sql conn_str={$.env.conn_str} query={$.query} publish: records: $ ... my_adhoc_env = { "conn_str": "mysql://admin:secrete@localhost/test" } ... # adhoc by dict start_workflow(wf_name, wf_inputs, env=my_adhoc_env) OR # lookup by name from DB model start_workflow(wf_name, wf_inputs, env="my_lab_env") *Define Default Action Inputs as Environment Variables* Solves the problem where we're specifying the same inputs to subflows and subtasks/actions over and over again. On command execution, if action inputs are not explicitly supplied, then defaults will be looked up from the environment. *Example:* Using the same example from above, the WF author can still supply both conn_str and query inputs in the WF spec. However, the author also has the option to supply that as default action inputs. An example environment structure is below. "__actions" should be reserved and immutable. Users can specify one or more default inputs for the sql action as a nested dict under "__actions". Recursive YAQL eval should be supported in the env variables. version: "2.0" workflows: demo: type: direct input: - query output: - records tasks: query: action: std.sql query={$.query} publish: records: $ ... my_adhoc_env = { "sql_server": "localhost", "__actions": { "std.sql": { "conn_str": "mysql://admin:secrete@{$.env.sql_server}/test" } } } *Default Input Values Supplied Explicitly in WF Spec* Please refer to this blueprint for background. This is a different use case. To support, we just need to set the correct order of precedence in applying values. 1. Input explicitly given to the sub flow/task in the WF spec 2. Default input supplied from env 3.
Default input supplied at WF spec *Putting this together...* At runtime, the WF context would be similar to the following example. This will be used to recursively eval the inputs for subflow/tasks/actions. ctx = { "var1": ..., "var2": ..., "my_server_ip": "10.1.23.250", "env": { "sql_server": "localhost", "__actions": { "std.sql": { "conn": "mysql://admin:secrete@{$.env.sql_server}/test" }, "my.action": { "endpoint": "http://{$.my_server_ip}/v1/foo" } } } } *Runtime Context* Please refer to this thread for the background and discussion. The only change here is that on run_action, we will pass the runtime data as kwargs to all action invocations. This means all Action classes should have at least **kwargs added to the __init__ method. runtime = { "execution_id": ..., "task_id": ..., ... } ... action = SomeAction(..., runtime=runtime) -------------- next part -------------- An HTML attachment was scrubbed... URL: From krotscheck at gmail.com Tue Dec 23 22:34:48 2014 From: krotscheck at gmail.com (Michael Krotscheck) Date: Tue, 23 Dec 2014 22:34:48 +0000 Subject: [openstack-dev] [infra] [storyboard] Nominating Yolanda Robla for StoryBoard Core Message-ID: Hello everyone! StoryBoard is the much anticipated successor to Launchpad, and is a component of the Infrastructure Program. The storyboard-core group is intended to be a superset of the infra-core group, with additional reviewers who specialize in the field. Yolanda has been working on StoryBoard ever since the Atlanta Summit, and has provided a diligent and cautious voice to our development effort. She has consistently provided feedback on our reviews, and is neither afraid of asking for clarification, nor of providing constructive criticism. In return, she has been nothing but gracious and responsive when improvements were suggested to her own submissions. Furthermore, Yolanda has been quite active in the infrastructure team as a whole, and provides valuable context for us in the greater realm of infra.
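Returning to the Mistral proposal earlier in this thread: the three-level input precedence it defines (explicit input, then env "__actions" default, then WF-spec default) can be sketched as a small resolver. resolve_inputs is a hypothetical helper, not Mistral code, and recursive YAQL evaluation of expressions like {$.env.sql_server} is deliberately skipped here.

```python
# Sketch of the proposed precedence for resolving an action's inputs:
# explicit input > env "__actions" default > default in the WF spec.
def resolve_inputs(action_name, explicit, env, spec_defaults):
    resolved = dict(spec_defaults)                                  # 3. WF-spec defaults
    resolved.update(env.get("__actions", {}).get(action_name, {}))  # 2. env defaults
    resolved.update(explicit)                                       # 1. explicit wins
    return resolved

env = {
    "sql_server": "localhost",
    "__actions": {"std.sql": {"conn_str": "mysql://admin:secrete@localhost/test"}},
}
inputs = resolve_inputs(
    "std.sql",
    explicit={"query": "SELECT 1"},
    env=env,
    spec_defaults={"query": "SELECT 42", "timeout": 30},
)
# inputs == {"query": "SELECT 1",
#            "conn_str": "mysql://admin:secrete@localhost/test",
#            "timeout": 30}
```

The successive dict.update calls encode the precedence directly: later updates overwrite earlier ones, so the explicit "query" shadows the spec default while the untouched "timeout" survives.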
Please respond within this thread with either supporting commentary, or concerns about her promotion. Since many western countries are currently celebrating holidays, the review period will remain open until January 9th. If the consensus is positive, we will promote her then! Thanks, Michael References: https://review.openstack.org/#/q/reviewer:%22yolanda.robla+%253Cinfo%2540ysoft.biz%253E%22,n,z http://stackalytics.com/?user_id=yolanda.robla&metric=marks -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.griffith8 at gmail.com Tue Dec 23 23:04:13 2014 From: john.griffith8 at gmail.com (John Griffith) Date: Tue, 23 Dec 2014 16:04:13 -0700 Subject: [openstack-dev] [3rd-party-ci] Cinder CI and CI accounts In-Reply-To: References: Message-ID: On Tue, Dec 23, 2014 at 12:07 PM, Alon Marx wrote: > Hi All, > > In IBM we have several cinder drivers, with a number of CI accounts. In > order to improve the CI management and maintenance, we decided to build a > single Jenkins master that will run several jobs for the drivers we own. > Adding the jobs to the jenkins master went ok, but we encountered a problem > with the CI accounts. We have several drivers and several accounts, but in > the Jenkins master, the Zuul configuration has only one gerrit account that > reports. > > So there are several questions: > 1. Was this problem encountered by others? How did they solve it? > 2. Is there a way to configure Zuul on the Jenkins master to report > different jobs with different CI accounts? > 3. If there is no way to configure the master to use several CI accounts, > should we build a Jenkins master per driver? > 4. Or another alternative, should we use a single CI account for all drivers > we own, and report all results under that account? > > We'll appreciate any input. 
> > Thanks, > Alon > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > If you have a look at a review in gerrit you can see others appear to have a single account with multiple tests/results submitted. HP, EMC and NetApp all appear to be pretty clear examples of how to go about doing this. My personal preference on this has always been a single CI account anyway with the different drivers consolidated under it; if nothing else it reduces clutter in the review posting and makes it "easier" to find what you might be looking for. From mscherbakov at mirantis.com Tue Dec 23 23:20:14 2014 From: mscherbakov at mirantis.com (Mike Scherbakov) Date: Wed, 24 Dec 2014 02:20:14 +0300 Subject: [openstack-dev] [Fuel] Feature delivery rules and automated tests In-Reply-To: References: Message-ID: Igor, would that be possible? On Mon, Dec 22, 2014 at 7:49 PM, Anastasia Urlapova wrote: > > Mike, Dmitry, team, > let me add 5 cents - tests per feature have to run on CI before SCF, it is > mean that jobs configuration also should be implemented. > > On Wed, Dec 17, 2014 at 7:33 AM, Mike Scherbakov > wrote: > >> I fully support the idea. >> >> Feature Lead has to know, that his feature is under threat if it's not >> yet covered by system tests (unit/integration tests are not enough!!!), and >> should proactive work with QA engineers to get tests implemented and >> passing before SCF. >> >> On Fri, Dec 12, 2014 at 5:55 PM, Dmitry Pyzhov >> wrote: >> >>> Guys, >>> >>> we've done a good job in 6.0. Most of the features were merged before >>> feature freeze. Our QA were involved in testing even earlier. It was much >>> better than before. >>> >>> We had a discussion with Anastasia. There were several bug reports for >>> features yesterday, far beyond HCF. So we still have a long way to be >>> perfect. 
We should add one rule: we need to have automated tests before HCF. >>> >>> Actually, we should have results of these tests just after FF. It is >>> quite challengeable because we have a short development cycle. So my >>> proposal is to require full deployment and run of automated tests for each >>> feature before soft code freeze. And it needs to be tracked in checklists >>> and on feature syncups. >>> >>> Your opinion? >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> -- >> Mike Scherbakov >> #mihgen >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Mike Scherbakov #mihgen -------------- next part -------------- An HTML attachment was scrubbed... URL: From ishishkin at mirantis.com Tue Dec 23 23:26:10 2014 From: ishishkin at mirantis.com (Igor Shishkin) Date: Wed, 24 Dec 2014 02:26:10 +0300 Subject: [openstack-dev] [Fuel] Feature delivery rules and automated tests In-Reply-To: References: Message-ID: <8B7EA75C-25E6-41CC-A4B3-56494CB9ABE6@mirantis.com> I believe yes. With jenkins job builder we could create jobs faster, QA can be involved in that or even create jobs on their own. I think we have to try it during next release cycle, currently I can?t see blockers/problems here. -- Igor Shishkin DevOps > On 24 Dec 2014, at 2:20 am GMT+3, Mike Scherbakov wrote: > > Igor, > would that be possible? 
> > On Mon, Dec 22, 2014 at 7:49 PM, Anastasia Urlapova wrote: > Mike, Dmitry, team, > let me add 5 cents - tests per feature have to run on CI before SCF, it is mean that jobs configuration also should be implemented. > > On Wed, Dec 17, 2014 at 7:33 AM, Mike Scherbakov wrote: > I fully support the idea. > > Feature Lead has to know, that his feature is under threat if it's not yet covered by system tests (unit/integration tests are not enough!!!), and should proactive work with QA engineers to get tests implemented and passing before SCF. > > On Fri, Dec 12, 2014 at 5:55 PM, Dmitry Pyzhov wrote: > Guys, > > we've done a good job in 6.0. Most of the features were merged before feature freeze. Our QA were involved in testing even earlier. It was much better than before. > > We had a discussion with Anastasia. There were several bug reports for features yesterday, far beyond HCF. So we still have a long way to be perfect. We should add one rule: we need to have automated tests before HCF. > > Actually, we should have results of these tests just after FF. It is quite challengeable because we have a short development cycle. So my proposal is to require full deployment and run of automated tests for each feature before soft code freeze. And it needs to be tracked in checklists and on feature syncups. > > Your opinion? 
> > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Mike Scherbakov > #mihgen > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Mike Scherbakov > #mihgen > From zaro0508 at gmail.com Tue Dec 23 23:38:13 2014 From: zaro0508 at gmail.com (Zaro) Date: Tue, 23 Dec 2014 15:38:13 -0800 Subject: [openstack-dev] [infra] [storyboard] Nominating Yolanda Robla for StoryBoard Core In-Reply-To: References: Message-ID: +1 On Tue, Dec 23, 2014 at 2:34 PM, Michael Krotscheck wrote: > Hello everyone! > > StoryBoard is the much anticipated successor to Launchpad, and is a > component of the Infrastructure Program. The storyboard-core group is > intended to be a superset of the infra-core group, with additional > reviewers who specialize in the field. > > Yolanda has been working on StoryBoard ever since the Atlanta Summit, and > has provided a diligent and cautious voice to our development effort. She > has consistently provided feedback on our reviews, and is neither afraid of > asking for clarification, nor of providing constructive criticism. In > return, she has been nothing but gracious and responsive when improvements > were suggested to her own submissions. > > Furthermore, Yolanda has been quite active in the infrastructure team as a > whole, and provides valuable context for us in the greater realm of infra. > > Please respond within this thread with either supporting commentary, or > concerns about her promotion. 
Since many western countries are currently > celebrating holidays, the review period will remain open until January 9th. > If the consensus is positive, we will promote her then! > > Thanks, > > Michael > > > References: > > https://review.openstack.org/#/q/reviewer:%22yolanda.robla+%253Cinfo%2540ysoft.biz%253E%22,n,z > > http://stackalytics.com/?user_id=yolanda.robla&metric=marks > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xarses at gmail.com Tue Dec 23 23:59:09 2014 From: xarses at gmail.com (Andrew Woodward) Date: Tue, 23 Dec 2014 15:59:09 -0800 Subject: [openstack-dev] [Fuel] Relocation of freshly deployed OpenStack by Fuel In-Reply-To: References: Message-ID: Pawel, I'm not sure that it's common at all to move a deployed cloud. Hopefully Fuel made it easy enough to deploy that you could simply reset the cluster and re-deploy with the new network settings. I'd be interested in understanding why this would be more painful than re-configuring the public network settings. Things that need to be changed:
- all of the keystone public endpoints
- all of the config files using the public endpoints, so anything that speaks with another endpoint (usually nova [compute & controller], neutron, possibly others)
- corosync config for the public vip (6.0)
- corosync config for ping_public_gw
- host-os nic settings, i.e. /etc/network/interfaces.d/

Now, with all that said, I think rather than updating these by hand, we could get puppet to update these values for us.
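The by-hand alternative (rewriting the public VIP and endpoint addresses across config files) can be sketched in a few lines. This is purely illustrative: the IPs, file contents, and helper name below are made up, and this is not Fuel's actual tooling.

```python
import re

# Hypothetical example: swap an old public VIP for a new one inside a
# config file's contents (think nova.conf, haproxy.cfg, etc.).
OLD_VIP = "172.16.0.2"
NEW_VIP = "10.20.0.2"

def relocate(text, old_vip=OLD_VIP, new_vip=NEW_VIP):
    # Replace only whole-IP occurrences: the lookahead stops a match when
    # another digit follows, so "172.16.0.22" is left untouched.
    return re.sub(re.escape(old_vip) + r"(?!\d)", new_vip, text)

conf = ("osapi_compute_link_prefix=http://172.16.0.2:8774\n"
        "vncserver_listen=172.16.0.22\n")
print(relocate(conf))  # first line rewritten, vncserver line unchanged
```

The same substitution would have to be repeated for every file on Andrew's list, which is exactly why he suggests letting puppet regenerate the configs from the database values instead.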
The non-repeatable way is to hack on /etc/astute.yaml and then re-apply puppet (/etc/puppet/manifests/site.pp) for each "role:" you would have had in /etc/astute.yaml. The more-repeatable way is to hack out the public range in the nailgun database, as well as replace the public_vip value. Once these are changed, you should be able to manually apply puppet using the deploy API (fuelclient can call this): 'fuel --env 1 --node 1,2,3 --deploy'. I've never done this before, but it should be that simple, and puppet will re-apply based on the current values in the database (as long as you didn't upload custom node yaml prior to your initial deployment). On Sat, Dec 20, 2014 at 11:27 AM, Skowron, Pawel wrote: > Need a little guidance with the Mirantis version of OpenStack. > > > > We want to move a freshly deployed cloud, without running instances but with the HA > option, to another physical location. > > The other location means different ranges of public network. And I really > want to move my installation without cloud redeployment. > > > > What I think is required to change is the public network settings. The public > network settings can be divided into two different areas: > > 1) Floating IP range for external access to running VM instances > > 2) Fuel reserved pool for service endpoints (virtual IPs and statically > assigned IPs) > > > > The first one 1) I believe, but I haven't tested, _is not a problem_, but > any insight will be invaluable. > > I think it would be possible to change the floating network ranges as an admin > in OpenStack itself. I will just add another "network" as the external network. > > > > But the second issue 2) is what I am worried about. What I found is that the virtual IPs > (VIPs) are assigned to one of the controllers (the primary role in HA) > > and written in the haproxy/pacemaker configuration.
To allow access from the public > network via these IPs I would probably need > to reconfigure all HA support services which have the VIPs hardcoded in their > configuration files, but it looks very complicated and fragile. > > > > I have even found that public_vip is used in nova.conf (to get access to > glance). So the relocation will require reconfiguration of nova and maybe > other OpenStack services. > > In the case of Keystone it would be a real problem (IPs are stored in the > database). > > > > Does someone have any experience with this kind of scenario and would be kind enough to > share it? Please help. > > > > I have used the Fuel 6.0 technical preview. > > > > Pawel Skowron > > pawel.skowron at intel.com > > > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andrew Mirantis Ceph community From y-goto at jp.fujitsu.com Wed Dec 24 00:16:18 2014 From: y-goto at jp.fujitsu.com (Yasunori Goto) Date: Wed, 24 Dec 2014 09:16:18 +0900 Subject: [openstack-dev] [Heat] How can I write at milestone section of blueprint? In-Reply-To: <698A6975-A43C-47ED-B0D0-0DA4CC5C7E69@rackspace.com> References: <20141222151911.4F09.E1E9C6FF@jp.fujitsu.com> <698A6975-A43C-47ED-B0D0-0DA4CC5C7E69@rackspace.com> Message-ID: <20141224091607.B204.E1E9C6FF@jp.fujitsu.com> > It's been discussed at several summits. We have settled on a general solution using Zaqar, > but no work has been done that I know of. I was just pointing out that similar blueprints/specs > exist and you may want to look through those to get some ideas about writing your own and/or > basing your proposal off of one of them. I see. Thanks for your information.
-- Yasunori Goto From mdorman at godaddy.com Wed Dec 24 01:10:47 2014 From: mdorman at godaddy.com (Michael Dorman) Date: Wed, 24 Dec 2014 01:10:47 +0000 Subject: [openstack-dev] Hierarchical Multitenancy In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E501025E4E00@CERNXCHG43.cern.ch> References: <2964CB70-9ED7-42DE-9F17-498458A0031B@gmail.com> <5D7F9996EA547448BC6C54C8C5AAF4E501025E032F@CERNXCHG43.cern.ch> <5D7F9996EA547448BC6C54C8C5AAF4E501025E4E00@CERNXCHG43.cern.ch> Message-ID: <002433C1-D4A6-4994-9F0B-FA4011F395A1@godaddy.com> +1 to Nova support for this getting into Kilo. We have a similar use case. I'd really like to dole out quota at the department level, and let individual departments manage sub-projects and quotas on their own. I agree that HMT has limited value without Nova support. Thanks! Mike From: Tim Bell > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Tuesday, December 23, 2014 at 11:01 AM To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] Hierarchical Multitenancy Joe, Thanks - there seems to be good agreement on the spec, and the matching implementation is well advanced with BARC, so the risk is not too high. Launching HMT with quota in Nova in the same release cycle would also provide a more complete end-user experience. For CERN, this functionality is very interesting as it allows the central cloud providers to delegate the allocation of quotas to the LHC experiments. Thus, from a central perspective, we are able to allocate N thousand cores to an experiment and delegate to their resource co-ordinator the prioritisation of work within the experiment. Currently, we have many manual helpdesk tickets with significant latency to adjust the quotas.
Tim From: Joe Gordon [mailto:joe.gordon0 at gmail.com] Sent: 23 December 2014 17:35 To: OpenStack Development Mailing List Subject: Re: [openstack-dev] Hierarchical Multitenancy On Dec 23, 2014 12:26 AM, "Tim Bell" > wrote: > > > > It would be great if we can get approval for the Hierarchical Quota handling in Nova too (https://review.openstack.org/#/c/129420/). Nova's spec deadline has passed, but I think this is a good candidate for an exception. We will announce the process for asking for a formal spec exception shortly after new years. > > > > Tim > > > > From: Morgan Fainberg [mailto:morgan.fainberg at gmail.com] > Sent: 23 December 2014 01:22 > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] Hierarchical Multitenancy > > > > Hi Raildo, > > > > Thanks for putting this post together. I really appreciate all the work you guys have done (and continue to do) to get the Hierarchical Multitenancy code into Keystone. It's great to have the base implementation merged into Keystone for the K1 milestone. I look forward to seeing the rest of the development land during the rest of this cycle and what the other OpenStack projects build around the HMT functionality. > > > > Cheers, > > Morgan > > > > > > >> >> On Dec 22, 2014, at 1:49 PM, Raildo Mascena > wrote: >> >> >> >> Hello folks, My team and I developed the Hierarchical Multitenancy concept for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What have we implemented? What are the next steps for Kilo? >> >> To answer these questions, I created a blog post http://raildo.me/hierarchical-multitenancy-in-openstack/ >> >> >> >> Any question, I'm available. >> >> >> >> -- >> >> Raildo Mascena >> >> Software Engineer. >> >> Bachelor of Computer Science.
>> >> Distributed Systems Laboratory >> Federal University of Campina Grande >> Campina Grande, PB - Brazil >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramy.asselin at hp.com Wed Dec 24 02:11:38 2014 From: ramy.asselin at hp.com (Asselin, Ramy) Date: Wed, 24 Dec 2014 02:11:38 +0000 Subject: [openstack-dev] [3rd-party-ci] Cinder CI and CI accounts In-Reply-To: References: Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4365E5@G4W3223.americas.hpqcorp.net> I agree with John. Option 4: one CI account for all drivers. The only valid reasons I'm aware of to use multiple accounts for a single vendor are if the hardware required to run the tests is not accessible from a 'central' CI system, or if the CI systems are managed by different teams. Otherwise, as you stated, it's more complicated to manage & maintain. Ramy -----Original Message----- From: John Griffith [mailto:john.griffith8 at gmail.com] Sent: Tuesday, December 23, 2014 3:04 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [3rd-party-ci] Cinder CI and CI accounts On Tue, Dec 23, 2014 at 12:07 PM, Alon Marx wrote: > Hi All, > > In IBM we have several cinder drivers, with a number of CI accounts. > In order to improve the CI management and maintenance, we decided to > build a single Jenkins master that will run several jobs for the drivers we own. > Adding the jobs to the jenkins master went ok, but we encountered a > problem with the CI accounts.
We have several drivers and several > accounts, but in the Jenkins master, the Zuul configuration has only > one gerrit account that reports. > > So there are several questions: > 1. Was this problem encountered by others? How did they solve it? > 2. Is there a way to configure Zuul on the Jenkins master to report > different jobs with different CI accounts? > 3. If there is no way to configure the master to use several CI > accounts, should we build a Jenkins master per driver? > 4. Or another alternative, should we use a single CI account for all > drivers we own, and report all results under that account? > > We'll appreciate any input. > > Thanks, > Alon > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > If you have a look at a review in gerrit you can see others appear to have a single account with multiple tests/results submitted. HP, EMC and NetApp all appear to be pretty clear examples of how to go about doing this. My personal preference on this has always been a single CI account anyway with the different drivers consolidated under it; if nothing else it reduces clutter in the review posting and makes it "easier" to find what you might be looking for. _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jsbryant at electronicjungle.net Wed Dec 24 04:08:05 2014 From: jsbryant at electronicjungle.net (Jay S. Bryant) Date: Tue, 23 Dec 2014 22:08:05 -0600 Subject: [openstack-dev] [3rd-party-ci] Cinder CI and CI accounts In-Reply-To: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4365E5@G4W3223.americas.hpqcorp.net> References: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4365E5@G4W3223.americas.hpqcorp.net> Message-ID: <549A3C25.5090208@electronicjungle.net> John and Ramy, Thanks for the feedback. 
So, we will create an IBM Storage CI Check account and slowly deprecate the multiple accounts as we consolidate the hardware. Jay On 12/23/2014 08:11 PM, Asselin, Ramy wrote: > I agree with John. Option 4: one ci account for all drivers. > > The only valid reasons I'm aware of to use multiple accounts for a single vendor is if the hardware required to run the tests are not accessible from a 'central' ci system, or if the ci systems are managed by different teams. > > Otherwise, as you stated, it's more complicated to manage & maintain. > > Ramy > > -----Original Message----- > From: John Griffith [mailto:john.griffith8 at gmail.com] > Sent: Tuesday, December 23, 2014 3:04 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [3rd-party-ci] Cinder CI and CI accounts > > On Tue, Dec 23, 2014 at 12:07 PM, Alon Marx wrote: >> Hi All, >> >> In IBM we have several cinder drivers, with a number of CI accounts. >> In order to improve the CI management and maintenance, we decided to >> build a single Jenkins master that will run several jobs for the drivers we own. >> Adding the jobs to the jenkins master went ok, but we encountered a >> problem with the CI accounts. We have several drivers and several >> accounts, but in the Jenkins master, the Zuul configuration has only >> one gerrit account that reports. >> >> So there are several questions: >> 1. Was this problem encountered by others? How did they solve it? >> 2. Is there a way to configure Zuul on the Jenkins master to report >> different jobs with different CI accounts? >> 3. If there is no way to configure the master to use several CI >> accounts, should we build a Jenkins master per driver? >> 4. Or another alternative, should we use a single CI account for all >> drivers we own, and report all results under that account? >> >> We'll appreciate any input. 
>> >> Thanks, >> Alon >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > If you have a look at a review in gerrit you can see others appear to have a single account with multiple tests/results submitted. HP, EMC and NetApp all appear to be pretty clear examples of how to go about doing this. My personal preference on this has always been a single CI account anyway with the different drivers consolidated under it; if nothing else it reduces clutter in the review posting and makes it "easier" to find what you might be looking for. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From egon at egon.cc Wed Dec 24 05:17:06 2014 From: egon at egon.cc (James Downs) Date: Tue, 23 Dec 2014 21:17:06 -0800 Subject: [openstack-dev] Hierarchical Multitenancy In-Reply-To: <002433C1-D4A6-4994-9F0B-FA4011F395A1@godaddy.com> References: <2964CB70-9ED7-42DE-9F17-498458A0031B@gmail.com> <5D7F9996EA547448BC6C54C8C5AAF4E501025E032F@CERNXCHG43.cern.ch> <5D7F9996EA547448BC6C54C8C5AAF4E501025E4E00@CERNXCHG43.cern.ch> <002433C1-D4A6-4994-9F0B-FA4011F395A1@godaddy.com> Message-ID: On Dec 23, 2014, at 5:10 PM, Michael Dorman wrote: > +1 to Nova support for this getting in to Kilo. > > We have a similar use case. I'd really like to doll out quota on a department level, and let individual departments manage sub projects and quotas on their own. I agree that HMT has limited value without Nova support. +1, same for the use case. -------------- next part -------------- An HTML attachment was scrubbed...
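The departmental delegation described in this thread (a central provider grants a block of quota, and the project's own coordinator splits it among sub-projects) can be sketched as a toy model. Everything below is hypothetical for illustration: the class, the project names, and the numbers are invented, and this is not the proposed Keystone/Nova design.

```python
# Toy model of hierarchical quota delegation: a parent project's quota
# caps the sum of the quotas granted to its children.

class Project:
    def __init__(self, name, cores_quota):
        self.name = name
        self.cores_quota = cores_quota
        self.children = []

    def add_child(self, child):
        # A sub-project may only receive quota still unallocated here,
        # so delegation never needs a central helpdesk ticket.
        allocated = sum(c.cores_quota for c in self.children)
        if allocated + child.cores_quota > self.cores_quota:
            raise ValueError("quota exceeds parent's remaining allocation")
        self.children.append(child)
        return child

# A central provider grants 3000 cores to an experiment; its resource
# coordinator then splits them among departments on their own.
experiment = Project("experiment", cores_quota=3000)
experiment.add_child(Project("analysis", 2000))
experiment.add_child(Project("simulation", 800))
# A further request for 500 cores would exceed 3000 and be rejected.
```

The point of the sketch is only the invariant: quota checks happen against the parent, so central operators deal with one number per top-level project.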
URL: From rakhmerov at mirantis.com Wed Dec 24 05:20:14 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Wed, 24 Dec 2014 11:20:14 +0600 Subject: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs In-Reply-To: References: Message-ID: <987B4DBE-F3F8-467A-8867-A95B5BC769BB@mirantis.com> Thanks Winson, Since we discussed all this already I just want to confirm that I fully support this model, it will significantly help us make much more concise, readable and maintainable workflows. I spent a lot of time thinking about it and don't see any problems with it. Nice job! However, all additional comments and questions are more than welcome! Renat Akhmerov @ Mirantis Inc. > On 24 Dec 2014, at 04:32, W Chan wrote: > > After some online discussions with Renat, the following is a revision of the proposal to address the following related blueprints. > * https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment > * https://blueprints.launchpad.net/mistral/+spec/mistral-global-context > * https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values > * https://blueprints.launchpad.net/mistral/+spec/mistral-runtime-context > > Please refer to the following threads for backgrounds. > * http://lists.openstack.org/pipermail/openstack-dev/2014-December/052643.html > * http://lists.openstack.org/pipermail/openstack-dev/2014-December/052960.html > * http://lists.openstack.org/pipermail/openstack-dev/2014-December/052824.html > > > Workflow Context Scope > 1. context to workflow is passed to all its subflows and subtasks/actions (aka children) only explicitly via inputs > 2. context is passed by value (copy.deepcopy) to children > 3. change to context is passed to parent only when it's explicitly published at the end of the child execution > 4.
change to context at the parent (after a publish from a child) is passed to subsequent children > > Environment Variables > Solves the problem of quickly passing pre-defined inputs to a WF execution. In the WF spec, environment variables are referenced as $.env.var1, $.env.var2, etc. We should implement an API and DB model where users can pre-define different environments with their own set of variables. An environment can be passed either by name from the DB or ad hoc by dict in start_workflow. On workflow execution, a copy of the environment is saved with the execution object. Action inputs are still declared explicitly in the WF spec. This does not solve the problem where common inputs are specified over and over again. So if there are multiple SQL tasks in the WF, the WF author still needs to supply the conn_str explicitly for each task. In the example below, let's say we have a SQL Query Action that takes a connection string and a query statement as inputs. The WF author can specify that the conn_str input is supplied from the $.env.conn_str. > > Example: > > # Assume this SqlAction is registered as std.sql in Mistral's Action table. > class SqlAction(object): > def __init__(self, conn_str, query): > ... > > ... > > version: "2.0" > workflows: > demo: > type: direct > input: > - query > output: > - records > tasks: > query: > action: std.sql conn_str={$.env.conn_str} query={$.query} > publish: > records: $ > > ... > > my_adhoc_env = { > "conn_str": "mysql://admin:secrete@localhost/test" > } > > ... > > # adhoc by dict > start_workflow(wf_name, wf_inputs, env=my_adhoc_env) > > OR > > # lookup by name from DB model > start_workflow(wf_name, wf_inputs, env="my_lab_env") > > Define Default Action Inputs as Environment Variables > Solves the problem where we're specifying the same inputs to subflows and subtasks/actions over and over again. On command execution, if action inputs are not explicitly supplied, then defaults will be looked up from the environment.
> > Example: > Using the same example from above, the WF author can still supply both conn_str and query inputs in the WF spec. However, the author also has the option to supply that as default action inputs. An example environment structure is below. "__actions" should be reserved and immutable. Users can specify one or more default inputs for the sql action as a nested dict under "__actions". Recursive YAQL eval should be supported in the env variables. > > version: "2.0" > workflows: > demo: > type: direct > input: > - query > output: > - records > tasks: > query: > action: std.sql query={$.query} > publish: > records: $ > > ... > > my_adhoc_env = { > "sql_server": "localhost", > "__actions": { > "std.sql": { > "conn_str": "mysql://admin:secrete@{$.env.sql_server}/test" > } > } > } > > Default Input Values Supplied Explicitly in WF Spec > Please refer to this blueprint for background. This is a different use case. To support it, we just need to set the correct order of precedence in applying values. > 1. Input explicitly given to the sub flow/task in the WF spec > 2. Default input supplied from env > 3. Default input supplied at WF spec > > Putting this together... > At runtime, the WF context would be similar to the following example. This will be used to recursively eval the inputs for subflows/tasks/actions. > > ctx = { > "var1": ?, > "var2": ?, > "my_server_ip": "10.1.23.250", > "env": { > "sql_server": "localhost", > "__actions": { > "std.sql": { > "conn": "mysql://admin:secrete@{$.env.sql_server}/test" > }, > "my.action": { > "endpoint": "http://{$.my_server_ip}/v1/foo" > } > } > } > } > > Runtime Context > Please refer to this thread for the background and discussion. The only change here is that on run_action, we will pass the runtime data as kwargs to all action invocations. This means all Action classes should have at least **kwargs added to the __init__ method.
> > action = SomeAction(..., runtime=runtime) > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdubov at mirantis.com Wed Dec 24 06:15:05 2014 From: mdubov at mirantis.com (Mikhail Dubov) Date: Wed, 24 Dec 2014 10:15:05 +0400 Subject: [openstack-dev] [Rally] [ReadTheDocs] Having the default docs theme in gates Message-ID: Hi all, in Rally gates, we have a job that automatically builds the Sphinx documentation for ReadTheDocs so that we can surf through it while reviewing new patches. It produces, however, HTML files in a format which is different from the default one (used on ReadTheDocs). We've found the following snippet that enables the default ReadTheDocs theme locally (and so in gates): # on_rtd is whether we are on readthedocs.orgimport oson_rtd = os.environ.get('READTHEDOCS', None) == 'True' if not on_rtd: # only import and set the theme if we're building docs locally import sphinx_rtd_theme html_theme = 'sphinx_rtd_theme' html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] # otherwise, readthedocs.org uses their theme by default, so no need to specify it The problem is that it uses the sphinx_rtd_theme package, which is not included in the global requirements. What should we do then do have that default ReadTheDocs theme in our gates? Best regards, Mikhail Dubov Engineering OPS Mirantis, Inc. E-Mail: mdubov at mirantis.com Skype: msdubov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From akuznetsova at mirantis.com Wed Dec 24 08:06:54 2014 From: akuznetsova at mirantis.com (Anastasia Kuznetsova) Date: Wed, 24 Dec 2014 11:06:54 +0300 Subject: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs In-Reply-To: <987B4DBE-F3F8-467A-8867-A95B5BC769BB@mirantis.com> References: <987B4DBE-F3F8-467A-8867-A95B5BC769BB@mirantis.com> Message-ID: Winson, Renat, I have a few questions, because some aspects aren't clear to me. 1) How will the end user pass env variables to a workflow? Will you add one more optional parameter to the execution-create command? mistral execution-create wf wf_input wf_params wf_env If yes, then what will wf_env be - a JSON file? 2) Returning to the first example: ... action: std.sql conn_str={$.env.conn_str} query={$.query} ... $.env - is it the name of an environment, or will it be registered syntax for getting access to values from the env? 3) Can a user have a few environments? On Wed, Dec 24, 2014 at 8:20 AM, Renat Akhmerov wrote: > Thanks Winson, > > Since we discussed all this already I just want to confirm that I fully > support this model, it will significantly help us make much more concise, > readable and maintainable workflows. I spent a lot of time thinking about > it and don't see any problems with it. Nice job! > > However, all additional comments and questions are more than welcome! > > > Renat Akhmerov > @ Mirantis Inc. > > > > On 24 Dec 2014, at 04:32, W Chan wrote: > > After some online discussions with Renat, the following is a revision of > the proposal to address the following related blueprints. > * > https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment > * https://blueprints.launchpad.net/mistral/+spec/mistral-global-context > * > https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values > * https://blueprints.launchpad.net/mistral/+spec/mistral-runtime-context > > Please refer to the following threads for backgrounds.
> * > http://lists.openstack.org/pipermail/openstack-dev/2014-December/052643.html > * > http://lists.openstack.org/pipermail/openstack-dev/2014-December/052960.html > * > http://lists.openstack.org/pipermail/openstack-dev/2014-December/052824.html > > > *Workflow Context Scope* > 1. context to workflow is passed to all its subflows and subtasks/actions > (aka children) only explicitly via inputs > 2. context are passed by value (copy.deepcopy) to children > 3. change to context is passed to parent only when it's explicitly > published at the end of the child execution > 4. change to context at the parent (after a publish from a child) is > passed to subsequent children > > *Environment Variables* > Solves the problem for quickly passing pre-defined inputs to a WF > execution. In the WF spec, environment variables are referenced as > $.env.var1, $.env.var2, etc. We should implement an API and DB model > where users can pre-defined different environments with their own set of > variables. An environment can be passed either by name from the DB or > adhoc by dict in start_workflow. On workflow execution, a copy of the > environment is saved with the execution object. Action inputs are still > declared explicitly in the WF spec. This does not solve the problem where > common inputs are specified over and over again. So if there are multiple > SQL tasks in the WF, the WF author still needs to supply the conn_str > explicitly for each task. In the example below, let's say we have a SQL > Query Action that takes a connection string and a query statement as > inputs. The WF author can specify that the conn_str input is supplied from > the $.env.conn_str. > > *Example:* > > # Assume this SqlAction is registered as std.sql in Mistral's Action table. > class SqlAction(object): > def __init__(self, conn_str, query): > ... > > ... 
> > version: "2.0" > workflows: > demo: > type: direct > input: > - query > output: > - records > tasks: > query: > action: std.sql conn_str={$.env.conn_str} query={$.query} > publish: > records: $ > > ... > > my_adhoc_env = { > "conn_str": "mysql://admin:secrete at localhost/test" > } > > ... > > # adhoc by dict > start_workflow(wf_name, wf_inputs, env=my_adhoc_env) > > OR > > # lookup by name from DB model > start_workflow(wf_name, wf_inputs, env="my_lab_env") > > > *Define Default Action Inputs as Environment Variables* > Solves the problem where we're specifying the same inputs to subflows and > subtasks/actions over and over again. On command execution, if action > inputs are not explicitly supplied, then defaults will be lookup from the > environment. > > *Example:* > Using the same example from above, the WF author can still supply both > conn_str and query inputs in the WF spec. However, the author also has the > option to supply that as default action inputs. An example environment > structure is below. "__actions" should be reserved and immutable. Users > can specific one or more default inputs for the sql action as nested dict > under "__actions". Recursive YAQL eval should be supported in the env > variables. > > version: "2.0" > workflows: > demo: > type: direct > input: > - query > output: > - records > tasks: > query: > action: std.sql query={$.query} > publish: > records: $ > > ... > > my_adhoc_env = { > "sql_server": "localhost", > "__actions": { > "std.sql": { > "conn_str": "mysql://admin:secrete@{$.env.sql_server}/test" > } > } > } > > > *Default Input Values Supplied Explicitly in WF Spec* > Please refer to this blueprint > > for background. This is a different use case. To support, we just need to > set the correct order of precedence in applying values. > 1. Input explicitly given to the sub flow/task in the WF spec > 2. Default input supplied from env > 3. 
Default input supplied at WF spec > > *Putting this together...* > At runtime, the WF context would be similar to the following example. > This will be use to recursively eval the inputs for subflow/tasks/actions. > > ctx = { > "var1": ?, > "var2": ?, > "my_server_ip": "10.1.23.250", > "env": { > "sql_server": "localhost", > "__actions": { > "std.sql": { > "conn": "mysql://admin:secrete@{$.env.sql_server}/test" > }, > "my.action": { > "endpoint": "http://{$.my_server_ip}/v1/foo" > } > } > } > } > > *Runtime Context* > Please refer to this thread > for > the background and discussion. The only change here is that on run_action, > we will pass the runtime data as kwargs to all action invocation. This > means all Action classes should have at least **kwargs added to the > __init__ method. > > runtime = { > "execution_id": ..., > "task_id": ..., > ... > } > > ... > > action = SomeAction(..., runtime=runtime) > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricardo.carrillo.cruz at gmail.com Wed Dec 24 08:58:57 2014 From: ricardo.carrillo.cruz at gmail.com (Ricardo Carrillo Cruz) Date: Wed, 24 Dec 2014 09:58:57 +0100 Subject: [openstack-dev] [infra] [storyboard] Nominating Yolanda Robla for StoryBoard Core In-Reply-To: References: Message-ID: Big +1 from me :-). Yolanda is an amazing engineer, both frontend and backend. As Michael said, she's not only been doing Storyboard but a bunch of other infra stuff that will be beneficial for the project. Regards! 
2014-12-24 0:38 GMT+01:00 Zaro : > +1 > > On Tue, Dec 23, 2014 at 2:34 PM, Michael Krotscheck > wrote: > >> Hello everyone! >> >> StoryBoard is the much anticipated successor to Launchpad, and is a >> component of the Infrastructure Program. The storyboard-core group is >> intended to be a superset of the infra-core group, with additional >> reviewers who specialize in the field. >> >> Yolanda has been working on StoryBoard ever since the Atlanta Summit, and >> has provided a diligent and cautious voice to our development effort. She >> has consistently provided feedback on our reviews, and is neither afraid of >> asking for clarification, nor of providing constructive criticism. In >> return, she has been nothing but gracious and responsive when improvements >> were suggested to her own submissions. >> >> Furthermore, Yolanda has been quite active in the infrastructure team as >> a whole, and provides valuable context for us in the greater realm of infra. >> >> Please respond within this thread with either supporting commentary, or >> concerns about her promotion. Since many western countries are currently >> celebrating holidays, the review period will remain open until January 9th. >> If the consensus is positive, we will promote her then! >> >> Thanks, >> >> Michael >> >> >> References: >> >> https://review.openstack.org/#/q/reviewer:%22yolanda.robla+%253Cinfo%2540ysoft.biz%253E%22,n,z >> >> http://stackalytics.com/?user_id=yolanda.robla&metric=marks >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From obondarev at mirantis.com Wed Dec 24 09:07:45 2014 From: obondarev at mirantis.com (Oleg Bondarev) Date: Wed, 24 Dec 2014 12:07:45 +0300 Subject: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration In-Reply-To: <54986C4A.8020708@anteaya.info> References: <54945973.1010904@anteaya.info> <54986C4A.8020708@anteaya.info> Message-ID: On Mon, Dec 22, 2014 at 10:08 PM, Anita Kuno wrote: > > On 12/22/2014 01:32 PM, Joe Gordon wrote: > > On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery > wrote: > > > >> On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno > wrote: > >>> > >>> Rather than waste your time making excuses let me state where we are > and > >>> where I would like to get to, also sharing my thoughts about how you > can > >>> get involved if you want to see this happen as badly as I have been > told > >>> you do. > >>> > >>> Where we are: > >>> * a great deal of foundation work has been accomplished to achieve > >>> parity with nova-network and neutron to the extent that those involved > >>> are ready for migration plans to be formulated and be put in place > >>> * a summit session happened with notes and intentions[0] > >>> * people took responsibility and promptly got swamped with other > >>> responsibilities > >>> * spec deadlines arose and in neutron's case have passed > >>> * currently a neutron spec [1] is a work in progress (and it needs > >>> significant work still) and a nova spec is required and doesn't have a > >>> first draft or a champion > >>> > >>> Where I would like to get to: > >>> * I need people in addition to Oleg Bondarev to be available to > help > >>> come up with ideas and words to describe them to create the specs in a > >>> very short amount of time (Oleg is doing great work and is a fabulous > >>> person, yay Oleg, he just can't do this alone) > >>> * specifically I need a contact on the nova side of this complex > >>> problem, similar to Oleg on the neutron side > >>> * we need to have a way for people 
involved with this effort to > find > >>> each other, talk to each other and track progress > >>> * we need to have representation at both nova and neutron weekly > >>> meetings to communicate status and needs > >>> > >>> We are at K-2 and our current status is insufficient to expect this > work > >>> will be accomplished by the end of K-3. I will be championing this > work, > >>> in whatever state, so at least it doesn't fall off the map. If you > would > >>> like to help this effort please get in contact. I will be thinking of > >>> ways to further this work and will be communicating to those who > >>> identify as affected by these decisions in the most effective methods > of > >>> which I am capable. > >>> > >>> Thank you to all who have gotten us as far as we have gotten in this > >>> effort, it has been a long haul and you have all done great work. Let's > >>> keep going and finish this. > >>> > >>> Thank you, > >>> Anita. > >>> > >>> Thank you for volunteering to drive this effort Anita, I am very happy > >> about this. I support you 100%. > >> > >> I'd like to point out that we really need a point of contact on the nova > >> side, similar to Oleg on the Neutron side. IMHO, this is step 1 here to > >> continue moving this forward. > >> > > > > At the summit the nova team marked the nova-network to neutron migration > as > > a priority [0], so we are collectively interested in seeing this happen > and > > want to help in any way possible. With regard to a nova point of > contact, > > anyone in nova-specs-core should work, that way we can cover more time > > zones. > > > > From what I can gather the first step is to finish fleshing out the first > > spec [1], and it sounds like it would be good to get a few nova-cores > > reviewing it as well. > > > > > > > > > > [0] > > > http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html > > [1] https://review.openstack.org/#/c/142456/ > > > Wonderful, thank you for the support Joe. 
> > It appears that we need to have a regular weekly meeting to track > progress in an archived manner. > > I know there was one meeting in November but I don't know what it was > called, so so far I can't find the logs for that. > It wasn't official, we just gathered together on #novamigration. Attaching the log here. > So if those affected by this issue can identify what time (UTC please, > don't tell me what time zone you are in; it is too hard to guess what UTC > time you are available) and day of the week you are available for a > meeting I'll create one and we can start talking to each other. > > I need to avoid Monday 1500 and 2100 UTC, Tuesday 0800 UTC, 1400 UTC and > 1900 - 2200 UTC, Wednesdays 1500 - 1700 UTC, Thursdays 1400 and 2100 UTC. > I'm available each weekday 0700-1600 UTC, 1700-1800 UTC is also acceptable. Thanks, Oleg > > Thanks, > Anita. > > >> > >> Thanks, > >> Kyle > >> > >> > >>> [0] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron > >>> [1] https://review.openstack.org/#/c/142456/ > >>> > >>> _______________________________________________ > >>> OpenStack-operators mailing list > >>> OpenStack-operators at lists.openstack.org > >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >>> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- hi all -*- marios_ lurking - thanks for the cc Oleg hi so I've created an etherpad https://etherpad.openstack.org/p/neutron-migration-discussion may be useful to record some thoughts --> belmoreira (~belmoreir at pb-d-128-141-237-209.cern.ch) has joined #novamigration hi --> jlibosva (~Adium at ip4-95-82-156-85.cust.nbox.cz) has joined #novamigration hi belmoreira, jlibosva: https://etherpad.openstack.org/p/neutron-migration-discussion obondarev: thanks for filling out the etherpad so I think we're here to discuss proposed migration path in a bit more details I just fad a meeting discussing our internal migration to neutron :) belmoreira: cool) belmoreira: I guess we have good timing obondarev: right? so should we walk through the steps? markmcclain: yeah. I've put some questions to the etherpad, we may go through them to drive the discussion obondarev: I add a question about how iptables will be handled when moving to neutron belmoreira: good question, thanks so starting with Step 0 --> josecastroleon (~josecastr at pcitis153.cern.ch) has joined #novamigration the idea of the reverse proxy to Nova is to ensure that we have a single source of truth for L3 information should it be some new special neutron plugin? <-- belmoreira (~belmoreir at pb-d-128-141-237-209.cern.ch) has quit (Max SendQ exceeded) initially I thought that would be approach, but dansmith had a different suggestion which might be easier --> belmoreira (~belmoreir at pb-d-128-141-28-112.cern.ch) has joined #novamigration oh, interesting also once we started discussing pluggable IPAM in neutron might not need to write a full proxy yeah, the only issue is we don't know when pluggable IPAM will be there I'm afraid right? 
but it might be easier than writing a full reverse proxy agree even with pluggable IPAM we would still need to handle case of security groups I'm thinking in our infrastructure migration and I still don't think that a proxy (in the way I'm understanding it) will be useful ok, so when speking about proxy the first question I get is what exact calls should be proxied, who issues that calls belmoreira: so the idea/purpose of the proxy is to ensure that the control plane elements can continue to function and that an operator could perform a rolling upgrade markmcclain: this means configuring the neutron agent on compute nodes, but still using nova network? right so initially the proxy would enable neutron API calls to be serviced from same source of truth and that using that one could run the both nova-net and neutron agent together markmcclain: ok, thanks dansmith's suggestion was to then create a special purpose nova management util does it matter at this point which neutron agent is used? is it a special nova-aware agent already? 
markmcclain: yes, so looking into the proposal the proxy is only the have a small downtime obondarev: so we have two choices either implement a special L2 or proceed with ML2 on linuxbridge I think that for easy we do ML2+bridge markmcclain: got it that way the new nova-migrate command could be used to let the nova API know that the vifs are now managed by neutron and not nova I guess neutron should also be notified of it obondarev: not sure we need to notify neutron the neutron db will still be proxied back to Nova I mean currently ml2 + linuxbridge works slightly different than nova-net what if user wants to update net config of an instance which was originally created with nova-net right but that's where the conversion step moves the tap device from the nova-net bridge onto ml2 managed bridge so during this process the support would be limited operations supported by nova-net ok, so we still have this bridge switching not sure how can this be done for tap devices of all instances at a time so basically we create a command that would cause nova-compute to migrate the instances it manages is it similar to the approach I proposed in summer: https://review.openstack.org/#/c/101921/8 obondarev: except for we're sticking with linuxbridge and maintain a single source of truth got it the split brain migration we explored this summer made some operators uncomfortable and this should not be a nova api extension but some nova-manage command, right? dan suggested making is a special management command that way we don't have to worry about all of the hoops necessary to update the API all we'd need is make the changes to compute and conductor so, for the single source of truth, what is the contact point in nova for Neutron? 
but since these are internal interfaces lot less hassle initially I had considered proxying the to Nova REST API where possible it won't have great performance, but this is meant to be a transitional phase so that an operator does not need a long outage I'm afraid rest api would not be enough just trying to understand I think we'll have access to everything we need let's say we have neutron running referencing nova as a source of truth then a user creates a port what neutron should do is to allocate new fixed ip in nova first not sure this can be done through nova rest api at the moment correct I might miss something yeah the other alternative now that pluggable IPAM is on the horizon pluggable IPAM becomes a must for nova-net to neutron migration I guess :) --> marekd (~marek at skamander.octogan.net) has joined #novamigration yeah pluggable IPAM actually makes this a bit easier anyway, once we have pluggable IPAM in neutron is that the only part of the api that we can't directly currently proxy to nova-compute? we'll need to have nova-net driver for it in neutron, right? (thanks, sorry for noise, just trying to understand/follow, pls ignore) marios_: I believe so obondarev: yes or either a temporary monolithic plugin and then once all of the hypervisors are transitioned to being managed by neutron the api would be frozen and we would do a dump > translate > restore of the data from nova to neutron during this step we'd also need to switch the L3 elements and bring up any routers, DHCP servers, etc I see, cool can we/do we already have this in a spec? 
I think it would be hugely helpful marios_: no spec yet, I guess we'll need two specs at least for nova and neutron correct we'll need two specs the nova team is expecting one from me for that side of it and I was thinking obondarev you could lead writing the neutron one I can work on neutron spec then yeah) obondarev: i'd be grateful if you added me as reviewer - will keep a lookout anyway marios_: sure, I will I'll review too so waiting for pluggable IPAM is kind of riscky for neutorn spec to wait for yeah that is a concern of mine probably need to include both options there there is a spec in the review queue for it, but not sure that close enough to final form yeah I think options are good IPAM and monolithic was also hoping by the end of the SLC sprint we'd have better idea of IPAM spec you mean that one in December? yes it is something we can work on over IRC/email but there is so much in flight that the ipam is a bit held up on the API refactor stuff sorry, you mean what? the migration spec is something we work on over IRC/email oh, got it right and I'm hoping the IPAM spec will start to converge in the next week or so would be great the holiday here in the US will create a drag on velocity :) but mestery and I are really wanting to get many of the items solidified before everyone gets distracted with end of the year stuff would be cool to have speck landed in kilo-1 specks* agreed? if we don't I'll have an army of nova cores asking questions :) haha) anyone has more questions at this point? ok, cool so there was the one item I wanted to circle back to sure belmoreira: mentioned IPtables we'll have to migrate the rules when move the device to a new bridge otherwise we'll need to make ml2 aware of how nova writes the rules which might be messy does migrate means clearing old ones and creating new? 
yeah we will need to clear the old ones but I think that should be accomplished with nova no longer thinks that it is managing that vif so nova will clear rules, right? that's my working plan we'll be interuptting connections but that seems like the best approach agree seems we're running out of time.. -*- marios_ joins another call thanks everyone for joining the chat thanks for the invite oleg, will try tag along and help out with whatever bits it is useful we have different chains in nova and neutron can nova chains be removed only when all neutron ones are ready? belmoreira: yes.. I think that we can orchestrate it so that occurs thanks everyone, let's continue on irc/ML, and we can have another meeting like this if needed thanks and see you on irc obondarev: thanks for organizing this thanks thanks From d.w.chadwick at kent.ac.uk Wed Dec 24 09:08:46 2014 From: d.w.chadwick at kent.ac.uk (David Chadwick) Date: Wed, 24 Dec 2014 09:08:46 +0000 Subject: [openstack-dev] [Keystone] Bug in federation In-Reply-To: <26D9ECC1-E258-401A-A545-C3C874D9C68E@gmail.com> References: <549999A1.2070003@kent.ac.uk> <5499A7A5.2000403@redhat.com> <5499C385.6040301@kent.ac.uk> <26D9ECC1-E258-401A-A545-C3C874D9C68E@gmail.com> Message-ID: <549A829E.4080703@kent.ac.uk> On 23/12/2014 21:56, Morgan Fainberg wrote: > >> On Dec 23, 2014, at 1:08 PM, Dolph Mathews > > wrote: >> >> >> On Tue, Dec 23, 2014 at 1:33 PM, David >> Chadwick > wrote: >> >> Hi Adam >> >> On 23/12/2014 17:34, Adam Young wrote: >> > On 12/23/2014 11:34 AM, David Chadwick wrote: >> >> Hi guys >> >> >> >> we now have the ABFAB federation protocol working with Keystone, using a >> >> modified mod_auth_kerb plugin for Apache (available from the project >> >> Moonshot web site). However, we did not change Keystone configuration >> >> from its original SAML federation configuration, when it was talking to >> >> SAML IDPs, using mod_shibboleth. 
Neither did we modify the Keystone code >> >> (which I believe had to be done for OpenID connect.) We simply replaced >> >> mod_shibboleth with mod_auth_kerb and talked to a completely different >> >> IDP with a different protocol. And everything worked just fine. >> >> >> >> Consequently Keystone is broken, since you can configure it to trust a >> >> particular IDP, talking a particular protocol, but Apache will happily >> >> talk to another IDP, using a different protocol, and Keystone cannot >> >> tell the difference and will happily accept the authenticated user. >> >> Keystone should reject any authenticated user who does not come from the >> >> trusted IDP talking the correct protocol. Otherwise there is no point in >> >> configuring Keystone with this information, if it is ignored by Keystone. >> > The IDP and the Protocol should be passed from HTTPD in env vars. Can >> > you confirm/deny that this is the case now? >> >> What is passed from Apache is the 'PATH_INFO' variable, and it is >> set to >> the URL of Keystone that is being protected, which in our case is >> /OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth >> >> There are also the following arguments passed to Keystone >> 'wsgiorg.routing_args': (> 0x7ffaba339190>, {'identity_provider': u'KentProxy', 'protocol': >> u'saml2'}) >> >> and >> >> 'PATH_TRANSLATED': >> '/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth' >> >> So Apache is telling Keystone that it has protected the URL that >> Keystone has configured to be protected. >> >> However, Apache has been configured to protect this URL with the ABFAB >> protocol and the local Radius server, rather than the KentProxy >> IdP and >> the SAML2 protocol. So we could say that Apache is lying to Keystone, >> and because Keystone trusts Apache, then Keystone trusts Apache's lies >> and wrongly thinks that the correct IDP and protocol were used. 
>> >> The only sure way to protect Keystone from a wrongly or mal-configured >> Apache is to have end to end security, where Keystone gets a token >> from >> the IDP that it can validate, to prove that it is the trusted IDP that >> it is talking to. In other words, if Keystone is given the original >> signed SAML assertion from the IDP, it will know for definite that the >> user was authenticated by the trusted IDP using the trusted protocol >> >> >> So the "bug" is a misconfiguration, not an actual bug. The goal was to >> trust and leverage httpd, not reimplement it and all its extensions. > > Fixing this "bug" would be moving towards Keystone needing to implement > all of the various protocols to avoid "misconfigurations". There are > probably some more values that can be passed down from the Apache layer > to help provide more confidence in the IDP that is being used. I don't > see a real tangible benefit to moving away from leveraging HTTPD for > handling the heavy lifting when handling federated Identity. It's not as heavy as you suggest. Apache would still do all the protocol negotiation and validation. Keystone would only need to verify the signature of the incoming SAML assertion in order to validate who the IDP was, and that it was SAML. (Remember that Keystone already implements SAML for sending out SAML assertions, which is much more heavyweight.) ABFAB sends an unsigned SAML assertion embedded in a Radius attribute, so obtaining this and doing a minimum of field checking would be sufficient. There will be something similar that can be done for OpenID Connect. So we are not talking about redoing all the protocol handling, simply checking that the trust rules that have already been configured into Keystone are actually being followed by Apache. "Trust but verify", in the words of Ronald Reagan. regards David > > -- Morgan > >> >> regards >> >> David >> >> > >> > On the Apache side we are looking to expand the set of variables set. 
>> > http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables >> > >> > >> >> The original SAML assertion >> > >> > mod_shib does support Shib-Identity-Provider : >> > >> > >> > https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAttributeAccess#NativeSPAttributeAccess-CustomSPVariables >> > >> > >> > Which should be sufficient: if the user is coming in via >> mod_shib, they >> > are using SAML. >> > >> > >> > >> >> >> >> BTW, we are using the Juno release. We should fix this bug in Kilo. >> >> >> >> As I have been saying for many months, Keystone does not know >> anything >> >> about SAML or ABFAB or OpenID Connect protocols, so there is >> currently >> >> no point in configuring this information into Keystone. >> Keystone is only >> >> aware of environmental parameters coming from Apache. So this >> is the >> >> protocol that Keystone recognises. If you want Keystone to try to >> >> control the federation protocol and IDPs used by Apache, then >> you will >> >> need the Apache plugins to pass the name of the IDP and the >> protocol >> >> being used as environmental parameters to Keystone, and then >> Keystone >> >> can check that the ones that it has been configured to trust, are >> >> actually being used by Apache. 
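[Editorial sketch] The check David proposes above (have the Apache auth module pass the name of the IdP and the protocol being used as environment variables, and have Keystone verify them against its configured trust before accepting the authenticated user) can be illustrated in a few lines of Python. This is an illustrative sketch only, not Keystone code; the environment variable names FEDERATION_IDENTITY_PROVIDER and FEDERATION_PROTOCOL are invented for the example.

```python
# Illustrative sketch, not Keystone code: verify that the (IdP, protocol)
# pair reported by the Apache auth module matches what was configured as
# trusted, instead of accepting whatever httpd authenticated.
TRUSTED_FEDERATIONS = {("KentProxy", "saml2")}

def validate_federated_request(environ):
    # Reject the request unless a trusted (IdP, protocol) pair was reported.
    idp = environ.get("FEDERATION_IDENTITY_PROVIDER")
    protocol = environ.get("FEDERATION_PROTOCOL")
    if (idp, protocol) not in TRUSTED_FEDERATIONS:
        raise PermissionError(
            "untrusted federation: idp=%s protocol=%s" % (idp, protocol))
    return idp, protocol

# Request authenticated by the trusted IdP/protocol pair: accepted.
accepted = validate_federated_request(
    {"FEDERATION_IDENTITY_PROVIDER": "KentProxy",
     "FEDERATION_PROTOCOL": "saml2"})

# Apache misconfigured to authenticate via another IdP/protocol: rejected.
try:
    validate_federated_request(
        {"FEDERATION_IDENTITY_PROVIDER": "LocalRadius",
         "FEDERATION_PROTOCOL": "abfab"})
    rejected = False
except PermissionError:
    rejected = True

print(accepted, rejected)
```

This only covers the "trust but verify" check against Apache's report; it does not provide the end-to-end assurance of validating a signed assertion, which is the stronger option discussed above.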
>> >> >> >> regards >> >> >> >> David >> >> >> >> _______________________________________________ >> >> OpenStack-dev mailing list >> >> OpenStack-dev at lists.openstack.org >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tengqim at linux.vnet.ibm.com Wed Dec 24 09:48:56 2014 From: tengqim at linux.vnet.ibm.com (Qiming Teng) Date: Wed, 24 Dec 2014 17:48:56 +0800 Subject: [openstack-dev] [Heat][oslo-incubator][oslo-log] Logging Unicode characters Message-ID: <20141224094855.GA28021@localhost> Hi, When trying to enable stack names in Heat to use unicode strings, I am stuck by a weird behavior of logging. 
Suppose I have a stack name assigned some non-ASCII string, then when stack tries to log something here:

heat/engine/stack.py:
536         LOG.info(_LI('Stack %(action)s %(status)s (%(name)s): '
537                      '%(reason)s'),
538                  {'action': action,
539                   'status': status,
540                   'name': self.name,  # type(self.name) == unicode here
541                   'reason': reason})

I'm seeing the following errors from the h-eng session:

Traceback (most recent call last):
  File "/usr/lib64/python2.6/logging/__init__.py", line 799, in emit
    stream.write(fs % msg.decode('utf-8'))
  File "/usr/lib64/python2.6/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 114-115: ordinal not in range(128)

Does this mean logging cannot handle Unicode correctly? No. I did the following experiment:

$ cat logtest
#!/usr/bin/env python

import sys

from oslo.utils import encodeutils
from oslo import i18n

from heat.common.i18n import _LI
from heat.openstack.common import log as logging

i18n.enable_lazy()

LOG = logging.getLogger('logtest')
logging.setup('heat')

print('sys.stdin.encoding: %s' % sys.stdin.encoding)
print('sys.getdefaultencoding: %s' % sys.getdefaultencoding())

s = sys.argv[1]
print('s is: %s' % type(s))

stack_name = encodeutils.safe_decode(s)
print('stack_name is: %s' % type(stack_name))

# stack_name is unicode here
LOG.error(_LI('stack name: %(name)s') % {'name': stack_name})

[tengqm at node1 heat]$ ./logtest ??
sys.stdin.encoding: UTF-8
sys.getdefaultencoding: ascii
s is: <type 'str'>
stack_name is: <type 'unicode'>
2014-12-24 17:51:13.799 29194 ERROR logtest [-] stack name: ??

It worked. After spending more than one day on this, I'm seeking help from people here. What's wrong with Unicode stack names here? Any hints are appreciated. 
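[Editorial sketch] The traceback above can be reproduced outside of logging. On Python 2, calling .decode('utf-8') on text that is *already* unicode makes the interpreter first encode it with the default 'ascii' codec, and that implicit encode is what raises the UnicodeEncodeError seen inside logging's emit(), even though it surfaces from within a decode call. A minimal sketch, with the implicit coercion written out explicitly so it also runs on Python 3:

```python
# Sketch of the failure mode: Python 2's unicode_text.decode('utf-8')
# implicitly does an ascii encode first; non-ASCII text makes that
# implicit encode blow up with UnicodeEncodeError.
stack_name = u"\u6d4b\u8bd5"  # a non-ASCII stack name, already unicode text

def py2_implicit_roundtrip(text):
    # What Python 2 effectively does for unicode_text.decode('utf-8'):
    # encode with the default ('ascii') codec first, then decode as requested.
    return text.encode("ascii").decode("utf-8")

try:
    py2_implicit_roundtrip(u"stack name: " + stack_name)
    result = "no error"
except UnicodeEncodeError:
    result = "UnicodeEncodeError"

print(result)
```

This suggests the problem is not the unicode stack name itself but a byte-string component elsewhere in the formatted message forcing the handler down the decode path.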
Regards,
- Qiming

From shardy at redhat.com Wed Dec 24 10:17:46 2014
From: shardy at redhat.com (Steven Hardy)
Date: Wed, 24 Dec 2014 10:17:46 +0000
Subject: [openstack-dev] [heat] Application level HA via Heat
In-Reply-To: <5498823D.3050005@redhat.com>
References: <20141222182123.GC14130@t430slt.redhat.com> <5498823D.3050005@redhat.com>
Message-ID: <20141224101744.GB16954@t430slt.redhat.com>

On Mon, Dec 22, 2014 at 03:42:37PM -0500, Zane Bitter wrote:
> On 22/12/14 13:21, Steven Hardy wrote:
> >Hi all,
> >
> >So, lately I've been having various discussions around $subject, and I know
> >it's something several folks in our community are interested in, so I
> >wanted to get some ideas I've been pondering out there for discussion.
> >
> >I'll start with a proposal of how we might replace HARestarter with
> >AutoScaling group, then give some initial ideas of how we might evolve that
> >into something capable of a sort-of active/active failover.
> >
> >1. HARestarter replacement.
> >
> >My position on HARestarter has long been that equivalent functionality
> >should be available via AutoScalingGroups of size 1. Turns out that
> >shouldn't be too hard to do:
> >
> >  resources:
> >    server_group:
> >      type: OS::Heat::AutoScalingGroup
> >      properties:
> >        min_size: 1
> >        max_size: 1
> >        resource:
> >          type: ha_server.yaml
> >
> >    server_replacement_policy:
> >      type: OS::Heat::ScalingPolicy
> >      properties:
> >        # FIXME: this adjustment_type doesn't exist yet
> >        adjustment_type: replace_oldest
> >        auto_scaling_group_id: {get_resource: server_group}
> >        scaling_adjustment: 1
>
> One potential issue with this is that it is a little bit _too_ equivalent
> to HARestarter - it will replace your whole scaled unit (ha_server.yaml in
> this case) rather than just the failed resource inside.
Personally I don't see that as a problem, because the interface makes that
explicit - if you put a resource in an AutoScalingGroup, you expect it to get
created/deleted on group adjustment, so anything you don't want replaced stays
outside the group.

Happy to consider other alternatives which do less destructive replacement,
but to me this seems like the simplest possible way to replace HARestarter
with something we can actually support long term.

Even if "just replace failed resource" is somehow made available later, we'll
still want to support AutoScalingGroup, and "replace_oldest" is likely to be
useful in other situations, not just this use-case.

Do you have specific ideas of how the just-replace-failed-resource feature
might be implemented? A way for a signal to declare a resource failed so
convergence auto-healing does a less destructive replacement?

> >So, currently our ScalingPolicy resource can only support three adjustment
> >types, all of which change the group capacity. AutoScalingGroup already
> >supports batched replacements for rolling updates, so if we modify the
> >interface to allow a signal to trigger replacement of a group member, then
> >the snippet above should be logically equivalent to HARestarter AFAICT.
> >
> >The steps to do this should be:
> >
> > - Standardize the ScalingPolicy-AutoScaling group interface, so
> >asynchronous adjustments (e.g. signals) between the two resources don't use
> >the "adjust" method.
> >
> > - Add an option to replace a member to the signal interface of
> >AutoScalingGroup
> >
> > - Add the new "replace" adjustment type to ScalingPolicy
>
> I think I am broadly in favour of this.

Ok, great - I think we'll probably want replace_oldest, replace_newest, and
replace_specific, such that both alarm and operator driven replacement have
flexibility over what member is replaced.

> >I posted a patch which implements the first step, and the second will be
> > > >https://review.openstack.org/#/c/143496/ > >https://review.openstack.org/#/c/140781/ > > > >2. A possible next step towards active/active HA failover > > > >The next part is the ability to notify before replacement that a scaling > >action is about to happen (just like we do for LoadBalancer resources > >already) and orchestrate some or all of the following: > > > >- Attempt to quiesce the currently active node (may be impossible if it's > > in a bad state) > > > >- Detach resources (e.g volumes primarily?) from the current active node, > > and attach them to the new active node > > > >- Run some config action to activate the new node (e.g run some config > > script to fsck and mount a volume, then start some application). > > > >The first step is possible by putting a SofwareConfig/SoftwareDeployment > >resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the > >node is too bricked to respond and specifying DELETE action so it only runs > >when we replace the resource). > > > >The third step is possible either via a script inside the box which polls > >for the volume attachment, or possibly via an update-only software config. > > > >The second step is the missing piece AFAICS. > > > >I've been wondering if we can do something inside a new heat resource, > >which knows what the current "active" member of an ASG is, and gets > >triggered on a "replace" signal to orchestrate e.g deleting and creating a > >VolumeAttachment resource to move a volume between servers. 
> >Something like:
> >
> >  resources:
> >    server_group:
> >      type: OS::Heat::AutoScalingGroup
> >      properties:
> >        min_size: 2
> >        max_size: 2
> >        resource:
> >          type: ha_server.yaml
> >
> >    server_failover_policy:
> >      type: OS::Heat::FailoverPolicy
> >      properties:
> >        auto_scaling_group_id: {get_resource: server_group}
> >        resource:
> >          type: OS::Cinder::VolumeAttachment
> >          properties:
> >            # FIXME: "refs" is a ResourceGroup interface not currently
> >            # available in AutoScalingGroup
> >            instance_uuid: {get_attr: [server_group, refs, 1]}
> >
> >    server_replacement_policy:
> >      type: OS::Heat::ScalingPolicy
> >      properties:
> >        # FIXME: this adjustment_type doesn't exist yet
> >        adjustment_type: replace_oldest
> >        auto_scaling_policy_id: {get_resource: server_failover_policy}
> >        scaling_adjustment: 1
>
> This actually fails because a VolumeAttachment needs to be updated in place;
> if you try to switch servers but keep the same Volume when replacing the
> attachment you'll get an error.

Doh, you're right, so FailoverPolicy would need to know how to delete then
recreate the resource instead of doing an in-place update.

> TBH {get_attr: [server_group, refs, 1]} is doing most of the heavy lifting
> here, so in theory you could just have an OS::Cinder::VolumeAttachment
> instead of the FailoverPolicy and then all you need is a way of triggering a
> stack update with the same template & params. I know Ton added a PATCH
> method to update in Juno so that you don't have to pass parameters any more,
> and I believe it's planned to do the same with the template.

Interesting, any thoughts on what the template-level interface to that PATCH
update might look like? (I'm guessing you'll probably say a mistral resource?)

> >By chaining policies like this we could trigger an update on the attachment
> >resource (or a nested template via a provider resource containing many
> >attachments or other resources) every time the ScalingPolicy is triggered.
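[Editorial sketch] The proposed replace_oldest adjustment type in the snippet above (and the replace_newest / replace_specific variants discussed earlier in the thread) boils down to a member-selection step before the group performs its batched replacement. A minimal illustrative sketch, not Heat code; the member list shape and the created_at field are invented for the example:

```python
from datetime import datetime, timedelta

def select_member(members, adjustment_type, target=None):
    """Pick the group member a replace_* adjustment would act on.

    members: list of (name, created_at) tuples.
    """
    if adjustment_type == "replace_oldest":
        return min(members, key=lambda m: m[1])[0]
    if adjustment_type == "replace_newest":
        return max(members, key=lambda m: m[1])[0]
    if adjustment_type == "replace_specific":
        if target not in (m[0] for m in members):
            raise ValueError("unknown group member: %s" % target)
        return target
    raise ValueError("unsupported adjustment_type: %s" % adjustment_type)

now = datetime(2014, 12, 24)
group = [("server-0", now - timedelta(hours=2)),
         ("server-1", now - timedelta(hours=1))]

oldest = select_member(group, "replace_oldest")
newest = select_member(group, "replace_newest")
print(oldest, newest)
```

In the alarm-driven case the selection would be fixed by the policy; in the operator-driven case replace_specific gives explicit control over which member is torn down.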
> >For the sake of clarity, I've not included the existing stuff like
> >ceilometer alarm resources etc above, but hopefully it gets the idea
> >across so we can discuss further, what are people's thoughts? I'm quite
> >happy to iterate on the idea if folks have suggestions for a better
> >interface etc :)
> >
> >One problem I see with the above approach is you'd have to trigger a
> >failover after stack create to get the initial volume attached, still
> >pondering ideas on how best to solve that..
>
> To me this is falling into the same old trap of "hey, we want to run this
> custom workflow, all we need to do is add a new resource type to hang some
> code on". That's pretty much how we got HARestarter.
>
> Also, like HARestarter, this cannot hope to cover the range of possible
> actions that might be needed by various applications.
>
> IMHO the "right" way to implement this is that the Ceilometer alarm triggers
> a workflow in Mistral that takes the appropriate action defined by the user,
> which may (or may not) include updating the Heat stack to a new template
> where the shared storage gets attached to a different server.

Ok, I'm quite happy to accept this may be a better long-term solution, but
can anyone comment on the current maturity level of Mistral? Questions which
spring to mind are:

- Is the DSL stable now?

- What's the roadmap re incubation (there are a lot of TBD's here:
  https://wiki.openstack.org/wiki/Mistral/Incubation)

- How does deferred authentication work for alarm triggered workflows, e.g.
  if a ceilometer alarm (which authenticates as a stack domain user) needs to
  signal Mistral to start a workflow?

I guess a first step is creating a contrib Mistral resource and investigating
it, but it would be great if anyone has first-hand experiences they can share
before we burn too much time digging into it.
Cheers, Steve From marco.fargetta at ct.infn.it Wed Dec 24 10:19:49 2014 From: marco.fargetta at ct.infn.it (Marco Fargetta) Date: Wed, 24 Dec 2014 11:19:49 +0100 Subject: [openstack-dev] [Keystone] Bug in federation In-Reply-To: <549A829E.4080703@kent.ac.uk> References: <549999A1.2070003@kent.ac.uk> <5499A7A5.2000403@redhat.com> <5499C385.6040301@kent.ac.uk> <26D9ECC1-E258-401A-A545-C3C874D9C68E@gmail.com> <549A829E.4080703@kent.ac.uk> Message-ID: <60568277-58C2-49B7-A7CC-CE12E94800DE@ct.infn.it> Hi All, this bug was already reported and fixed in two steps: https://bugs.launchpad.net/ossn/+bug/1390124 The first step is in the documentation. There should also be an OSSN advisory for previous versions of OpenStack. The solution consists of configuring Shibboleth to use different IdPs for different URLs. The second step, still in progress, is to include an ID in the IdP configuration. My patch is under review here: https://review.openstack.org/#/c/142743/ Let me know if it is enough to solve the issue in your case. Marco > On 24 Dec 2014, at 10:08, David Chadwick wrote: > > > > On 23/12/2014 21:56, Morgan Fainberg wrote: >> >>> On Dec 23, 2014, at 1:08 PM, Dolph Mathews >> > wrote: >>> >>> >>> On Tue, Dec 23, 2014 at 1:33 PM, David >>> Chadwick > wrote: >>> >>> Hi Adam >>> >>> On 23/12/2014 17:34, Adam Young wrote: >>>> On 12/23/2014 11:34 AM, David Chadwick wrote: >>>>> Hi guys >>>>> >>>>> we now have the ABFAB federation protocol working with Keystone, using a >>>>> modified mod_auth_kerb plugin for Apache (available from the project >>>>> Moonshot web site). However, we did not change Keystone configuration >>>>> from its original SAML federation configuration, when it was talking to >>>>> SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code >>>>> (which I believe had to be done for OpenID Connect.) We simply replaced >>>>> mod_shibboleth with mod_auth_kerb and talked to a completely different >>>>> IDP with a different protocol.
And everything worked just fine. >>>>> >>>>> Consequently Keystone is broken, since you can configure it to trust a >>>>> particular IDP, talking a particular protocol, but Apache will happily >>>>> talk to another IDP, using a different protocol, and Keystone cannot >>>>> tell the difference and will happily accept the authenticated user. >>>>> Keystone should reject any authenticated user who does not come from the >>>>> trusted IDP talking the correct protocol. Otherwise there is no point in >>>>> configuring Keystone with this information, if it is ignored by Keystone. >>>> The IDP and the Protocol should be passed from HTTPD in env vars. Can >>>> you confirm/deny that this is the case now? >>> >>> What is passed from Apache is the 'PATH_INFO' variable, and it is >>> set to >>> the URL of Keystone that is being protected, which in our case is >>> /OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth >>> >>> There are also the following arguments passed to Keystone >>> 'wsgiorg.routing_args': (>> 0x7ffaba339190>, {'identity_provider': u'KentProxy', 'protocol': >>> u'saml2'}) >>> >>> and >>> >>> 'PATH_TRANSLATED': >>> '/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth' >>> >>> So Apache is telling Keystone that it has protected the URL that >>> Keystone has configured to be protected. >>> >>> However, Apache has been configured to protect this URL with the ABFAB >>> protocol and the local Radius server, rather than the KentProxy >>> IdP and >>> the SAML2 protocol. So we could say that Apache is lying to Keystone, >>> and because Keystone trusts Apache, then Keystone trusts Apache's lies >>> and wrongly thinks that the correct IDP and protocol were used. >>> >>> The only sure way to protect Keystone from a wrongly or mal-configured >>> Apache is to have end to end security, where Keystone gets a token >>> from >>> the IDP that it can validate, to prove that it is the trusted IDP that >>> it is talking to. 
In other words, if Keystone is given the original >>> signed SAML assertion from the IDP, it will know for certain that the >>> user was authenticated by the trusted IDP using the trusted protocol >>> >>> >>> So the "bug" is a misconfiguration, not an actual bug. The goal was to >>> trust and leverage httpd, not reimplement it and all its extensions. >> >> Fixing this "bug" would be moving towards Keystone needing to implement >> all of the various protocols to avoid "misconfigurations". There are >> probably some more values that can be passed down from the Apache layer >> to help provide more confidence in the IDP that is being used. I don't >> see a real tangible benefit to moving away from leveraging HTTPD for >> handling the heavy lifting when handling federated Identity. > > It's not as heavy as you suggest. Apache would still do all the protocol > negotiation and validation. Keystone would only need to verify the > signature of the incoming SAML assertion in order to validate who the > IDP was, and that it was SAML. (Remember that Keystone already > implements SAML for sending out SAML assertions, which is much more > heavyweight.) ABFAB sends an unsigned SAML assertion embedded in a > Radius attribute, so obtaining this and doing a minimum of field > checking would be sufficient. There will be something similar that can > be done for OpenID Connect. > > So we are not talking about redoing all the protocol handling, simply > checking that the trust rules that have already been configured into > Keystone are actually being followed by Apache. "Trust but verify" in > the words of Ronald Reagan. > > regards > > David > >> >> --Morgan >> >>> >>> regards >>> >>> David >>> >>>> >>>> On the Apache side we are looking to expand the set of variables set.
>>>> http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables >>>> >>>> >>> >>> The original SAML assertion >>>> >>>> mod_shib does support Shib-Identity-Provider : >>>> >>>> >>>> https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAttributeAccess#NativeSPAttributeAccess-CustomSPVariables >>>> >>>> >>>> Which should be sufficient: if the user is coming in via >>> mod_shib, they >>>> are using SAML. >>>> >>>> >>>> >>>>> >>>>> BTW, we are using the Juno release. We should fix this bug in Kilo. >>>>> >>>>> As I have been saying for many months, Keystone does not know >>> anything >>>>> about SAML or ABFAB or OpenID Connect protocols, so there is >>> currently >>>>> no point in configuring this information into Keystone. >>> Keystone is only >>>>> aware of environmental parameters coming from Apache. So this >>> is the >>>>> protocol that Keystone recognises. If you want Keystone to try to >>>>> control the federation protocol and IDPs used by Apache, then >>> you will >>>>> need the Apache plugins to pass the name of the IDP and the >>> protocol >>>>> being used as environmental parameters to Keystone, and then >>> Keystone >>>>> can check that the ones that it has been configured to trust, are >>>>> actually being used by Apache. 
>>>>> >>>>> regards >>>>> >>>>> David >>>>> >>>>> _______________________________________________ >>>>> OpenStack-dev mailing list >>>>> OpenStack-dev at lists.openstack.org >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ==================================================== Eng. Marco Fargetta, PhD Istituto Nazionale di Fisica Nucleare (INFN) Catania, Italy EMail: Marco.Fargetta at ct.infn.it ==================================================== -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 4551 bytes Desc: not available URL: From rakhmerov at mirantis.com Wed Dec 24 11:19:12 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Wed, 24 Dec 2014 17:19:12 +0600 Subject: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs In-Reply-To: References: <987B4DBE-F3F8-467A-8867-A95B5BC769BB@mirantis.com> Message-ID: <8D93C1AB-F879-4FDF-ABE1-5D6384C84F4C@mirantis.com> > On 24 Dec 2014, at 14:06, Anastasia Kuznetsova wrote: > > 1) How will the end user pass env variables to a workflow? Will you add one more optional parameter to the execution-create command? > mistral execution-create wf wf_input wf_params wf_env > If yes, then what will wf_env be, a JSON file? Yes. IMO it should be possible to specify either a string (the name of a previously stored environment) or a JSON file (a so-called ad-hoc environment). > 2) Returning to the first example: > ... > action: std.sql conn_str={$.env.conn_str} query={$.query} > ... > $.env - is it the name of an environment, or will it be a registered syntax for getting access to values from the env? So far we agreed that 'key' should not be a registered key. An environment (optionally specified) is just another storage of variables coming after the workflow context in a lookup chain. So if somewhere in a workflow we have an expression $.something, then this 'something' will first be looked up in the workflow context and, if it doesn't exist there, then looked up in the specified environment. But if we want to explicitly group a set of variables we can use any key (except reserved ones such as "__actions"), for example 'env'. > 3) Can a user have a few environments? Yes. That's one of the goals of introducing the concept of an environment, so that the same workflows can run in different environments (e.g. with different email settings, any kinds of passwords etc.). Renat Akhmerov @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rakhmerov at mirantis.com Wed Dec 24 11:40:22 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Wed, 24 Dec 2014 17:40:22 +0600 Subject: [openstack-dev] [heat] Application level HA via Heat In-Reply-To: <20141224101744.GB16954@t430slt.redhat.com> References: <20141222182123.GC14130@t430slt.redhat.com> <5498823D.3050005@redhat.com> <20141224101744.GB16954@t430slt.redhat.com> Message-ID: <2EF9C3F5-FA84-488A-BE0D-F17D4F9D6D60@mirantis.com> Hi > Ok, I'm quite happy to accept this may be a better long-term solution, but > can anyone comment on the current maturity level of Mistral? Questions > which spring to mind are: > > - Is the DSL stable now? You can think "yes" because although we keep adding new features we do it in a backwards-compatible manner. I personally try to be very cautious about this. > - What's the roadmap re incubation (there are a lot of TBD's here: > https://wiki.openstack.org/wiki/Mistral/Incubation) Ooh yeah, this page is very obsolete, which is actually my fault because I didn't pay a lot of attention to it after I heard all these rumors about the TC changing the whole approach around getting projects incubated/integrated. I think incubation readiness from a technical perspective is good (various style checks, procedures etc.); even if there's still something that we need to adjust, it must not be difficult or time-consuming. The main question for the last half a year has been "What OpenStack program best fits Mistral?". So far we've had two candidates: Orchestration and some new program (e.g. Workflow Service). However, nothing is decided yet on that. > - How does deferred authentication work for alarm triggered workflows, e.g > if a ceilometer alarm (which authenticates as a stack domain user) needs > to signal Mistral to start a workflow? It works via Keystone trusts. It works, but there's still an issue that we have to fix.
If we authenticate by a previously created trust and try to call Nova, it fails with an authentication error. I know it's been solved in other projects (e.g. Heat) so we need to look at it. > I guess a first step is creating a contrib Mistral resource and > investigating it, but it would be great if anyone has first-hand > experiences they can share before we burn too much time digging into it. Yes, we have already started discussing how we can create a Mistral resource for Heat. Looks like there are a couple of volunteers who can do that. Anyway, I'm totally for it, and any help from our side can be provided (including the implementation itself). Renat Akhmerov @ Mirantis Inc. From slukjanov at mirantis.com Wed Dec 24 11:47:22 2014 From: slukjanov at mirantis.com (Sergey Lukjanov) Date: Wed, 24 Dec 2014 15:47:22 +0400 Subject: [openstack-dev] [sahara] no meetings next 2 weeks Message-ID: Hi sahara folks, Let's cancel the next two weekly meetings because of Christmas and New Year's Day. Thanks P.S. Happy holidays! -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tengqim at linux.vnet.ibm.com Wed Dec 24 12:58:13 2014 From: tengqim at linux.vnet.ibm.com (Qiming Teng) Date: Wed, 24 Dec 2014 20:58:13 +0800 Subject: [openstack-dev] [Heat][oslo-incubator][oslo-log] Logging Unicode characters In-Reply-To: <20141224094855.GA28021@localhost> References: <20141224094855.GA28021@localhost> Message-ID: <20141224125811.GA28811@localhost> It seems the reason is that in devstack, 'screen' is not started with Unicode support. Still checking ... Regards, Qiming On Wed, Dec 24, 2014 at 05:48:56PM +0800, Qiming Teng wrote: > Hi, > > When trying to enable stack names in Heat to use unicode strings, I am > stuck by a weird behavior of logging.
> > Suppose I have a stack name assigned some non-ASCII string; then when the > stack tries to log something here: > > heat/engine/stack.py: > > 536 LOG.info(_LI('Stack %(action)s %(status)s (%(name)s): ' > 537 '%(reason)s'), > 538 {'action': action, > 539 'status': status, > 540 'name': self.name, # type(self.name)==unicode here > 541 'reason': reason}) > > I'm seeing the following errors from the h-eng session: > > Traceback (most recent call last): > File "/usr/lib64/python2.6/logging/__init__.py", line 799, in emit > stream.write(fs % msg.decode('utf-8')) > File "/usr/lib64/python2.6/encodings/utf_8.py", line 16, in decode > return codecs.utf_8_decode(input, errors, True) > UnicodeEncodeError: 'ascii' codec can't encode characters in position 114-115: > ordinal not in range(128) > > Does this mean logging cannot handle Unicode correctly? No. I did the > following experiment: > > $ cat logtest > > #!/usr/bin/env python > > import sys > > from oslo.utils import encodeutils > from oslo import i18n > > from heat.common.i18n import _LI > from heat.openstack.common import log as logging > > i18n.enable_lazy() > > LOG = logging.getLogger('logtest') > logging.setup('heat') > > print('sys.stdin.encoding: %s' % sys.stdin.encoding) > print('sys.getdefaultencoding: %s' % sys.getdefaultencoding()) > > s = sys.argv[1] > print('s is: %s' % type(s)) > > stack_name = encodeutils.safe_decode(s) > print('stack_name is: %s' % type(stack_name)) > > # stack_name is unicode here > LOG.error(_LI('stack name: %(name)s') % {'name': stack_name}) > > $ ./logtest > > [tengqm at node1 heat]$ ./logtest ?? > sys.stdin.encoding: UTF-8 > sys.getdefaultencoding: ascii > s is: <type 'str'> > stack_name is: <type 'unicode'> > 2014-12-24 17:51:13.799 29194 ERROR logtest [-] stack name: ?? > > It worked. > > After spending more than one day on this, I'm seeking help from people > here. What's wrong with Unicode stack names here? > > Any hints are appreciated.
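For readers puzzling over the traceback: in Python 2, calling .decode('utf-8') on an object that is already unicode first *encodes* it back to a byte string using the default ascii codec, which is why a decode call can raise UnicodeEncodeError. The experiment above avoids that because safe_decode never calls .decode() on text that is already unicode. Roughly what that guard looks like, as a simplified sketch written with Python 3 str/bytes naming (the real oslo encodeutils helper handles a few more cases):

```python
def safe_decode(text, incoming='utf-8', errors='strict'):
    """Return `text` as a unicode string, decoding only when necessary.

    Already-decoded text is returned untouched, so nothing ever calls
    .decode() on a unicode string (the Python 2 trap hit above).
    """
    if isinstance(text, str):                  # already unicode: hands off
        return text
    if isinstance(text, (bytes, bytearray)):   # raw bytes: decode once
        return bytes(text).decode(incoming, errors)
    raise TypeError('%r is not a string' % (text,))
```

With this discipline, the only remaining failure mode is the output stream itself not accepting non-ASCII, which matches the follow-up observation that the devstack `screen` session was started without Unicode support.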
> > Regards, > - Qiming > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From eli at mirantis.com Wed Dec 24 13:15:07 2014 From: eli at mirantis.com (Evgeniy L) Date: Wed, 24 Dec 2014 17:15:07 +0400 Subject: [openstack-dev] [Fuel][Plugins] UI workflow, plugins enabling/disabling Message-ID: Hi, Recently we've discussed what plugins should look like from the user's point of view [1]. At one of the meetings it was decided to use the following flow:
1. The user installs a Fuel plugin (as usual, with `fuel plugins install fuel-plugin-name-1.0.0.fp`).
2. After that the plugin can be seen on the Plugins page; the button for this page will be placed somewhere between the Environments and Releases buttons.
3. Each plugin on the page has a checkbox; the checkbox represents the default state of the plugin for new environments. If the checkbox is checked, then when the user creates an environment he can see all of the buttons which are related to the plugin; e.g. in the case of Contrail he can see a new option in the list of network providers on the Network tab in the wizard.
4. During environment configuration the user should select the options which are related to the plugin; the information about the list of options and where they should be placed is described by the plugin developer.
5. When the user starts a deployment, Nailgun parses the tasks and, depending on their conditions, sends them to Astute. The conditions are described for each task by the plugin developer, e.g. "cluster:net_provider != 'contrail'". If a task doesn't have conditions, we always execute it.
Any comments on that? Thanks, [1] https://www.mail-archive.com/openstack-dev at lists.openstack.org/msg40878.html -------------- next part -------------- An HTML attachment was scrubbed...
URL: From akuznetsova at mirantis.com Wed Dec 24 13:34:15 2014 From: akuznetsova at mirantis.com (Anastasia Kuznetsova) Date: Wed, 24 Dec 2014 16:34:15 +0300 Subject: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs In-Reply-To: <8D93C1AB-F879-4FDF-ABE1-5D6384C84F4C@mirantis.com> References: <987B4DBE-F3F8-467A-8867-A95B5BC769BB@mirantis.com> <8D93C1AB-F879-4FDF-ABE1-5D6384C84F4C@mirantis.com> Message-ID: Renat, thanks for the response! One more question: So that same workflows could be running in different environments Asking about using a few environments, I meant within one workflow. For example, I need to work with two DBs and I have two environments: env1 = {"conn_str": ip, "user": user, "password": passwd} and env2 = {"conn_str": ip2, "user": user2, "password": passwd2}. Will it be possible to do something like this: tasks: connect_first_db: action: std.sql conn_str={$.env1.conn_str} query={$.query} publish: records: $ connect_second_db: action: std.sql conn_str={$.env2.conn_str} query={$.query} publish: records: $ Thanks, Anastasia Kuznetsova On Wed, Dec 24, 2014 at 2:19 PM, Renat Akhmerov wrote: > > On 24 Dec 2014, at 14:06, Anastasia Kuznetsova > wrote: > > 1) How will the end user pass env variables to a workflow? Will you add > one more optional parameter to the execution-create command? > mistral execution-create wf wf_input wf_params wf_env > If yes, then what will wf_env be, a JSON file? > > > Yes. IMO it should be possible to specify either a string (the name of a > previously stored environment) or a JSON file (a so-called ad-hoc > environment). > > 2) Returning to the first example: > ... > action: std.sql conn_str={$.env.conn_str} query={$.query} > ... > $.env - is it the name of an environment, or will it be a registered syntax for > getting access to values from the env? > > > So far we agreed that 'key' should not be a registered key.
An environment > (optionally specified) is just another storage of variables coming after the > workflow context in a lookup chain. So if somewhere in a workflow we have an > expression $.something, then this 'something' will first be looked up in the > workflow context and, if it doesn't exist there, then looked up in the specified > environment. > But if we want to explicitly group a set of variables we can use any > key (except reserved ones such as "__actions"), for example 'env'. > > 3) Can a user have a few environments? > > > Yes. That's one of the goals of introducing the concept of an environment, so > that the same workflows can run in different environments (e.g. with > different email settings, any kinds of passwords etc.). > > > Renat Akhmerov > @ Mirantis Inc. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dteselkin at mirantis.com Wed Dec 24 14:05:45 2014 From: dteselkin at mirantis.com (Dmitry Teselkin) Date: Wed, 24 Dec 2014 18:05:45 +0400 Subject: [openstack-dev] [Murano] Murano CI maintenance Message-ID: Hi, I'm going to update devstack on the Murano CI server. This should take approx. 2-3 hrs if there are no obstacles. During that period CI jobs will be disabled to avoid -1 scores with NOT_REGISTERED status. -- Thanks, Dmitry Teselkin Deployment Engineer Mirantis http://www.mirantis.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jdennis at redhat.com Wed Dec 24 14:15:34 2014 From: jdennis at redhat.com (John Dennis) Date: Wed, 24 Dec 2014 09:15:34 -0500 Subject: [openstack-dev] [Keystone] Bug in federation In-Reply-To: <5499C385.6040301@kent.ac.uk> References: <549999A1.2070003@kent.ac.uk> <5499A7A5.2000403@redhat.com> <5499C385.6040301@kent.ac.uk> Message-ID: <549ACA86.9070109@redhat.com> Can't this be solved with a couple of environment variables? The two key pieces of information needed are: 1) who authenticated the subject? 2) what authentication method was used? There is already precedent for AUTH_TYPE: it's used in AJP to initialize the authType property in a Java Servlet. AUTH_TYPE would cover item 2. Numerous places in Apache already set AUTH_TYPE. Perhaps there could be a convention that AUTH_TYPE could carry extra qualifying parameters much like HTTP headers do. The first token would be the primary mechanism, e.g. saml, negotiate, x509, etc. For authentication types that support multiple mechanisms (e.g. EAP, SAML, etc.) an extra parameter would qualify the actual mechanism used. For SAML that qualifying extra parameter could be the value from AuthnContextClassRef. Item 1 could be covered by a new environment variable AUTH_AUTHORITY. If AUTH_TYPE is negotiate (i.e. Kerberos) then the AUTH_AUTHORITY would be the KDC. For SAML it would probably be taken from the AuthenticatingAuthority element or the IdP entityID. I'm not sure I see the need for other layers to receive the full SAML assertion and validate the signature. One has to trust the server you're running in. It's the same concept as trusting REMOTE_USER.
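The check John describes could be consumed on the Keystone side roughly like the sketch below: compare the (AUTH_TYPE, AUTH_AUTHORITY) pair that the Apache auth module exported into the WSGI environ against the IdP/protocol pairs Keystone has been configured to trust. Note that AUTH_AUTHORITY is only a proposed variable at this point, and every concrete value below is invented for illustration:

```python
# Hypothetical trusted (mechanism, authority) pairs; values invented.
TRUSTED_FEDERATIONS = {
    ('saml', 'https://idp.example.org/idp/shibboleth'),
    ('negotiate', 'kdc.example.org'),
}

def federation_trusted(environ, trusted=TRUSTED_FEDERATIONS):
    """Return True iff the env vars match a configured trust rule."""
    # First token of AUTH_TYPE is the primary mechanism, per the
    # HTTP-header-like convention suggested above; extra qualifying
    # parameters (e.g. AuthnContextClassRef) follow after ';'.
    auth_type = environ.get('AUTH_TYPE', '').split(';')[0].strip().lower()
    authority = environ.get('AUTH_AUTHORITY', '').strip()
    return (auth_type, authority) in trusted

# Example environ as a mod_shib-style module might populate it.
environ = {'AUTH_TYPE': 'saml; class=PasswordProtectedTransport',
           'AUTH_AUTHORITY': 'https://idp.example.org/idp/shibboleth'}
```

The point of the sketch is that the decision reduces to a set-membership test once the auth module, not the URL layout, reports who authenticated the user and how.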
-- John From d.w.chadwick at kent.ac.uk Wed Dec 24 15:37:22 2014 From: d.w.chadwick at kent.ac.uk (David Chadwick) Date: Wed, 24 Dec 2014 15:37:22 +0000 Subject: [openstack-dev] [Keystone] Bug in federation In-Reply-To: <549ACA86.9070109@redhat.com> References: <549999A1.2070003@kent.ac.uk> <5499A7A5.2000403@redhat.com> <5499C385.6040301@kent.ac.uk> <549ACA86.9070109@redhat.com> Message-ID: <549ADDB2.8040205@kent.ac.uk> Hi John On 24/12/2014 14:15, John Dennis wrote: > Can't this be solved with a couple of environment variables? The two > key pieces of information needed are: > > 1) who authenticated the subject? AUTH_AUTHORITY or similar would stop wrong configuration of Apache if it was set by the protocol plugin module from the protocol messages it received. But it may take time for all plugin suppliers to adopt this and implement it. > > 2) what authentication method was used? It's not the authentication method that is being questioned (it could be un/pw, two-factor or any other method), but rather the federation protocol that was used. So I don't think AUTH_TYPE is the right parameter for what is required. > > There is already precedent for AUTH_TYPE: it's used in AJP to > initialize the authType property in a Java Servlet. AUTH_TYPE would > cover item 2. Numerous places in Apache already set AUTH_TYPE. Perhaps > there could be a convention that AUTH_TYPE could carry extra qualifying > parameters much like HTTP headers do. The first token would be the > primary mechanism, e.g. saml, negotiate, x509, etc. For authentication > types that support multiple mechanisms (e.g. EAP, SAML, etc.) an extra > parameter would qualify the actual mechanism used. For SAML that > qualifying extra parameter could be the value from AuthnContextClassRef. > > Item 1 could be covered by a new environment variable AUTH_AUTHORITY. > > If AUTH_TYPE is negotiate (i.e. Kerberos) then the AUTH_AUTHORITY would > be the KDC.
For SAML it would probably be taken from the > AuthenticatingAuthority element or the IdP entityID. > > I'm not sure I see the need for other layers to receive the full SAML > assertion and validate the signature. One has to trust the server you're > running in. It's the same concept as trusting REMOTE_USER. > Not quite. A badly configured Apache would not (should not) affect REMOTE_USER, as this should be set by the authn plugin. Currently we have nothing to check that Apache was correctly configured. regards David From tpb at dyncloud.net Wed Dec 24 15:41:08 2014 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 24 Dec 2014 10:41:08 -0500 Subject: [openstack-dev] [cinder] ratio: created to attached In-Reply-To: References: <54960C9B.3000705@dyncloud.net> Message-ID: <549ADE94.4010502@dyncloud.net> On 12/22/14 4:48 PM, John Griffith wrote: > On Sat, Dec 20, 2014 at 4:56 PM, Tom Barron wrote: > Does anyone have real-world experience, even data, to speak to the > question: in an OpenStack cloud, what is the likely ratio of (created) > cinder volumes to attached cinder volumes? > > Thanks, > > Tom Barron >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > Honestly I think the assumption is and should be 1:1, perhaps not 100% > duty-cycle, but certainly periods of time when there is a 100% attach > rate. >
From dpkshetty at gmail.com Wed Dec 24 15:57:25 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Wed, 24 Dec 2014 21:27:25 +0530 Subject: [openstack-dev] Hierarchical Multitenancy In-Reply-To: <2964CB70-9ED7-42DE-9F17-498458A0031B@gmail.com> References: <2964CB70-9ED7-42DE-9F17-498458A0031B@gmail.com> Message-ID: Raildo, Thanks for putting the blog, i really liked it as it helps to understand how hmt works. I am interested to know more about how hmt can be exploited for other OpenStack projects... Esp cinder, manila On Dec 23, 2014 5:55 AM, "Morgan Fainberg" wrote: > Hi Raildo, > > Thanks for putting this post together. I really appreciate all the work > you guys have done (and continue to do) to get the Hierarchical > Mulittenancy code into Keystone. It?s great to have the base implementation > merged into Keystone for the K1 milestone. I look forward to seeing the > rest of the development land during the rest of this cycle and what the > other OpenStack projects build around the HMT functionality. > > Cheers, > Morgan > > > > On Dec 22, 2014, at 1:49 PM, Raildo Mascena wrote: > > Hello folks, My team and I developed the Hierarchical Multitenancy concept > for Keystone in Kilo-1 but What is Hierarchical Multitenancy? What have we > implemented? What are the next steps for kilo? > To answers these questions, I created a blog post *http://raildo.me/hierarchical-multitenancy-in-openstack/ > * > > Any question, I'm available. > > -- > Raildo Mascena > Software Engineer. > Bachelor of Computer Science. 
> Distributed Systems Laboratory > Federal University of Campina Grande > Campina Grande, PB - Brazil > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.w.chadwick at kent.ac.uk Wed Dec 24 16:34:17 2014 From: d.w.chadwick at kent.ac.uk (David Chadwick) Date: Wed, 24 Dec 2014 16:34:17 +0000 Subject: [openstack-dev] [Keystone] Bug in federation In-Reply-To: <60568277-58C2-49B7-A7CC-CE12E94800DE@ct.infn.it> References: <549999A1.2070003@kent.ac.uk> <5499A7A5.2000403@redhat.com> <5499C385.6040301@kent.ac.uk> <26D9ECC1-E258-401A-A545-C3C874D9C68E@gmail.com> <549A829E.4080703@kent.ac.uk> <60568277-58C2-49B7-A7CC-CE12E94800DE@ct.infn.it> Message-ID: <549AEB09.7040305@kent.ac.uk> If I understand the bug fix correctly, it is firmly tying the URL to the IDP to the mapping rule. But I think this is going in the wrong direction for several reasons: 1. With Shibboleth, if you use a WAYF service, then anyone from hundreds of different federated IDPs may end up being used to authenticate the user who is accessing OpenStack/Keystone. We dont want to have hundreds of URLs. One is sufficient. Plus we dont know which IDP the user will eventually choose, as this is decided by the WAYF service. So the "correct" URL cannot be pre-chosen by the user. 2. With ABFAB, the IDP to be used is not known by the SP (Keystone) until after authentication. This is because the realm is incorporated in the user's ID (user at real.com) and this is not visible to Keystone. So it is not possible to have different URLs for different IDPs. They all have to use the same URL. 
So there should be one URL protecting Keystone, and when the response comes from Apache, Keystone needs to be able to reliably determine a) which IDP was used by the user b) which protocol was used and from this, choose which mapping rule to use regards david On 24/12/2014 10:19, Marco Fargetta wrote: > Hi All, > > this bug was already reported and fixed in two steps: > > https://bugs.launchpad.net/ossn/+bug/1390124 > > > The first step is in the documentation. There should be also an OSS advice for previous > version of OpenStack. The solution consist in configuring shibboleth to use different IdPs for > different URLs. > > The second step, still in progress, is to include an ID in the IdP configuration. My patch is under review here: > > https://review.openstack.org/#/c/142743/ > > Let me know if it is enough to solve the issue in your case. > > Marco > >> On 24 Dec 2014, at 10:08, David Chadwick wrote: >> >> >> >> On 23/12/2014 21:56, Morgan Fainberg wrote: >>> >>>> On Dec 23, 2014, at 1:08 PM, Dolph Mathews >>> > wrote: >>>> >>>> >>>> On Tue, Dec 23, 2014 at 1:33 PM, David >>>> Chadwick > wrote: >>>> >>>> Hi Adam >>>> >>>> On 23/12/2014 17:34, Adam Young wrote: >>>>> On 12/23/2014 11:34 AM, David Chadwick wrote: >>>>>> Hi guys >>>>>> >>>>>> we now have the ABFAB federation protocol working with Keystone, using a >>>>>> modified mod_auth_kerb plugin for Apache (available from the project >>>>>> Moonshot web site). However, we did not change Keystone configuration >>>>>> from its original SAML federation configuration, when it was talking to >>>>>> SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code >>>>>> (which I believe had to be done for OpenID connect.) We simply replaced >>>>>> mod_shibboleth with mod_auth_kerb and talked to a completely different >>>>>> IDP with a different protocol. And everything worked just fine. 
>>>>>> >>>>>> Consequently Keystone is broken, since you can configure it to trust a >>>>>> particular IDP, talking a particular protocol, but Apache will happily >>>>>> talk to another IDP, using a different protocol, and Keystone cannot >>>>>> tell the difference and will happily accept the authenticated user. >>>>>> Keystone should reject any authenticated user who does not come from the >>>>>> trusted IDP talking the correct protocol. Otherwise there is no point in >>>>>> configuring Keystone with this information, if it is ignored by Keystone. >>>>> The IDP and the Protocol should be passed from HTTPD in env vars. Can >>>>> you confirm/deny that this is the case now? >>>> >>>> What is passed from Apache is the 'PATH_INFO' variable, and it is >>>> set to >>>> the URL of Keystone that is being protected, which in our case is >>>> /OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth >>>> >>>> There are also the following arguments passed to Keystone >>>> 'wsgiorg.routing_args': (>>> 0x7ffaba339190>, {'identity_provider': u'KentProxy', 'protocol': >>>> u'saml2'}) >>>> >>>> and >>>> >>>> 'PATH_TRANSLATED': >>>> '/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth' >>>> >>>> So Apache is telling Keystone that it has protected the URL that >>>> Keystone has configured to be protected. >>>> >>>> However, Apache has been configured to protect this URL with the ABFAB >>>> protocol and the local Radius server, rather than the KentProxy >>>> IdP and >>>> the SAML2 protocol. So we could say that Apache is lying to Keystone, >>>> and because Keystone trusts Apache, then Keystone trusts Apache's lies >>>> and wrongly thinks that the correct IDP and protocol were used. >>>> >>>> The only sure way to protect Keystone from a wrongly or mal-configured >>>> Apache is to have end to end security, where Keystone gets a token >>>> from >>>> the IDP that it can validate, to prove that it is the trusted IDP that >>>> it is talking to. 
In other words, if Keystone is given the original >>>> signed SAML assertion from the IDP, it will know for definite that the >>>> user was authenticated by the trusted IDP using the trusted protocol >>>> >>>> >>>> So the "bug" is a misconfiguration, not an actual bug. The goal was to >>>> trust and leverage httpd, not reimplement it and all it's extensions. >>> >>> Fixing this ?bug? would be moving towards Keystone needing to implement >>> all of the various protocols to avoid ?misconfigurations?. There are >>> probably some more values that can be passed down from the Apache layer >>> to help provide more confidence in the IDP that is being used. I don?t >>> see a real tangible benefit to moving away from leveraging HTTPD for >>> handling the heavy lifting when handling federated Identity. >> >> Its not as heavy as you suggest. Apache would still do all the protocol >> negotiation and validation. Keystone would only need to verify the >> signature of the incoming SAML assertion in order to validate who the >> IDP was, and that it was SAML. (Remember that Keystone already >> implements SAML for sending out SAML assertions, which is much more >> heavyweight.) ABFAB sends an unsigned SAML assertion embedded in a >> Radius attribute, so obtaining this and doing a minimum of field >> checking would be sufficient. There will be something similar that can >> be done for OpenID Connect. >> >> So we are not talking about redoing all the protocol handling, simply >> checking that the trust rules that have already been configured into >> Keystone, are actually being followed by Apache. "Trust but verify" in >> the words of Ronald Regan. >> >> regards >> >> David >> >>> >>> ?Morgan >>> >>>> >>>> regards >>>> >>>> David >>>> >>>>> >>>>> On the Apache side we are looking to expand the set of variables set. 
>>>>> http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables >>>>> >>>>> >>>> >>>> The original SAML assertion >>>>> >>>>> mod_shib does support Shib-Identity-Provider : >>>>> >>>>> >>>>> https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAttributeAccess#NativeSPAttributeAccess-CustomSPVariables >>>>> >>>>> >>>>> Which should be sufficient: if the user is coming in via >>>> mod_shib, they >>>>> are using SAML. >>>>> >>>>> >>>>> >>>>>> >>>>>> BTW, we are using the Juno release. We should fix this bug in Kilo. >>>>>> >>>>>> As I have been saying for many months, Keystone does not know >>>> anything >>>>>> about SAML or ABFAB or OpenID Connect protocols, so there is >>>> currently >>>>>> no point in configuring this information into Keystone. >>>> Keystone is only >>>>>> aware of environmental parameters coming from Apache. So this >>>> is the >>>>>> protocol that Keystone recognises. If you want Keystone to try to >>>>>> control the federation protocol and IDPs used by Apache, then >>>> you will >>>>>> need the Apache plugins to pass the name of the IDP and the >>>> protocol >>>>>> being used as environmental parameters to Keystone, and then >>>> Keystone >>>>>> can check that the ones that it has been configured to trust, are >>>>>> actually being used by Apache. 
>>>>>> >>>>>> regards >>>>>> >>>>>> David >>>>>> >>>>>> _______________________________________________ >>>>>> OpenStack-dev mailing list >>>>>> OpenStack-dev at lists.openstack.org >>>> >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>>> _______________________________________________ >>>>> OpenStack-dev mailing list >>>>> OpenStack-dev at lists.openstack.org >>>> >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > ==================================================== > Eng. 
Marco Fargetta, PhD > > Istituto Nazionale di Fisica Nucleare (INFN) > Catania, Italy > > EMail: Marco.Fargetta at ct.infn.it > ==================================================== > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at nemebean.com Wed Dec 24 16:50:48 2014 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 24 Dec 2014 10:50:48 -0600 Subject: [openstack-dev] [Heat][oslo-incubator][oslo-log] Logging Unicode characters In-Reply-To: <20141224094855.GA28021@localhost> References: <20141224094855.GA28021@localhost> Message-ID: <549AEEE8.8080602@nemebean.com> On 12/24/2014 03:48 AM, Qiming Teng wrote: > Hi, > > When trying to enable stack names in Heat to use unicode strings, I am > stuck by a weird behavior of logging. > > Suppose I have a stack name assigned some non-ASCII string, then when > stack tries to log something here: > > heat/engine/stack.py: > > 536 LOG.info(_LI('Stack %(action)s %(status)s (%(name)s): ' > 537 '%(reason)s'), > 538 {'action': action, > 539 'status': status, > 540 'name': self.name, # type(self.name)==unicode here > 541 'reason': reason}) > > I'm seeing the following errors from h-eng session: > > Traceback (most recent call last): > File "/usr/lib64/python2.6/logging/__init__.py", line 799, in emit > stream.write(fs % msg.decode('utf-8')) > File "/usr/lib64/python2.6/encodings/utf_8.py", line 16, in decode > return codecs.utf_8_decode(input, errors, True) > UnicodeEncodeError: 'ascii' codec can't encode characters in position 114-115: > ordinal not in range(128) > > This means logging cannot handle Unicode correctly? No. 
I did the > following experiments: > > $ cat logtest > > #!/usr/bin/env python > > import sys > > from oslo.utils import encodeutils > from oslo import i18n > > from heat.common.i18n import _LI > from heat.openstack.common import log as logging > > i18n.enable_lazy() > > LOG = logging.getLogger('logtest') > logging.setup('heat') > > print('sys.stdin.encoding: %s' % sys.stdin.encoding) > print('sys.getdefaultencoding: %s' % sys.getdefaultencoding()) > > s = sys.argv[1] > print('s is: %s' % type(s)) > > stack_name = encodeutils.safe_decode(unis) I think you may have a typo in your sample here because unis isn't defined as far as I can tell. In any case, I suspect this line is why your example works and Heat doesn't. I can reproduce the same error if I stuff some unicode data into a unicode string without decoding it first: >>> test = u'\xe2\x82\xac' >>> test.decode('utf8') Traceback (most recent call last): File "", line 1, in File "/usr/lib64/python2.7/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128) >>> test = '\xe2\x82\xac' >>> test.decode('utf8') u'\u20ac' Whether that's what is going on here I can't say for sure though. Trying to figure out unicode in Python usually gives me a headache. :-) > print('stack_name is: %s' % type(stack_name)) > > # stack_name is unicode here > LOG.error(_LI('stack name: %(name)s') % {'name': stack_name}) > > $ ./logtest > > [tengqm at node1 heat]$ ./logtest ?? > sys.stdin.encoding: UTF-8 > sys.getdefaultencoding: ascii > s is: > stack_name is: > 2014-12-24 17:51:13.799 29194 ERROR logtest [-] stack name: ?? > > It worked. > > After spending more than one day on this, I'm seeking help from people > here. What's wrong with Unicode stack names here? > > Any hints are appreciated. 
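The decode-only-bytes pattern behind oslo's encodeutils.safe_decode can be sketched as follows (a simplified sketch, not the exact oslo implementation; shown in Python 3 terms, where str is the unicode type):

```python
def safe_decode(text, incoming='utf-8', errors='strict'):
    # Text that is already unicode is returned unchanged. This avoids
    # the Python 2 pitfall in the traceback above, where calling
    # .decode() on an already-decoded string triggers an implicit
    # ascii encode and raises UnicodeEncodeError.
    if isinstance(text, str):
        return text
    return text.decode(incoming, errors)

euro = b'\xe2\x82\xac'                    # UTF-8 bytes for the euro sign
assert safe_decode(euro) == '\u20ac'      # byte strings are decoded
assert safe_decode('\u20ac') == '\u20ac'  # unicode passes through untouched
```

The rule of thumb the thread converges on: decode exactly once, at the boundary where data enters, and never call .decode() on data that may already be unicode.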
> > Regards, > - Qiming > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From m4d.coder at gmail.com Wed Dec 24 17:37:45 2014 From: m4d.coder at gmail.com (W Chan) Date: Wed, 24 Dec 2014 09:37:45 -0800 Subject: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs Message-ID: Trying to clarify a few things... >* 2) Returning to the first example: *>* ... *>* action: std.sql conn_str={$.env.conn_str} query={$.query} *>* ... *>* $.env - is it the name of an environment, or will it be a registered syntax for getting access to values from the env? * I was actually thinking the environment will use the reserved word "env" in the WF context. The value for the "env" key will be the dict supplied either by DB lookup by name, directly as a dict, or as JSON from the CLI. The nested dict for "__actions" (and all other keys with double underscores) is for special system purposes, in this case declaring defaults for action inputs. Similar to "__execution", which contains runtime data for the WF execution. >* 3) Can a user have a few environments? * I don't think we intend to mix one or more environments in a WF execution. The key was to supply any named environment at WF execution time. So the WF author only needs to know the variables will be under $.env. If we allowed more than one environment in a WF execution, each environment would need to be referred to by name (i.e. in your example env1 and env2). We would then lose the ability to swap any named environment in for different executions of the same WF. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From marco.fargetta at ct.infn.it Wed Dec 24 17:50:36 2014 From: marco.fargetta at ct.infn.it (Marco Fargetta) Date: Wed, 24 Dec 2014 18:50:36 +0100 Subject: [openstack-dev] [Keystone] Bug in federation In-Reply-To: <549AEB09.7040305@kent.ac.uk> References: <549999A1.2070003@kent.ac.uk> <5499A7A5.2000403@redhat.com> <5499C385.6040301@kent.ac.uk> <26D9ECC1-E258-401A-A545-C3C874D9C68E@gmail.com> <549A829E.4080703@kent.ac.uk> <60568277-58C2-49B7-A7CC-CE12E94800DE@ct.infn.it> <549AEB09.7040305@kent.ac.uk> Message-ID: <5B050200-CA2C-4A9C-8EB1-888F79432A46@ct.infn.it> > On 24 Dec 2014, at 17:34, David Chadwick wrote: > > If I understand the bug fix correctly, it is firmly tying the URL to the > IDP to the mapping rule. But I think this is going in the wrong > direction for several reasons: > > 1. With Shibboleth, if you use a WAYF service, then anyone from hundreds > of different federated IDPs may end up being used to authenticate the > user who is accessing OpenStack/Keystone. We dont want to have hundreds > of URLs. One is sufficient. Plus we dont know which IDP the user will > eventually choose, as this is decided by the WAYF service. So the > "correct" URL cannot be pre-chosen by the user. > With the proposed configuration of Shibboleth, when you access the URL you are redirected only to the IdP configured for that URL. Since a URL is tied to only one IdP, there is no need for a WAYF. Anyway, this is a change only in the documentation, and it was the first fix because there was an agreement to provide a solution also for Juno with minimal change to the code. The other fix I proposed, which is under review, requires an additional parameter when you configure the IdP in OS-Federation. This accepts one or more entityIDs so you can map the entities to the URL. It also requires specifying the HTTP variable from which you can get the entityID (this is a parameter so it can be compatible with different SAML plug-ins).
If you do not specify these values, the behaviour is the same as the current implementation; if you provide the list of entities and the parameter, access to the URL is allowed only for IdPs included in the list, and the others are rejected. I tried to stay as compatible with the current implementation as possible. Is this the right direction? Could you comment on the review page? That will make it easier to understand whether the patch needs extra work. The link is: https://review.openstack.org/#/c/142743/ > 2. With ABFAB, the IDP to be used is not known by the SP (Keystone) > until after authentication. This is because the realm is incorporated in > the user's ID (user at real.com) and this is not visible to Keystone. So it > is not possible to have different URLs for different IDPs. They all have > to use the same URL. > > So there should be one URL protecting Keystone, and when the response > comes from Apache, Keystone needs to be able to reliably determine > > a) which IDP was used by the user > b) which protocol was used > > and from this, choose which mapping rule to use > This would require a new design of OS-Federation, and you have proposed several specs I was agreeing with. Nevertheless, it seems there was no consensus in the community, so I think you have to find a way to integrate ABFAB with the current model. Is it possible to have a single mapping with many rules, with Keystone choosing according to the information coming after authentication? Maybe this requires some work on the mapping, but it does not require changes in the overall architecture; just an idea. > regards > > david > > > On 24/12/2014 10:19, Marco Fargetta wrote: >> Hi All, >> >> this bug was already reported and fixed in two steps: >> >> https://bugs.launchpad.net/ossn/+bug/1390124 >> >> >> The first step is in the documentation. There should be also an OSS advice for previous >> version of OpenStack. The solution consist in configuring shibboleth to use different IdPs for >> different URLs.
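The entityID check Marco describes above could look roughly like the following sketch (function and variable names are illustrative, not the actual code in the patch under review):

```python
def idp_allowed(environ, remote_id_attribute, trusted_entity_ids):
    # If the deployer configured no entity IDs, keep the pre-patch
    # behaviour and accept whatever httpd authenticated (backwards
    # compatible with the current implementation).
    if not trusted_entity_ids:
        return True
    # Otherwise the WSGI variable set by the SAML plug-in (for
    # mod_shib this could be Shib-Identity-Provider) must name one of
    # the IdPs registered for this federation URL.
    return environ.get(remote_id_attribute) in trusted_entity_ids

env = {'Shib-Identity-Provider': 'https://idp.example.org/shibboleth'}
assert idp_allowed(env, 'Shib-Identity-Provider',
                   ['https://idp.example.org/shibboleth'])
assert not idp_allowed(env, 'Shib-Identity-Provider',
                       ['https://another-idp.example.net/shibboleth'])
```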
>> >> The second step, still in progress, is to include an ID in the IdP configuration. My patch is under review here: >> >> https://review.openstack.org/#/c/142743/ >> >> Let me know if it is enough to solve the issue in your case. >> >> Marco >> >>> On 24 Dec 2014, at 10:08, David Chadwick wrote: >>> >>> >>> >>> On 23/12/2014 21:56, Morgan Fainberg wrote: >>>> >>>>> On Dec 23, 2014, at 1:08 PM, Dolph Mathews >>>> > wrote: >>>>> >>>>> >>>>> On Tue, Dec 23, 2014 at 1:33 PM, David >>>>> Chadwick > wrote: >>>>> >>>>> Hi Adam >>>>> >>>>> On 23/12/2014 17:34, Adam Young wrote: >>>>>> On 12/23/2014 11:34 AM, David Chadwick wrote: >>>>>>> Hi guys >>>>>>> >>>>>>> we now have the ABFAB federation protocol working with Keystone, using a >>>>>>> modified mod_auth_kerb plugin for Apache (available from the project >>>>>>> Moonshot web site). However, we did not change Keystone configuration >>>>>>> from its original SAML federation configuration, when it was talking to >>>>>>> SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code >>>>>>> (which I believe had to be done for OpenID connect.) We simply replaced >>>>>>> mod_shibboleth with mod_auth_kerb and talked to a completely different >>>>>>> IDP with a different protocol. And everything worked just fine. >>>>>>> >>>>>>> Consequently Keystone is broken, since you can configure it to trust a >>>>>>> particular IDP, talking a particular protocol, but Apache will happily >>>>>>> talk to another IDP, using a different protocol, and Keystone cannot >>>>>>> tell the difference and will happily accept the authenticated user. >>>>>>> Keystone should reject any authenticated user who does not come from the >>>>>>> trusted IDP talking the correct protocol. Otherwise there is no point in >>>>>>> configuring Keystone with this information, if it is ignored by Keystone. >>>>>> The IDP and the Protocol should be passed from HTTPD in env vars. Can >>>>>> you confirm/deny that this is the case now? 
>>>>> >>>>> What is passed from Apache is the 'PATH_INFO' variable, and it is >>>>> set to >>>>> the URL of Keystone that is being protected, which in our case is >>>>> /OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth >>>>> >>>>> There are also the following arguments passed to Keystone >>>>> 'wsgiorg.routing_args': (>>>> 0x7ffaba339190>, {'identity_provider': u'KentProxy', 'protocol': >>>>> u'saml2'}) >>>>> >>>>> and >>>>> >>>>> 'PATH_TRANSLATED': >>>>> '/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth' >>>>> >>>>> So Apache is telling Keystone that it has protected the URL that >>>>> Keystone has configured to be protected. >>>>> >>>>> However, Apache has been configured to protect this URL with the ABFAB >>>>> protocol and the local Radius server, rather than the KentProxy >>>>> IdP and >>>>> the SAML2 protocol. So we could say that Apache is lying to Keystone, >>>>> and because Keystone trusts Apache, then Keystone trusts Apache's lies >>>>> and wrongly thinks that the correct IDP and protocol were used. >>>>> >>>>> The only sure way to protect Keystone from a wrongly or mal-configured >>>>> Apache is to have end to end security, where Keystone gets a token >>>>> from >>>>> the IDP that it can validate, to prove that it is the trusted IDP that >>>>> it is talking to. In other words, if Keystone is given the original >>>>> signed SAML assertion from the IDP, it will know for definite that the >>>>> user was authenticated by the trusted IDP using the trusted protocol >>>>> >>>>> >>>>> So the "bug" is a misconfiguration, not an actual bug. The goal was to >>>>> trust and leverage httpd, not reimplement it and all it's extensions. >>>> >>>> Fixing this ?bug? would be moving towards Keystone needing to implement >>>> all of the various protocols to avoid ?misconfigurations?. 
There are >>>> probably some more values that can be passed down from the Apache layer >>>> to help provide more confidence in the IDP that is being used. I don?t >>>> see a real tangible benefit to moving away from leveraging HTTPD for >>>> handling the heavy lifting when handling federated Identity. >>> >>> Its not as heavy as you suggest. Apache would still do all the protocol >>> negotiation and validation. Keystone would only need to verify the >>> signature of the incoming SAML assertion in order to validate who the >>> IDP was, and that it was SAML. (Remember that Keystone already >>> implements SAML for sending out SAML assertions, which is much more >>> heavyweight.) ABFAB sends an unsigned SAML assertion embedded in a >>> Radius attribute, so obtaining this and doing a minimum of field >>> checking would be sufficient. There will be something similar that can >>> be done for OpenID Connect. >>> >>> So we are not talking about redoing all the protocol handling, simply >>> checking that the trust rules that have already been configured into >>> Keystone, are actually being followed by Apache. "Trust but verify" in >>> the words of Ronald Regan. >>> >>> regards >>> >>> David >>> >>>> >>>> ?Morgan >>>> >>>>> >>>>> regards >>>>> >>>>> David >>>>> >>>>>> >>>>>> On the Apache side we are looking to expand the set of variables set. >>>>>> http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables >>>>>> >>>>>> >>>>> >>>>> The original SAML assertion >>>>>> >>>>>> mod_shib does support Shib-Identity-Provider : >>>>>> >>>>>> >>>>>> https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAttributeAccess#NativeSPAttributeAccess-CustomSPVariables >>>>>> >>>>>> >>>>>> Which should be sufficient: if the user is coming in via >>>>> mod_shib, they >>>>>> are using SAML. >>>>>> >>>>>> >>>>>> >>>>>>> >>>>>>> BTW, we are using the Juno release. We should fix this bug in Kilo. 
>>>>>>> >>>>>>> As I have been saying for many months, Keystone does not know >>>>> anything >>>>>>> about SAML or ABFAB or OpenID Connect protocols, so there is >>>>> currently >>>>>>> no point in configuring this information into Keystone. >>>>> Keystone is only >>>>>>> aware of environmental parameters coming from Apache. So this >>>>> is the >>>>>>> protocol that Keystone recognises. If you want Keystone to try to >>>>>>> control the federation protocol and IDPs used by Apache, then >>>>> you will >>>>>>> need the Apache plugins to pass the name of the IDP and the >>>>> protocol >>>>>>> being used as environmental parameters to Keystone, and then >>>>> Keystone >>>>>>> can check that the ones that it has been configured to trust, are >>>>>>> actually being used by Apache. >>>>>>> >>>>>>> regards >>>>>>> >>>>>>> David >>>>>>> >>>>>>> _______________________________________________ >>>>>>> OpenStack-dev mailing list >>>>>>> OpenStack-dev at lists.openstack.org >>>>> >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> OpenStack-dev mailing list >>>>>> OpenStack-dev at lists.openstack.org >>>>> >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>> >>>>> _______________________________________________ >>>>> OpenStack-dev mailing list >>>>> OpenStack-dev at lists.openstack.org >>>>> >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>>> _______________________________________________ >>>>> OpenStack-dev mailing list >>>>> OpenStack-dev at lists.openstack.org >>>>> >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> _______________________________________________ 
>>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> ==================================================== >> Eng. Marco Fargetta, PhD >> >> Istituto Nazionale di Fisica Nucleare (INFN) >> Catania, Italy >> >> EMail: Marco.Fargetta at ct.infn.it >> ==================================================== >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ==================================================== Eng. Marco Fargetta, PhD Istituto Nazionale di Fisica Nucleare (INFN) Catania, Italy EMail: Marco.Fargetta at ct.infn.it ==================================================== -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4551 bytes Desc: not available URL: From bpavlovic at mirantis.com Wed Dec 24 17:54:50 2014 From: bpavlovic at mirantis.com (Boris Pavlovic) Date: Wed, 24 Dec 2014 21:54:50 +0400 Subject: [openstack-dev] [Mistral] Plans to load and performance testing In-Reply-To: References: Message-ID: Guys, I added patch to infra: https://review.openstack.org/#/c/143879/ That allows to run Rally against Mistral in gates. Best regards, Boris Pavlovic On Mon, Dec 22, 2014 at 4:25 PM, Anastasia Kuznetsova < akuznetsova at mirantis.com> wrote: > Dmitry, > > Now I see that my comments are not so informative, I will try to describe > environment and scenarios in more details. 
> 1) *1 api 1 engine 1 executor *it means that there were 3 Mistral > processes running on the same box > 2) the list-workbooks scenario was run when there were no workflow executions > at the same time; I will take note of your comment and will measure the time > in that situation too, but I guess that it will take more time, the question is > how much. > 3) 60 % success means that only 60 % of the executions of the > scenario 'list-workbooks' were successful; at the moment I have observed > only one type of error, a connection error to Rabbit: > Error ConnectionError: ('Connection > aborted.', error(104, 'Connection reset by peer')) > 4) we don't know the durability criteria of Mistral and under what load > Mistral will 'die'; we want to define the threshold. > > P.S. Dmitry, if you have any ideas/scenarios which you want to test, > please share them. > > On Sat, Dec 20, 2014 at 9:35 AM, Dmitri Zimine > wrote: >> Anastasia, any start is a good start. >> >> *> 1 api 1 engine 1 executor, list-workbooks* >> >> what exactly does it mean: 1) is mistral deployed on 3 boxes with one >> component per box, or are all three processes on the same box? 2) is the >> list-workbooks test running while workflow executions are going on? How many? >> what's the character of the load? 3) when it says 60% success, what exactly >> does it mean, what kind of failures? 4) what is the durability criteria, >> how long do we expect Mistral to withstand the load? >> >> Let's discuss this in detail at the next IRC meeting? >> >> Thanks again for getting this started. >> >> DZ. >> >> >> On Dec 19, 2014, at 7:44 AM, Anastasia Kuznetsova < >> akuznetsova at mirantis.com> wrote: >> >> Boris, >> >> Thanks for feedback! >> >> > But I believe that you should put bigger load here: >> https://etherpad.openstack.org/p/mistral-rally-testing-results >> >> As I said it is only beginning and I will increase the load and change >> its type.
>> >> >As well concurrency should be at least 2-3 times bigger than times >> otherwise it won't generate proper load and you won't collect >enough data >> for statistical analyze. >> > >> >As well use "rps" runner that generates more real life load. >> >Plus it will be nice to share as well output of "rally task report" >> command. >> >> Thanks for the advice, I will consider it in further testing and >> reporting. >> >> Answering to your question about using Rally for integration testing, as >> I mentioned in our load testing plan published on wiki page, one of our >> final goals is to have a Rally gate in one of Mistral repositories, so we >> are interested in it and I already prepare first commits to Rally. >> >> Thanks, >> Anastasia Kuznetsova >> >> On Fri, Dec 19, 2014 at 4:51 PM, Boris Pavlovic >> wrote: >>> >>> Anastasia, >>> >>> Nice work on this. But I belive that you should put bigger load here: >>> https://etherpad.openstack.org/p/mistral-rally-testing-results >>> >>> As well concurrency should be at least 2-3 times bigger than times >>> otherwise it won't generate proper load and you won't collect enough data >>> for statistical analyze. >>> >>> As well use "rps" runner that generates more real life load. >>> Plus it will be nice to share as well output of "rally task report" >>> command. >>> >>> >>> By the way what do you think about using Rally scenarios (that you >>> already wrote) for integration testing as well? >>> >>> >>> Best regards, >>> Boris Pavlovic >>> >>> On Fri, Dec 19, 2014 at 2:39 PM, Anastasia Kuznetsova < >>> akuznetsova at mirantis.com> wrote: >>>> >>>> Hello everyone, >>>> >>>> I want to announce that Mistral team has started work on load and >>>> performance testing in this release cycle. 
>>>> >>>> Brief information about scope of our work can be found here: >>>> >>>> https://wiki.openstack.org/wiki/Mistral/Testing#Load_and_Performance_Testing >>>> >>>> First results are published here: >>>> https://etherpad.openstack.org/p/mistral-rally-testing-results >>>> >>>> Thanks, >>>> Anastasia Kuznetsova >>>> @ Mirantis Inc. >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marco.fargetta at ct.infn.it Wed Dec 24 18:01:13 2014 From: marco.fargetta at ct.infn.it (Marco Fargetta) Date: Wed, 24 Dec 2014 19:01:13 +0100 Subject: [openstack-dev] [Keystone] Bug in federation In-Reply-To: <549ACA86.9070109@redhat.com> References: <549999A1.2070003@kent.ac.uk> <5499A7A5.2000403@redhat.com> <5499C385.6040301@kent.ac.uk> <549ACA86.9070109@redhat.com> Message-ID: <1664A804-E74B-490E-ADE1-BE4D9DC09D27@ct.infn.it> Hi John, the problem is not to establish which variable has the correct information but the association between IDP and URL. 
In OS-Federation you define an authentication URL per IdP and protocol, and it is supposed to use the specified IdP and protocol to authenticate. Nevertheless, during authentication there is no code to check whether the IdP and protocol are the ones specified for the URL, and in the Apache configuration for Juno there was nothing on the Apache side to bind the IdP to the URL. Therefore, you need to add something in OS-Federation to perform this control, using the variable you are proposing or others. Marco > On 24 Dec 2014, at 15:15, John Dennis wrote: > Can't this be solved with a couple of environment variables? The two > key pieces of information needed are: > > 1) who authenticated the subject? > > 2) what authentication method was used? > > There is already precedent for AUTH_TYPE, it's used in AJP to > initialize the authType property in a Java Servlet. AUTH_TYPE would > cover item 2. Numerous places in Apache already set AUTH_TYPE. Perhaps > there could be a convention that AUTH_TYPE could carry extra qualifying > parameters much like HTTP headers do. The first token would be the > primary mechanism, e.g. saml, negotiate, x509, etc. For authentication > types that support multiple mechanisms (e.g. EAP, SAML, etc.) an extra > parameter would qualify the actual mechanism used. For SAML that > qualifying extra parameter could be the value from AuthnContextClassRef.
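A sketch of how such a qualified AUTH_TYPE might be parsed (the ';'-separated key=value parameter syntax here is an assumption for illustration, not an existing convention):

```python
def parse_auth_type(auth_type):
    # Hypothetical parser for the proposed convention: the first token
    # names the primary mechanism; optional ';'-separated key=value
    # parameters qualify it, much like HTTP header parameters.
    parts = [p.strip() for p in auth_type.split(';')]
    mechanism = parts[0].lower()
    params = dict(p.split('=', 1) for p in parts[1:] if '=' in p)
    return mechanism, params

# e.g. a SAML login whose qualifying parameter carries the
# AuthnContextClassRef value:
mech, params = parse_auth_type(
    'saml; class=urn:oasis:names:tc:SAML:2.0:ac:classes:Password')
assert mech == 'saml'
assert params['class'].endswith(':Password')
```

A simple mechanism such as Kerberos would come through as plain `negotiate` with no parameters.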
> > -- > John > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ==================================================== Eng. Marco Fargetta, PhD Istituto Nazionale di Fisica Nucleare (INFN) Catania, Italy EMail: Marco.Fargetta at ct.infn.it ==================================================== -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4551 bytes Desc: not available URL: From clint at fewbar.com Wed Dec 24 18:18:08 2014 From: clint at fewbar.com (Clint Byrum) Date: Wed, 24 Dec 2014 10:18:08 -0800 Subject: [openstack-dev] [heat] Application level HA via Heat In-Reply-To: <2EF9C3F5-FA84-488A-BE0D-F17D4F9D6D60@mirantis.com> References: <20141222182123.GC14130@t430slt.redhat.com> <5498823D.3050005@redhat.com> <20141224101744.GB16954@t430slt.redhat.com> <2EF9C3F5-FA84-488A-BE0D-F17D4F9D6D60@mirantis.com> Message-ID: <1419443658-sup-7248@fewbar.com> Excerpts from Renat Akhmerov's message of 2014-12-24 03:40:22 -0800: > Hi > > > Ok, I'm quite happy to accept this may be a better long-term solution, but > > can anyone comment on the current maturity level of Mistral? Questions > > which spring to mind are: > > > > - Is the DSL stable now? > > You can think ?yes? because although we keep adding new features we do it in a backwards compatible manner. I personally try to be very cautious about this. > > > - What's the roadmap re incubation (there are a lot of TBD's here: > > https://wiki.openstack.org/wiki/Mistral/Incubation) > > Ooh yeah, this page is very very obsolete which is actually my fault because I didn?t pay a lot of attention to this after I heard all these rumors about TC changing the whole approach around getting projects incubated/integrated. 
> > I think incubation readiness from a technical perspective is good (various style checks, procedures etc.); even if there's still something that we need to adjust, it must not be difficult or time-consuming. The main question for the last half a year has been "What OpenStack program best fits Mistral?". So far we've had two candidates: Orchestration and some new program (e.g. Workflow Service). However, nothing is decided yet on that. > It's probably worth re-thinking the discussion above given the governance changes that are being worked on: http://governance.openstack.org/resolutions/20141202-project-structure-reform-spec.html From thingee at gmail.com Wed Dec 24 20:16:19 2014 From: thingee at gmail.com (Mike Perez) Date: Wed, 24 Dec 2014 12:16:19 -0800 Subject: [openstack-dev] [cinder] [driver] DB operations In-Reply-To: References: Message-ID: <20141224201619.GA25116@gmail.com> On 06:05 Sat 20 Dec , Duncan Thomas wrote: > No, I mean that if drivers are going to access the database, then they should > do it via a defined interface that limits what they can do to a sane set of > operations. I'd still prefer that they didn't need extra access beyond the > model update, but I don't know if that is possible. > > Duncan Thomas > On Dec 19, 2014 6:43 PM, "Amit Das" wrote: > > > Thanks Duncan. > > Do you mean helper methods in the specific driver class? > > On 19 Dec 2014 14:51, "Duncan Thomas" wrote: > > > >> So our general advice has historically been 'drivers should not be > >> accessing the db directly'. I haven't had a chance to look at your driver > >> code yet, I've been on vacation, but my suggestion is that if you > >> absolutely must store something in the admin metadata rather than somewhere > >> that is covered by the model update (generally provider location and > >> provider auth) then writing some helper methods that wrap the context bump > >> and db call would be better than accessing it directly from the driver.
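[Editor's sketch] Duncan's suggestion — a narrow, defined interface wrapping the "context bump" and DB call instead of raw database access from drivers — could look roughly like this. The class and method names are illustrative, not Cinder's actual interface; only a `db_api` with admin-metadata calls is assumed.

```python
# Sketch, not Cinder code: a deliberately narrow facade that is the only
# object handed to a volume driver, so it can touch admin metadata but
# nothing else in the database.

class DriverDBFacade(object):
    """Expose a sane subset of DB operations to volume drivers."""

    def __init__(self, db_api, context):
        self._db = db_api
        self._ctxt = context

    def update_admin_metadata(self, volume_id, metadata):
        # The "context bump" Duncan mentions: elevate to admin only for
        # this one call, never hand the elevated context to the driver.
        admin_ctxt = self._ctxt.elevated()
        self._db.volume_admin_metadata_update(
            admin_ctxt, volume_id, metadata, delete=False)

    def get_admin_metadata(self, volume_id):
        admin_ctxt = self._ctxt.elevated()
        return self._db.volume_admin_metadata_get(admin_ctxt, volume_id)
```

The point of the design is that the driver never sees the raw DB API or an elevated context, so the "sane set of operations" is enforced by construction rather than by review.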
> >> > >> Duncan Thomas > >> On Dec 18, 2014 11:41 PM, "Amit Das" wrote: I've expressed in past reviews that we should have an interface that limits drivers' access to the database [1], but received quite a bit of push back in Cinder. I recommend we stick to what has been decided; otherwise, Amit, you should spend some time reading the history of this issue [2] from previous meetings and restart the discussion in the next meeting [3]. Not discouraging it, but this has been brought up at least a couple of times now and it ends up with the same answer from the community. [1] - https://review.openstack.org/#/c/107693/14 [2] - http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-10-15-16.00.log.html#l-186 [3] - https://wiki.openstack.org/wiki/CinderMeetings -- Mike Perez From caedo at mirantis.com Wed Dec 24 22:32:20 2014 From: caedo at mirantis.com (Christopher Aedo) Date: Wed, 24 Dec 2014 14:32:20 -0800 Subject: [openstack-dev] [Fuel][Docs] Move fuel-web/docs to fuel-docs In-Reply-To: References: Message-ID: I think it's worth pursuing these efforts to include the auto-generated doc components in the fuel-docs build process. The additional dependencies required to build nailgun are not so unreasonable, and the preparation of the build environment has already been put into a single script. From the CI perspective, where do the CI slaves "start", so to speak? I do not think we are starting from a bare machine, installing the OS, installing dependencies, and then attempting a build of fuel-docs, right? I'm wondering whether it's reasonable to start from a snapshot where the machine has the necessary dependencies (it must start from some snapshotted point, otherwise just testing a single-line change would take unreasonably long). From your steps: > 6) Implement additional make target in fuel-docs > to download and build autodocs from fuel-web > repo as a separate chapter.
If we add this as an additional make target, then the environment would have to support the necessary dependencies anyway, right? If that's the case, then this would have to be testable no matter what, right? Or is it suggested that this step would not be tested, and would essentially stand off on its own? -Christopher On Tue, Dec 23, 2014 at 9:20 AM, Aleksandra Fedorova wrote: > Blueprint https://blueprints.launchpad.net/fuel/+spec/fuel-dev-docs-merge-fuel-docs > suggests moving all documentation from fuel-web to the fuel-docs > repository. > > While I agree that moving the Developer Guide to fuel-docs is a good idea, > there is an issue with autodocs which currently blocks the whole > process. > > If we move dev docs to fuel-docs as suggested by Christopher in [1], we > will make it impossible to build fuel-docs without cloning the fuel-web > repository and installing all nailgun dependencies into the current > environment. And this is bad from both the CI and the user point of view. > > I think we should keep the fuel-docs repository self-contained, i.e. one > should be able to build the docs without any external code. We can add a > switch or a separate make target to build 'addons' to this documentation > when explicitly requested, but it shouldn't be the default behaviour. > > Thus I think we need to split the documentation in the fuel-web/ repository > and move the "static" part to fuel-docs, but keep the "dynamic" > auto-generated part in the fuel-web repo. See patch [2]. > > Then to move docs from fuel-web to fuel-docs we need to perform the following steps: > > 1) Merge/abandon all docs-related patches to fuel-web, see full list [3] > 2) Merge updated patch [2], which removes docs from the fuel-web repo, > leaving autogenerated api docs only. > 3) Disable docs CI for fuel-web > 4) Add building of api docs to fuel-web/run_tests.sh. > 5) Update the fuel-docs repository with new data as in patch [4] but > excluding anything related to autodocs.
> 6) Implement additional make target in fuel-docs to download and build > autodocs from fuel-web repo as a separate chapter. > 7) Add this make target in fuel-docs CI. > > > [1] https://review.openstack.org/#/c/124551/ > [2] https://review.openstack.org/#/c/143679/ > [3] https://review.openstack.org/#/q/project:stackforge/fuel-web+status:open+file:%255Edoc.*,n,z > [4] https://review.openstack.org/#/c/125234/ > > -- > Aleksandra Fedorova > Fuel Devops Engineer > bookwar From ichihara.hirofumi at lab.ntt.co.jp Thu Dec 25 03:41:13 2014 From: ichihara.hirofumi at lab.ntt.co.jp (Hirofumi Ichihara) Date: Thu, 25 Dec 2014 12:41:13 +0900 Subject: [openstack-dev] [devstack][VMware NSX CI] VMware NSX CI fails and don't recheck Message-ID: Hi, My patch (https://review.openstack.org/#/c/124011/) received a verified-1 from the VMware NSX CI, but my patch isn't related to VMware, so I added the comment "vmware-recheck-patch" as the VMware NSX CI comment instructs. However, the VMware NSX CI doesn't recheck. I don't know whether the recheck word was wrong or the CI is broken. Could someone help me? Thanks, Hirofumi -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Thu Dec 25 05:51:36 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Thu, 25 Dec 2014 11:51:36 +0600 Subject: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs In-Reply-To: References: Message-ID: <0857543A-BD3A-47F4-A19D-37F358B3CCA6@mirantis.com> > On 24 Dec 2014, at 23:37, W Chan wrote: > > 2) Returning to the first example: > > ... > > action: std.sql conn_str={$.env.conn_str} query={$.query} > > ... > > $.env - is it the name of an environment, or will it be a registered syntax for getting access to values from the env? > I was actually thinking the environment will use the reserved word "env" in the WF context. The value for the "env" key will be the dict supplied either by DB lookup by name, by dict, or by JSON from the CLI.
Ok, probably here's the place where I didn't understand you before. I thought 'env' here was just an arbitrary key that users themselves might want to use to group some variables under a single umbrella. What you're saying is that whatever is under '$.env' is the exact same environment that we passed when we started the workflow? If yes, then it definitely makes sense to me (it just allows explicit access to the environment, rather than the implicit variable lookup). Please confirm. One thing that I strongly suggest is that we clearly define all reserved keys like 'env', '__actions' etc. I think it'd be better if they all started with the same prefix, for example, a double underscore. > The nested dict for "__actions" (and all other keys with double underscore) is for special system purposes, in this case declaring defaults for action inputs. Similar to "__execution" where it's for containing runtime data for the WF execution. Yes, that's clear. Renat Akhmerov @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Thu Dec 25 05:52:56 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Thu, 25 Dec 2014 11:52:56 +0600 Subject: [openstack-dev] [Mistral] Plans to load and performance testing In-Reply-To: References: Message-ID: Thanks Boris! Renat Akhmerov @ Mirantis Inc. > On 24 Dec 2014, at 23:54, Boris Pavlovic wrote: > > Guys, > > I added a patch to infra: > https://review.openstack.org/#/c/143879/ > > It allows running Rally against Mistral in the gates. > > Best regards, > Boris Pavlovic > > On Mon, Dec 22, 2014 at 4:25 PM, Anastasia Kuznetsova > wrote: > Dmitry, > > Now I see that my comments are not so informative, so I will try to describe the environment and scenarios in more detail.
> > 1) "1 api 1 engine 1 executor" means that there were 3 Mistral processes running on the same box > 2) the list-workbooks scenario was run when there were no workflow executions going on at the same time; I take note of your comment and will measure the time in that situation too, but I expect it will take longer - the question is by how much. > 3) 60 % success means that only 60 % of the runs of the 'list-workbooks' scenario were successful; at the moment I have observed only one type of error: > a connection error to Rabbit: Error ConnectionError: ('Connection aborted.', error(104, 'Connection reset by peer')) > 4) we don't know the durability limits of Mistral and under what load Mistral will 'die'; we want to define that threshold. > > P.S. Dmitry, if you have any ideas/scenarios which you want to test, please share them. > > On Sat, Dec 20, 2014 at 9:35 AM, Dmitri Zimine > wrote: > Anastasia, any start is a good start. > > > 1 api 1 engine 1 executor, list-workbooks > > what exactly does it mean: 1) is mistral deployed on 3 boxes with a component per box, or are all three processes on the same box? 2) is the list-workbooks test running while workflow executions are going on? How many? what's the character of the load? 3) when it says 60% success, what exactly does it mean - what kind of failures? 4) what is the durability criterion - how long do we expect Mistral to withstand the load? > > Let's discuss this in detail at the next IRC meeting? > > Thanks again for getting this started. > > DZ. > > > On Dec 19, 2014, at 7:44 AM, Anastasia Kuznetsova > wrote: >> >> Boris, >> >> Thanks for the feedback! >> >> > But I believe that you should put bigger load here: https://etherpad.openstack.org/p/mistral-rally-testing-results >> >> As I said, it is only the beginning and I will increase the load and change its type.
>> >> >As well concurrency should be at least 2-3 times bigger than times otherwise it won't generate proper load and you won't collect >enough data for statistical analyze. >> > >> >As well use "rps" runner that generates more real life load. >> >Plus it will be nice to share as well output of "rally task report" command. >> >> Thanks for the advice, I will consider it in further testing and reporting. >> >> Answering to your question about using Rally for integration testing, as I mentioned in our load testing plan published on wiki page, one of our final goals is to have a Rally gate in one of Mistral repositories, so we are interested in it and I already prepare first commits to Rally. >> >> Thanks, >> Anastasia Kuznetsova >> >> On Fri, Dec 19, 2014 at 4:51 PM, Boris Pavlovic > wrote: >> Anastasia, >> >> Nice work on this. But I belive that you should put bigger load here: https://etherpad.openstack.org/p/mistral-rally-testing-results >> >> As well concurrency should be at least 2-3 times bigger than times otherwise it won't generate proper load and you won't collect enough data for statistical analyze. >> >> As well use "rps" runner that generates more real life load. >> Plus it will be nice to share as well output of "rally task report" command. >> >> >> By the way what do you think about using Rally scenarios (that you already wrote) for integration testing as well? >> >> >> Best regards, >> Boris Pavlovic >> >> On Fri, Dec 19, 2014 at 2:39 PM, Anastasia Kuznetsova > wrote: >> Hello everyone, >> >> I want to announce that Mistral team has started work on load and performance testing in this release cycle. >> >> Brief information about scope of our work can be found here: >> https://wiki.openstack.org/wiki/Mistral/Testing#Load_and_Performance_Testing >> >> First results are published here: >> https://etherpad.openstack.org/p/mistral-rally-testing-results >> >> Thanks, >> Anastasia Kuznetsova >> @ Mirantis Inc. 
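[Editor's sketch] Boris's advice in this thread — use the "rps" runner for realistic load, keep the total iteration count well above the concurrency, and collect enough samples for statistical analysis — translates into a Rally task along the following lines. The scenario name and numbers are assumptions for illustration, not the team's actual configuration.

```python
# Sketch of a Rally task definition following the advice in this thread.
# "rps" drives a steady requests-per-second load instead of a fixed pool
# of workers; "times" is kept well above the per-second rate so enough
# samples are collected. Scenario name and values are assumed.
import json

task = {
    "MistralWorkbooks.list_workbooks": [
        {
            "runner": {
                "type": "rps",   # more real-life load than "constant"
                "times": 1000,   # total iterations to sample
                "rps": 20,       # request rate
            },
            "context": {
                "users": {"tenants": 1, "users_per_tenant": 1},
            },
        }
    ]
}

print(json.dumps(task, indent=2))
```

The printed JSON is the shape a Rally task file would take; `rally task report` can then be run on the results to share them, as Boris suggests.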
>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rakhmerov at mirantis.com Thu Dec 25 06:01:16 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Thu, 25 Dec 2014 12:01:16 +0600 Subject: [openstack-dev] [heat] Application level HA via Heat In-Reply-To: <1419443658-sup-7248@fewbar.com> References: <20141222182123.GC14130@t430slt.redhat.com> <5498823D.3050005@redhat.com> <20141224101744.GB16954@t430slt.redhat.com> <2EF9C3F5-FA84-488A-BE0D-F17D4F9D6D60@mirantis.com> <1419443658-sup-7248@fewbar.com> Message-ID: Thanks Clint, I actually didn't see this before (like I said, just rumors) so I need to read it carefully. Renat Akhmerov @ Mirantis Inc.
> On 25 Dec 2014, at 00:18, Clint Byrum wrote: > > Excerpts from Renat Akhmerov's message of 2014-12-24 03:40:22 -0800: >> Hi >> >>> Ok, I'm quite happy to accept this may be a better long-term solution, but >>> can anyone comment on the current maturity level of Mistral? Questions >>> which spring to mind are: >>> >>> - Is the DSL stable now? >> >> You can think ?yes? because although we keep adding new features we do it in a backwards compatible manner. I personally try to be very cautious about this. >> >>> - What's the roadmap re incubation (there are a lot of TBD's here: >>> https://wiki.openstack.org/wiki/Mistral/Incubation) >> >> Ooh yeah, this page is very very obsolete which is actually my fault because I didn?t pay a lot of attention to this after I heard all these rumors about TC changing the whole approach around getting projects incubated/integrated. >> >> I think incubation readiness from a technical perspective is good (various style checks, procedures etc.), even if there?s still something that we need to adjust it must not be difficult and time consuming. The main question for the last half a year has been ?What OpenStack program best fits Mistral??. So far we?ve had two candidates: Orchestration and some new program (e.g. Workflow Service). However, nothing is decided yet on that. 
>> > > It's probably worth re-thinking the discussion above given the governance > changes that are being worked on: > > http://governance.openstack.org/resolutions/20141202-project-structure-reform-spec.html > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gkotton at vmware.com Thu Dec 25 06:57:18 2014 From: gkotton at vmware.com (Gary Kotton) Date: Thu, 25 Dec 2014 06:57:18 +0000 Subject: [openstack-dev] [devstack][VMware NSX CI] VMware NSX CI fails and don't recheck In-Reply-To: References: Message-ID: Hi, We have a few CI issues. We are working on them at the moment. I hope that we get to the bottom of this soon. Thanks Gary From: Hirofumi Ichihara > Reply-To: OpenStack List > Date: Thursday, December 25, 2014 at 5:41 AM To: OpenStack List > Subject: [openstack-dev] [devstack][VMware NSX CI] VMware NSX CI fails and don't recheck Hi, My patch(https://review.openstack.org/#/c/124011/) received verified-1 from VMware NSX CI. But my patch isn't related to VMware so I added comment "vmware-recheck-patch" according to VMware NSX CI comment. However, VMware NSX CI don't recheck. I don't know recheck word was wrong or CI broke. Could someone help me? Thanks, Hirofumi -------------- next part -------------- An HTML attachment was scrubbed... URL: From smelikyan at mirantis.com Thu Dec 25 07:28:28 2014 From: smelikyan at mirantis.com (Serg Melikyan) Date: Thu, 25 Dec 2014 11:28:28 +0400 Subject: [openstack-dev] [Murano] Nominating Kate Chernova for murano-core Message-ID: I'd like to propose that we add Kate Chernova to the murano-core. Kate is active member of our community for more than a year, she is regular participant in our IRC meeting and maintains a good score as contributor: http://stackalytics.com/report/users/efedorova Please vote by replying to this thread. 
As a reminder of your options, +1 votes from 5 cores is sufficient; a -1 is a veto. -- Serg Melikyan, Senior Software Engineer at Mirantis, Inc. http://mirantis.com | smelikyan at mirantis.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichihara.hirofumi at lab.ntt.co.jp Thu Dec 25 07:32:42 2014 From: ichihara.hirofumi at lab.ntt.co.jp (Hirofumi Ichihara) Date: Thu, 25 Dec 2014 16:32:42 +0900 Subject: [openstack-dev] [devstack][VMware NSX CI] VMware NSX CI fails and don't recheck In-Reply-To: References: Message-ID: <9C7207F7-5F6A-48DE-AA2D-4B27C34A4DE7@lab.ntt.co.jp> Hi Gary, Thank you for your response. I understand. I?m expecting good news. Thanks, Hirofumi 2014/12/25 15:57?Gary Kotton ????? > Hi, > We have a few CI issues. We are working on them at the moment. I hope that we get to the bottom of this soon. > Thanks > Gary > > From: Hirofumi Ichihara > Reply-To: OpenStack List > Date: Thursday, December 25, 2014 at 5:41 AM > To: OpenStack List > Subject: [openstack-dev] [devstack][VMware NSX CI] VMware NSX CI fails and don't recheck > > Hi, > > My patch(https://review.openstack.org/#/c/124011/) received verified-1 from VMware NSX CI. > But my patch isn?t related to VMware so I added comment ?vmware-recheck-patch? according to VMware NSX CI comment. > However, VMware NSX CI don?t recheck. > > I don?t know recheck word was wrong or CI broke. > > Could someone help me? > > Thanks, > Hirofumi > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsufiev at mirantis.com Thu Dec 25 08:02:58 2014 From: tsufiev at mirantis.com (Timur Sufiev) Date: Thu, 25 Dec 2014 11:02:58 +0300 Subject: [openstack-dev] [Murano] Nominating Kate Chernova for murano-core In-Reply-To: References: Message-ID: +1 from me. 
On Thu, Dec 25, 2014 at 10:28 AM, Serg Melikyan wrote: > I'd like to propose that we add Kate Chernova to the murano-core. > > Kate is active member of our community for more than a year, she is > regular participant in our IRC meeting and maintains a good score as > contributor: > > http://stackalytics.com/report/users/efedorova > > Please vote by replying to this thread. As a reminder of your options, +1 > votes from 5 cores is sufficient; a -1 is a veto. > -- > Serg Melikyan, Senior Software Engineer at Mirantis, Inc. > http://mirantis.com | smelikyan at mirantis.com > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Timur Sufiev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkamaldinov at mirantis.com Thu Dec 25 09:42:37 2014 From: rkamaldinov at mirantis.com (Ruslan Kamaldinov) Date: Thu, 25 Dec 2014 13:42:37 +0400 Subject: [openstack-dev] [Murano] Nominating Kate Chernova for murano-core In-Reply-To: References: Message-ID: Great addition to core team! +2 On Thu, Dec 25, 2014 at 11:02 AM, Timur Sufiev wrote: > +1 from me. > > On Thu, Dec 25, 2014 at 10:28 AM, Serg Melikyan > wrote: >> >> I'd like to propose that we add Kate Chernova to the murano-core. >> >> Kate is active member of our community for more than a year, she is >> regular participant in our IRC meeting and maintains a good score as >> contributor: >> >> http://stackalytics.com/report/users/efedorova >> >> Please vote by replying to this thread. As a reminder of your options, +1 >> votes from 5 cores is sufficient; a -1 is a veto. >> -- >> Serg Melikyan, Senior Software Engineer at Mirantis, Inc. 
>> http://mirantis.com | smelikyan at mirantis.com >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Timur Sufiev > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From amit.das at cloudbyte.com Thu Dec 25 09:45:51 2014 From: amit.das at cloudbyte.com (Amit Das) Date: Thu, 25 Dec 2014 15:15:51 +0530 Subject: [openstack-dev] [cinder] [driver] DB operations In-Reply-To: <20141224201619.GA25116@gmail.com> References: <20141224201619.GA25116@gmail.com> Message-ID: Thanks Mike for getting me these useful reviews & design discussions. So as it stands now, I am trying '*provider_id*' to map OpenStack/Cinder with the driver's backend storage. I got some useful review comments from @xing-yang to try out '*provider_id'* feature enabled by below commit: https://review.openstack.org/#/c/143205/ Do let me know if '*provider_id'* approach seems reasonable ? Regards, Amit *CloudByte Inc.* On Thu, Dec 25, 2014 at 1:46 AM, Mike Perez wrote: > On 06:05 Sat 20 Dec , Duncan Thomas wrote: > > No, I mean that if drivers are going to access database, then they should > > do it via a defined interface that limits what they can do to a sane set > of > > operations. I'd still prefer that they didn't need extra access beyond > the > > model update, but I don't know if that is possible. > > > > Duncan Thomas > > On Dec 19, 2014 6:43 PM, "Amit Das" wrote: > > > > > Thanks Duncan. > > > Do you mean hepler methods in the specific driver class? > > > On 19 Dec 2014 14:51, "Duncan Thomas" wrote: > > > > > >> So our general advice has historical been 'drivers should not be > > >> accessing the db directly'. 
I haven't had chance to look at your > driver > > >> code yet, I've been on vacation, but my suggestion is that if you > > >> absolutely must store something in the admin metadata rather than > somewhere > > >> that is covered by the model update (generally provider location and > > >> provider auth) then writing some helper methods that wrap the context > bump > > >> and db call would be better than accessing it directly from the > driver. > > >> > > >> Duncan Thomas > > >> On Dec 18, 2014 11:41 PM, "Amit Das" wrote: > > I've expressed in past reviews that we should have an interface that limits > drivers access to the database [1], but received quite a bit of push > back in Cinder. I recommend we stick to what has been decided, otherwise, > Amit > you should spend some time on reading the history of this issue [2] from > previous meetings and start a rediscussion on it in the next meeting [3]. > Not > discouraging it, but this has been something brought up at least a couple > of > times now and it ends up with the same answer from the community. > > [1] - https://review.openstack.org/#/c/107693/14 > [2] - > http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-10-15-16.00.log.html#l-186 > [3] - https://wiki.openstack.org/wiki/CinderMeetings > > -- > Mike Perez > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From slagun at mirantis.com Thu Dec 25 09:50:39 2014 From: slagun at mirantis.com (Stan Lagun) Date: Thu, 25 Dec 2014 12:50:39 +0300 Subject: [openstack-dev] [Murano] Nominating Kate Chernova for murano-core In-Reply-To: References: Message-ID: +2 Sincerely yours, Stan Lagun Principal Software Engineer @ Mirantis On Thu, Dec 25, 2014 at 12:42 PM, Ruslan Kamaldinov < rkamaldinov at mirantis.com> wrote: > Great addition to core team! 
> > +2 > > > > On Thu, Dec 25, 2014 at 11:02 AM, Timur Sufiev > wrote: > > +1 from me. > > > > On Thu, Dec 25, 2014 at 10:28 AM, Serg Melikyan > > wrote: > >> > >> I'd like to propose that we add Kate Chernova to the murano-core. > >> > >> Kate is active member of our community for more than a year, she is > >> regular participant in our IRC meeting and maintains a good score as > >> contributor: > >> > >> http://stackalytics.com/report/users/efedorova > >> > >> Please vote by replying to this thread. As a reminder of your options, > +1 > >> votes from 5 cores is sufficient; a -1 is a veto. > >> -- > >> Serg Melikyan, Senior Software Engineer at Mirantis, Inc. > >> http://mirantis.com | smelikyan at mirantis.com > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > > > > -- > > Timur Sufiev > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skraynev at mirantis.com Thu Dec 25 09:52:43 2014 From: skraynev at mirantis.com (Sergey Kraynev) Date: Thu, 25 Dec 2014 13:52:43 +0400 Subject: [openstack-dev] [heat] Remove deprecation properties Message-ID: Hi all. In the last time we got on review several patches, which removes old deprecation properties [1], and one mine [2]. The aim is to delete deprecated code and redundant tests. It looks simple, but the main problem, which we met, is backward compatibility. F.e. user has created resource (FIP) with old property schema, i.e. 
using SUBNET_ID instead of SUBNET. At first glance nothing bad happens, because: 1. handle_delete uses resource_id, so changes in the property schema do not affect other actions. 2. If the user tries to reuse the old template, they get a clear error message that the property is no longer in the schema; they can then switch to the new property and update the stack with it. At the same time we have one big issue with shadow dependencies, which affects Neutron resources. The simplest approach does not work [3], because the old properties were deleted from the property schema. Why is that bad? - We get back all the bugs related to such dependencies. - To reproduce: create a stack with the old property (my template [4]), open Horizon and look at the topology, apply patch [2] and restart the engine, then reload the topology page - the result will be different. I have some ideas about how to solve this, but none of them seems good enough to me: - Reading the information from self.properties.data is bad, because it skips all the validation done in properties.__getitem__. - Renaming the old key in the data to the new one, or creating a copy under the new key, is not correct either, because it silently changes the properties (the resource representation) behind the user's back. - We could keep the old deprecated property and mark it as something like "removed", with behavior similar to implemented=False. I do not like this, because it means we never remove the support code just to stay compatible with old resources (the user may well be too lazy to do a simple stack update). - The last option, which I have not tried yet, is extracting the necessary information from _stored_properties_data. So my questions are: should we support this backward-compatibility case at all? If yes, what is the best way to do it for us and for users? Perhaps we should define a strategy for removing deprecated properties?
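[Editor's sketch] Sergey's "keep but mark as removed" option could work roughly like this: new templates using the removed property are rejected with a pointer to its successor, while the property stays in the schema so dependency calculation over already-stored resources keeps seeing it. The status constants and schema class here are illustrative stand-ins, not Heat's implementation.

```python
# Illustrative sketch of the "mark property as removed" option; the class
# and constants are stand-ins, not Heat code.

REMOVED = 'REMOVED'


class PropSchema(object):
    def __init__(self, status=None, new_name=None):
        self.status = status
        self.new_name = new_name  # property that supersedes this one


def validate_props(schema, user_props):
    """Reject removed properties in new templates, with a migration hint."""
    for name in user_props:
        prop = schema.get(name)
        if prop is None:
            raise ValueError('unknown property: %s' % name)
        if prop.status == REMOVED:
            hint = (' (use %s)' % prop.new_name) if prop.new_name else ''
            raise ValueError('property %s was removed%s' % (name, hint))
    return True


def resolve_dependency_key(schema, stored_props, new_name):
    """For dependency calculation over stored data, accept either the new
    key or its removed predecessor, so old stacks keep their topology."""
    if new_name in stored_props:
        return stored_props[new_name]
    for name, prop in schema.items():
        if prop.new_name == new_name and name in stored_props:
            return stored_props[name]
    return None
```

The trade-off Sergey points out remains: the removed property's schema entry is "support code" that can never be deleted while old resources exist.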
[1] https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:rm-depr-props,n,z [2] https://review.openstack.org/#/c/139990/7 [3] https://review.openstack.org/#/c/139990/7/heat/engine/resources/neutron/floatingip.py [4] http://paste.openstack.org/show/154591/ Regards, Sergey. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu Dec 25 09:59:47 2014 From: zigo at debian.org (Thomas Goirand) Date: Thu, 25 Dec 2014 17:59:47 +0800 Subject: [openstack-dev] Horizon switching to the normal .ini format Message-ID: <549BE013.1080808@debian.org> Hi, There's been talks about Horizon switching to the normal .ini format that all other projects have been using so far. It would really be awesome if this could happen. Though I don't see the light at the end of the tunnel. Quite the opposite way: the settings.py is every day becoming more complicated. Is anyone at least working on the .ini switch idea? Or will we continue to see the Django style settings.py forever? Is there any blockers? Cheers, Thomas Goirand (zigo) From ativelkov at mirantis.com Thu Dec 25 10:25:13 2014 From: ativelkov at mirantis.com (Alexander Tivelkov) Date: Thu, 25 Dec 2014 14:25:13 +0400 Subject: [openstack-dev] [Murano] Nominating Kate Chernova for murano-core In-Reply-To: References: Message-ID: +2! Need moar good cores! -- Regards, Alexander Tivelkov On Thu, Dec 25, 2014 at 12:50 PM, Stan Lagun wrote: > +2 > > Sincerely yours, > Stan Lagun > Principal Software Engineer @ Mirantis > > > On Thu, Dec 25, 2014 at 12:42 PM, Ruslan Kamaldinov > wrote: >> >> Great addition to core team! >> >> +2 >> >> >> >> On Thu, Dec 25, 2014 at 11:02 AM, Timur Sufiev >> wrote: >> > +1 from me. >> > >> > On Thu, Dec 25, 2014 at 10:28 AM, Serg Melikyan >> > wrote: >> >> >> >> I'd like to propose that we add Kate Chernova to the murano-core. 
>> >> >> >> Kate is active member of our community for more than a year, she is >> >> regular participant in our IRC meeting and maintains a good score as >> >> contributor: >> >> >> >> http://stackalytics.com/report/users/efedorova >> >> >> >> Please vote by replying to this thread. As a reminder of your options, >> >> +1 >> >> votes from 5 cores is sufficient; a -1 is a veto. >> >> -- >> >> Serg Melikyan, Senior Software Engineer at Mirantis, Inc. >> >> http://mirantis.com | smelikyan at mirantis.com >> >> >> >> _______________________________________________ >> >> OpenStack-dev mailing list >> >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> > >> > >> > >> > -- >> > Timur Sufiev >> > >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From nmakhotkin at mirantis.com Thu Dec 25 10:46:42 2014 From: nmakhotkin at mirantis.com (Nikolay Makhotkin) Date: Thu, 25 Dec 2014 14:46:42 +0400 Subject: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs In-Reply-To: <0857543A-BD3A-47F4-A19D-37F358B3CCA6@mirantis.com> References: <0857543A-BD3A-47F4-A19D-37F358B3CCA6@mirantis.com> Message-ID: > > One thing that I strongly suggest is that we clearly define all reserved > keys like ?env?, ?__actions? etc. I think it?d be better if they all > started with the same prefix, for example, double underscore. I absolutely agree here. 
We should use specific keywords with "__" prefix like we used "__executions". On Thu, Dec 25, 2014 at 8:51 AM, Renat Akhmerov wrote: > > On 24 Dec 2014, at 23:37, W Chan wrote: > > >* 2) Returning to the first example: *>* ... *>* action: std.sql conn_str={$.env.conn_str} query={$.query} *>* ... *>* $.env - is it the name of an environment, or will it be registered syntax for accessing values from env? * > > I was actually thinking the environment will use the reserved word "env" in the WF context. The value for the "env" key will be the dict supplied either via DB lookup by name, directly as a dict, or as JSON from the CLI. > > Ok, probably here's the place where I didn't understand you before. I thought 'env' here is just an arbitrary key that users themselves may want to have to just group some variables under a single umbrella. What you're saying is that whatever is under '$.env' is just the exact same environment that we passed when we started the workflow? If yes then it definitely makes sense to me (it just allows to explicitly access the environment, not through the implicit variable lookup). Please confirm. > > One thing that I strongly suggest is that we clearly define all reserved keys like 'env', '__actions' etc. I think it'd be better if they all started with the same prefix, for example, double underscore. > > The nested dict for "__actions" (and all other keys with double underscore) is for special system purposes, in this case declaring defaults for action inputs. Similar to "__execution", which contains runtime data for the WF execution. > > Yes, that's clear > > > Renat Akhmerov > @ Mirantis Inc. > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Best Regards, Nikolay -------------- next part -------------- An HTML attachment was scrubbed...
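The double-underscore convention being proposed makes it cheap to tell system data apart from user variables in the workflow context. A minimal sketch — the key names (`__env`, `__execution`) follow the thread's proposal and are not Mistral's final API:

```python
RESERVED_PREFIX = '__'

def split_context(ctx):
    """Split a workflow context into system keys (double-underscore
    prefixed) and plain user variables."""
    system = {k: v for k, v in ctx.items() if k.startswith(RESERVED_PREFIX)}
    user = {k: v for k, v in ctx.items() if not k.startswith(RESERVED_PREFIX)}
    return system, user

# Illustrative context mixing system data and a user variable.
system, user = split_context({
    '__env': {'conn_str': 'mysql://...'},   # environment passed at WF start
    '__execution': {'id': 'abc'},           # runtime data for the execution
    'query': 'SELECT 1',                    # ordinary user variable
})
```

With a single well-known prefix, validation and documentation of reserved keys stay in one place instead of being scattered per key name.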
URL: From dteselkin at mirantis.com Thu Dec 25 12:52:31 2014 From: dteselkin at mirantis.com (Dmitry Teselkin) Date: Thu, 25 Dec 2014 16:52:31 +0400 Subject: [openstack-dev] [Murano] Nominating Kate Chernova for murano-core In-Reply-To: References: Message-ID: +2 :) On Thu, Dec 25, 2014 at 1:25 PM, Alexander Tivelkov wrote: > +2! > Need moar good cores! > -- > Regards, > Alexander Tivelkov > > > On Thu, Dec 25, 2014 at 12:50 PM, Stan Lagun wrote: > > +2 > > > > Sincerely yours, > > Stan Lagun > > Principal Software Engineer @ Mirantis > > > > > > On Thu, Dec 25, 2014 at 12:42 PM, Ruslan Kamaldinov > > wrote: > >> > >> Great addition to core team! > >> > >> +2 > >> > >> > >> > >> On Thu, Dec 25, 2014 at 11:02 AM, Timur Sufiev > >> wrote: > >> > +1 from me. > >> > > >> > On Thu, Dec 25, 2014 at 10:28 AM, Serg Melikyan < > smelikyan at mirantis.com> > >> > wrote: > >> >> > >> >> I'd like to propose that we add Kate Chernova to the murano-core. > >> >> > >> >> Kate is active member of our community for more than a year, she is > >> >> regular participant in our IRC meeting and maintains a good score as > >> >> contributor: > >> >> > >> >> http://stackalytics.com/report/users/efedorova > >> >> > >> >> Please vote by replying to this thread. As a reminder of your > options, > >> >> +1 > >> >> votes from 5 cores is sufficient; a -1 is a veto. > >> >> -- > >> >> Serg Melikyan, Senior Software Engineer at Mirantis, Inc. 
> >> >> http://mirantis.com | smelikyan at mirantis.com > >> >> > >> >> _______________________________________________ > >> >> OpenStack-dev mailing list > >> >> OpenStack-dev at lists.openstack.org > >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> >> > >> > > >> > > >> > > >> > -- > >> > Timur Sufiev > >> > > >> > _______________________________________________ > >> > OpenStack-dev mailing list > >> > OpenStack-dev at lists.openstack.org > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanks, Dmitry Teselkin Deployment Engineer Mirantis http://www.mirantis.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From smelikyan at mirantis.com Thu Dec 25 12:59:05 2014 From: smelikyan at mirantis.com (Serg Melikyan) Date: Thu, 25 Dec 2014 15:59:05 +0300 Subject: [openstack-dev] [Murano] Nominating Kate Chernova for murano-core In-Reply-To: References: Message-ID: Kate, welcome to murano-core! Congratulations! On Thu, Dec 25, 2014 at 3:52 PM, Dmitry Teselkin wrote: > +2 :) > > On Thu, Dec 25, 2014 at 1:25 PM, Alexander Tivelkov < > ativelkov at mirantis.com> wrote: > >> +2! >> Need moar good cores! 
>> -- >> Regards, >> Alexander Tivelkov >> >> >> On Thu, Dec 25, 2014 at 12:50 PM, Stan Lagun wrote: >> > +2 >> > >> > Sincerely yours, >> > Stan Lagun >> > Principal Software Engineer @ Mirantis >> > >> > >> > On Thu, Dec 25, 2014 at 12:42 PM, Ruslan Kamaldinov >> > wrote: >> >> >> >> Great addition to core team! >> >> >> >> +2 >> >> >> >> >> >> >> >> On Thu, Dec 25, 2014 at 11:02 AM, Timur Sufiev >> >> wrote: >> >> > +1 from me. >> >> > >> >> > On Thu, Dec 25, 2014 at 10:28 AM, Serg Melikyan < >> smelikyan at mirantis.com> >> >> > wrote: >> >> >> >> >> >> I'd like to propose that we add Kate Chernova to the murano-core. >> >> >> >> >> >> Kate is active member of our community for more than a year, she is >> >> >> regular participant in our IRC meeting and maintains a good score as >> >> >> contributor: >> >> >> >> >> >> http://stackalytics.com/report/users/efedorova >> >> >> >> >> >> Please vote by replying to this thread. As a reminder of your >> options, >> >> >> +1 >> >> >> votes from 5 cores is sufficient; a -1 is a veto. >> >> >> -- >> >> >> Serg Melikyan, Senior Software Engineer at Mirantis, Inc. 
>> >> >> http://mirantis.com | smelikyan at mirantis.com >> >> >> >> >> >> _______________________________________________ >> >> >> OpenStack-dev mailing list >> >> >> OpenStack-dev at lists.openstack.org >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> > >> >> > >> >> > >> >> > -- >> >> > Timur Sufiev >> >> > >> >> > _______________________________________________ >> >> > OpenStack-dev mailing list >> >> > OpenStack-dev at lists.openstack.org >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > >> >> >> >> _______________________________________________ >> >> OpenStack-dev mailing list >> >> OpenStack-dev at lists.openstack.org >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Thanks, > Dmitry Teselkin > Deployment Engineer > Mirantis > http://www.mirantis.com > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Serg Melikyan, Senior Software Engineer at Mirantis, Inc. http://mirantis.com | smelikyan at mirantis.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tengqim at linux.vnet.ibm.com Thu Dec 25 13:54:09 2014 From: tengqim at linux.vnet.ibm.com (Qiming Teng) Date: Thu, 25 Dec 2014 21:54:09 +0800 Subject: [openstack-dev] [Heat][oslo-incubator][oslo-log] Logging Unicode characters In-Reply-To: <549AEEE8.8080602@nemebean.com> References: <20141224094855.GA28021@localhost> <549AEEE8.8080602@nemebean.com> Message-ID: <20141225135408.GA31468@localhost> On Wed, Dec 24, 2014 at 10:50:48AM -0600, Ben Nemec wrote: > On 12/24/2014 03:48 AM, Qiming Teng wrote: > > Hi, > > > > When trying to enable stack names in Heat to use unicode strings, I am > > stuck by a weird behavior of logging. > > > > Suppose I have a stack name assigned some non-ASCII string, then when > > stack tries to log something here: > > > > heat/engine/stack.py: > > > > 536 LOG.info(_LI('Stack %(action)s %(status)s (%(name)s): ' > > 537 '%(reason)s'), > > 538 {'action': action, > > 539 'status': status, > > 540 'name': self.name, # type(self.name)==unicode here > > 541 'reason': reason}) > > > > I'm seeing the following errors from h-eng session: > > > > Traceback (most recent call last): > > File "/usr/lib64/python2.6/logging/__init__.py", line 799, in emit > > stream.write(fs % msg.decode('utf-8')) > > File "/usr/lib64/python2.6/encodings/utf_8.py", line 16, in decode > > return codecs.utf_8_decode(input, errors, True) > > UnicodeEncodeError: 'ascii' codec can't encode characters in position 114-115: > > ordinal not in range(128) > > > > This means logging cannot handle Unicode correctly? No. 
I did the > > following experiments: > > > > $ cat logtest > > > > #!/usr/bin/env python > > > > import sys > > > > from oslo.utils import encodeutils > > from oslo import i18n > > > > from heat.common.i18n import _LI > > from heat.openstack.common import log as logging > > > > i18n.enable_lazy() > > > > LOG = logging.getLogger('logtest') > > logging.setup('heat') > > > > print('sys.stdin.encoding: %s' % sys.stdin.encoding) > > print('sys.getdefaultencoding: %s' % sys.getdefaultencoding()) > > > > s = sys.argv[1] > > print('s is: %s' % type(s)) > > > > stack_name = encodeutils.safe_decode(unis) > > I think you may have a typo in your sample here because unis isn't > defined as far as I can tell. You are right, it was a typo. It should be s here. > In any case, I suspect this line is why your example works and Heat > doesn't. I can reproduce the same error if I stuff some unicode data > into a unicode string without decoding it first: > > >>> test = u'\xe2\x82\xac' > >>> test.decode('utf8') > Traceback (most recent call last): > File "", line 1, in > File "/usr/lib64/python2.7/encodings/utf_8.py", line 16, in decode > return codecs.utf_8_decode(input, errors, True) > UnicodeEncodeError: 'ascii' codec can't encode characters in position > 0-2: ordinal not in range(128) The above didn't work because test is already declared to be Unicode, decoding it won't work.... > >>> test = '\xe2\x82\xac' > >>> test.decode('utf8') > u'\u20ac' This one works because test is 'str' type. > Whether that's what is going on here I can't say for sure though. > Trying to figure out unicode in Python usually gives me a headache. :-) Right. Not just unicode conversion, in Heat's case, it also involves quoting. The test above needs to be quoted when being part of an URI. That is further more complicating the whole process. 
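The thread above is Python 2, where the trap is mixing `str` and `unicode`; in Python 3 terms the same distinction is `bytes` vs `str`, which makes Ben's example easier to see:

```python
raw = b'\xe2\x82\xac'        # the UTF-8 byte sequence for the Euro sign
text = raw.decode('utf-8')   # decoding bytes works
assert text == '\u20ac'

# Stuffing the same byte values into an already-decoded string, as in
# the u'\xe2\x82\xac' example above, yields three unrelated characters:
wrong = '\xe2\x82\xac'
assert len(wrong) == 3
assert wrong != text

# In Python 3 you cannot even repeat Python 2's mistake of decoding an
# already-decoded string: str has no .decode() method.
assert not hasattr(text, 'decode')
```

The practical rule is to decode exactly once, at the boundary where the bytes enter the program — which is what `encodeutils.safe_decode` in the quoted script is for.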
> > print('stack_name is: %s' % type(stack_name)) > > > > # stack_name is unicode here > > LOG.error(_LI('stack name: %(name)s') % {'name': stack_name}) > > > > $ ./logtest > > > > [tengqm at node1 heat]$ ./logtest ?? > > sys.stdin.encoding: UTF-8 > > sys.getdefaultencoding: ascii > > s is: > > stack_name is: > > 2014-12-24 17:51:13.799 29194 ERROR logtest [-] stack name: ?? > > > > It worked. > > > > After spending more than one day on this, I'm seeking help from people > > here. What's wrong with Unicode stack names here? > > > > Any hints are appreciated. > > > > Regards, > > - Qiming > > From tengqim at linux.vnet.ibm.com Thu Dec 25 13:57:30 2014 From: tengqim at linux.vnet.ibm.com (Qiming Teng) Date: Thu, 25 Dec 2014 21:57:30 +0800 Subject: [openstack-dev] [Heat][devstack] Logging Unicode characters In-Reply-To: <20141224125811.GA28811@localhost> References: <20141224094855.GA28021@localhost> <20141224125811.GA28811@localhost> Message-ID: <20141225135730.GB31468@localhost> After some tweaking to screen sessions, finally I can see Unicode strings logged and shown in screen environment. It is not a problem of oslo.log or log module from oslo-incubator. Sorry for the false alarm. Maybe devstack should start screen sessions with Unicode support by default? Regards, - Qiming On Wed, Dec 24, 2014 at 08:58:13PM +0800, Qiming Teng wrote: > Seems that the reason is in devstack 'screen' is not started with > Unicode support. Still checking ... > > Regards, > Qiming > > On Wed, Dec 24, 2014 at 05:48:56PM +0800, Qiming Teng wrote: > > Hi, > > > > When trying to enable stack names in Heat to use unicode strings, I am > > stuck by a weird behavior of logging. 
> > > > Suppose I have a stack name assigned some non-ASCII string, then when > > stack tries to log something here: > > > > heat/engine/stack.py: > > > > 536 LOG.info(_LI('Stack %(action)s %(status)s (%(name)s): ' > > 537 '%(reason)s'), > > 538 {'action': action, > > 539 'status': status, > > 540 'name': self.name, # type(self.name)==unicode here > > 541 'reason': reason}) > > > > I'm seeing the following errors from h-eng session: > > > > Traceback (most recent call last): > > File "/usr/lib64/python2.6/logging/__init__.py", line 799, in emit > > stream.write(fs % msg.decode('utf-8')) > > File "/usr/lib64/python2.6/encodings/utf_8.py", line 16, in decode > > return codecs.utf_8_decode(input, errors, True) > > UnicodeEncodeError: 'ascii' codec can't encode characters in position 114-115: > > ordinal not in range(128) > > > > This means logging cannot handle Unicode correctly? No. I did the > > following experiments: > > > > $ cat logtest > > > > #!/usr/bin/env python > > > > import sys > > > > from oslo.utils import encodeutils > > from oslo import i18n > > > > from heat.common.i18n import _LI > > from heat.openstack.common import log as logging > > > > i18n.enable_lazy() > > > > LOG = logging.getLogger('logtest') > > logging.setup('heat') > > > > print('sys.stdin.encoding: %s' % sys.stdin.encoding) > > print('sys.getdefaultencoding: %s' % sys.getdefaultencoding()) > > > > s = sys.argv[1] > > print('s is: %s' % type(s)) > > > > stack_name = encodeutils.safe_decode(unis) > > print('stack_name is: %s' % type(stack_name)) > > > > # stack_name is unicode here > > LOG.error(_LI('stack name: %(name)s') % {'name': stack_name}) > > > > $ ./logtest > > > > [tengqm at node1 heat]$ ./logtest ?? > > sys.stdin.encoding: UTF-8 > > sys.getdefaultencoding: ascii > > s is: > > stack_name is: > > 2014-12-24 17:51:13.799 29194 ERROR logtest [-] stack name: ?? > > > > It worked. > > > > After spending more than one day on this, I'm seeking help from people > > here. 
What's wrong with Unicode stack names here? > > > > Any hints are appreciated. > > > > Regards, > > - Qiming > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Thu Dec 25 14:18:29 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 25 Dec 2014 14:18:29 +0000 Subject: [openstack-dev] [Heat][devstack] Logging Unicode characters In-Reply-To: <20141225135730.GB31468@localhost> References: <20141224094855.GA28021@localhost> <20141224125811.GA28811@localhost> <20141225135730.GB31468@localhost> Message-ID: <20141225141828.GQ2497@yuggoth.org> On 2014-12-25 21:57:30 +0800 (+0800), Qiming Teng wrote: [...] > Maybe devstack should start screen sessions with Unicode support by > default? The easiest way to have that discussion is to add -U to the screen calls in the screen_rc function definition in openstack-dev/devstack functions-common and see what the CI jobs and DevStack reviewers have to say about that change when you submit it to Gerrit. Or else consider filing a bug at https://bugs.launchpad.net/devstack/+filebug and hope some other contributor has time to follow through with it. -- Jeremy Stanley From afedorova at mirantis.com Thu Dec 25 14:31:02 2014 From: afedorova at mirantis.com (Aleksandra Fedorova) Date: Thu, 25 Dec 2014 18:31:02 +0400 Subject: [openstack-dev] [Fuel][Docs] Move fuel-web/docs to fuel-docs In-Reply-To: References: Message-ID: There are different types of dependencies: docs dependencies like sphinx, plantuml and so on are rarely changed so we can create environment on a slave during slave deployment phase and keep it there. 
But nailgun dependencies for example can be changed at any time, thus we need to update the environment every time we checkout fuel-web/ repository. Therefore workflow for building just docs from rst-files is different from the one where you need to update everything before you start. Also autodocs should be updated and tested on changes into fuel-web code, while changes to fuel-docs don't usually touch them. Thus I think that autodocs belong to the repository they are generated from and I'd like to keep them there. That's why I moved objects.rst and api_doc.rst into nailgun/docs in [1] One more thing is that fuel-web is not the only repository which produces autodocs. There are autodocs from fuel-main/fuelweb_test repo and autodocs from fuel-devops/docs. And I'd like to have the same general workflow for all of them. So my idea is that we should keep: 1) repository fuel-docs with all texts and diagrams, including Fuel Developer Guide, 2) repository fuel-web/nailgun/docs with nailgun reference, 3) repository fuel-main/fuelweb_test/docs with autogenerated system tests reference, 4) repository fuel-devops/docs with autogenerated fuel-devops reference, 5) repository fuel-specs with specifications. On CI we will build autodocs for every commit into fuel-web, fuel-main and fuel-devops. And we will build docs on every commit for fuel-docs without additional repositories involved. And finally we will have CI job to publish final documentation in one place. And this job will build and publish every part into separate subfolder, so we'll get final docs in one place but built independently. 
https://review.openstack.org/#/c/143679/1/nailgun/docs/api_doc.rst,cm -- Aleksandra Fedorova Fuel DevOps Engineer bookwar From tsufiev at mirantis.com Thu Dec 25 18:25:07 2014 From: tsufiev at mirantis.com (Timur Sufiev) Date: Thu, 25 Dec 2014 10:25:07 -0800 Subject: [openstack-dev] Horizon switching to the normal .ini format In-Reply-To: <549BE013.1080808@debian.org> References: <549BE013.1080808@debian.org> Message-ID: Thomas, I could only point you to the Radomir's patch https://review.openstack.org/#/c/100521/ It's still a work in progress, so you may ask him for more details. On Thu, Dec 25, 2014 at 1:59 AM, Thomas Goirand wrote: > Hi, > > There's been talks about Horizon switching to the normal .ini format > that all other projects have been using so far. It would really be > awesome if this could happen. Though I don't see the light at the end of > the tunnel. Quite the opposite way: the settings.py is every day > becoming more complicated. > > Is anyone at least working on the .ini switch idea? Or will we continue > to see the Django style settings.py forever? Is there any blockers? > > Cheers, > > Thomas Goirand (zigo) > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Timur Sufiev -------------- next part -------------- An HTML attachment was scrubbed... URL: From raildom at gmail.com Thu Dec 25 20:18:52 2014 From: raildom at gmail.com (Raildo Mascena) Date: Thu, 25 Dec 2014 20:18:52 +0000 Subject: [openstack-dev] Hierarchical Multitenancy References: <2964CB70-9ED7-42DE-9F17-498458A0031B@gmail.com> Message-ID: Hi Deepak, I think that one of the next steps for HMT is expand the concept for other services, as Nova folks are doing with Quotas for nested projects. 
I think we can brainstorm the use cases for HMT in each service, but if a resource, such as an instance or an image, can be shared inside the hierarchy, that is good evidence that we can implement HMT in that service. I'm available for any discussion related to HMT in other services. :) On Wed Dec 24 2014 at 1:04:15 PM Deepak Shetty wrote: > Raildo, > Thanks for putting up the blog, I really liked it as it helps to > understand how HMT works. I am interested to know more about how HMT can be > exploited for other OpenStack projects... esp. Cinder and Manila > On Dec 23, 2014 5:55 AM, "Morgan Fainberg" > wrote: > >> Hi Raildo, >> >> Thanks for putting this post together. I really appreciate all the work >> you guys have done (and continue to do) to get the Hierarchical >> Multitenancy code into Keystone. It's great to have the base implementation >> merged into Keystone for the K1 milestone. I look forward to seeing the >> rest of the development land during the rest of this cycle and what the >> other OpenStack projects build around the HMT functionality. >> >> Cheers, >> Morgan >> >> >> >> On Dec 22, 2014, at 1:49 PM, Raildo Mascena wrote: >> >> Hello folks, My team and I developed the Hierarchical Multitenancy >> concept for Keystone in Kilo-1, but what is Hierarchical Multitenancy? What >> have we implemented? What are the next steps for Kilo? >> To answer these questions, I created a blog post *http://raildo.me/hierarchical-multitenancy-in-openstack/ >> * >> >> For any questions, I'm available. >> >> -- >> Raildo Mascena >> Software Engineer. >> Bachelor of Computer Science.
>> Distributed Systems Laboratory >> Federal University of Campina Grande >> Campina Grande, PB - Brazil >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From caedo at mirantis.com Thu Dec 25 23:49:17 2014 From: caedo at mirantis.com (Christopher Aedo) Date: Thu, 25 Dec 2014 15:49:17 -0800 Subject: [openstack-dev] [Fuel][Docs] Move fuel-web/docs to fuel-docs In-Reply-To: References: Message-ID: Ah I got it, I hadn't really considered how much more likely the requirements are to change for nailgun and the other components that should have auto-generated API docs. Having those build completely separately matches OpenStack proper too - I'm on board now :) We can work out the details over the next few weeks, should be able to wrap it up no latter than mid January. I'll get to work next week on updating the blueprint to reflect this approach. -Christopher On Thu, Dec 25, 2014 at 6:31 AM, Aleksandra Fedorova wrote: > There are different types of dependencies: > > docs dependencies like sphinx, plantuml and so on are rarely changed > so we can create environment on a slave during slave deployment phase > and keep it there. But nailgun dependencies for example can be changed > at any time, thus we need to update the environment every time we > checkout fuel-web/ repository. 
Therefore workflow for building just > docs from rst-files is different from the one where you need to update > everything before you start. > > Also autodocs should be updated and tested on changes into fuel-web > code, while changes to fuel-docs don't usually touch them. > > Thus I think that autodocs belong to the repository they are generated > from and I'd like to keep them there. That's why I moved objects.rst > and api_doc.rst into nailgun/docs in [1] > > One more thing is that fuel-web is not the only repository which > produces autodocs. There are autodocs from fuel-main/fuelweb_test repo > and autodocs from fuel-devops/docs. And I'd like to have the same > general workflow for all of them. > > > So my idea is that we should keep: > > 1) repository fuel-docs with all texts and diagrams, including Fuel > Developer Guide, > 2) repository fuel-web/nailgun/docs with nailgun reference, > 3) repository fuel-main/fuelweb_test/docs with autogenerated system > tests reference, > 4) repository fuel-devops/docs with autogenerated fuel-devops reference, > 5) repository fuel-specs with specifications. > > On CI we will build autodocs for every commit into fuel-web, fuel-main > and fuel-devops. And we will build docs on every commit for fuel-docs > without additional repositories involved. > > And finally we will have CI job to publish final documentation in one > place. And this job will build and publish every part into separate > subfolder, so we'll get final docs in one place but built > independently. 
> > > https://review.openstack.org/#/c/143679/1/nailgun/docs/api_doc.rst,cm > > -- > Aleksandra Fedorova > Fuel DevOps Engineer > bookwar From aji.zqfan at gmail.com Fri Dec 26 02:42:15 2014 From: aji.zqfan at gmail.com (ZhiQiang Fan) Date: Fri, 26 Dec 2014 10:42:15 +0800 Subject: [openstack-dev] [CI][TripleO][Ironic] check-tripleo-ironic-xxx fails Message-ID: check-tripleo-ironic-xxx failed because: rm -rf /home/jenkins/.cache/image-create/pypi/mirror/ rm: cannot remove `/home/jenkins/.cache/image-create/pypi/mirror/': Permission denied see http://logs.openstack.org/37/143937/1/check-tripleo/check-tripleo-ironic-overcloud-precise-nonha/9be729b/console.html search on logstash.openstack.org: message:"rm: cannot remove `/home/jenkins/.cache/image-create/pypi/mirror/\': Permission denied" there are 59 hits in the last 48 hours -------------- next part -------------- An HTML attachment was scrubbed... URL: From sragolu at mvista.com Fri Dec 26 04:25:29 2014 From: sragolu at mvista.com (Srinivasa Rao Ragolu) Date: Fri, 26 Dec 2014 09:55:29 +0530 Subject: [openstack-dev] Need help in validating CPU Pinning feature Message-ID: Hi All, I have been working on CPU Pinning feature validation. I was able to set the vcpu_pin_set option in nova.conf and can see the CPU pinning set in the guest XML, and it works fine while launching. Please let me know how I can set cputune: vcpupin in the guest XML, or point me to any documents that describe how to validate CPU pinning use cases. Thanks, Srinivas. -------------- next part -------------- An HTML attachment was scrubbed... URL: From saripurigopi at gmail.com Fri Dec 26 05:28:45 2014 From: saripurigopi at gmail.com (GopiKrishna Saripuri) Date: Fri, 26 Dec 2014 10:58:45 +0530 Subject: [openstack-dev] [Ironic] Ironic pxe_ucs driver BP review approval Message-ID: Hi Ironic-core team, I've submitted a new BP for the pxe_ucs driver, supporting Cisco UCS B/C/M-series hardware. Could you take a look at the review and provide your comments and approvals?
I will submit the code for review once the BP is approved. Review link: https://review.openstack.org/#/c/139517 https://blueprints.launchpad.net/ironic/+spec/cisco-ucs-pxe-driver Regards GopiKrishna S -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp at jamezpolley.com Fri Dec 26 06:59:12 2014 From: jp at jamezpolley.com (James Polley) Date: Fri, 26 Dec 2014 07:59:12 +0100 Subject: [openstack-dev] [CI][TripleO][Ironic] check-tripleo-ironic-xxx fails In-Reply-To: References: Message-ID: Thanks for the alert The earliest failure I can see because of this is http://logs.openstack.org/43/141043/6/check-tripleo/check-tripleo-ironic-overcloud-precise-nonha/36c9771/ I've raised https://bugs.launchpad.net/tripleo/+bug/1405732 and I've put some preliminary notes on https://etherpad.openstack.org/p/tripleo-ci-breakages On Fri, Dec 26, 2014 at 3:42 AM, ZhiQiang Fan wrote: > check-tripleo-ironic-xxx failed because: > > rm -rf /home/jenkins/.cache/image-create/pypi/mirror/ > rm: cannot remove `/home/jenkins/.cache/image-create/pypi/mirror/': > Permission denie > > see > > http://logs.openstack.org/37/143937/1/check-tripleo/check-tripleo-ironic-overcloud-precise-nonha/9be729b/console.html > > search on logstash.openstack.org: > message:"rm: cannot remove > `/home/jenkins/.cache/image-create/pypi/mirror/\': Permission denied" > there are 59 hits in last 48 hours > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From adamg at ubuntu.com Fri Dec 26 09:01:18 2014 From: adamg at ubuntu.com (Adam Gandelman) Date: Fri, 26 Dec 2014 01:01:18 -0800 Subject: [openstack-dev] [CI][TripleO][Ironic] check-tripleo-ironic-xxx fails In-Reply-To: References: Message-ID: This is fallout from a new upstream release of pip that went out earlier in the week. It looks like no formal bug ever got filed, though the same problem was discovered in devstack and Trove's integration testing repository. Added some comments to the bug. On Thu, Dec 25, 2014 at 10:59 PM, James Polley wrote: > Thanks for the alert > > The earliest failure I can see because of this is > http://logs.openstack.org/43/141043/6/check-tripleo/check-tripleo-ironic-overcloud-precise-nonha/36c9771/ > > I've raised https://bugs.launchpad.net/tripleo/+bug/1405732 and I've put > some preliminary notes on > https://etherpad.openstack.org/p/tripleo-ci-breakages > > On Fri, Dec 26, 2014 at 3:42 AM, ZhiQiang Fan wrote: >> check-tripleo-ironic-xxx failed because: >> >> rm -rf /home/jenkins/.cache/image-create/pypi/mirror/ >> rm: cannot remove `/home/jenkins/.cache/image-create/pypi/mirror/': >> Permission denie >> >> see >> >> http://logs.openstack.org/37/143937/1/check-tripleo/check-tripleo-ironic-overcloud-precise-nonha/9be729b/console.html >> >> search on logstash.openstack.org: >> message:"rm: cannot remove >> `/home/jenkins/.cache/image-create/pypi/mirror/\': Permission denied" >> there are 59 hits in last 48 hours >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From punith.s at cloudbyte.com Fri Dec 26 09:23:39 2014 From: punith.s at cloudbyte.com (Punith S) Date: Fri, 26 Dec 2014 14:53:39 +0530 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4361EE@G4W3223.americas.hpqcorp.net> References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4240C5@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A424BC0@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A435949@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4361EE@G4W3223.americas.hpqcorp.net> Message-ID: hello, i have set up the CI environment for our cloudbyte storage, running a master jenkins vm and a slave node vm running devstack. i have followed jay's blog using asselin's scripts from github. all i need to do is to test our cloudbyte cinder driver against the cinder tempest suite. currently the noop-check-communication job is completing successfully on the openstack-dev/sandbox project, but the *dsvm full tempest* job is failing with an error while uploading images to the glance service from swift. how can i trim the dsvm tempest full job so that it only installs my required services like cinder, nova, horizon etc.? does modifying the devstack gate scripts help? i'm attaching the links for the failure of the dsvm-tempest-full job: devstacklog.txt - http://paste.openstack.org/show/154779/ devstack.txt.summary - http://paste.openstack.org/show/154780/ thanks On Tue, Dec 23, 2014 at 8:46 PM, Asselin, Ramy wrote: > You should use 14.04 for the slave. The limitation for using 12.04 is > only for the master since zuul's apache configuration is WIP on 14.04 [1], > and zuul does not run on the slave. 
> > Ramy > > [1] https://review.openstack.org/#/c/141518/ > > *From:* Punith S [mailto:punith.s at cloudbyte.com] > *Sent:* Monday, December 22, 2014 11:37 PM > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help > setting up CI > > Hi Asselin, > > i'm following your readme https://github.com/rasselin/os-ext-testing > for setting up our cloudbyte CI on two ubuntu 12.04 VMs (master and slave) > > currently the scripts and setup went fine as described in the document. > > now both master and slave have been connected successfully, but in order > to run the tempest integration test against our proposed cloudbyte cinder > driver for kilo, we need to have devstack installed in the slave (in my > understanding) > > but on installing master devstack i'm getting permission issues on > 12.04 when executing ./stack.sh, since master devstack suggests 14.04 or > 13.10 ubuntu. and on the contrary, running install_slave.sh is failing on 13.10 > due to a puppet modules not found error. > > is there a way to get this to work? > > thanks in advance > > On Mon, Dec 22, 2014 at 11:10 PM, Asselin, Ramy > wrote: > > Eduard, > > A few items you can try: > > 1. Double-check that the job is in Jenkins > > a. If not, then that's the issue > > 2. Check that the processes are running correctly > > a. ps -ef | grep zuul > > i. Should > have 2 zuul-server & 1 zuul-merger > > b. ps -ef | grep Jenkins > > i. Should > have 1 /usr/bin/daemon --name=jenkins & 1 /usr/bin/java > > 3. In Jenkins, Manage Jenkins, Gearman Plugin Config, "Test > Connection" > > 4. Stop Zuul & Jenkins. Start Zuul & Jenkins > > a. service Jenkins stop > > b. service zuul stop > > c. service zuul-merger stop > > d. service Jenkins start > > e. service zuul start > > f. service zuul-merger start > > Otherwise, I suggest you ask in the #openstack-infra irc channel. 
> > Ramy > > *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com] > *Sent:* Sunday, December 21, 2014 11:01 PM > > > *To:* Asselin, Ramy > *Cc:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help > setting up CI > > Thanks Ramy, > > Unfortunately i don't see dsvm-tempest-full in the "status" output. > > Any idea how i can get it "registered"? > > Thanks, > > Eduard > > On Fri, Dec 19, 2014 at 9:43 PM, Asselin, Ramy > wrote: > > Eduard, > > If you run this command, you can see which jobs are registered: > > >telnet localhost 4730 > > >status > > There are 3 numbers per job: queued, running and workers that can run job. > Make sure the job is listed & the last number, 'workers', is non-zero. > > To run the job again without submitting a patch set, leave a 'recheck' > comment on the patch & make sure your zuul layout.yaml is configured to > trigger off that comment. For example [1]. > > Be sure to use the sandbox repository. [2] > > I'm not aware of other ways. 
> > Ramy > > [1] > https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L20 > > [2] https://github.com/openstack-dev/sandbox > > *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com] > *Sent:* Friday, December 19, 2014 3:36 AM > *To:* Asselin, Ramy > *Cc:* OpenStack Development Mailing List (not for usage questions) > > *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help > setting up CI > > Hi all, > > After a little struggle with the config scripts i managed to get a working > setup that is able to process openstack-dev/sandbox and run > noop-check-communication. > > Then, i tried enabling the dsvm-tempest-full job but it keeps returning > "NOT_REGISTERED" > > 2014-12-19 12:07:14,683 INFO zuul.IndependentPipelineManager: Change > depends on changes [] > > 2014-12-19 12:07:14,683 INFO zuul.Gearman: Launch job > noop-check-communication for change with > dependent changes [] > > 2014-12-19 12:07:14,693 INFO zuul.Gearman: Launch job dsvm-tempest-full > for change with dependent changes [] > > 2014-12-19 12:07:14,694 ERROR zuul.Gearman: Job handle: None name: build:dsvm-tempest-full unique: > a9199d304d1140a8bf4448dfb1ae42c1> is not registered with Gearman > > 2014-12-19 12:07:14,694 INFO zuul.Gearman: Build handle: None name: build:dsvm-tempest-full unique: > a9199d304d1140a8bf4448dfb1ae42c1> complete, result NOT_REGISTERED > > 2014-12-19 12:07:14,765 INFO zuul.Gearman: Build handle: H:127.0.0.1:2 name: build:noop-check-communication unique: > 333c6ea077324a788e3c37a313d872c5> started > > 2014-12-19 12:07:14,910 INFO zuul.Gearman: Build handle: H:127.0.0.1:2 name: build:noop-check-communication unique: > 333c6ea077324a788e3c37a313d872c5> complete, result SUCCESS > > 2014-12-19 12:07:14,916 INFO zuul.IndependentPipelineManager: Reporting > change , actions: [ , {'verified': -1}>] > > Nodepool's log shows no reference to dsvm-tempest-full, and neither > do jenkins' logs. > > Any idea how to enable this job? > > Also, i got the "Cloud provider" setup and i can access it from the > jenkins master. > > Any idea how i can manually trigger the dsvm-tempest-full job to run and test > the cloud provider without having to push a review to Gerrit? > > Thanks, > > Eduard > > On Thu, Dec 18, 2014 at 7:52 PM, Eduard Matei < > eduard.matei at cloudfounders.com> wrote: > > Thanks for the input. > > I managed to get another master working (on Ubuntu 13.10), again with some > issues since it was already set up. > > I'm now working towards setting up the slave. > > Will add comments to those reviews. 
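The NOT_REGISTERED result above matches Ramy's advice: the gearman server on port 4730 has no worker registered for build:dsvm-tempest-full. A small sketch of checking that programmatically — it assumes the tab-separated `status` reply format (job name, queued, running, available workers, terminated by a lone `.`) that the telnet output shows; the helper name is made up for illustration:

```python
def unregistered_jobs(status_reply):
    """Given the text returned by gearman's admin 'status' command
    (e.g. via `telnet localhost 4730`), return the job names that no
    worker can run -- zuul reports these builds as NOT_REGISTERED."""
    missing = []
    for line in status_reply.splitlines():
        line = line.strip()
        if not line or line == ".":  # "." terminates the admin reply
            continue
        fields = line.split("\t")
        if len(fields) != 4:
            continue
        name, _queued, _running, workers = fields
        if int(workers) == 0:
            missing.append(name)
    return missing
```

If build:dsvm-tempest-full is absent from the reply entirely, the Jenkins gearman plugin never registered it — re-check that the job exists in Jenkins and that "Test Connection" succeeds in the Gearman Plugin Config.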
> > > > Thanks, > > Eduard > > > > On Thu, Dec 18, 2014 at 7:42 PM, Asselin, Ramy > wrote: > > Yes, Ubuntu 12.04 is tested as mentioned in the readme [1]. Note that > the referenced script is just a wrapper that pulls all the latest from > various locations in openstack-infra, e.g. [2]. > > Ubuntu 14.04 support is WIP [3] > > FYI, there?s a spec to get an in-tree 3rd party ci solution [4]. Please > add your comments if this interests you. > > > > [1] https://github.com/rasselin/os-ext-testing/blob/master/README.md > > [2] > https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29 > > [3] https://review.openstack.org/#/c/141518/ > > [4] https://review.openstack.org/#/c/139745/ > > > > > > *From:* Punith S [mailto:punith.s at cloudbyte.com] > *Sent:* Thursday, December 18, 2014 3:12 AM > *To:* OpenStack Development Mailing List (not for usage questions); > Eduard Matei > > > *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help > setting up CI > > > > Hi Eduard > > > > we tried running > https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh > > on ubuntu master 12.04, and it appears to be working fine on 12.04. > > > > thanks > > > > On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei < > eduard.matei at cloudfounders.com> wrote: > > Hi, > > Seems i can't install using puppet on the jenkins master using > install_master.sh from > https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh > because it's running Ubuntu 11.10 and it appears unsupported. > > I managed to install puppet manually on master and everything else fails > > So i'm trying to manually install zuul and nodepool and jenkins job > builder, see where i end up. > > > > The slave looks complete, got some errors on running install_slave so i > ran parts of the script manually, changing some params and it appears > installed but no way to test it without the master. > > > > Any ideas welcome. 
> > > > Thanks, > > > > Eduard > > > > On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy > wrote: > > Manually running the script requires a few environment settings. Take a > look at the README here: > > https://github.com/openstack-infra/devstack-gate > > > > Regarding cinder, I?m using this repo to run our cinder jobs (fork from > jaypipes). > > https://github.com/rasselin/os-ext-testing > > > > Note that this solution doesn?t use the Jenkins gerrit trigger pluggin, > but zuul. > > > > There?s a sample job for cinder here. It?s in Jenkins Job Builder format. > > > https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample > > > > You can ask more questions in IRC freenode #openstack-cinder. (irc# > asselin) > > > > Ramy > > > > *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com] > *Sent:* Tuesday, December 16, 2014 12:41 AM > *To:* Bailey, Darragh > *Cc:* OpenStack Development Mailing List (not for usage questions); > OpenStack > *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help > setting up CI > > > > Hi, > > > > Can someone point me to some working documentation on how to setup third > party CI? (joinfu's instructions don't seem to work, and manually running > devstack-gate scripts fails: > > Running gate_hook > > Job timeout set to: 163 minutes > > timeout: failed to run command ?/opt/stack/new/devstack-gate/devstack-vm-gate.sh?: No such file or directory > > ERROR: the main setup script run by this job failed - exit code: 127 > > please look at the relevant log files to determine the root cause > > Cleaning up host > > ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz) > > Build step 'Execute shell' marked build as failure. 
> > > > I have a working Jenkins slave with devstack and our internal libraries, i > have Gerrit Trigger Plugin working and triggering on patches created, i > just need the actual job contents so that it can get to comment with the > test results. > > > > Thanks, > > > > Eduard > > > > On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei < > eduard.matei at cloudfounders.com> wrote: > > Hi Darragh, thanks for your input > > > > I double checked the job settings and fixed it: > > - build triggers is set to Gerrit event > > - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin > and tested separately) > > - Trigger on: Patchset Created > > - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: > Type: Path, Pattern: ** (was Type Plain on both) > > Now the job is triggered by commit on openstack-dev/sandbox :) > > > > Regarding the Query and Trigger Gerrit Patches, i found my patch using > query: status:open project:openstack-dev/sandbox change:139585 and i can > trigger it manually and it executes the job. > > > > But i still have the problem: what should the job do? It doesn't actually > do anything, it doesn't run tests or comment on the patch. > > Do you have an example of job? > > > > Thanks, > > Eduard > > > > On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh wrote: > > Hi Eduard, > > > I would check the trigger settings in the job, particularly which "type" > of pattern matching is being used for the branches. Found it tends to be > the spot that catches most people out when configuring jobs with the > Gerrit Trigger plugin. If you're looking to trigger against all branches > then you would want "Type: Path" and "Pattern: **" appearing in the UI. 
> > If you have sufficient access using the 'Query and Trigger Gerrit > Patches' page accessible from the main view will make it easier to > confirm that your Jenkins instance can actually see changes in gerrit > for the given project (which should mean that it can see the > corresponding events as well). Can also use the same page to re-trigger > for PatchsetCreated events to see if you've set the patterns on the job > correctly. > > Regards, > Darragh Bailey > > "Nothing is foolproof to a sufficiently talented fool" - Unknown > > On 08/12/14 14:33, Eduard Matei wrote: > > Resending this to dev ML as it seems i get quicker response :) > > > > I created a job in Jenkins, added as Build Trigger: "Gerrit Event: > > Patchset Created", chose as server the configured Gerrit server that > > was previously tested, then added the project openstack-dev/sandbox > > and saved. > > I made a change on dev sandbox repo but couldn't trigger my job. > > > > Any ideas? > > > > Thanks, > > Eduard > > > > On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei > > > > wrote: > > > > Hello everyone, > > > > Thanks to the latest changes to the creation of service accounts > > process we're one step closer to setting up our own CI platform > > for Cinder. > > > > So far we've got: > > - Jenkins master (with Gerrit plugin) and slave (with DevStack and > > our storage solution) > > - Service account configured and tested (can manually connect to > > review.openstack.org and get events > > and publish comments) > > > > Next step would be to set up a job to do the actual testing, this > > is where we're stuck. > > Can someone please point us to a clear example on how a job should > > look like (preferably for testing Cinder on Kilo)? Most links > > we've found are broken, or tools/scripts are no longer working. 
> > Also, we cannot change the Jenkins master too much (it's owned by > > Ops team and they need a list of tools/scripts to review before > > installing/running so we're not allowed to experiment). > > > > Thanks, > > Eduard > > > > -- > > > > *Eduard Biceri Matei, Senior Software Developer* > > www.cloudfounders.com > > | eduard.matei at cloudfounders.com > > > > > > > > > > *CloudFounders, The Private Cloud Software Company* > > > > Disclaimer: > > This email and any files transmitted with it are confidential and > > intended solely for the use of the individual or entity to whom > > they are addressed.If you are not the named addressee or an > > employee or agent responsible for delivering this message to the > > named addressee, you are hereby notified that you are not > > authorized to read, print, retain, copy or disseminate this > > message or any part of it. If you have received this email in > > error we request you to notify us by reply e-mail and to delete > > all electronic files of the message. If you are not the intended > > recipient you are notified that disclosing, copying, distributing > > or taking any action in reliance on the contents of this > > information is strictly prohibited. E-mail transmission cannot be > > guaranteed to be secure or error free as information could be > > intercepted, corrupted, lost, destroyed, arrive late or > > incomplete, or contain viruses. The sender therefore does not > > accept liability for any errors or omissions in the content of > > this message, and shall have no liability for any loss or damage > > suffered by the user, which arise as a result of e-mail transmission. 
> >
> > --
> > *Eduard Biceri Matei, Senior Software Developer*
> > www.cloudfounders.com | eduard.matei at cloudfounders.com
> >
> > *CloudFounders, The Private Cloud Software Company*
> >
> > _______________________________________________
> > OpenStack-Infra mailing list
> > OpenStack-Infra at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> regards,
> punith s
> cloudbyte.com

--
regards,
punith s
cloudbyte.com
-------------- next part --------------
An HTML attachment was scrubbed... 
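On the question above about trimming the dsvm-tempest-full job down to the services a cinder driver needs: devstack-gate reads its service list from environment variables, so one approach is to export an override in the job before invoking the gate scripts. A hedged sketch — OVERRIDE_ENABLED_SERVICES is devstack-gate's hook for this, but the exact service names below are an assumption to verify against your devstack checkout (stackrc):

```shell
# Hypothetical job snippet: replace devstack-gate's default service set
# with a cinder-focused one (keystone, mysql, rabbit, glance, nova,
# cinder, horizon, tempest). Swift is left out, which also avoids the
# glance-from-swift image upload path that failed above.
export DEVSTACK_GATE_TEMPEST=1
export OVERRIDE_ENABLED_SERVICES=key,mysql,rabbit,g-api,g-reg,n-api,n-cpu,n-cond,n-sch,c-api,c-sch,c-vol,horizon,tempest
```

Editing devstack-gate itself should not be necessary for this; the override is read at run time.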
URL: From eduard.matei at cloudfounders.com Fri Dec 26 09:29:41 2014 From: eduard.matei at cloudfounders.com (Eduard Matei) Date: Fri, 26 Dec 2014 11:29:41 +0200 Subject: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI In-Reply-To: References: <5486D947.4090209@hp.com> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A422E59@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4240C5@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A424BC0@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A435949@G4W3223.americas.hpqcorp.net> <4BFD2A2A3BAE4A46AA43C6A2DB44D1693A4361EE@G4W3223.americas.hpqcorp.net> Message-ID: @Asselin: Regarding the "few items you can try": i tried everything, the job still appears NOT_REGISTERED. I'll see next week if i can do a clean install on another jenkins master. Thanks for your help, Eduard On Fri, Dec 26, 2014 at 11:23 AM, Punith S wrote: > hello, > > i have setup the CI environment for our cloudbyte storage, running a > master jenkins vm and a slave node vm running devstack. > > i have followed the jay's blog using asselin's scripts from github. > > all i need to do is to test our cloudbyte cinder driver against the cinder > tempest suit. > > currently the noop-check-communication job is coming of successfully on > openstack-dev/sandbox project > > but the *dvsm full tempest *job is failing due to an error in failing to > upload the images to the glance service from swift. > > how can i hack the dsvm tempest full job so that it only installs my > required services like cinder,nova,horizon etc. ? > > does modifying the devstack gate scripts help ? > > i'm attaching the links for the failure of dsvm-tempest-full job > > devstacklog.txt - http://paste.openstack.org/show/154779/ > devstack.txt.summary - http://paste.openstack.org/show/154780/ > > thanks > > > > > On Tue, Dec 23, 2014 at 8:46 PM, Asselin, Ramy > wrote: > >> You should use 14.04 for the slave. 
The limitation for using 12.04 is >> only for the master since zuul?s apache configuration is WIP on 14.04 [1], >> and zuul does not run on the slave. >> >> Ramy >> >> [1] https://review.openstack.org/#/c/141518/ >> >> *From:* Punith S [mailto:punith.s at cloudbyte.com] >> *Sent:* Monday, December 22, 2014 11:37 PM >> *To:* OpenStack Development Mailing List (not for usage questions) >> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need >> help setting up CI >> >> >> >> Hi Asselin, >> >> >> >> i'm following your readme https://github.com/rasselin/os-ext-testing >> >> for setting up our cloudbyte CI on two ubuntu 12.04 VM's(master and slave) >> >> >> >> currently the scripts and setup went fine as followed with the document. >> >> >> >> now both master and slave have been connected successfully, but in order >> to run the tempest integration test against our proposed cloudbyte cinder >> driver for kilo, we need to have devstack installed in the slave.(in my >> understanding) >> >> >> >> but on installing the master devstack i'm getting permission issues in >> 12.04 in executing ./stack.sh since master devstack suggests the 14.04 or >> 13.10 ubuntu. and on contrary running install_slave.sh is failing on 13.10 >> due to puppet modules on found error. >> >> >> >> is there a way to get this work ? >> >> >> >> thanks in advance >> >> >> >> On Mon, Dec 22, 2014 at 11:10 PM, Asselin, Ramy >> wrote: >> >> Eduard, >> >> >> >> A few items you can try: >> >> 1. Double-check that the job is in Jenkins >> >> a. If not, then that?s the issue >> >> 2. Check that the processes are running correctly >> >> a. ps -ef | grep zuul >> >> i. Should >> have 2 zuul-server & 1 zuul-merger >> >> b. ps -ef | grep Jenkins >> >> i. Should >> have 1 /usr/bin/daemon --name=jenkins & 1 /usr/bin/java >> >> 3. In Jenkins, Manage Jenkins, Gearman Plugin Config, ?Test >> Connection? >> >> 4. Stop and Zuul & Jenkins. Start Zuul & Jenkins >> >> a. service Jenkins stop >> >> b. 
service zuul stop >> >> c. service zuul-merger stop >> >> d. service Jenkins start >> >> e. service zuul start >> >> f. service zuul-merger start >> >> >> >> Otherwise, I suggest you ask in #openstack-infra irc channel. >> >> >> >> Ramy >> >> >> >> *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com] >> *Sent:* Sunday, December 21, 2014 11:01 PM >> >> >> *To:* Asselin, Ramy >> *Cc:* OpenStack Development Mailing List (not for usage questions) >> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need >> help setting up CI >> >> >> >> Thanks Ramy, >> >> >> >> Unfortunately i don't see dsvm-tempest-full in the "status" output. >> >> Any idea how i can get it "registered"? >> >> >> >> Thanks, >> >> Eduard >> >> >> >> On Fri, Dec 19, 2014 at 9:43 PM, Asselin, Ramy >> wrote: >> >> Eduard, >> >> >> >> If you run this command, you can see which jobs are registered: >> >> >telnet localhost 4730 >> >> >> >> >status >> >> >> >> There are 3 numbers per job: queued, running and workers that can run >> job. Make sure the job is listed & last ?workers? is non-zero. >> >> >> >> To run the job again without submitting a patch set, leave a ?recheck? >> comment on the patch & make sure your zuul layout.yaml is configured to >> trigger off that comment. For example [1]. >> >> Be sure to use the sandbox repository. [2] >> >> I?m not aware of other ways. 
Ramy

[1] https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L20
[2] https://github.com/openstack-dev/sandbox

*From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com]
*Sent:* Friday, December 19, 2014 3:36 AM
*To:* Asselin, Ramy
*Cc:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

Hi all,

After a little struggle with the config scripts, I managed to get a working setup that is able to process openstack-dev/sandbox and run noop-check-communication.

Then I tried enabling the dsvm-tempest-full job, but it keeps returning "NOT_REGISTERED":

2014-12-19 12:07:14,683 INFO zuul.IndependentPipelineManager: Change depends on changes []
2014-12-19 12:07:14,683 INFO zuul.Gearman: Launch job noop-check-communication for change with dependent changes []
2014-12-19 12:07:14,693 INFO zuul.Gearman: Launch job dsvm-tempest-full for change with dependent changes []
2014-12-19 12:07:14,694 ERROR zuul.Gearman: Job > handle: None name: build:dsvm-tempest-full unique: a9199d304d1140a8bf4448dfb1ae42c1> is not registered with Gearman
2014-12-19 12:07:14,694 INFO zuul.Gearman: Build > handle: None name: build:dsvm-tempest-full unique: a9199d304d1140a8bf4448dfb1ae42c1> complete, result NOT_REGISTERED
2014-12-19 12:07:14,765 INFO zuul.Gearman: Build > handle: H:127.0.0.1:2 name: build:noop-check-communication unique: 333c6ea077324a788e3c37a313d872c5> started
2014-12-19 12:07:14,910 INFO zuul.Gearman: Build > handle: H:127.0.0.1:2 name: build:noop-check-communication unique: 333c6ea077324a788e3c37a313d872c5> complete, result SUCCESS
2014-12-19 12:07:14,916 INFO zuul.IndependentPipelineManager: Reporting change , actions: [> , {'verified': -1}>]

Nodepoold's
logs show no reference to dsvm-tempest-full, and neither do Jenkins' logs.

Any idea how to enable this job?

Also, I got the "Cloud provider" set up and I can access it from the Jenkins master.
Any idea how I can manually trigger the dsvm-tempest-full job to run and test the cloud provider without having to push a review to Gerrit?

Thanks,
Eduard

On Thu, Dec 18, 2014 at 7:52 PM, Eduard Matei wrote:

Thanks for the input.

I managed to get another master working (on Ubuntu 13.10), again with some issues since it was already set up. I'm now working towards setting up the slave.

Will add comments to those reviews.

Thanks,
Eduard

On Thu, Dec 18, 2014 at 7:42 PM, Asselin, Ramy wrote:

Yes, Ubuntu 12.04 is tested, as mentioned in the readme [1]. Note that the referenced script is just a wrapper that pulls all the latest from various locations in openstack-infra, e.g. [2].

Ubuntu 14.04 support is WIP [3].

FYI, there's a spec to get an in-tree third-party CI solution [4]. Please add your comments if this interests you.
[1] https://github.com/rasselin/os-ext-testing/blob/master/README.md
[2] https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29
[3] https://review.openstack.org/#/c/141518/
[4] https://review.openstack.org/#/c/139745/

*From:* Punith S [mailto:punith.s at cloudbyte.com]
*Sent:* Thursday, December 18, 2014 3:12 AM
*To:* OpenStack Development Mailing List (not for usage questions); Eduard Matei
*Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

Hi Eduard,

We tried running https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh on an Ubuntu 12.04 master, and it appears to be working fine on 12.04.

Thanks

On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei wrote:

Hi,

It seems I can't install using puppet on the Jenkins master using install_master.sh from https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh because it's running Ubuntu 11.10, which appears unsupported. I managed to install puppet manually on the master, but everything else fails, so I'm trying to manually install zuul, nodepool, and Jenkins Job Builder and see where I end up.

The slave looks complete. I got some errors running install_slave, so I ran parts of the script manually, changing some params, and it appears installed, but there is no way to test it without the master.

Any ideas welcome.

Thanks,

Eduard

On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy wrote:

Manually running the script requires a few environment settings. Take a look at the README here:

https://github.com/openstack-infra/devstack-gate

Regarding cinder, I'm using this repo to run our cinder jobs (fork from jaypipes).
https://github.com/rasselin/os-ext-testing

Note that this solution doesn't use the Jenkins Gerrit Trigger plugin, but zuul.

There's a sample job for cinder here, in Jenkins Job Builder format:
https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample

You can ask more questions on IRC, freenode #openstack-cinder (irc# asselin).

Ramy

*From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com]
*Sent:* Tuesday, December 16, 2014 12:41 AM
*To:* Bailey, Darragh
*Cc:* OpenStack Development Mailing List (not for usage questions); OpenStack
*Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

Hi,

Can someone point me to some working documentation on how to set up third-party CI? (joinfu's instructions don't seem to work, and manually running the devstack-gate scripts fails:

Running gate_hook
Job timeout set to: 163 minutes
timeout: failed to run command '/opt/stack/new/devstack-gate/devstack-vm-gate.sh': No such file or directory
ERROR: the main setup script run by this job failed - exit code: 127
please look at the relevant log files to determine the root cause
Cleaning up host
... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)
Build step 'Execute shell' marked build as failure.

I have a working Jenkins slave with devstack and our internal libraries, and I have the Gerrit Trigger Plugin working and triggering on patches created; I just need the actual job contents so that it can get to commenting with the test results.
Thanks,
Eduard

On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei wrote:

Hi Darragh, thanks for your input.

I double-checked the job settings and fixed it:
- the build trigger is set to Gerrit event
- the Gerrit trigger server is "Gerrit" (configured from the Gerrit Trigger Plugin and tested separately)
- Trigger on: Patchset Created
- Gerrit Project: Type: Path, Pattern: openstack-dev/sandbox; Branches: Type: Path, Pattern: ** (was Type: Plain on both)

Now the job is triggered by a commit on openstack-dev/sandbox :)

Regarding Query and Trigger Gerrit Patches, I found my patch using the query "status:open project:openstack-dev/sandbox change:139585", and I can trigger it manually and it executes the job.

But I still have the problem: what should the job do? It doesn't actually do anything; it doesn't run tests or comment on the patch.
Do you have an example of a job?

Thanks,
Eduard

On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh wrote:

Hi Eduard,

I would check the trigger settings in the job, particularly which "type" of pattern matching is being used for the branches. I've found it tends to be the spot that catches most people out when configuring jobs with the Gerrit Trigger plugin. If you're looking to trigger against all branches then you want "Type: Path" and "Pattern: **" appearing in the UI.

If you have sufficient access, the 'Query and Trigger Gerrit Patches' page accessible from the main view will make it easier to confirm that your Jenkins instance can actually see changes in Gerrit for the given project (which should mean that it can see the corresponding events as well). You can also use the same page to re-trigger PatchsetCreated events to see if you've set the patterns on the job correctly.
Regards,
Darragh Bailey

"Nothing is foolproof to a sufficiently talented fool" - Unknown

On 08/12/14 14:33, Eduard Matei wrote:
> Resending this to the dev ML as it seems I get a quicker response :)
>
> I created a job in Jenkins, added as Build Trigger "Gerrit Event: Patchset Created", chose as server the configured Gerrit server that was previously tested, then added the project openstack-dev/sandbox and saved.
> I made a change on the dev sandbox repo but couldn't trigger my job.
>
> Any ideas?
>
> Thanks,
> Eduard
>
> On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei wrote:
>
> Hello everyone,
>
> Thanks to the latest changes to the service account creation process, we're one step closer to setting up our own CI platform for Cinder.
>
> So far we've got:
> - a Jenkins master (with the Gerrit plugin) and a slave (with DevStack and our storage solution)
> - a service account configured and tested (we can manually connect to review.openstack.org, get events, and publish comments)
>
> The next step would be to set up a job to do the actual testing; this is where we're stuck.
> Can someone please point us to a clear example of what a job should look like (preferably for testing Cinder on Kilo)? Most links we've found are broken, or the tools/scripts no longer work.
> Also, we cannot change the Jenkins master too much (it's owned by the Ops team and they need a list of tools/scripts to review before installing/running, so we're not allowed to experiment).
>> > >> > Thanks, >> > Eduard >> > >> > -- >> > >> > *Eduard Biceri Matei, Senior Software Developer* >> > www.cloudfounders.com >> > | eduard.matei at cloudfounders.com >> > >> > >> > >> > >> > *CloudFounders, The Private Cloud Software Company* >> > >> > Disclaimer: >> > This email and any files transmitted with it are confidential and >> > intended solely for the use of the individual or entity to whom >> > they are addressed.If you are not the named addressee or an >> > employee or agent responsible for delivering this message to the >> > named addressee, you are hereby notified that you are not >> > authorized to read, print, retain, copy or disseminate this >> > message or any part of it. If you have received this email in >> > error we request you to notify us by reply e-mail and to delete >> > all electronic files of the message. If you are not the intended >> > recipient you are notified that disclosing, copying, distributing >> > or taking any action in reliance on the contents of this >> > information is strictly prohibited. E-mail transmission cannot be >> > guaranteed to be secure or error free as information could be >> > intercepted, corrupted, lost, destroyed, arrive late or >> > incomplete, or contain viruses. The sender therefore does not >> > accept liability for any errors or omissions in the content of >> > this message, and shall have no liability for any loss or damage >> > suffered by the user, which arise as a result of e-mail >> transmission. 
>> >
>> > _______________________________________________
>> > OpenStack-Infra mailing list
>> > OpenStack-Infra at lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
regards,
punith s
cloudbyte.com

--
*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com | eduard.matei at cloudfounders.com

*CloudFounders, The Private Cloud Software Company*

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ifzing at 126.com Fri Dec 26 11:07:15 2014
From: ifzing at 126.com (joejiang)
Date: Fri, 26 Dec 2014 19:07:15 +0800 (CST)
Subject: [openstack-dev] [Openstack] Need help in validating CPU Pinning feature
In-Reply-To: 
References: 
Message-ID: <202b345c.f8bc.14a86476dc6.Coremail.ifzing@126.com>

Hi Srinivasa,

This section is related to CPU affinity in the instance XML file.

Regards,
Joe Chiang

At 2014-12-26 12:25:29, "Srinivasa Rao Ragolu" wrote:

Hi All,

I have been working on validating the CPU pinning feature.

I was able to set the vcpu_pin_set config in nova.conf, and I can see the CPU pin set in the guest XML; it works fine when launching.

Please let me know how I can set cputune/vcpupin in the guest XML. Or point me to any documents for validating CPU pinning use cases.

Thanks,
Srinivas.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dmakogon at mirantis.com Fri Dec 26 11:13:12 2014
From: dmakogon at mirantis.com (Denis Makogon)
Date: Fri, 26 Dec 2014 13:13:12 +0200
Subject: [openstack-dev] [CI][TripleO][Ironic] check-tripleo-ironic-xxx fails
In-Reply-To: 
References: 
Message-ID: 

On Fri, Dec 26, 2014 at 11:01 AM, Adam Gandelman wrote:

> This is fallout from a new upstream release of pip that went out earlier
> in the week. It looks like no formal bug ever got filed, though the same
> problem was discovered in devstack and trove's integration testing
> repository. Added some comments to the bug.

The proper solution was merged into devstack (see [1]) and proposed for trove-integration (see [2]). So it seems we've hit the same problem across multiple projects that rely on pip.

[1] https://review.openstack.org/#/c/143501
[2] https://review.openstack.org/#/c/143746/

Kind regards,
Denis M.
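For what it's worth, the "sudo pip" theory discussed below is easy to check on an affected slave: pip run via sudo drops root-owned files into the calling user's cache, and a later unprivileged `rm -rf` then fails with Permission denied. A small sketch of the check, demonstrated here on a throwaway directory standing in for the /home/jenkins/.cache/image-create/pypi/mirror path from the logs:

```shell
# Stand-in for /home/jenkins/.cache/image-create/pypi/mirror on a real slave.
cache=$(mktemp -d)
touch "$cache/some-wheel.whl"

# List anything in the cache not owned by the invoking user; these are
# exactly the files a plain `rm -rf` would fail on. In this demo the
# list is empty, because we own what we just created; after `sudo pip`
# on a slave it would show root-owned entries.
find "$cache" ! -user "$(id -un)"

rm -rf "$cache"
```

Recovery on an affected slave would be along the lines of `sudo chown -R jenkins:jenkins <cache>` (path and user assumed from the logs above), plus avoiding bare `sudo pip` afterwards.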
> On Thu, Dec 25, 2014 at 10:59 PM, James Polley wrote:
>
>> Thanks for the alert
>>
>> The earliest failure I can see because of this is
>> http://logs.openstack.org/43/141043/6/check-tripleo/check-tripleo-ironic-overcloud-precise-nonha/36c9771/
>>
>> I've raised https://bugs.launchpad.net/tripleo/+bug/1405732 and I've put
>> some preliminary notes on
>> https://etherpad.openstack.org/p/tripleo-ci-breakages
>>
>> On Fri, Dec 26, 2014 at 3:42 AM, ZhiQiang Fan wrote:
>>
>>> check-tripleo-ironic-xxx failed because:
>>>
>>> rm -rf /home/jenkins/.cache/image-create/pypi/mirror/
>>> rm: cannot remove `/home/jenkins/.cache/image-create/pypi/mirror/': Permission denie
>>>
>>> see
>>> http://logs.openstack.org/37/143937/1/check-tripleo/check-tripleo-ironic-overcloud-precise-nonha/9be729b/console.html
>>>
>>> search on logstash.openstack.org:
>>> message:"rm: cannot remove `/home/jenkins/.cache/image-create/pypi/mirror/\': Permission denied"
>>> there are 59 hits in last 48 hours

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sragolu at mvista.com Fri Dec 26 11:25:49 2014
From: sragolu at mvista.com (Srinivasa Rao Ragolu)
Date: Fri, 26 Dec 2014 16:55:49 +0530
Subject: [openstack-dev] [Openstack] Need help in validating CPU Pinning feature
In-Reply-To: <202b345c.f8bc.14a86476dc6.Coremail.ifzing@126.com>
References: <202b345c.f8bc.14a86476dc6.Coremail.ifzing@126.com>
Message-ID: 

Hi Joejiang,

Thanks for the quick reply.

The above XML is generated fine if I set "vcpu_pin_set=1-12" in /etc/nova/nova.conf. But how do I pin each vCPU to a pCPU, something like below?

One more question: are NUMA nodes compulsory for pinning each vCPU to a pCPU?

Thanks,
Srinivas.

On Fri, Dec 26, 2014 at 4:37 PM, joejiang wrote:

> Hi Srinivasa,
>
> This section is related to CPU affinity in the instance XML file.
>
> Regards,
> Joe Chiang
>
> At 2014-12-26 12:25:29, "Srinivasa Rao Ragolu" wrote:
>
> Hi All,
>
> I have been working on validating the CPU pinning feature.
>
> I was able to set the vcpu_pin_set config in nova.conf, and I can see the
> CPU pin set in the guest XML; it works fine when launching.
>
> Please let me know how I can set cputune/vcpupin in the guest XML. Or
> point me to any documents for validating CPU pinning use cases.
>
> Thanks,
> Srinivas.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jp at jamezpolley.com Fri Dec 26 11:40:00 2014
From: jp at jamezpolley.com (James Polley)
Date: Fri, 26 Dec 2014 12:40:00 +0100
Subject: [openstack-dev] [CI][TripleO][Ironic] check-tripleo-ironic-xxx fails
In-Reply-To: 
References: 
Message-ID: 

As I've said on https://bugs.launchpad.net/tripleo/+bug/1405732, suspecting this comes from "sudo pip" is a good pointer - but as far as I can tell our tripleo-ci scripts never run "sudo pip". This seems to be something that's happening before the testenv gets handed to the tripleo-ci scripts.
I've filed https://review.openstack.org/#/c/144107/ to clean up all the uses of "sudo pip" I can find in project-config, but I don't know if that will be sufficient.

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From isviridov at mirantis.com Fri Dec 26 14:16:46 2014
From: isviridov at mirantis.com (isviridov)
Date: Fri, 26 Dec 2014 16:16:46 +0200
Subject: [openstack-dev] [MagnetoDB] Andrey Ostapenko core nomination
Message-ID: <549D6DCE.7080506@mirantis.com>

Hello stackers and magnetians,

I suggest nominating Andrey Ostapenko [1] to the MagnetoDB cores.

Over the last months he has made a huge contribution to MagnetoDB [2]. Andrey drives Tempest and python-magnetodbclient successfully.

Please raise your hands.

Thank you,
Ilya Sviridov

[1] http://stackalytics.com/report/users/aostapenko
[2] http://stackalytics.com/report/contribution/magnetodb/90

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From dukhlov at mirantis.com Fri Dec 26 14:52:19 2014 From: dukhlov at mirantis.com (Dmitriy Ukhlov) Date: Fri, 26 Dec 2014 16:52:19 +0200 Subject: [openstack-dev] [MagnetoDB] Andrey Ostapenko core nomination In-Reply-To: <549D6DCE.7080506@mirantis.com> References: <549D6DCE.7080506@mirantis.com> Message-ID: Andrey is a very active contributor and a good team player, and he helped us a lot during the previous development cycle. +1 from my side. On Fri, Dec 26, 2014 at 4:16 PM, isviridov wrote: > Hello stackers and magnetians, > > I suggest nominating Andrey Ostapenko [1] to MagnetoDB cores. > > During the last few months he has made a huge contribution to MagnetoDB [2] > Andrey drives Tempest and python-magnetodbclient successfully. > > Please raise your hands. > > Thank you, > Ilya Sviridov > > [1] http://stackalytics.com/report/users/aostapenko > [2] http://stackalytics.com/report/contribution/magnetodb/90 > > > -- Best regards, Dmitriy Ukhlov Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ipekelny at mirantis.com Fri Dec 26 14:58:02 2014 From: ipekelny at mirantis.com (Ilya Pekelny) Date: Fri, 26 Dec 2014 16:58:02 +0200 Subject: [openstack-dev] ZeroMQ topic object. Message-ID: Hi, all! Unexpectedly, I ran into a pretty significant issue while solving a small bug, https://bugs.launchpad.net/ubuntu/+source/oslo.messaging/+bug/1367614. The problem has several parts: * Topics are used for several purposes: to set subscriptions and to determine the type of sockets. * Topics are strings which are modified inline wherever needed. So, the topic feature is very distributed and uncoordinated. My issue with the bug was: "It is impossible to just hash the topic somewhere and not crash the whole driver".
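As a rough sketch of what such a single entry point could look like — the class and method names below are purely illustrative and are not the actual oslo.messaging code:

```python
# Purely illustrative sketch of a topic-as-object entry point; these are
# not the real oslo.messaging classes or names.
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)  # immutable, so instances are safely hashable
class Topic:
    """Single control/entry point for every topic naming rule."""
    name: str
    server: str = ""

    def __str__(self):
        # All the inline "topic.server" string building scattered through
        # the driver would be centralized here.
        return "%s.%s" % (self.name, self.server) if self.server else self.name

    def hashed(self, length=8):
        # Hashing becomes a local concern of the class: callers keep
        # using str(topic), while this method owns the digest format.
        return hashlib.sha1(str(self).encode()).hexdigest()[:length]
```

With something like this, changing how a topic is hashed or suffixed touches one class instead of every place the driver concatenates strings.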
The second part of the issue is: "It is a very painful process to trace all the topic modifications which are distributed through all the driver code". After several attempts to fix the bug "with small losses" I concluded that I need to create a control/entry point for the topic string. Now I have a proposal: Blueprint: https://blueprints.launchpad.net/oslo.messaging/+spec/zmq-topic-object Spec: https://review.openstack.org/#/c/144149/ Patch: https://review.openstack.org/#/c/144120/ I want to discuss this feature and receive feedback from more experienced rpc-Jedi. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From afedorova at mirantis.com Fri Dec 26 17:36:38 2014 From: afedorova at mirantis.com (Aleksandra Fedorova) Date: Fri, 26 Dec 2014 21:36:38 +0400 Subject: [openstack-dev] [Fuel] Feature delivery rules and automated tests In-Reply-To: <8B7EA75C-25E6-41CC-A4B3-56494CB9ABE6@mirantis.com> References: <8B7EA75C-25E6-41CC-A4B3-56494CB9ABE6@mirantis.com> Message-ID: Actually, with JJB in place we create jobs slower than without it, because we cannot just click through the Jenkins web UI but have to properly describe and test the job and make it pass the review process. JJB helps to manage jobs but doesn't simplify job creation. But let me point out that the Jenkins job configuration is the least important part of the task. The most troublesome part is to define the exact scenario and implement the environment for the job. Take, for example, the glusterfs cluster we needed to set up for plugin tests. Thus, if we want a CI job to be ready at SCF we need to get this information in advance. Which means the feature itself should have a stable interface and a working test implementation several days before the freeze. On Wed, Dec 24, 2014 at 2:26 AM, Igor Shishkin wrote: > I believe yes. > > With jenkins job builder we could create jobs faster, QA can be involved in that or even create jobs on their own.
> > I think we have to try it during the next release cycle; currently I can't see blockers/problems here. > -- > Igor Shishkin > DevOps > > > >> On 24 Dec 2014, at 2:20 am GMT+3, Mike Scherbakov wrote: >> >> Igor, >> would that be possible? >> >> On Mon, Dec 22, 2014 at 7:49 PM, Anastasia Urlapova wrote: >> Mike, Dmitry, team, >> let me add 5 cents - tests per feature have to run on CI before SCF, which means that the job configuration should also be implemented. >> >> On Wed, Dec 17, 2014 at 7:33 AM, Mike Scherbakov wrote: >> I fully support the idea. >> >> Feature Lead has to know that his feature is under threat if it's not yet covered by system tests (unit/integration tests are not enough!!!), and should proactively work with QA engineers to get tests implemented and passing before SCF. >> >> On Fri, Dec 12, 2014 at 5:55 PM, Dmitry Pyzhov wrote: >> Guys, >> >> we've done a good job in 6.0. Most of the features were merged before feature freeze. Our QA were involved in testing even earlier. It was much better than before. >> >> We had a discussion with Anastasia. There were several bug reports for features yesterday, far beyond HCF. So we still have a long way to go to be perfect. We should add one rule: we need to have automated tests before HCF. >> >> Actually, we should have results of these tests just after FF. It is quite challenging because we have a short development cycle. So my proposal is to require full deployment and a run of automated tests for each feature before soft code freeze. And it needs to be tracked in checklists and on feature syncups. >> >> Your opinion?
>> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> Mike Scherbakov >> #mihgen >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> Mike Scherbakov >> #mihgen >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Aleksandra Fedorova Fuel Devops Engineer bookwar From tleontovich at mirantis.com Fri Dec 26 17:53:29 2014 From: tleontovich at mirantis.com (Tatyanka) Date: Fri, 26 Dec 2014 19:53:29 +0200 Subject: [openstack-dev] [Fuel] Feature delivery rules and automated tests In-Reply-To: References: <8B7EA75C-25E6-41CC-A4B3-56494CB9ABE6@mirantis.com> Message-ID: <6DF34D62-9BF4-4375-9ADE-2F854B283AEA@mirantis.com> Hi all, I agree with Aleksandra about the importance of having the feature info at an early implementation stage. And it seems DevOps need to take part in spec reviews too, to clarify scenarios and other data they need. Sent from iPhone > On 26 Dec 2014, at 19:36, Aleksandra Fedorova wrote: > > Actually with JJB in place we create jobs slower than without it. > Because we cannot just click through Jenkins web ui but have to > properly describe and test the job and make it pass the review > process. JJB helps to manage jobs but doesn't simplify job creation. > > But let me point out that Jenkins job configuration is the least > important part of the task.
The most troublesome part is to define the > exact scenario and implement environment for the job. Take, for > example, glustrefs cluster we needed to setup for plugins tests. > > Thus, if we want CI job to be ready at SCF we need to get this > information in advance. Which means feature itself should have stable > interface and working test implementation several days before the > freeze. > > >> On Wed, Dec 24, 2014 at 2:26 AM, Igor Shishkin wrote: >> I believe yes. >> >> With jenkins job builder we could create jobs faster, QA can be involved in that or even create jobs on their own. >> >> I think we have to try it during next release cycle, currently I can?t see blockers/problems here. >> -- >> Igor Shishkin >> DevOps >> >> >> >>> On 24 Dec 2014, at 2:20 am GMT+3, Mike Scherbakov wrote: >>> >>> Igor, >>> would that be possible? >>> >>> On Mon, Dec 22, 2014 at 7:49 PM, Anastasia Urlapova wrote: >>> Mike, Dmitry, team, >>> let me add 5 cents - tests per feature have to run on CI before SCF, it is mean that jobs configuration also should be implemented. >>> >>> On Wed, Dec 17, 2014 at 7:33 AM, Mike Scherbakov wrote: >>> I fully support the idea. >>> >>> Feature Lead has to know, that his feature is under threat if it's not yet covered by system tests (unit/integration tests are not enough!!!), and should proactive work with QA engineers to get tests implemented and passing before SCF. >>> >>> On Fri, Dec 12, 2014 at 5:55 PM, Dmitry Pyzhov wrote: >>> Guys, >>> >>> we've done a good job in 6.0. Most of the features were merged before feature freeze. Our QA were involved in testing even earlier. It was much better than before. >>> >>> We had a discussion with Anastasia. There were several bug reports for features yesterday, far beyond HCF. So we still have a long way to be perfect. We should add one rule: we need to have automated tests before HCF. >>> >>> Actually, we should have results of these tests just after FF. 
It is quite challengeable because we have a short development cycle. So my proposal is to require full deployment and run of automated tests for each feature before soft code freeze. And it needs to be tracked in checklists and on feature syncups. >>> >>> Your opinion? >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> -- >>> Mike Scherbakov >>> #mihgen >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> -- >>> Mike Scherbakov >>> #mihgen >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Aleksandra Fedorova > Fuel Devops Engineer > bookwar > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From m4d.coder at gmail.com Fri Dec 26 19:39:18 2014 From: m4d.coder at gmail.com (W Chan) Date: Fri, 26 Dec 2014 11:39:18 -0800 Subject: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs Message-ID: > What you?re saying is that whatever is under ?$.env? is just the exact same environment that we passed when we started the workflow? If yes then it definitely makes sense to me (it just allows to explicitly access environment, not through the implicit variable lookup). Please confirm. Yes. 
the $.env that I originally proposed would be the same dict as the one supplied at start_workflow. Although we have to agree whether the variables in the environment are allowed to change after the WF started. Unless there's a valid use case, I would lean toward making env immutable. > One thing that I strongly suggest is that we clearly define all reserved keys like 'env', '__actions' etc. I think it'd be better if they all started with the same prefix, for example, double underscore. Agree. How about using double underscore for env as well (i.e. $.__env.var1, $.__env.var2)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian.otto at rackspace.com Fri Dec 26 20:44:46 2014 From: adrian.otto at rackspace.com (Adrian Otto) Date: Fri, 26 Dec 2014 20:44:46 +0000 Subject: [openstack-dev] [Magnum] Proposed Changes to Magnum Core Message-ID: <61EAB960-3A1B-401E-9C1B-670DCCEB95A3@rackspace.com> Magnum Cores, I propose the following addition to the Magnum Core group[1]: + Motohiro/Yuanying Otsuka (ootsuka) Please let me know your votes by replying to this message. Thanks, Adrian [1] https://review.openstack.org/#/admin/groups/473,members Current Members From ajku.agr at gmail.com Fri Dec 26 21:14:28 2014 From: ajku.agr at gmail.com (Ajaya Agrawal) Date: Sat, 27 Dec 2014 02:44:28 +0530 Subject: [openstack-dev] [MagnetoDB] Andrey Ostapenko core nomination In-Reply-To: References: <549D6DCE.7080506@mirantis.com> Message-ID: +1 Cheers, Ajaya On Fri, Dec 26, 2014 at 8:22 PM, Dmitriy Ukhlov wrote: > Andrey is a very active contributor and a good team player, and he helped us a lot during > the previous development cycle. +1 from my side. > > On Fri, Dec 26, 2014 at 4:16 PM, isviridov wrote: >> Hello stackers and magnetians, >> >> I suggest nominating Andrey Ostapenko [1] to MagnetoDB cores. >> >> During the last few months he has made a huge contribution to MagnetoDB [2] >> Andrey drives Tempest and python-magnetodbclient successfully. >> >> Please raise your hands.
>> >> Thank you, >> Ilya Sviridov >> >> [1] http://stackalytics.com/report/users/aostapenko >> [2] http://stackalytics.com/report/contribution/magnetodb/90 >> >> >> > > > -- > Best regards, > Dmitriy Ukhlov > Mirantis Inc. > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgordon at redhat.com Fri Dec 26 23:07:15 2014 From: sgordon at redhat.com (Steve Gordon) Date: Fri, 26 Dec 2014 18:07:15 -0500 (EST) Subject: [openstack-dev] [Openstack] Need help in validating CPU Pinning feature In-Reply-To: References: <202b345c.f8bc.14a86476dc6.Coremail.ifzing@126.com> Message-ID: <257537232.3138680.1419635235848.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Srinivasa Rao Ragolu" > To: "joejiang" > > Hi Joejing, > > Thanks for the quick reply. The above xml is getting generated fine if I set > "vcpu_pin_set=1-12" in /etc/nova/nova.conf. > > But how to pin each vcpu to a pcpu, something like below > > > > > > > > > > One more question is: Are NUMA nodes compulsory for pinning each vcpu to a > pcpu? The specification for the CPU pinning functionality recently implemented in Nova is here: http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/virt-driver-cpu-pinning.html Note that exact vCPU to pCPU pinning is not exposed to the user, as this would require them to have direct knowledge of the host pCPU layout. Instead they request that the instance receive "dedicated" CPU resourcing and Nova handles allocation of pCPUs and pinning of vCPUs to them. Example usage: * Create a host aggregate and set metadata on it to indicate it is to be used for pinning; 'pinned' is used for the example but any key value can be used.
The same key must be used in later steps though:: $ nova aggregate-create cpu_pinning $ nova aggregate-set-metadata 1 pinned=true NB: For aggregates/flavors that won't be dedicated, set pinned=false. * Set all existing flavors to avoid this aggregate:: $ for FLAVOR in `nova flavor-list | cut -f 2 -d ' ' | grep -o [0-9]*`; do nova flavor-key ${FLAVOR} set "aggregate_instance_extra_specs:pinned"="false"; done * Create a flavor that has the extra spec "hw:cpu_policy" set to "dedicated". In this example it is created with an ID of 6, 2048 MB of RAM, a 20 GB drive, and 2 vCPUs:: $ nova flavor-create pinned.medium 6 2048 20 2 $ nova flavor-key 6 set "hw:cpu_policy"="dedicated" * Set the flavor to require the aggregate set aside for dedicated pinning of guests:: $ nova flavor-key 6 set "aggregate_instance_extra_specs:pinned"="true" * Add a compute host to the created aggregate (see nova host-list to get the host name(s)):: $ nova aggregate-add-host 1 my_packstack_host_name * Add the AggregateInstanceExtraSpecsFilter and CPUPinningFilter filters to the scheduler_default_filters in /etc/nova.conf:: scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter, ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter, AggregateInstanceExtraSpecsFilter,CPUPinningFilter NB: On the Kilo code base I believe the filter is NUMATopologyFilter * Restart the scheduler:: # systemctl restart openstack-nova-scheduler * After the above, as a normal (non-admin) user try to boot an instance with the newly created flavor:: $ nova boot --image fedora --flavor 6 test_pinning * Confirm the instance has successfully booted and that its vCPUs are pinned to _a single_ host CPU by observing the element of the generated domain XML:: # virsh list Id Name State ---------------------------------------------------- 2 instance-00000001 running # virsh dumpxml instance-00000001 ...
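As an aside, the flavor-ID extraction in the shell for-loop above (`cut -f 2 -d ' ' | grep -o [0-9]*`) is fairly fragile. A rough Python equivalent, run against a canned `nova flavor-list` table (sample output made up for illustration, not taken from a real deployment), might look like:

```python
# Canned sample of prettytable-style `nova flavor-list` output.
SAMPLE = """\
+----+---------------+-----------+-------+
| ID | Name          | Memory_MB | VCPUs |
+----+---------------+-----------+-------+
| 1  | m1.tiny       | 512       | 1     |
| 6  | pinned.medium | 2048      | 2     |
+----+---------------+-----------+-------+
"""

def flavor_ids(table):
    """Collect the first-column IDs from prettytable-style CLI output."""
    ids = []
    for line in table.splitlines():
        if not line.startswith("|"):
            continue  # skip the +---+ border rows
        first_cell = line.strip("|").split("|")[0].strip()
        if first_cell.isdigit():  # skips the "ID" header row
            ids.append(first_cell)
    return ids
```

Parsing the first cell of each data row avoids depending on exact character positions the way `cut` does; note that real flavor IDs may also be non-numeric UUIDs, in which case the `isdigit()` check would need loosening.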
2 -Steve From sdake at redhat.com Sat Dec 27 00:04:22 2014 From: sdake at redhat.com (Steven Dake) Date: Fri, 26 Dec 2014 17:04:22 -0700 Subject: [openstack-dev] [Magnum] Proposed Changes to Magnum Core In-Reply-To: <61EAB960-3A1B-401E-9C1B-670DCCEB95A3@rackspace.com> References: <61EAB960-3A1B-401E-9C1B-670DCCEB95A3@rackspace.com> Message-ID: <549DF786.8030500@redhat.com> On 12/26/2014 01:44 PM, Adrian Otto wrote: > Magnum Cores, > > I propose the following addition to the Magnum Core group[1]: > > + Motohiro/Yuanying Otsuka (ootsuka) > > Please let me know your votes by replying to this message. > > Thanks, > > Adrian > > [1] https://review.openstack.org/#/admin/groups/473,members Current Members > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > +1 from me Keep up the good work! -steve From davanum at gmail.com Sat Dec 27 01:39:54 2014 From: davanum at gmail.com (Davanum Srinivas) Date: Fri, 26 Dec 2014 20:39:54 -0500 Subject: [openstack-dev] [Magnum] Proposed Changes to Magnum Core In-Reply-To: <61EAB960-3A1B-401E-9C1B-670DCCEB95A3@rackspace.com> References: <61EAB960-3A1B-401E-9C1B-670DCCEB95A3@rackspace.com> Message-ID: +1 welcome! On Dec 26, 2014 3:48 PM, "Adrian Otto" wrote: > Magnum Cores, > > I propose the following addition to the Magnum Core group[1]: > > + Motohiro/Yuanying Otsuka (ootsuka) > > Please let me know your votes by replying to this message. > > Thanks, > > Adrian > > [1] https://review.openstack.org/#/admin/groups/473,members Current > Members > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sdake at redhat.com Sat Dec 27 05:57:45 2014 From: sdake at redhat.com (Steven Dake) Date: Fri, 26 Dec 2014 22:57:45 -0700 Subject: [openstack-dev] [Magnum] Proposed Changes to Magnum Core In-Reply-To: <61EAB960-3A1B-401E-9C1B-670DCCEB95A3@rackspace.com> References: <61EAB960-3A1B-401E-9C1B-670DCCEB95A3@rackspace.com> Message-ID: <549E4A59.1050201@redhat.com> On 12/26/2014 01:44 PM, Adrian Otto wrote: > Magnum Cores, > > I propose the following addition to the Magnum Core group[1]: > > + Motohiro/Yuanying Otsuka (ootsuka) > > Please let me know your votes by replying to this message. > > Thanks, > > Adrian > > [1] https://review.openstack.org/#/admin/groups/473,members Current Members > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > Abishek Chanda gave a +1 on IRC, but indicated his mail client is busted, so I am voting as his proxy. Feel free to sync on IRC if necessary :) Regards -steve From ikhudoshyn at mirantis.com Sat Dec 27 07:56:16 2014 From: ikhudoshyn at mirantis.com (Illia Khudoshyn) Date: Sat, 27 Dec 2014 08:56:16 +0100 Subject: [openstack-dev] [MagnetoDB] Andrey Ostapenko core nomination In-Reply-To: <549D6DCE.7080506@mirantis.com> References: <549D6DCE.7080506@mirantis.com> Message-ID: <-1427670325913770469@unknownmsgid> +1 -- Best regards, Illia Khudoshyn, Software Engineer, Mirantis, Inc. 38, Lenina ave. Kharkov, Ukraine www.mirantis.com www.mirantis.ru Skype: gluke_work ikhudoshyn at mirantis.com On 26 Dec 2014, at 15:16, isviridov wrote: Hello stackers and magnetians, I suggest nominating Andrey Ostapenko [1] to MagnetoDB cores. During the last few months he has made a huge contribution to MagnetoDB [2] Andrey drives Tempest and python-magnetodbclient successfully. Please raise your hands.
Thank you, Ilya Sviridov [1] http://stackalytics.com/report/users/aostapenko [2] http://stackalytics.com/report/contribution/magnetodb/90 -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian.otto at rackspace.com Sat Dec 27 07:56:26 2014 From: adrian.otto at rackspace.com (Adrian Otto) Date: Sat, 27 Dec 2014 07:56:26 +0000 Subject: [openstack-dev] [Solum] Addition to solum core Message-ID: <11D5ED2E-A3E7-426F-9F87-8C2757506932@rackspace.com> Solum cores, I propose the following addition to the solum-core group[1]: + Ed Cranford (ed--cranford) Please reply to this email to indicate your votes. Thanks, Adrian Otto [1] https://review.openstack.org/#/admin/groups/229,members Current Members From yueli.m at gmail.com Sat Dec 27 15:03:51 2014 From: yueli.m at gmail.com (James Y. Li) Date: Sat, 27 Dec 2014 09:03:51 -0600 Subject: [openstack-dev] [Solum] Addition to solum core In-Reply-To: <11D5ED2E-A3E7-426F-9F87-8C2757506932@rackspace.com> References: <11D5ED2E-A3E7-426F-9F87-8C2757506932@rackspace.com> Message-ID: +1! -James Li On Dec 27, 2014 2:02 AM, "Adrian Otto" wrote: > Solum cores, > > I propose the following addition to the solum-core group[1]: > > + Ed Cranford (ed--cranford) > > Please reply to this email to indicate your votes. > > Thanks, > > Adrian Otto > > [1] https://review.openstack.org/#/admin/groups/229,members Current > Members > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From devdatta.kulkarni at RACKSPACE.COM Sat Dec 27 18:02:48 2014 From: devdatta.kulkarni at RACKSPACE.COM (Devdatta Kulkarni) Date: Sat, 27 Dec 2014 18:02:48 +0000 Subject: [openstack-dev] [Solum] Addition to solum core In-Reply-To: References: <11D5ED2E-A3E7-426F-9F87-8C2757506932@rackspace.com>, Message-ID: <2C3448F15339494A8F56D64C997C071477989EDE@ORD1EXD04.RACKSPACE.CORP> +1 ________________________________ From: James Y. Li [yueli.m at gmail.com] Sent: Saturday, December 27, 2014 9:03 AM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] [Solum] Addition to solum core +1! -James Li On Dec 27, 2014 2:02 AM, "Adrian Otto" > wrote: Solum cores, I propose the following addition to the solum-core group[1]: + Ed Cranford (ed--cranford) Please reply to this email to indicate your votes. Thanks, Adrian Otto [1] https://review.openstack.org/#/admin/groups/229,members Current Members _______________________________________________ OpenStack-dev mailing list OpenStack-dev at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jp at jamezpolley.com Sun Dec 28 07:57:41 2014 From: jp at jamezpolley.com (James Polley) Date: Sun, 28 Dec 2014 08:57:41 +0100 Subject: [openstack-dev] [TripleO] CI/CD report - 2014-12-20 - 2014-12-27 Message-ID: It's been a bad week for CI, mostly due to setuptools. Cores, please review https://review.openstack.org/#/c/144184/ immediately, as CI is currently broken. 2014-12-19 - Neutron committed a change which had a symlink. This broke "pip install neutron", which broke CI for around 6 hours. 2014-12-22 - the release of pip 6.0 immediately triggered two distinct issues. One was fixed by 6.0.2 being released, the other required a patch to version specifiers in ceilometer. In total, CI was broken for around 24 hours. 2014-12-24 - new nodepool images were built which contained pip 6.0. 
This triggered another issue related to the fact that pip now creates ~/.cache if it doesn't already exist. At the time of writing, CI has been broken for ~3.5 days. https://review.openstack.org/#/c/144184/ seems to fix the problem, but it needs to get review from cores before it can land. Not listed here: setuptools 8.4 was released, then pulled after it was found to have problems installing/upgrading many packages. Because our CI was already broken, this had no noticeable effect. As always, most of this information is pulled from DerekH's notes on https://etherpad.openstack.org/p/tripleo-ci-breakages and more details can be found there. -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuxinguo at huawei.com Mon Dec 29 01:26:42 2014 From: liuxinguo at huawei.com (liuxinguo) Date: Mon, 29 Dec 2014 01:26:42 +0000 Subject: [openstack-dev] [docs] Question about submit docs Message-ID: We have opened a bug for the document of the Huawei storage driver, and have posted the document at https://review.openstack.org/#/c/143926/. Now I have two things that are not very clear: 1. When will this document be merged into Kilo? At the end of K-2 or a little earlier? 2. What should we do if we want to amend the document in both IceHouse and Juno? Should I use "git cherry-pick", and how do I use it? Any input will be appreciated, thanks! -- Liu ________________________________ Huawei Technologies Co., Ltd. [Company_logo] Phone: Fax: Mobile: Email: Huawei Technologies Co., Ltd. Bantian, Longgang District, Shenzhen 518129, P.R.China http://www.huawei.com ________________________________ This e-mail and its attachments contain confidential information from HUAWEI, which is intended only for the person or entity whose address is listed above.
Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 6737 bytes Desc: image001.jpg URL: From anne at openstack.org Mon Dec 29 02:35:50 2014 From: anne at openstack.org (Anne Gentle) Date: Sun, 28 Dec 2014 20:35:50 -0600 Subject: [openstack-dev] [docs] Question about submit docs In-Reply-To: References: Message-ID: On Sun, Dec 28, 2014 at 7:26 PM, liuxinguo wrote: > We have opened a bug for document of huawei storage driver, and have > posted the document at https://review.openstack.org/#/c/143926/?. > > Now I have two things not very clear: > > 1. When will be this document merged into Kilo? At the end of K-2 or will > a little earlier? > It'll merge into the master branch for openstack-manuals, probably this week. I'll add some review comments (my) tomorrow. > > 2. What should we do if we want amend the document both in IceHouse and > Juno? Should I use ?git cherry-pick? and how to use ?git cherry-pick?? > If you want this in prior release branches, use these instructions: https://wiki.openstack.org/wiki/Documentation/HowTo#How_to_a_cherry-pick_a_change_to_a_stable_branch Thanks for the doc patch, hope this information helps. Anne > > > Any input will be appreciated, thanks! > -- > Liu > > > > > ------------------------------ > > ??? > ???????? Huawei Technologies Co., Ltd. > [image: Company_logo] > > Phone: > Fax: > Mobile: > Email: > ??????????????? ???518129 > Huawei Technologies Co., Ltd. 
> Bantian, Longgang District,Shenzhen 518129, P.R.China > http://www.huawei.com > ------------------------------ > > This e-mail and its attachments contain confidential information from > HUAWEI, which > is intended only for the person or entity whose address is listed above. > Any use of the > information contained herein in any way (including, but not limited to, > total or partial > disclosure, reproduction, or dissemination) by persons other than the > intended > recipient(s) is prohibited. If you receive this e-mail in error, please > notify the sender by > phone or email immediately and delete it! > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 6737 bytes Desc: not available URL: From rakhmerov at mirantis.com Mon Dec 29 04:52:56 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Mon, 29 Dec 2014 10:52:56 +0600 Subject: [openstack-dev] [mistral] Cancelling team meetings today and next Monday Message-ID: <975F99EF-DE99-44EA-A79E-6C3A41A03BCF@mirantis.com> Hi, Let's cancel the meeting today and next week. I think they won't gather enough people because of the New Year holidays. Thanks Renat Akhmerov @ Mirantis Inc.
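The stable-branch cherry-pick flow Anne points to above can be sketched end-to-end with a throwaway repository; the repo layout, branch, and file names here are made up for illustration and are not the real openstack-manuals tree:

```python
# Minimal cherry-pick-to-stable demo: the fix lands on the default branch,
# then `git cherry-pick -x <sha>` copies it onto a stable branch.
import pathlib
import subprocess
import tempfile


def run(repo, *args):
    """Run a git command inside `repo` and return its stdout."""
    return subprocess.run(["git", "-C", repo] + list(args),
                          check=True, capture_output=True, text=True).stdout


repo = tempfile.mkdtemp()
run(repo, "init", "-q")
run(repo, "config", "user.email", "demo@example.com")  # throwaway identity
run(repo, "config", "user.name", "Demo")

doc = pathlib.Path(repo, "doc.rst")
doc.write_text("v1\n")
run(repo, "add", "doc.rst")
run(repo, "commit", "-qm", "initial")
run(repo, "branch", "stable/juno")  # the stable branch forks here

doc.write_text("v1\nfix\n")  # the doc fix lands on the default branch
run(repo, "add", "doc.rst")
run(repo, "commit", "-qm", "driver doc fix")
sha = run(repo, "rev-parse", "HEAD").strip()

run(repo, "checkout", "-q", "stable/juno")
run(repo, "cherry-pick", "-x", sha)  # -x records the original commit sha
fixed = doc.read_text()
```

The `-x` flag appends a "(cherry picked from commit ...)" line to the backported commit message, which the stable-branch workflow generally relies on to trace backports.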
From rakhmerov at mirantis.com Mon Dec 29 04:54:45 2014 From: rakhmerov at mirantis.com (Renat Akhmerov) Date: Mon, 29 Dec 2014 10:54:45 +0600 Subject: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs In-Reply-To: References: Message-ID: > On 27 Dec 2014, at 01:39, W Chan wrote: > > > What you're saying is that whatever is under "$.env" is just the exact same environment that we passed when we started the workflow? If yes then it definitely makes sense to me (it just allows to explicitly access the environment, not through the implicit variable lookup). Please confirm. > Yes. The $.env that I originally proposed would be the same dict as the one supplied at start_workflow. Although we have to agree whether the variables in the environment are allowed to change after the WF started. Unless there's a valid use case, I would lean toward making env immutable. Let's make them immutable. > > One thing that I strongly suggest is that we clearly define all reserved keys like "env", "__actions" etc. I think it'd be better if they all started with the same prefix, for example, double underscore. > > Agree. How about using double underscore for env as well (i.e. $.__env.var1, $.__env.var2)? Yes. Renat Akhmerov @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichavero at chavero.com.mx Mon Dec 29 05:15:39 2014 From: ichavero at chavero.com.mx (=?UTF-8?B?SXbDoW4gQ2hhdmVybw==?=) Date: Sun, 28 Dec 2014 22:15:39 -0700 Subject: [openstack-dev] [Containers][docker] Networking problem Message-ID: <54A0E37B.9020908@chavero.com.mx> Hello, I've installed OpenStack with Docker as the hypervisor on a cubietruck. Everything seems to work OK, but the container IP does not respond to pings, nor to the service I'm running inside the container (nginx on port 80).
I checked how nova created the container and it looks like everything is in place: # nova list +--------------------------------------+---------------+--------+------------+-------------+----------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+---------------+--------+------------+-------------+----------------------+ | 249df778-b2b6-490c-9dce-1126f8f337f3 | test_nginx_13 | ACTIVE | - | Running | public=192.168.1.135 | +--------------------------------------+---------------+--------+------------+-------------+----------------------+ # docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 89b59bf9f442 sotolitolabs/nginx_arm:latest "/usr/sbin/nginx" 6 hours ago Up 6 hours nova-249df778-b2b6-490c-9dce-1126f8f337f3 A funny thing that i noticed but i'm not really sure it's relevant, the docker container does not show network info when created by nova: # docker inspect 89b59bf9f442 .... unnecesary output.... "NetworkSettings": { "Bridge": "", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "PortMapping": null, "Ports": null }, # neutron router-list +--------------------------------------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+ | id | name | external_gateway_info | distributed | ha | +--------------------------------------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+ | f8dc7e15-1087-4681-b495-217ecfa95189 | router1 | {"network_id": "160add9a-2d2e-45ab-8045-68b334d29418", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", "ip_address": "192.168.1.120"}]} | False | False | 
+--------------------------------------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+ # neutron subnet-list +--------------------------------------+----------------+----------------+----------------------------------------------------+ | id | name | cidr | allocation_pools | +--------------------------------------+----------------+----------------+----------------------------------------------------+ | 34995548-bc2b-4d33-bdb2-27443c01e483 | private_subnet | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} | | 1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d | public_subnet | 192.168.1.0/24 | {"start": "192.168.1.120", "end": "192.168.1.200"} | +--------------------------------------+----------------+----------------+----------------------------------------------------+ # neutron port-list +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ | id | name | mac_address | fixed_ips | +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ | 863eb9a3-461c-4016-9bd1-7c4c7210db98 | | fa:16:3e:24:7b:2c | {"subnet_id": "34995548-bc2b-4d33-bdb2-27443c01e483", "ip_address": "10.0.0.2"} | | bbe59188-ab4e-4b92-a578-bbc2d6759295 | | fa:16:3e:1c:04:6a | {"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", "ip_address": "192.168.1.135"} | | c8b94a90-c7d1-44fc-a582-3370f5486d26 | | fa:16:3e:6f:69:71 | {"subnet_id": "34995548-bc2b-4d33-bdb2-27443c01e483", "ip_address": "10.0.0.1"} | | f108b583-0d54-4388-bcc0-f8d1cbe6efd4 | | fa:16:3e:bb:3a:1b | {"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", "ip_address": "192.168.1.120"} | 
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ the network namespace is being created: # ip netns exec 89b59bf9f442a0d468d9d4d8c9370c53f8e4a3ba4d8affcd6be8b2dde84fff64 ifconfig lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10 loop txqueuelen 0 (Local Loopback) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 nsbbe59188-ab: flags=4163 mtu 1500 inet 192.168.1.135 netmask 255.255.255.0 broadcast 192.168.1.255 inet6 fe80::f816:3eff:fe1c:46a prefixlen 64 scopeid 0x20 ether fa:16:3e:1c:04:6a txqueuelen 1000 (Ethernet) RX packets 8 bytes 648 (648.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 8 bytes 648 (648.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 When I try a ping it does not return: # ping -c3 192.168.1.135 PING 192.168.1.135 (192.168.1.135) 56(84) bytes of data. From 192.168.1.65 icmp_seq=1 Destination Host Unreachable From 192.168.1.65 icmp_seq=2 Destination Host Unreachable From 192.168.1.65 icmp_seq=3 Destination Host Unreachable --- 192.168.1.135 ping statistics --- 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2008ms I tried adding the interface to the bridge br-ex but it didn't work either: # ip netns exec 89b59bf9f442a0d468d9d4d8c9370c53f8e4a3ba4d8affcd6be8b2dde84fff64 ovs-vsctl add-port br-ex nsbbe59188-ab Relevant log extract: 2014-12-28T22:57:59.577Z|00700|bridge|WARN|could not open network device nsbbe59188-ab (No such Even with the error, the port appears: # ovs-vsctl list-ports br-ex eth0 nsbbe59188-ab I think this is a bridging problem but I'm not sure. Can somebody give me a hint?
Thanks Ivan -- Iván Chavero Hacker From jay.lau.513 at gmail.com Mon Dec 29 05:49:38 2014 From: jay.lau.513 at gmail.com (Jay Lau) Date: Mon, 29 Dec 2014 13:49:38 +0800 Subject: [openstack-dev] [Containers][docker] Networking problem In-Reply-To: <54A0E37B.9020908@chavero.com.mx> References: <54A0E37B.9020908@chavero.com.mx> Message-ID: There is no problem with your cluster; it is working well. With the nova docker driver, you need to use the network namespace to check the network, as you did: 2014-12-29 13:15 GMT+08:00 Iván Chavero : > Hello, > > I've installed OpenStack with Docker as the hypervisor on a cubietruck. > Everything > seems to work ok, but the container IP does not respond to pings, > nor does > the service I'm running inside the container (nginx on port 80). > > I checked how nova created the container and it looks like everything is > in place: > > # nova list > +--------------------------------------+---------------+---- > ----+------------+-------------+----------------------+ > | ID | Name | Status | Task > State | Power State | Networks | > +--------------------------------------+---------------+---- > ----+------------+-------------+----------------------+ > | 249df778-b2b6-490c-9dce-1126f8f337f3 | test_nginx_13 | ACTIVE | - > | Running | public=192.168.1.135 | > +--------------------------------------+---------------+---- > ----+------------+-------------+----------------------+ > > > # docker ps > CONTAINER ID IMAGE COMMAND CREATED STATUS > PORTS NAMES > 89b59bf9f442 sotolitolabs/nginx_arm:latest "/usr/sbin/nginx" 6 > hours ago Up 6 hours nova-249df778-b2b6-490c-9dce-1126f8f337f3 > > > A funny thing that i noticed but i'm not really sure it's relevant, the > docker container > does not show network info when created by nova: > > # docker inspect 89b59bf9f442 > > .... unnecesary output....
> > "NetworkSettings": { > "Bridge": "", > "Gateway": "", > "IPAddress": "", > "IPPrefixLen": 0, > "PortMapping": null, > "Ports": null > }, > > > > > # neutron router-list > +--------------------------------------+---------+---------- > ------------------------------------------------------------ > ------------------------------------------------------------ > ---------------------------------------------------------+-- > -----------+-------+ > | id | name | external_gateway_info | > distributed | ha | > +--------------------------------------+---------+---------- > ------------------------------------------------------------ > ------------------------------------------------------------ > ---------------------------------------------------------+-- > -----------+-------+ > | f8dc7e15-1087-4681-b495-217ecfa95189 | router1 | {"network_id": > "160add9a-2d2e-45ab-8045-68b334d29418", "enable_snat": true, > "external_fixed_ips": [{"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", > "ip_address": "192.168.1.120"}]} | False | False | > +--------------------------------------+---------+---------- > ------------------------------------------------------------ > ------------------------------------------------------------ > ---------------------------------------------------------+-- > -----------+-------+ > > > # neutron subnet-list > +--------------------------------------+----------------+--- > -------------+----------------------------------------------------+ > | id | name | cidr | > allocation_pools | > +--------------------------------------+----------------+--- > -------------+----------------------------------------------------+ > | 34995548-bc2b-4d33-bdb2-27443c01e483 | private_subnet | 10.0.0.0/24 > | {"start": "10.0.0.2", "end": "10.0.0.254"} | > | 1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d | public_subnet | 192.168.1.0/24 > | {"start": "192.168.1.120", "end": "192.168.1.200"} | > +--------------------------------------+----------------+--- > 
-------------+----------------------------------------------------+ > > > > > # neutron port-list > +--------------------------------------+------+------------- > ------+----------------------------------------------------- > ---------------------------------+ > | id | name | mac_address | > fixed_ips | > +--------------------------------------+------+------------- > ------+----------------------------------------------------- > ---------------------------------+ > | 863eb9a3-461c-4016-9bd1-7c4c7210db98 | | fa:16:3e:24:7b:2c | > {"subnet_id": "34995548-bc2b-4d33-bdb2-27443c01e483", "ip_address": > "10.0.0.2"} | > | bbe59188-ab4e-4b92-a578-bbc2d6759295 | | fa:16:3e:1c:04:6a | > {"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", "ip_address": > "192.168.1.135"} | > | c8b94a90-c7d1-44fc-a582-3370f5486d26 | | fa:16:3e:6f:69:71 | > {"subnet_id": "34995548-bc2b-4d33-bdb2-27443c01e483", "ip_address": > "10.0.0.1"} | > | f108b583-0d54-4388-bcc0-f8d1cbe6efd4 | | fa:16:3e:bb:3a:1b | > {"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", "ip_address": > "192.168.1.120"} | > +--------------------------------------+------+------------- > ------+----------------------------------------------------- > ---------------------------------+ > > > > the network namespace is being created: > > # ip netns exec 89b59bf9f442a0d468d9d4d8c9370c > 53f8e4a3ba4d8affcd6be8b2dde84fff64 ifconfig > lo: flags=73 mtu 65536 > inet 127.0.0.1 netmask 255.0.0.0 > inet6 ::1 prefixlen 128 scopeid 0x10 > loop txqueuelen 0 (Local Loopback) > RX packets 0 bytes 0 (0.0 B) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 0 bytes 0 (0.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > nsbbe59188-ab: flags=4163 mtu 1500 > inet 192.168.1.135 netmask 255.255.255.0 broadcast 192.168.1.255 > inet6 fe80::f816:3eff:fe1c:46a prefixlen 64 scopeid 0x20 > ether fa:16:3e:1c:04:6a txqueuelen 1000 (Ethernet) > RX packets 8 bytes 648 (648.0 B) > RX errors 0 dropped 0 overruns 0 frame 0 > TX packets 8 
bytes 648 (648.0 B) > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 > > > when i try a ping it does not return: > > # ping -c3 192.168.1.135 > PING 192.168.1.135 (192.168.1.135) 56(84) bytes of data. > From 192.168.1.65 icmp_seq=1 Destination Host Unreachable > From 192.168.1.65 icmp_seq=2 Destination Host Unreachable > From 192.168.1.65 icmp_seq=3 Destination Host Unreachable > > --- 192.168.1.135 ping statistics --- > 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2008ms > > > i tried adding the interface to the bridge br-ex but it didn's work either: > > # ip netns exec 89b59bf9f442a0d468d9d4d8c9370c > 53f8e4a3ba4d8affcd6be8b2dde84fff64 ovs-vsctl add-port br-ex nsbbe59188-ab > > relevant log extract: > > 2014-12-28T22:57:59.577Z|00700|bridge|WARN|could not open network device > nsbbe59188-ab (No such > > > even with error it appears > > # ovs-vsctl list-ports br-ex > eth0 > nsbbe59188-ab > > > i think this is a bridging problem but i'm not sure. can somebody give me > a hint? > > Thanks > Ivan > > > > > > > > > > > > -- > Iv?n Chavero > Hacker > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanks, Jay -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay.lau.513 at gmail.com Mon Dec 29 06:04:09 2014 From: jay.lau.513 at gmail.com (Jay Lau) Date: Mon, 29 Dec 2014 14:04:09 +0800 Subject: [openstack-dev] [Containers][docker] Networking problem In-Reply-To: References: <54A0E37B.9020908@chavero.com.mx> Message-ID: There is no problem for your cluster, it is working well. 
With the nova docker driver, you need to use the network namespace to check the network, as you did: # ip netns exec 89b59bf9f442a0d468d9d4d8c9370c53f8e4a3ba4d8affcd6be8b2dde84fff64 ifconfig lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10 loop txqueuelen 0 (Local Loopback) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 nsbbe59188-ab: flags=4163 mtu 1500 inet 192.168.1.135 netmask 255.255.255.0 broadcast 192.168.1.255 inet6 fe80::f816:3eff:fe1c:46a prefixlen 64 scopeid 0x20 ether fa:16:3e:1c:04:6a txqueuelen 1000 (Ethernet) RX packets 8 bytes 648 (648.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 8 bytes 648 (648.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 The nova docker driver uses the following logic to set up the network: https://github.com/stackforge/nova-docker/blob/master/novadocker/virt/docker/driver.py#L419 You can follow the approach in the attached image to set up your network; this logic is used in the nova docker driver. 2014-12-29 13:49 GMT+08:00 Jay Lau : > There is no problem with your cluster; it is working well. With the nova docker > driver, you need to use the network namespace to check the network, as you did: > > > 2014-12-29 13:15 GMT+08:00 Iván Chavero : > >> Hello, >> >> I've installed OpenStack with Docker as the hypervisor on a cubietruck. >> Everything >> seems to work ok, but the container IP does not respond to pings, >> nor does >> the service I'm running inside the container (nginx on port 80).
>> >> I checked how nova created the container and it looks like everything is >> in place: >> >> # nova list >> +--------------------------------------+---------------+---- >> ----+------------+-------------+----------------------+ >> | ID | Name | Status | Task >> State | Power State | Networks | >> +--------------------------------------+---------------+---- >> ----+------------+-------------+----------------------+ >> | 249df778-b2b6-490c-9dce-1126f8f337f3 | test_nginx_13 | ACTIVE | - >> | Running | public=192.168.1.135 | >> +--------------------------------------+---------------+---- >> ----+------------+-------------+----------------------+ >> >> >> # docker ps >> CONTAINER ID IMAGE COMMAND CREATED STATUS >> PORTS NAMES >> 89b59bf9f442 sotolitolabs/nginx_arm:latest "/usr/sbin/nginx" 6 >> hours ago Up 6 hours nova-249df778-b2b6-490c-9dce-1126f8f337f3 >> >> >> A funny thing that i noticed but i'm not really sure it's relevant, the >> docker container >> does not show network info when created by nova: >> >> # docker inspect 89b59bf9f442 >> >> .... unnecesary output.... 
>> >> "NetworkSettings": { >> "Bridge": "", >> "Gateway": "", >> "IPAddress": "", >> "IPPrefixLen": 0, >> "PortMapping": null, >> "Ports": null >> }, >> >> >> >> >> # neutron router-list >> +--------------------------------------+---------+---------- >> ------------------------------------------------------------ >> ------------------------------------------------------------ >> ---------------------------------------------------------+-- >> -----------+-------+ >> | id | name | external_gateway_info >> | distributed | ha | >> +--------------------------------------+---------+---------- >> ------------------------------------------------------------ >> ------------------------------------------------------------ >> ---------------------------------------------------------+-- >> -----------+-------+ >> | f8dc7e15-1087-4681-b495-217ecfa95189 | router1 | {"network_id": >> "160add9a-2d2e-45ab-8045-68b334d29418", "enable_snat": true, >> "external_fixed_ips": [{"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", >> "ip_address": "192.168.1.120"}]} | False | False | >> +--------------------------------------+---------+---------- >> ------------------------------------------------------------ >> ------------------------------------------------------------ >> ---------------------------------------------------------+-- >> -----------+-------+ >> >> >> # neutron subnet-list >> +--------------------------------------+----------------+--- >> -------------+----------------------------------------------------+ >> | id | name | cidr >> | allocation_pools | >> +--------------------------------------+----------------+--- >> -------------+----------------------------------------------------+ >> | 34995548-bc2b-4d33-bdb2-27443c01e483 | private_subnet | 10.0.0.0/24 >> | {"start": "10.0.0.2", "end": "10.0.0.254"} | >> | 1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d | public_subnet | 192.168.1.0/24 >> | {"start": "192.168.1.120", "end": "192.168.1.200"} | >> 
+--------------------------------------+----------------+--- >> -------------+----------------------------------------------------+ >> >> >> >> >> # neutron port-list >> +--------------------------------------+------+------------- >> ------+----------------------------------------------------- >> ---------------------------------+ >> | id | name | mac_address | >> fixed_ips | >> +--------------------------------------+------+------------- >> ------+----------------------------------------------------- >> ---------------------------------+ >> | 863eb9a3-461c-4016-9bd1-7c4c7210db98 | | fa:16:3e:24:7b:2c | >> {"subnet_id": "34995548-bc2b-4d33-bdb2-27443c01e483", "ip_address": >> "10.0.0.2"} | >> | bbe59188-ab4e-4b92-a578-bbc2d6759295 | | fa:16:3e:1c:04:6a | >> {"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", "ip_address": >> "192.168.1.135"} | >> | c8b94a90-c7d1-44fc-a582-3370f5486d26 | | fa:16:3e:6f:69:71 | >> {"subnet_id": "34995548-bc2b-4d33-bdb2-27443c01e483", "ip_address": >> "10.0.0.1"} | >> | f108b583-0d54-4388-bcc0-f8d1cbe6efd4 | | fa:16:3e:bb:3a:1b | >> {"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", "ip_address": >> "192.168.1.120"} | >> +--------------------------------------+------+------------- >> ------+----------------------------------------------------- >> ---------------------------------+ >> >> >> >> the network namespace is being created: >> >> # ip netns exec 89b59bf9f442a0d468d9d4d8c9370c >> 53f8e4a3ba4d8affcd6be8b2dde84fff64 ifconfig >> lo: flags=73 mtu 65536 >> inet 127.0.0.1 netmask 255.0.0.0 >> inet6 ::1 prefixlen 128 scopeid 0x10 >> loop txqueuelen 0 (Local Loopback) >> RX packets 0 bytes 0 (0.0 B) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 0 bytes 0 (0.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> nsbbe59188-ab: flags=4163 mtu 1500 >> inet 192.168.1.135 netmask 255.255.255.0 broadcast 192.168.1.255 >> inet6 fe80::f816:3eff:fe1c:46a prefixlen 64 scopeid 0x20 >> ether fa:16:3e:1c:04:6a 
txqueuelen 1000 (Ethernet) >> RX packets 8 bytes 648 (648.0 B) >> RX errors 0 dropped 0 overruns 0 frame 0 >> TX packets 8 bytes 648 (648.0 B) >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 >> >> >> when i try a ping it does not return: >> >> # ping -c3 192.168.1.135 >> PING 192.168.1.135 (192.168.1.135) 56(84) bytes of data. >> From 192.168.1.65 icmp_seq=1 Destination Host Unreachable >> From 192.168.1.65 icmp_seq=2 Destination Host Unreachable >> From 192.168.1.65 icmp_seq=3 Destination Host Unreachable >> >> --- 192.168.1.135 ping statistics --- >> 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time >> 2008ms >> >> >> i tried adding the interface to the bridge br-ex but it didn's work >> either: >> >> # ip netns exec 89b59bf9f442a0d468d9d4d8c9370c >> 53f8e4a3ba4d8affcd6be8b2dde84fff64 ovs-vsctl add-port br-ex nsbbe59188-ab >> >> relevant log extract: >> >> 2014-12-28T22:57:59.577Z|00700|bridge|WARN|could not open network device >> nsbbe59188-ab (No such >> >> >> even with error it appears >> >> # ovs-vsctl list-ports br-ex >> eth0 >> nsbbe59188-ab >> >> >> i think this is a bridging problem but i'm not sure. can somebody give me >> a hint? >> >> Thanks >> Ivan >> >> >> >> >> >> >> >> >> >> >> >> -- >> Iv?n Chavero >> Hacker >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Thanks, > > Jay > -- Thanks, Jay -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: networkNone.jpg Type: image/jpeg Size: 210407 bytes Desc: not available URL: From sragolu at mvista.com Mon Dec 29 07:44:32 2014 From: sragolu at mvista.com (Srinivasa Rao Ragolu) Date: Mon, 29 Dec 2014 13:14:32 +0530 Subject: [openstack-dev] [Openstack] Need help in validating CPU Pinning feature In-Reply-To: <257537232.3138680.1419635235848.JavaMail.zimbra@redhat.com> References: <202b345c.f8bc.14a86476dc6.Coremail.ifzing@126.com> <257537232.3138680.1419635235848.JavaMail.zimbra@redhat.com> Message-ID: Hi Steve, Thank you so much for your reply and the detailed steps. I am using a devstack setup with nova at the master commit. As I could not find the CPUPinningFilter implementation in the source, I have used NUMATopologyFilter. But the same problem exists: I cannot see any vcpupin in the guest XML. Please see the vcpu section from the XML below.

<nova:name>test_pinning</nova:name>
<nova:creationTime>2014-12-29 07:30:04</nova:creationTime>
<nova:flavor>
  <nova:memory>2048</nova:memory>
  <nova:disk>20</nova:disk>
  <nova:swap>0</nova:swap>
  <nova:ephemeral>0</nova:ephemeral>
  <nova:vcpus>2</nova:vcpus>
</nova:flavor>
<nova:owner>
  <nova:user>admin</nova:user>
  <nova:project>admin</nova:project>
</nova:owner>
<memory>2097152</memory>
<currentMemory>2097152</currentMemory>
<vcpu>2</vcpu>

Kindly suggest which branch of nova I need to take to validate the pinning feature. Also, let me know whether CPUPinningFilter is required to validate the pinning feature. Thanks a lot, Srinivas. On Sat, Dec 27, 2014 at 4:37 AM, Steve Gordon wrote: > ----- Original Message ----- > > From: "Srinivasa Rao Ragolu" > > To: "joejiang" > > > > Hi Joejing, > > > > Thanks for quick reply. Above xml is getting generated fine if I set > > "vcpu_pin_set=1-12" in /etc/nova/nova.conf. > > > > But how to pin each vcpu with pcpu something like below > > > > > > > > > > > > > > > > > > > > One more questions is Are Numa nodes are compulsory for pin each vcpu to > > pcpu? > > The specification for the CPU pinning functionality recently implemented > in Nova is here: > > > http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/virt-driver-cpu-pinning.html > > Note that exact vCPU to pCPU pinning is not exposed to the user as this > would require them to have direct knowledge of the host pCPU layout.
> Instead they request that the instance receive "dedicated" CPU resourcing > and Nova handles allocation of pCPUs and pinning of vCPUs to them. > > Example usage: > > * Create a host aggregate and set metadata on it to indicate it is to > be used for pinning; 'pinned' is used for the example but any key value can > be used. The same key must be used in later steps though:: > > $ nova aggregate-create cpu_pinning > $ nova aggregate-set-metadata 1 pinned=true > > NB: For aggregates/flavors that won't be dedicated, set pinned=false. > > * Set all existing flavors to avoid this aggregate:: > > $ for FLAVOR in `nova flavor-list | cut -f 2 -d ' ' | grep -o [0-9]*`; > do nova flavor-key ${FLAVOR} set > "aggregate_instance_extra_specs:pinned"="false"; done > > * Create a flavor that has the extra spec "hw:cpu_policy" set to "dedicated". In > this example it is created with an ID of 6, 2048 MB of RAM, a 20 GB drive, and 2 > vCPUs:: > > $ nova flavor-create pinned.medium 6 2048 20 2 > $ nova flavor-key 6 set "hw:cpu_policy"="dedicated" > > * Set the flavor to require the aggregate set aside for dedicated pinning > of guests:: > > $ nova flavor-key 6 set "aggregate_instance_extra_specs:pinned"="true" > > * Add a compute host to the created aggregate (see nova host-list to get > the host name(s)):: > > $ nova aggregate-add-host 1 my_packstack_host_name > > * Add the AggregateInstanceExtraSpecsFilter and CPUPinningFilter filters > to the scheduler_default_filters in /etc/nova/nova.conf:: > > scheduler_default_filters = > RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter, > > ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter, > > AggregateInstanceExtraSpecsFilter,CPUPinningFilter > > NB: On the Kilo code base I believe the filter is NUMATopologyFilter > > * Restart the scheduler:: > > # systemctl restart openstack-nova-scheduler > > * After the above - with a normal (non-admin) user, try to boot an instance > with the newly
created flavor:: > > $ nova boot --image fedora --flavor 6 test_pinning > > * Confirm the instance has successfully booted and that its vCPUs are > pinned to _a single_ host CPU by observing > the <vcpu> element of the generated domain XML:: > > # virsh list > Id Name State > ---------------------------------------------------- > 2 instance-00000001 running > # virsh dumpxml instance-00000001 > ... > <vcpu>2</vcpu> > > > > > > > -Steve > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From isviridov at mirantis.com Mon Dec 29 10:48:30 2014 From: isviridov at mirantis.com (isviridov) Date: Mon, 29 Dec 2014 12:48:30 +0200 Subject: [openstack-dev] [MagnetoDB] Andrey Ostapenko core nomination In-Reply-To: References: <549D6DCE.7080506@mirantis.com> Message-ID: <54A1317E.8060609@mirantis.com> My congratulations to Andrey! Welcome on board! Cheers, Ilya 26.12.2014 16:52, Dmitriy Ukhlov wrote: > Andrey is a very active contributor, a good team player, and helped us a lot during the > previous development cycle. +1 from my side. > > On Fri, Dec 26, 2014 at 4:16 PM, isviridov > wrote: > > Hello stackers and magnetians, > > I suggest nominating Andrey Ostapenko [1] as a MagnetoDB core. > > During the last months he has made a huge contribution to MagnetoDB [2]. > Andrey drives Tempest and python-magnetodbclient successfully. > > Please raise your hands. > > Thank you, > Ilya Sviridov > > [1] http://stackalytics.com/report/users/aostapenko > [2] http://stackalytics.com/report/contribution/magnetodb/90 > > > > > > -- > Best regards, > Dmitriy Ukhlov > Mirantis Inc. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From anlin.kong at gmail.com Mon Dec 29 11:51:36 2014 From: anlin.kong at gmail.com (Lingxian Kong) Date: Mon, 29 Dec 2014 19:51:36 +0800 Subject: [openstack-dev] [Nova] should 'ip address' be retrieved when describing a host? Message-ID: Hi Stackers: As of now, we can get the 'host name', 'service' and 'availability zone' of a host through the CLI command 'nova host-list'. But as a programmer who communicates with OpenStack through its API, I want to get the host IP address, in order to perform some other actions in my program. And what I know is that the IP address of a host is populated in the 'compute_nodes' table of the database during the 'update available resource' periodic task. So, would it be possible for the community to support this in the future? I apologize if the topic was once covered and I missed it. -- Regards! ----------------------------------- Lingxian Kong From jaypipes at gmail.com Mon Dec 29 13:26:22 2014 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 29 Dec 2014 08:26:22 -0500 Subject: [openstack-dev] [Nova] should 'ip address' be retrieved when describing a host? In-Reply-To: References: Message-ID: <54A1567E.2080400@gmail.com> On 12/29/2014 06:51 AM, Lingxian Kong wrote: > Hi Stackers: > > As of now, we can get the 'host name', 'service' and 'availability > zone' of a host through the CLI command 'nova host-list'. But as a > programmer who communicates with OpenStack through its API, I want to > get the host IP address, in order to perform some other actions in my > program. > > And what I know is that the IP address of a host is populated in the > 'compute_nodes' table of the database during the 'update available > resource' periodic task. > > So, would it be possible for the community to support this in the future? > > I apologize if the topic was once covered and I missed it. Hi! I see no real technical reason this could not be done.
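For reference, the admin-only os-hypervisors API already carries a host_ip field today, which is what "nova hypervisor-show" prints. A minimal, hypothetical sketch of pulling it out of a GET /os-hypervisors/detail response body — the helper name and the sample payload are illustrative only, not output from a real deployment:

```python
# Hypothetical helper: map hypervisor hostnames to their host_ip values
# from the JSON body of a GET /os-hypervisors/detail call. The sample
# payload below is illustrative, not real deployment output.

def host_ips(detail_response):
    """Return {hypervisor_hostname: host_ip} for entries that carry host_ip."""
    return {
        hyp["hypervisor_hostname"]: hyp["host_ip"]
        for hyp in detail_response.get("hypervisors", [])
        if "host_ip" in hyp
    }

sample = {
    "hypervisors": [
        {"id": 1, "hypervisor_hostname": "compute-1", "host_ip": "10.0.0.11"},
        {"id": 2, "hypervisor_hostname": "compute-2", "host_ip": "10.0.0.12"},
    ]
}

print(host_ips(sample))  # {'compute-1': '10.0.0.11', 'compute-2': '10.0.0.12'}
```

The open question in this thread is about exposing the same field through the non-admin hosts API.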
It would require waiting until all of the API microversioning bits are done, and a micro-increment of the API, along with minimal changes of code in the hosts extension to return the host_ip field from the nova.objects.ComputeNode objects returned to the HostController object. Are you interested in working on such a feature? I would be happy to guide you through the process of making a blueprint and submitting code if you'd like. Just find me on IRC #openstack-nova. My IRC nick is jaypipes. Best, -jay From jay.lau.513 at gmail.com Mon Dec 29 13:40:33 2014 From: jay.lau.513 at gmail.com (Jay Lau) Date: Mon, 29 Dec 2014 21:40:33 +0800 Subject: [openstack-dev] [Nova] should 'ip address' be retrieved when describing a host? In-Reply-To: <54A1567E.2080400@gmail.com> References: <54A1567E.2080400@gmail.com> Message-ID: Does "nova hypervisor-show" help? It already includes the host IP address. 2014-12-29 21:26 GMT+08:00 Jay Pipes : > On 12/29/2014 06:51 AM, Lingxian Kong wrote: > >> Hi Stackers: >> >> As of now, we can get the 'host name', 'service' and 'availability >> zone' of a host through the CLI command 'nova host-list'. But as a >> programmer who communicates with OpenStack through its API, I want to >> get the host IP address, in order to perform some other actions in my >> program. >> >> And what I know is that the IP address of a host is populated in the >> 'compute_nodes' table of the database during the 'update available >> resource' periodic task. >> >> So, would it be possible for the community to support this in the future? >> >> I apologize if the topic was once covered and I missed it. >> > > Hi! > > I see no real technical reason this could not be done. It would require > waiting until all of the API microversioning bits are done, and a > micro-increment of the API, along with minimal changes of code in the hosts > extension to return the host_ip field from the nova.objects.ComputeNode > objects returned to the HostController object.
> > Are you interested in working on such a feature? I would be happy to guide > you through the process of making a blueprint and submitting code if you'd > like. Just find me on IRC #openstack-nova. My IRC nick is jaypipes. > > Best, > -jay > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanks, Jay -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgordon at redhat.com Mon Dec 29 13:56:52 2014 From: sgordon at redhat.com (Steve Gordon) Date: Mon, 29 Dec 2014 08:56:52 -0500 (EST) Subject: [openstack-dev] [Openstack] Need help in validating CPU Pinning feature In-Reply-To: References: <202b345c.f8bc.14a86476dc6.Coremail.ifzing@126.com> <257537232.3138680.1419635235848.JavaMail.zimbra@redhat.com> Message-ID: <743542206.3612407.1419861412544.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Srinivasa Rao Ragolu" > To: "OpenStack Development Mailing List (not for usage questions)" , > > Hi Steve, > > Thank you so much for your reply and detailed steps to go forward. > > I am using devstack setup and nova master commit. As I could not able to > see CPUPinningFilter implementation in source, I have used > NUMATopologyFilter. > > But same problem exists. I could not able to see any vcpupin in the guest > xml. Please see the section of vcpu from xml below. > > > > > test_pinning > 2014-12-29 07:30:04 > > 2048 > 20 > 0 > 0 > 2 > > > admin > uuid="4904cdf59c254546981f577351b818de">admin > > > > > 2097152 > 2097152 > 2 > > Kindly suggest me which branch of NOVA I need to take to validated pinning > feature. Alse let me know CPUPinningFilter is required to validate pinning > feature? There are still a few outstanding patches, e.g. 
see: https://review.openstack.org/#/q/topic:bp/virt-driver-cpu-pinning,n,z Thanks, Steve > On Sat, Dec 27, 2014 at 4:37 AM, Steve Gordon wrote: > > > ----- Original Message ----- > > > From: "Srinivasa Rao Ragolu" > > > To: "joejiang" > > > > > > Hi Joejing, > > > > > > Thanks for quick reply. Above xml is getting generated fine if I set > > > "vcpu_pin_set=1-12" in /etc/nova/nova.conf. > > > > > > But how to pin each vcpu with pcpu something like below > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > One more questions is Are Numa nodes are compulsory for pin each vcpu to > > > pcpu? > > > > The specification for the CPU pinning functionality recently implemented > > in Nova is here: > > > > > > http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/virt-driver-cpu-pinning.html > > > > Note that exact vCPU to pCPU pinning is not exposed to the user as this > > would require them to have direct knowledge of the host pCPU layout. > > Instead they request that the instance receive "dedicated" CPU resourcing > > and Nova handles allocation of pCPUs and pinning of vCPUs to them. > > > > Example usage: > > > > * Create a host aggregate and add set metadata on it to indicate it is to > > be used for pinning, 'pinned' is used for the example but any key value can > > be used. The same key must used be used in later steps though:: > > > > $ nova aggregate-create cpu_pinning > > $ nova aggregate-set-metadata 1 pinned=true > > > > NB: For aggregates/flavors that wont be dedicated set pinned=false. > > > > * Set all existing flavors to avoid this aggregate:: > > > > $ for FLAVOR in `nova flavor-list | cut -f 2 -d ' ' | grep -o [0-9]*`; > > do nova flavor-key ${FLAVOR} set > > "aggregate_instance_extra_specs:pinned"="false"; done > > > > * Create flavor that has extra spec "hw:cpu_policy" set to "dedicated". 
In > > this example it is created with ID of 6, 2048 MB of RAM, 20 GB drive, and 2 > > vCPUs:: > > > > $ nova flavor-create pinned.medium 6 2048 20 2 > > $ nova flavor-key 6 set "hw:cpu_policy"="dedicated" > > > > * Set the flavor to require the aggregate set aside for dedicated pinning > > of guests:: > > > > $ nova flavor-key 6 set "aggregate_instance_extra_specs:pinned"="true" > > > > * Add a compute host to the created aggregate (see nova host-list to get > > the host name(s)):: > > > > $ nova aggregate-add-host 1 my_packstack_host_name > > > > * Add the AggregateInstanceExtraSpecsFilter and CPUPinningFilter filters > > to the scheduler_default_filters in /etc/nova.conf:: > > > > scheduler_default_filters = > > RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter, > > > > ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter, > > > > AggregateInstanceExtraSpecsFilter,CPUPinningFilter > > > > NB: On the Kilo code base I believe the filter is NUMATopologyFilter > > > > * Restart the scheduler:: > > > > # systemctl restart openstack-nova-scheduler > > > > * After the above - as a normal (non-admin) user, try to boot an instance > > with the newly created flavor:: > > > > $ nova boot --image fedora --flavor 6 test_pinning > > > > * Confirm the instance has successfully booted and that its vCPUs are > > pinned to _a single_ host CPU by observing > > the element of the generated domain XML:: > > > > # virsh list > > Id Name State > > ---------------------------------------------------- > > 2 instance-00000001 running > > # virsh dumpxml instance-00000001 > > ...
> > 2 > > > > > > > > > > > > > > -Steve > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Steve Gordon, RHCE Sr. Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform From Charles_Wang at symantec.com Mon Dec 29 15:11:44 2014 From: Charles_Wang at symantec.com (Charles Wang) Date: Mon, 29 Dec 2014 07:11:44 -0800 Subject: [openstack-dev] [MagnetoDB] Andrey Ostapenko core nomination In-Reply-To: <549D6DCE.7080506@mirantis.com> References: <549D6DCE.7080506@mirantis.com> Message-ID: Congrats Andrey, well deserved. On 12/26/14, 9:16 AM, "isviridov" wrote: >Hello stackers and magnetians, > >I suggest nominating Andrey Ostapenko [1] to MagnetoDB cores. > >During the last months he has made a huge contribution to MagnetoDB [2] >Andrey drives Tempest and python-magnetodbclient successfully. > >Please raise your hands. > >Thank you, >Ilya Sviridov > >[1] http://stackalytics.com/report/users/aostapenko >[2] http://stackalytics.com/report/contribution/magnetodb/90 > > From dkranz at redhat.com Mon Dec 29 20:41:59 2014 From: dkranz at redhat.com (David Kranz) Date: Mon, 29 Dec 2014 15:41:59 -0500 Subject: [openstack-dev] [qa] tempest stable/icehouse builds are broken Message-ID: <54A1BC97.7010301@redhat.com> Some kind of regression has caused stable/icehouse builds to fail, and hence prevents any code from merging in tempest. This is being tracked at https://bugs.launchpad.net/python-heatclient/+bug/1405579. Jeremy (fungi) provided a hacky work-around here https://review.openstack.org/#/c/144347/ which I hope can soon be +A by a tempest core until there is some better fix.
-David From anteaya at anteaya.info Mon Dec 29 21:56:37 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Mon, 29 Dec 2014 16:56:37 -0500 Subject: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration In-Reply-To: References: <54945973.1010904@anteaya.info> <54986C4A.8020708@anteaya.info> Message-ID: <54A1CE15.9090905@anteaya.info> On 12/24/2014 04:07 AM, Oleg Bondarev wrote: > On Mon, Dec 22, 2014 at 10:08 PM, Anita Kuno wrote: >> >> On 12/22/2014 01:32 PM, Joe Gordon wrote: >>> On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery >> wrote: >>> >>>> On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno >> wrote: >>>>> >>>>> Rather than waste your time making excuses let me state where we are >> and >>>>> where I would like to get to, also sharing my thoughts about how you >> can >>>>> get involved if you want to see this happen as badly as I have been >> told >>>>> you do. >>>>> >>>>> Where we are: >>>>> * a great deal of foundation work has been accomplished to achieve >>>>> parity with nova-network and neutron to the extent that those involved >>>>> are ready for migration plans to be formulated and be put in place >>>>> * a summit session happened with notes and intentions[0] >>>>> * people took responsibility and promptly got swamped with other >>>>> responsibilities >>>>> * spec deadlines arose and in neutron's case have passed >>>>> * currently a neutron spec [1] is a work in progress (and it needs >>>>> significant work still) and a nova spec is required and doesn't have a >>>>> first draft or a champion >>>>> >>>>> Where I would like to get to: >>>>> * I need people in addition to Oleg Bondarev to be available to >> help >>>>> come up with ideas and words to describe them to create the specs in a >>>>> very short amount of time (Oleg is doing great work and is a fabulous >>>>> person, yay Oleg, he just can't do this alone) >>>>> * specifically I need a contact on the nova side of this complex >>>>> problem, similar to Oleg on the neutron 
side >>>>> * we need to have a way for people involved with this effort to >> find >>>>> each other, talk to each other and track progress >>>>> * we need to have representation at both nova and neutron weekly >>>>> meetings to communicate status and needs >>>>> >>>>> We are at K-2 and our current status is insufficient to expect this >> work >>>>> will be accomplished by the end of K-3. I will be championing this >> work, >>>>> in whatever state, so at least it doesn't fall off the map. If you >> would >>>>> like to help this effort please get in contact. I will be thinking of >>>>> ways to further this work and will be communicating to those who >>>>> identify as affected by these decisions in the most effective methods >> of >>>>> which I am capable. >>>>> >>>>> Thank you to all who have gotten us as far as well have gotten in this >>>>> effort, it has been a long haul and you have all done great work. Let's >>>>> keep going and finish this. >>>>> >>>>> Thank you, >>>>> Anita. >>>>> >>>>> Thank you for volunteering to drive this effort Anita, I am very happy >>>> about this. I support you 100%. >>>> >>>> I'd like to point out that we really need a point of contact on the nova >>>> side, similar to Oleg on the Neutron side. IMHO, this is step 1 here to >>>> continue moving this forward. >>>> >>> >>> At the summit the nova team marked the nova-network to neutron migration >> as >>> a priority [0], so we are collectively interested in seeing this happen >> and >>> want to help in any way possible. With regard to a nova point of >> contact, >>> anyone in nova-specs-core should work, that way we can cover more time >>> zones. >>> >>> From what I can gather the first step is to finish fleshing out the first >>> spec [1], and it sounds like it would be good to get a few nova-cores >>> reviewing it as well. 
>>> >>> >>> >>> >>> [0] >>> >> http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html >>> [1] https://review.openstack.org/#/c/142456/ >>> >>> >> Wonderful, thank you for the support Joe. >> >> It appears that we need to have a regular weekly meeting to track >> progress in an archived manner. >> >> I know there was one meeting November but I don't know what it was >> called so so far I can't find the logs for that. >> > > It wasn't official, we just gathered together on #novamigration. Attaching > the log here. > Ah, that would explain why I couldn't find the log. Thanks for the attachment. > >> So if those affected by this issue can identify what time (UTC please, >> don't tell me what time zone you are in it is too hard to guess what UTC >> time you are available) and day of the week you are available for a >> meeting I'll create one and we can start talking to each other. >> >> I need to avoid Monday 1500 and 2100 UTC, Tuesday 0800 UTC, 1400 UTC and >> 1900 - 2200 UTC, Wednesdays 1500 - 1700 UTC, Thursdays 1400 and 2100 UTC. >> > > I'm available each weekday 0700-1600 UTC, 1700-1800 UTC is also acceptable. > > Thanks, > Oleg Wonderful, thank you Oleg. We will aim for a meeting time in this range. I also understand holidays in Russia start on January 1 and go until the 11th or the 12th, I'm guessing this includes you. Thanks, Anita. > > >> >> Thanks, >> Anita. 
>> >>>> >>>> Thanks, >>>> Kyle >>>> >>>> >>>>> [0] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron >>>>> [1] https://review.openstack.org/#/c/142456/ >>>>> >>>>> _______________________________________________ >>>>> OpenStack-operators mailing list >>>>> OpenStack-operators at lists.openstack.org >>>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>>>> >>>> >>>> _______________________________________________ >>>> OpenStack-dev mailing list >>>> OpenStack-dev at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From anteaya at anteaya.info Mon Dec 29 21:58:17 2014 From: anteaya at anteaya.info (Anita Kuno) Date: Mon, 29 Dec 2014 16:58:17 -0500 Subject: [openstack-dev] [infra] [storyboard] Nominating Yolanda Robla for StoryBoard Core In-Reply-To: References: Message-ID: <54A1CE79.3060004@anteaya.info> On 12/24/2014 03:58 AM, Ricardo Carrillo Cruz wrote: > Big +1 from me :-). > > Yolanda is an amazing engineer, both frontend and backend. > As Michael said, she's not only been doing Storyboard but a bunch of other > infra stuff that will be beneficial for the project. > > Regards! > > 2014-12-24 0:38 GMT+01:00 Zaro : > >> +1 >> >> On Tue, Dec 23, 2014 at 2:34 PM, Michael Krotscheck >> wrote: >> >>> Hello everyone! >>> >>> StoryBoard is the much anticipated successor to Launchpad, and is a >>> component of the Infrastructure Program. The storyboard-core group is >>> intended to be a superset of the infra-core group, with additional >>> reviewers who specialize in the field. 
>>> >>> Yolanda has been working on StoryBoard ever since the Atlanta Summit, and >>> has provided a diligent and cautious voice to our development effort. She >>> has consistently provided feedback on our reviews, and is neither afraid of >>> asking for clarification, nor of providing constructive criticism. In >>> return, she has been nothing but gracious and responsive when improvements >>> were suggested to her own submissions. >>> >>> Furthermore, Yolanda has been quite active in the infrastructure team as >>> a whole, and provides valuable context for us in the greater realm of infra. >>> >>> Please respond within this thread with either supporting commentary, or >>> concerns about her promotion. Since many western countries are currently >>> celebrating holidays, the review period will remain open until January 9th. >>> If the consensus is positive, we will promote her then! >>> >>> Thanks, >>> >>> Michael >>> >>> >>> References: >>> >>> https://review.openstack.org/#/q/reviewer:%22yolanda.robla+%253Cinfo%2540ysoft.biz%253E%22,n,z >>> >>> http://stackalytics.com/?user_id=yolanda.robla&metric=marks >>> +1 for Yolanda for core reviewer for StoryBoard. Anita. 
>>> >>> _______________________________________________ >>> OpenStack-dev mailing list >>> OpenStack-dev at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mikal at stillhq.com Mon Dec 29 21:58:45 2014 From: mikal at stillhq.com (Michael Still) Date: Tue, 30 Dec 2014 08:58:45 +1100 Subject: [openstack-dev] [nova] No team meeting this week Message-ID: I assume people generally take New Year's Day off... Cheers, Michael -- Rackspace Australia From devananda.vdv at gmail.com Mon Dec 29 22:45:54 2014 From: devananda.vdv at gmail.com (Devananda van der Veen) Date: Mon, 29 Dec 2014 22:45:54 +0000 Subject: [openstack-dev] [Ironic] mid-cycle details Message-ID: Hi folks! tldr; If you will be attending the midcycle sprint in Grenoble the week of Feb 3rd, please sign up HERE . Long version... Before the holidays, I was behind in gathering and sharing information about our midcycle sprint, which makes me even further behind now.... so I've finally got those details to share with y'all! Also, I have some thoughts / concerns, which I've shared in a separate email -- please go read it. Dates: Feb 3 - 5 (Tue - Thu) with a half day for those sticking around on Friday, Feb 6th. Location: Hewlett Packard Centre de Compétences 5 Avenue Raymond Chanas 38320 Eybens Grenoble, France Grenoble is flat and fairly easy to get around, both by tram and by car. The easiest airport to travel through is Lyon, and it is about an hour's drive by car from the airport to Grenoble. (Also, it's a beautiful drive in the countryside - I recommend it!)
I have previously stayed at the Mercure Centre Alpotel [1], and while not the closest hotel (it's about 10 minutes by car or 25 minutes by tram to HP's campus) it is within walking distance to the city center. I'll be staying there again. There are also hotels around the Expo center, which is just a few blocks from the HP campus, such as [2]. I have not arranged any group rates at these hotels, but the city has plenty of availability and this isn't peak travel season so rates are quite reasonable. The weather forecast [3] will probably be chilly (around 45F or 7C during the day), likely overcast with some rain, but probably not snowing in the city. We'll be within easy driving distance of the Alps, so if you plan to go exploring outside the city (ski trip, anyone?) dress for snow. Regards, Devananda [1] Hotel Mercure Grenoble Centre Alpotel 12 Boulevard Maréchal Joffre 38000 Grenoble France [2] Park & Suites Elegance Grenoble Alpexpo 1 Avenue d'Innsbruck 38100 Grenoble France [3] https://weatherspark.com/averages/32103/2/Grenoble-Rhone-Alpes-France -------------- next part -------------- An HTML attachment was scrubbed... URL: From devananda.vdv at gmail.com Mon Dec 29 22:45:57 2014 From: devananda.vdv at gmail.com (Devananda van der Veen) Date: Mon, 29 Dec 2014 22:45:57 +0000 Subject: [openstack-dev] [Ironic] thoughts on the midcycle Message-ID: I'm sending the details of the midcycle in a separate email. Before you reply that you won't be able to make it, I'd like to share some thoughts / concerns. In the last few weeks, several people who I previously thought would attend told me that they can't. By my informal count, it looks like we will have at most 5 of our 10 core reviewers in attendance. I don't think we should cancel based on that, but it does mean that we need to set our expectations accordingly. Assuming that we will be lacking about half the core team, I think it will be more practical as a focused sprint, rather than a planning & design meeting.
While that's a break from precedent, planning should be happening via the spec review process *anyway*. Also, we already have a larger back log of specs and work than we had this time last cycle, but with the same size review team. Rather than adding to our backlog, I would like us to use this gathering to burn through some specs and land some code. That being said, I'd also like to put forth this idea: if we had a second gathering (with the same focus on writing code) the following week (let's say, Feb 11 - 13) in the SF Bay area -- who would attend? Would we be able to get the "other half" of the core team together and get more work done? Is this a good idea? OK. That's enough of my musing for now... Once again, if you will be attending the midcycle sprint in Grenoble the week of Feb 3rd, please sign up HERE . Regards, Devananda -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at the-davies.net Mon Dec 29 23:01:46 2014 From: michael at the-davies.net (Michael Davies) Date: Tue, 30 Dec 2014 09:31:46 +1030 Subject: [openstack-dev] [Ironic] thoughts on the midcycle In-Reply-To: References: Message-ID: On Tue, Dec 30, 2014 at 9:15 AM, Devananda van der Veen < devananda.vdv at gmail.com> wrote: > [snip] > That being said, I'd also like to put forth this idea: if we had a second > gathering (with the same focus on writing code) the following week (let's > say, Feb 11 - 13) in the SF Bay area -- who would attend? Would we be able > to get the "other half" of the core team together and get more work done? > Is this a good idea? > Just like to register my interest here - the time to get to SFO from AU is quite a bit less than Grenoble, so is something I would try to possibly make happen. -- Michael Davies michael at the-davies.net Rackspace Australia -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jim at jimrollenhagen.com Mon Dec 29 23:45:27 2014 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 29 Dec 2014 15:45:27 -0800 Subject: [openstack-dev] [Ironic] thoughts on the midcycle In-Reply-To: References: Message-ID: <20141229234527.GA25511@jimrollenhagen.com> On Mon, Dec 29, 2014 at 10:45:57PM +0000, Devananda van der Veen wrote: > I'm sending the details of the midcycle in a separate email. Before you > reply that you won't be able to make it, I'd like to share some thoughts / > concerns. > > In the last few weeks, several people who I previously thought would attend > told me that they can't. By my informal count, it looks like we will have > at most 5 of our 10 core reviewers in attendance. I don't think we should > cancel based on that, but it does mean that we need to set our expectations > accordingly. > > Assuming that we will be lacking about half the core team, I think it will > be more practical as a focused sprint, rather than a planning & design > meeting. While that's a break from precedent, planning should be happening > via the spec review process *anyway*. Also, we already have a larger back > log of specs and work than we had this time last cycle, but with the same > size review team. Rather than adding to our backlog, I would like us to use > this gathering to burn through some specs and land some code. > > That being said, I'd also like to put forth this idea: if we had a second > gathering (with the same focus on writing code) the following week (let's > say, Feb 11 - 13) in the SF Bay area -- who would attend? Would we be able > to get the "other half" of the core team together and get more work done? > Is this a good idea? > I'm +1 on a Bay Area meetup; however, if it happens I likely won't be making the Grenoble meetup. There's a slim chance I can do both. I'd like to figure this out ASAP so I can book travel at a reasonable price. 
A second meetup certainly can't be bad; I'm sure we can get a ton of work done with the folks that I assume would attend. :) // jim > OK. That's enough of my musing for now... > > Once again, if you will be attending the midcycle sprint in Grenoble the > week of Feb 3rd, please sign up HERE > > . > > Regards, > Devananda > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sukhdevkapur at gmail.com Mon Dec 29 23:54:40 2014 From: sukhdevkapur at gmail.com (Sukhdev Kapur) Date: Mon, 29 Dec 2014 15:54:40 -0800 Subject: [openstack-dev] [Neutron][ML2] - canceling this week's ML2 meeting Message-ID: Dear fellow ML2'ers, In the spirit of the holidays, Bob and I decided to cancel this week's ML2 meeting. We will resume our meetings from January 7th onwards. Happy New Year to you and your loved ones. -Sukhdev -------------- next part -------------- An HTML attachment was scrubbed... URL: From legiangthanh at gmail.com Tue Dec 30 02:46:42 2014 From: legiangthanh at gmail.com (thanh le giang) Date: Tue, 30 Dec 2014 09:46:42 +0700 Subject: [openstack-dev] [Neutron] CI configure for neutron ML2 mechanism driver Message-ID: Hi all According to this tutorial http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/, I have set up a CI system successfully. Now, I wonder if our CI system needs to run tempest tests against the network with our real device, or just needs to run our unit tests? Any advice is appreciated. Thanks and Regards -- L.G.Thanh Email: legiangthan at gmail.com lgthanh at fit.hcmus.edu.vn -------------- next part -------------- An HTML attachment was scrubbed...
URL: From morgan.fainberg at gmail.com Tue Dec 30 05:16:23 2014 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Mon, 29 Dec 2014 21:16:23 -0800 Subject: [openstack-dev] [Keystone] reminder no meeting this week Message-ID: <1A53212F-02A1-4A46-91BE-6FAFBB8336D5@gmail.com> This is just a reminder there will be no meeting this week for Keystone. Have a great New Year! --Morgan Sent via mobile From anlin.kong at gmail.com Tue Dec 30 06:52:13 2014 From: anlin.kong at gmail.com (Lingxian Kong) Date: Tue, 30 Dec 2014 14:52:13 +0800 Subject: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host? In-Reply-To: References: <54A1567E.2080400@gmail.com> Message-ID: Thanks, Jay Pipes and Jay Lau, for your reply! Just as Jay Lau said, 'nova hypervisor-show ' indeed returns the host ip address, and there is more information included than 'nova host-describe '. I feel a little confused about the 'host' and the 'hypervisor'; what's the difference between them? For a cloud operator, maybe 'host' is more useful and intuitive for management than 'hypervisor'. From the implementation perspective, both 'compute_nodes' and 'services' database tables are used for them. Should they be combined for more common use cases? 2014-12-29 21:40 GMT+08:00 Jay Lau : > Does "nova hypervisor-show" help? It already include the host ip address. > > 2014-12-29 21:26 GMT+08:00 Jay Pipes : >> >> On 12/29/2014 06:51 AM, Lingxian Kong wrote: >>> >>> Hi Stackers: >>> >>> As for now, we can get the 'host name', 'service' and 'availability >>> zone' of a host through the CLI command 'nova host-list'. But as a >>> programmer who communicates with OpenStack using its API, I want to >>> get the host ip address, in order to perform some other actions in my >>> program. >>> >>> And what I know is, the ip address of a host is populated in the >>> 'compute_nodes' table of the database, during the 'update available >>> resource' periodic task.
>>> >>> So, is it possible of the community to support it in the future? >>> >>> I apologize if the topic was once covered and I missed it. >> >> >> Hi! >> >> I see no real technical reason this could not be done. It would require >> waiting until all of the API microversioning bits are done, and a >> micro-increment of the API, along with minimal changes of code in the hosts >> extension to return the host_ip field from the nova.objects.ComputeNode >> objects returned to the HostController object. >> >> Are you interested in working on such a feature? I would be happy to guide >> you through the process of making a blueprint and submitting code if you'd >> like. Just find me on IRC #openstack-nova. My IRC nick is jaypipes. >> >> Best, >> -jay >> >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Thanks, > > Jay > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards! ----------------------------------- Lingxian Kong From adamg at ubuntu.com Tue Dec 30 07:09:20 2014 From: adamg at ubuntu.com (Adam Gandelman) Date: Mon, 29 Dec 2014 23:09:20 -0800 Subject: [openstack-dev] check-grenade-dsvm-ironic-sideways failing, blocking much code Message-ID: Heads up here since IRC seems to be crickets this week... Fallout from newer pip's creation and usage of ~/.cache is still biting the sideways ironic grenade job: https://bugs.launchpad.net/ironic/+bug/1405626 This is blocking patches to ironic, nova, devstack, tempest, grenade master + stable/juno. master's main tempest job was broken by the same issue during the holiday, and fixed with some patches to devstack. 
Those need to be backported to stable/juno devstack and grenade (via a functions-common sync), but two remaining patches are wedged and cannot merge without the other: https://review.openstack.org/#/c/144352/ https://review.openstack.org/#/c/144374/ I've proposed temporary workaround to devstack-gate, which will hopefully allow those two to backport: https://review.openstack.org/#/c/144408/ If anyone has a minute while they switch from eggnog to champagne, it would be appreciated! Thanks Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay.lau.513 at gmail.com Tue Dec 30 07:58:08 2014 From: jay.lau.513 at gmail.com (Jay Lau) Date: Tue, 30 Dec 2014 15:58:08 +0800 Subject: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host? In-Reply-To: References: <54A1567E.2080400@gmail.com> Message-ID: Yes, host is from service table and hypervisor is from compute_nodes table, I think that there are some discussion for this in Paris Summit and there might be some change for this in Kilo. - Detach service from compute node: https://review.openstack.org/#/c/126895/ (implementation on a patch series https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/detach-service-from-computenode,n,z) - Goal : Only provide resources to the ComputeNode object, not anything else - The primary goal of this blueprint is to decouple the servicegroup API from the ideas of tracking resources, since they are two wholly separate things I'm not sure if there are some changes for those commands when host was added to compute_nodes table. 2014-12-30 14:52 GMT+08:00 Lingxian Kong : > Thanks, Jay Pipes and Jay Lau, for your reply! > > Just as what Jay Lau said, 'nova hypervisor-show ' > indeed returns host ip address, and there are more other information > included than 'nova host-describe '. I feel a little > confused about the 'host' and 'hypervisor', what's the difference > between them? 
For cloud operator, maybe 'host' is more usefull and > intuitive for management than 'hypervisor'. From the implementation > perspective, both 'compute_nodes' and 'services' database tables are > used for them. Should them be combined for more common use cases? > > 2014-12-29 21:40 GMT+08:00 Jay Lau : > > Does "nova hypervisor-show" help? It already include the host ip address. > > > > 2014-12-29 21:26 GMT+08:00 Jay Pipes : > >> > >> On 12/29/2014 06:51 AM, Lingxian Kong wrote: > >>> > >>> Hi Stackers: > >>> > >>> As for now, we can get the 'host name', 'service' and 'availability > >>> zone' of a host through the CLI command 'nova host-list'. But as a > >>> programmer who communicates with OpenStack using its API, I want to > >>> get the host ip address, in order to perform some other actions in my > >>> program. > >>> > >>> And what I know is, the ip address of a host is populated in the > >>> 'compute_nodes' table of the database, during the 'update available > >>> resource' periodic task. > >>> > >>> So, is it possible of the community to support it in the future? > >>> > >>> I apologize if the topic was once covered and I missed it. > >> > >> > >> Hi! > >> > >> I see no real technical reason this could not be done. It would require > >> waiting until all of the API microversioning bits are done, and a > >> micro-increment of the API, along with minimal changes of code in the > hosts > >> extension to return the host_ip field from the nova.objects.ComputeNode > >> objects returned to the HostController object. > >> > >> Are you interested in working on such a feature? I would be happy to > guide > >> you through the process of making a blueprint and submitting code if > you'd > >> like. Just find me on IRC #openstack-nova. My IRC nick is jaypipes. 
> >> > >> Best, > >> -jay > >> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > -- > > Thanks, > > > > Jay > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Regards! > ----------------------------------- > Lingxian Kong > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanks, Jay -------------- next part -------------- An HTML attachment was scrubbed... URL: From chen.li at intel.com Tue Dec 30 09:26:35 2014 From: chen.li at intel.com (Li, Chen) Date: Tue, 30 Dec 2014 09:26:35 +0000 Subject: [openstack-dev] [Manila]Rename driver mode Message-ID: <988E98D31B01E44893AF6E48ED9DEFD401B8FA35@SHSMSX101.ccr.corp.intel.com> Hi list, There are two driver modes in manila currently, "multi_svm_mode" and "single_svm_mode". "multi_svm_mode" means usage of share-networks that contain network information for dynamic creation of share-servers (SVMs). "single_svm_mode" means usage of some predefined endpoint, without the need to provide a share-network and without creation of share-servers (SVMs). Currently, the names of the driver modes describe how many servers the driver can handle. For "multi", it says that the share driver can handle more than one server. And if some share server already exists for the share driver, so that the driver just uses that server anyway (with host address, username, password, NFS daemon, etc. predefined), it is defined as working in "single_svm" mode too. But, as a new user to manila, these names are really confusing to me. Because I thought the driver mode names describe how drivers work with share-servers. I thought "multi-" and "single-" indicated the number of share-servers that would be created when we create a share, if we are using the driver. Obviously, my understanding is wrong. When we're working with the generic driver, one share-server would be created for one share-network. When we're working with the glusterfs driver, no share-server would be created at all. I believe I would not be the only one making this mistake. To make the code more readable, I'd like to suggest renaming the driver modes. Name them based on behavior, not ability. I think three driver modes are needed: - dynamic_svm_mode : Usage of share-networks that contain network information for dynamic creation of share-servers (SVMs). This is how the current generic driver works. Under this mode, the driver manages share servers itself, and a share-server would be created and deleted with its related shares. - static_svm_mode: Usage of pre-created share servers. The case in https://review.openstack.org/#/c/144342/ Under this mode, the driver does not manage share servers, but works with them. - no_svm_mode: the case of how the glusterfs driver works currently; no share-server would be created. Thanks. -chen -------------- next part -------------- An HTML attachment was scrubbed... URL: From skalinowski at mirantis.com Tue Dec 30 11:22:15 2014 From: skalinowski at mirantis.com (Sebastian Kalinowski) Date: Tue, 30 Dec 2014 12:22:15 +0100 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: <20141208144751.GA6497@Ryans-MBP> References: <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> <547F3D48.5020401@gmail.com> <1A3C52DFCD06494D8528644858247BF017812DEB@EX10MBOX03.pnnl.gov> <20141208144751.GA6497@Ryans-MBP> Message-ID: Hello all, What is the current situation with choosing a web framework? Has there been any progress on this topic? I would like to avoid forgetting about it.
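For list readers who have not used Pecan: the main mechanical difference a port from web.py implies is Pecan's object-dispatch routing, where URL path segments are resolved by walking a tree of controller objects instead of being matched against a regex table. Stripped of the framework itself, the core idea can be sketched in a few lines of plain Python (a simplified illustration, not Pecan's actual implementation; Pecan additionally checks that methods are exposed, negotiates content types, and so on):

```python
class MembersController:
    def index(self):
        return "member list"

    def detail(self, member_id):
        return "member %s" % member_id


class RootController:
    # A URL prefix is just an attribute holding a sub-controller.
    members = MembersController()

    def index(self):
        return "root"


def dispatch(root, path):
    """Resolve a path like '/members/detail/42' by walking the controller tree."""
    node = root
    segments = [s for s in path.split('/') if s]
    while segments:
        candidate = getattr(node, segments[0], None)
        if candidate is None:
            break
        node = candidate
        segments.pop(0)
    if callable(node):            # we landed on a controller method
        return node(*segments)    # leftover segments become arguments
    return node.index(*segments)  # otherwise fall back to index(), as Pecan does

print(dispatch(RootController(), '/members/detail/42'))  # -> member 42
```

Under this model, adding a URL means adding an attribute or method to a controller, which is the property the thread above is weighing against the explicit route tables of Flask and web.py.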
2014-12-08 15:47 GMT+01:00 Ryan Petrello : > Feel free to ask any questions you have in #pecanpy on IRC; I can answer > a lot > more quickly than researching docs, and if you have a special need, I can > usually accommodate with changes to Pecan (I've done so with several > OpenStack > projects in the past). > On 12/08/14 02:10 PM, Nikolay Markov wrote: > > > Yes, and it's been 4 days since last message in this thread and no > > > objections, so it seems > > > that Pecan in now our framework-of-choice for Nailgun and future > > > apps/projects. > > > > We still need some research to do about technical issues and how easy > > we can move to Pecan. Thanks to Ryan, we now have multiple links to > > solutions and docs on discussed issues. I guess we'll dedicate some > > engineer(s) responsible for doing such a research and then make all > > our decisions on subject. > > > > On Mon, Dec 8, 2014 at 11:07 AM, Sebastian Kalinowski > > wrote: > > > 2014-12-04 14:01 GMT+01:00 Igor Kalnitsky : > > >> > > >> Ok, guys, > > >> > > >> It became obvious that most of us either vote for Pecan or abstain > from > > >> voting. > > > > > > > > > Yes, and it's been 4 days since last message in this thread and no > > > objections, so it seems > > > that Pecan in now our framework-of-choice for Nailgun and future > > > apps/projects. > > > > > >> > > >> > > >> So I propose to stop fighting this battle (Flask vs Pecan) and start > > >> thinking about moving to Pecan. You know, there are many questions > > >> that need to be discussed (such as 'should we change API version' or > > >> 'should be it done iteratively or as one patchset'). > > > > > > > > > IMHO small, iterative changes are rather obvious. > > > For other questions maybe we need (draft of ) a blueprint and a > separate > > > mail thread? 
> > > > > >> > > >> > > >> - Igor > > > > > > > > > > _______________________________________________ > > > OpenStack-dev mailing list > > > OpenStack-dev at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > -- > > Best regards, > > Nick Markov > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Ryan Petrello > Senior Developer, DreamHost > ryan.petrello at dreamhost.com > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobrelia at mirantis.com Tue Dec 30 11:37:59 2014 From: bdobrelia at mirantis.com (Bogdan Dobrelya) Date: Tue, 30 Dec 2014 12:37:59 +0100 Subject: [openstack-dev] [Fuel][plugins] Fuel 6.0 plugin for Pacemaker STONITH (HA fencing) Message-ID: <54A28E97.4000804@mirantis.com> Hello. There is a long-standing blueprint [0] about HA fencing of failed nodes in a Corosync and Pacemaker cluster. Happily, in the 6.0 release we have a pluggable architecture supported in Fuel. I propose the following implementation [1] (WIP repo [2]) for this feature as a Puppet plugin. It addresses the related blueprint for HA fencing in the puppet manifests of the Fuel library [3]. For the initial version, all the data definitions for power management devices should be done manually in YAML files (see the plugin's README.md file). Later it could be done in a more user-friendly way, as a part of the Fuel UI perhaps.
Note that a similar approach - YAML data structures which should be filled in by the cloud admin and passed to the Fuel Orchestrator automatically at the PXE provision stage - could be used for the Power management blueprint as well; see the related ML thread [4]. Please also note that dev docs for Fuel plugins were merged recently [5], where you can find how to build and install this plugin. [0] https://blueprints.launchpad.net/fuel/+spec/ha-fencing [1] https://review.openstack.org/#/c/144425/ [2] https://github.com/bogdando/fuel-plugins/tree/fencing_puppet_newprovider/ha_fencing [3] https://blueprints.launchpad.net/fuel/+spec/fencing-in-puppet-manifests [4] http://lists.openstack.org/pipermail/openstack-dev/2014-November/049794.html [5] http://docs.mirantis.com/fuel/fuel-6.0/plugin-dev.html#what-is-pluggable-architecture -- Best regards, Bogdan Dobrelya, Skype #bogdando_at_yahoo.com Irc #bogdando From obondarev at mirantis.com Tue Dec 30 11:47:07 2014 From: obondarev at mirantis.com (Oleg Bondarev) Date: Tue, 30 Dec 2014 14:47:07 +0300 Subject: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration In-Reply-To: <54A1CE15.9090905@anteaya.info> References: <54945973.1010904@anteaya.info> <54986C4A.8020708@anteaya.info> <54A1CE15.9090905@anteaya.info> Message-ID: On Tue, Dec 30, 2014 at 12:56 AM, Anita Kuno wrote: > On 12/24/2014 04:07 AM, Oleg Bondarev wrote: > > On Mon, Dec 22, 2014 at 10:08 PM, Anita Kuno > wrote: > >> > >> On 12/22/2014 01:32 PM, Joe Gordon wrote: > >>> On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery > >> wrote: > >>> > >>>> On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno > >> wrote: > >>>>> > >>>>> Rather than waste your time making excuses let me state where we are > >> and > >>>>> where I would like to get to, also sharing my thoughts about how you > >> can > >>>>> get involved if you want to see this happen as badly as I have been > >> told > >>>>> you do.
> >>>>> > >>>>> Where we are: > >>>>> * a great deal of foundation work has been accomplished to > achieve > >>>>> parity with nova-network and neutron to the extent that those > involved > >>>>> are ready for migration plans to be formulated and be put in place > >>>>> * a summit session happened with notes and intentions[0] > >>>>> * people took responsibility and promptly got swamped with other > >>>>> responsibilities > >>>>> * spec deadlines arose and in neutron's case have passed > >>>>> * currently a neutron spec [1] is a work in progress (and it > needs > >>>>> significant work still) and a nova spec is required and doesn't have > a > >>>>> first draft or a champion > >>>>> > >>>>> Where I would like to get to: > >>>>> * I need people in addition to Oleg Bondarev to be available to > >> help > >>>>> come up with ideas and words to describe them to create the specs in > a > >>>>> very short amount of time (Oleg is doing great work and is a fabulous > >>>>> person, yay Oleg, he just can't do this alone) > >>>>> * specifically I need a contact on the nova side of this complex > >>>>> problem, similar to Oleg on the neutron side > >>>>> * we need to have a way for people involved with this effort to > >> find > >>>>> each other, talk to each other and track progress > >>>>> * we need to have representation at both nova and neutron weekly > >>>>> meetings to communicate status and needs > >>>>> > >>>>> We are at K-2 and our current status is insufficient to expect this > >> work > >>>>> will be accomplished by the end of K-3. I will be championing this > >> work, > >>>>> in whatever state, so at least it doesn't fall off the map. If you > >> would > >>>>> like to help this effort please get in contact. I will be thinking of > >>>>> ways to further this work and will be communicating to those who > >>>>> identify as affected by these decisions in the most effective methods > >> of > >>>>> which I am capable. 
> >>>>> > >>>>> Thank you to all who have gotten us as far as well have gotten in > this > >>>>> effort, it has been a long haul and you have all done great work. > Let's > >>>>> keep going and finish this. > >>>>> > >>>>> Thank you, > >>>>> Anita. > >>>>> > >>>>> Thank you for volunteering to drive this effort Anita, I am very > happy > >>>> about this. I support you 100%. > >>>> > >>>> I'd like to point out that we really need a point of contact on the > nova > >>>> side, similar to Oleg on the Neutron side. IMHO, this is step 1 here > to > >>>> continue moving this forward. > >>>> > >>> > >>> At the summit the nova team marked the nova-network to neutron > migration > >> as > >>> a priority [0], so we are collectively interested in seeing this happen > >> and > >>> want to help in any way possible. With regard to a nova point of > >> contact, > >>> anyone in nova-specs-core should work, that way we can cover more time > >>> zones. > >>> > >>> From what I can gather the first step is to finish fleshing out the > first > >>> spec [1], and it sounds like it would be good to get a few nova-cores > >>> reviewing it as well. > >>> > >>> > >>> > >>> > >>> [0] > >>> > >> > http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html > >>> [1] https://review.openstack.org/#/c/142456/ > >>> > >>> > >> Wonderful, thank you for the support Joe. > >> > >> It appears that we need to have a regular weekly meeting to track > >> progress in an archived manner. > >> > >> I know there was one meeting November but I don't know what it was > >> called so so far I can't find the logs for that. > >> > > > > It wasn't official, we just gathered together on #novamigration. > Attaching > > the log here. > > > Ah, that would explain why I couldn't find the log. Thanks for the > attachment. 
> > > >> So if those affected by this issue can identify what time (UTC please, > >> don't tell me what time zone you are in it is too hard to guess what UTC > >> time you are available) and day of the week you are available for a > >> meeting I'll create one and we can start talking to each other. > >> > >> I need to avoid Monday 1500 and 2100 UTC, Tuesday 0800 UTC, 1400 UTC and > >> 1900 - 2200 UTC, Wednesdays 1500 - 1700 UTC, Thursdays 1400 and 2100 > UTC. > >> > > > > I'm available each weekday 0700-1600 UTC, 1700-1800 UTC is also > acceptable. > > > > Thanks, > > Oleg > Wonderful, thank you Oleg. We will aim for a meeting time in this range. > I also understand holidays in Russia start on January 1 and go until the > 11th or the 12th, I'm guessing this includes you. > Correct. However I'll be available since January 5. Thanks Anita. Thanks, Oleg > Thanks, > Anita. > > > > > >> > >> Thanks, > >> Anita. > >> > >>>> > >>>> Thanks, > >>>> Kyle > >>>> > >>>> > >>>>> [0] > https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron > >>>>> [1] https://review.openstack.org/#/c/142456/ > >>>>> > >>>>> _______________________________________________ > >>>>> OpenStack-operators mailing list > >>>>> OpenStack-operators at lists.openstack.org > >>>>> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >>>>> > >>>> > >>>> _______________________________________________ > >>>> OpenStack-dev mailing list > >>>> OpenStack-dev at lists.openstack.org > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > >>>> > >>> > >> > >> > >> _______________________________________________ > >> OpenStack-dev mailing list > >> OpenStack-dev at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nmarkov at mirantis.com Tue Dec 30 13:59:19 2014 From: nmarkov at mirantis.com (Nikolay Markov) Date: Tue, 30 Dec 2014 17:59:19 +0400 Subject: [openstack-dev] [Fuel][Nailgun] Web framework In-Reply-To: References: <547F23EB.7010402@gmail.com> <547F2D08.3030705@gmail.com> <547F3D48.5020401@gmail.com> <1A3C52DFCD06494D8528644858247BF017812DEB@EX10MBOX03.pnnl.gov> <20141208144751.GA6497@Ryans-MBP> Message-ID: Hi Sebastian, Nobody is forgetting this topic, especially me :) We're going to dedicate an engineer to do some research on the topic based on Ryan's comments and my old pull request on Nailgun with Pecan. The only thing is that it's not a very high-priority topic in our roadmap. Don't worry, I'm sure we'll get to it already in January. On 30 Dec 2014 15:25, "Sebastian Kalinowski" < skalinowski at mirantis.com> wrote: > Hello all, > > What is the current situation with choosing web framework? Was there any > progress in the topic? I would like to avoid forgetting about it. > > 2014-12-08 15:47 GMT+01:00 Ryan Petrello : > >> Feel free to ask any questions you have in #pecanpy on IRC; I can answer >> a lot >> more quickly than researching docs, and if you have a special need, I can >> usually accommodate with changes to Pecan (I've done so with several >> OpenStack >> projects in the past). >> On 12/08/14 02:10 PM, Nikolay Markov wrote: >> > > Yes, and it's been 4 days since last message in this thread and no >> > > objections, so it seems >> > > that Pecan in now our framework-of-choice for Nailgun and future >> > > apps/projects.
>> > >> > We still need some research to do about technical issues and how easy >> > we can move to Pecan. Thanks to Ryan, we now have multiple links to >> > solutions and docs on discussed issues. I guess we'll dedicate some >> > engineer(s) responsible for doing such a research and then make all >> > our decisions on subject. >> > >> > On Mon, Dec 8, 2014 at 11:07 AM, Sebastian Kalinowski >> > wrote: >> > > 2014-12-04 14:01 GMT+01:00 Igor Kalnitsky : >> > >> >> > >> Ok, guys, >> > >> >> > >> It became obvious that most of us either vote for Pecan or abstain >> from >> > >> voting. >> > > >> > > >> > > Yes, and it's been 4 days since last message in this thread and no >> > > objections, so it seems >> > > that Pecan in now our framework-of-choice for Nailgun and future >> > > apps/projects. >> > > >> > >> >> > >> >> > >> So I propose to stop fighting this battle (Flask vs Pecan) and start >> > >> thinking about moving to Pecan. You know, there are many questions >> > >> that need to be discussed (such as 'should we change API version' or >> > >> 'should be it done iteratively or as one patchset'). >> > > >> > > >> > > IMHO small, iterative changes are rather obvious. >> > > For other questions maybe we need (draft of ) a blueprint and a >> separate >> > > mail thread? 
>> > > >> > >> >> > >> - Igor >> > > >> > > >> > > _______________________________________________ >> > > OpenStack-dev mailing list >> > > OpenStack-dev at lists.openstack.org >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > >> > >> > >> > -- >> > Best regards, >> > Nick Markov >> > >> > _______________________________________________ >> > OpenStack-dev mailing list >> > OpenStack-dev at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -- >> Ryan Petrello >> Senior Developer, DreamHost >> ryan.petrello at dreamhost.com >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dguryanov at parallels.com Tue Dec 30 14:18:19 2014 From: dguryanov at parallels.com (Dmitry Guryanov) Date: Tue, 30 Dec 2014 17:18:19 +0300 Subject: [openstack-dev] Why nova mounts FS for LXC container instead of libvirt? Message-ID: <4412597.XcIlrXLFmH@dblinov.sw.ru> Hello, Libvirt can create a loop or nbd device for an LXC container and mount it by itself; for instance, you can add something like this to the XML config: But nova mounts the filesystem for the container by itself. Is this because rhel-6 doesn't support filesystems with type='file', or are there some other reasons?
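[The XML snippet referenced above was stripped when the archiver scrubbed the HTML part of the message. As a rough, hypothetical sketch of the kind of libvirt <filesystem type='file'> element being described - the path and image format below are invented, and the exact attributes should be checked against the libvirt domain XML documentation - it could be written out from a shell like this:]

```shell
# Hypothetical reconstruction of the sort of <filesystem> element the message
# refers to: with type='file', libvirt itself attaches a loop device for the
# image and mounts it, instead of nova doing the mount. The path is made up.
cat > lxc-rootfs.xml <<'EOF'
<filesystem type='file'>
  <driver type='loop' format='raw'/>
  <source file='/var/lib/nova/instances/instance-0001/disk'/>
  <target dir='/'/>
</filesystem>
EOF
grep -c '<filesystem' lxc-rootfs.xml
```

[Whether rhel-6's libvirt actually honors type='file' for LXC domains is exactly the question raised in this thread, so treat the snippet as illustrative only.]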
-- Dmitry Guryanov From dkranz at redhat.com Tue Dec 30 14:46:35 2014 From: dkranz at redhat.com (David Kranz) Date: Tue, 30 Dec 2014 09:46:35 -0500 Subject: [openstack-dev] [all] Proper use of 'git review -R' Message-ID: <54A2BACB.7040005@redhat.com> Many times when I review a revision of an existing patch, I can't see just the change from the previous version due to other rebases. The git-review documentation mentions this issue and suggests using -R to make life easier for reviewers when submitting new revisions. Can someone explain when we should *not* use -R after doing 'git commit --amend'? Or is using -R just something that should be done but many folks don't know about it? -David From git-review doc: -R, --no-rebase Do not automatically perform a rebase before submitting the change to Gerrit. When submitting a change for review, you will usually want it to be based on the tip of upstream branch in order to avoid possible conflicts. When amending a change and rebasing the new patchset, the Gerrit web interface will show a difference between the two patchsets which contains all commits in between. This may confuse many reviewers that would expect to see a much simpler difference. From dguryanov at parallels.com Tue Dec 30 14:52:58 2014 From: dguryanov at parallels.com (Dmitry Guryanov) Date: Tue, 30 Dec 2014 17:52:58 +0300 Subject: [openstack-dev] Why does nova mount FS for LXC container instead of libvirt? In-Reply-To: <4412597.XcIlrXLFmH@dblinov.sw.ru> References: <4412597.XcIlrXLFmH@dblinov.sw.ru> Message-ID: <3165292.35T0RzmyXs@dblinov.sw.ru> On Tuesday 30 December 2014 17:18:19 Dmitry Guryanov wrote: > Hello, > > Libvirt can create loop or nbd device for LXC container and mount it by > itself, for instance, you can add something like this to xml config: > > > > > > > > But nova mounts filesystem for container by itself. Is this because rhel-6 > doesn't support filesystems with type='file' or there are some other > reasons?
You can define a domain with such a filesystem in rhel-6's libvirt, but the container will use the host's root fs; probably there is a bug. -- Dmitry Guryanov From fungi at yuggoth.org Tue Dec 30 16:16:25 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 30 Dec 2014 16:16:25 +0000 Subject: [openstack-dev] [QA] check-grenade-dsvm-ironic-sideways failing, blocking much code In-Reply-To: References: Message-ID: <20141230161625.GF2497@yuggoth.org> On 2014-12-29 23:09:20 -0800 (-0800), Adam Gandelman wrote: [...] > I've proposed temporary workaround to devstack-gate, which will > hopefully allow those two to backport: > > https://review.openstack.org/#/c/144408/ [...] That wasn't going to work since it ran too late. We need to catch it between when the stack user is created and when DevStack runs anything as that same user, so the patch has to go into the openstack-dev/devstack stable/icehouse branch and work its way forward from there. See https://review.openstack.org/144475 for my counterproposal. -- Jeremy Stanley From dolph.mathews at gmail.com Tue Dec 30 16:32:22 2014 From: dolph.mathews at gmail.com (Dolph Mathews) Date: Tue, 30 Dec 2014 10:32:22 -0600 Subject: [openstack-dev] [all] Proper use of 'git review -R' In-Reply-To: <54A2BACB.7040005@redhat.com> References: <54A2BACB.7040005@redhat.com> Message-ID: The default behavior, rebasing automatically, is the sane default to avoid having developers run into unexpected merge conflicts on new patch submissions. But if git-review can check to see if a review already exists in gerrit *before* doing the local rebase, I'd be in favor of it skipping the rebase by default if the review already exists. Require developers to rebase existing patches manually. (This is my way of saying I can't think of a good answer to your question.)
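[The amend-versus-rebase behavior being debated in this thread is easy to reproduce locally without Gerrit. A minimal sketch - the repository path and branch names are made up, and "master" stands in for the upstream branch - showing that amending a change without rebasing keeps it parented on the commit it was originally based on, even after the target branch has moved:]

```shell
# Simulate the git side of the review workflow in a throwaway repository.
set -e
rm -rf /tmp/review-demo
git init -q /tmp/review-demo
cd /tmp/review-demo
git config user.email dev@example.com
git config user.name "Dev"

echo base > file.txt
git add file.txt
git commit -qm "base patch"
git branch -M master          # normalize the branch name across git versions

# Start a change on top of the current tip, as a contributor would.
git checkout -qb change
echo "my feature" > feature.txt
git add feature.txt
git commit -qm "add feature"

# Meanwhile the upstream branch moves on (another patch merges).
git checkout -q master
echo more >> file.txt
git commit -qam "unrelated patch"

# Revise the change with --amend, *without* rebasing onto the new tip.
git checkout -q change
echo "my feature, revised" > feature.txt
git commit -qa --amend -m "add feature"

# The revised patchset is still parented on the old tip, so a reviewer's
# interdiff contains only the author's actual change, not the unrelated patch.
git log --format=%s master..change    # prints only: add feature
git rev-parse change^ master^         # same hash twice: no rebase happened
```

[This covers only the local-git half of the story; what git-review actually pushes to Gerrit, and when it refuses to, is what the rest of the thread discusses.]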
While we're on the topic, it's also worth noting that --no-rebase becomes critically important when a patch in the review sequence has already been approved, because the entire series will be rebased, potentially pulling patches out of the gate, clearing the Workflow+1 bit, and resetting the gate (probably unnecessarily). A tweak to the default behavior would help avoid this scenario. -Dolph On Tue, Dec 30, 2014 at 8:46 AM, David Kranz wrote: > Many times when I review a revision of an existing patch, I can't see just > the change from the previous version due to other rebases. The git-review > documentation mentions this issue and suggests using -R to make life easier > for reviewers when submitting new revisions. Can some one explain when we > should *not* use -R after doing 'git commit --amend'? Or is using -R just > something that should be done but many folks don't know about it? > > -David > > From git-review doc: > > -R, --no-rebase > Do not automatically perform a rebase before submitting the > change to Gerrit. > > When submitting a change for review, you will usually want it to > be based on the tip of upstream branch in order to avoid possible > conflicts. When amending a change and rebasing the new patchset, > the Gerrit web interface will show a difference between the two > patchsets which contains all commits in between. This may confuse > many reviewers that would expect to see a much simpler differ? > ence. > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.mitchell at rackspace.com Tue Dec 30 16:34:16 2014 From: kevin.mitchell at rackspace.com (Kevin L. Mitchell) Date: Tue, 30 Dec 2014 10:34:16 -0600 Subject: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host? 
In-Reply-To: References: <54A1567E.2080400@gmail.com> Message-ID: <1419957256.3402.2.camel@einstein.kev> On Tue, 2014-12-30 at 14:52 +0800, Lingxian Kong wrote: > Just as what Jay Lau said, 'nova hypervisor-show ' > indeed returns host ip address, and there are more other information > included than 'nova host-describe '. I feel a little > confused about the 'host' and 'hypervisor', what's the difference > between them? For cloud operator, maybe 'host' is more usefull and > intuitive for management than 'hypervisor'. From the implementation > perspective, both 'compute_nodes' and 'services' database tables are > used for them. Should them be combined for more common use cases? Well, the host and the hypervisor are conceptually distinct objects. The hypervisor is, obviously, the thing on which all the VMs run. The host, though, is the node running the corresponding nova-compute service, which may be separate from the hypervisor. For instance, on Xen-based setups, the host runs in a VM on the hypervisor. There has also been discussion of allowing one host to be responsible for multiple hypervisors, which would be useful for providers with large numbers of hypervisors. -- Kevin L. Mitchell Rackspace From fungi at yuggoth.org Tue Dec 30 16:37:25 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 30 Dec 2014 16:37:25 +0000 Subject: [openstack-dev] [all] Proper use of 'git review -R' In-Reply-To: <54A2BACB.7040005@redhat.com> References: <54A2BACB.7040005@redhat.com> Message-ID: <20141230163725.GG2497@yuggoth.org> On 2014-12-30 09:46:35 -0500 (-0500), David Kranz wrote: [...] > Can some one explain when we should *not* use -R after doing 'git > commit --amend'? [...] In the standard workflow this should never be necessary. The default behavior in git-review is to attempt a rebase and then undo it before submitting. If the rebase shows merge conflicts, the push will be averted and the user instructed to deal with those conflicts. 
Using -R will skip this check and allow you to push changes which can't merge due to conflicts. > From git-review doc: > > -R, --no-rebase > Do not automatically perform a rebase before submitting the > change to Gerrit. > > When submitting a change for review, you will usually want it to > be based on the tip of upstream branch in order to avoid possible > conflicts. When amending a change and rebasing the new patchset, > the Gerrit web interface will show a difference between the two > patchsets which contains all commits in between. This may confuse > many reviewers that would expect to see a much simpler differ? > ence. While not entirely incorrect, it could stand to be updated with slightly more clarification around the fact that git-review (since around 1.16 a few years ago) does not push an automatically rebased change for you unless you are using -F/--force-rebase. If you are finding changes which are gratuitously rebased, this is likely either from a contributor who does not use the recommended change update workflow, has modified their rebase settings or perhaps is running a very, very old git-review version. -- Jeremy Stanley From josh at pcsforeducation.com Tue Dec 30 16:37:49 2014 From: josh at pcsforeducation.com (Josh Gachnang) Date: Tue, 30 Dec 2014 16:37:49 +0000 Subject: [openstack-dev] [Ironic] thoughts on the midcycle References: <20141229234527.GA25511@jimrollenhagen.com> Message-ID: I could definitely make a Bay Area meetup. On Mon Dec 29 2014 at 3:50:04 PM Jim Rollenhagen wrote: > On Mon, Dec 29, 2014 at 10:45:57PM +0000, Devananda van der Veen wrote: > > I'm sending the details of the midcycle in a separate email. Before you > > reply that you won't be able to make it, I'd like to share some thoughts > / > > concerns. > > > > In the last few weeks, several people who I previously thought would > attend > > told me that they can't. By my informal count, it looks like we will have > > at most 5 of our 10 core reviewers in attendance. 
I don't think we should > > cancel based on that, but it does mean that we need to set our > expectations > > accordingly. > > > > Assuming that we will be lacking about half the core team, I think it > will > > be more practical as a focused sprint, rather than a planning & design > > meeting. While that's a break from precedent, planning should be > happening > > via the spec review process *anyway*. Also, we already have a larger back > > log of specs and work than we had this time last cycle, but with the same > > size review team. Rather than adding to our backlog, I would like us to > use > > this gathering to burn through some specs and land some code. > > > > That being said, I'd also like to put forth this idea: if we had a second > > gathering (with the same focus on writing code) the following week (let's > > say, Feb 11 - 13) in the SF Bay area -- who would attend? Would we be > able > > to get the "other half" of the core team together and get more work done? > > Is this a good idea? > > > > I'm +1 on a Bay Area meetup; however, if it happens I likely won't be > making the Grenoble meetup. There's a slim chance I can do both. I'd > like to figure this out ASAP so I can book travel at a reasonable price. > > A second meetup certainly can't be bad; I'm sure we can get a ton of > work done with the folks that I assume would attend. :) > > // jim > > > OK. That's enough of my musing for now... > > > > Once again, if you will be attending the midcycle sprint in Grenoble the > > week of Feb 3rd, please sign up HERE > > -sprint-in-grenoble-tickets-15082886319> > > . 
> > > > Regards, > > Devananda > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Dec 30 16:47:06 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 30 Dec 2014 16:47:06 +0000 Subject: [openstack-dev] [all] Proper use of 'git review -R' In-Reply-To: References: <54A2BACB.7040005@redhat.com> Message-ID: <20141230164705.GH2497@yuggoth.org> On 2014-12-30 10:32:22 -0600 (-0600), Dolph Mathews wrote: > The default behavior, rebasing automatically, is the sane default > to avoid having developers run into unexpected merge conflicts on > new patch submissions. Please show me an example of this in the wild. I suspect a lot of reviewers are placing blame on this without due investigation. > But if git-review can check to see if a review already exists in > gerrit *before* doing the local rebase, I'd be in favor of it > skipping the rebase by default if the review already exists. > Require developers to rebase existing patches manually. (This is > my way of saying I can't think of a good answer to your question.) It already requires contributors to take manual action--it will not automatically rebase and then push that without additional steps. > While we're on the topic, it's also worth noting that --no-rebase > becomes critically important when a patch in the review sequence > has already been approved, because the entire series will be > rebased, potentially pulling patches out of the gate, clearing the > Workflow+1 bit, and resetting the gate (probably unnecessarily). 
A > tweak to the default behavior would help avoid this scenario. The only thing -R is going to accomplish is people uploading changes which can never pass because they're merge-conflicting with the target branch. -- Jeremy Stanley From jay at jvf.cc Tue Dec 30 16:51:00 2014 From: jay at jvf.cc (Jay Faulkner) Date: Tue, 30 Dec 2014 16:51:00 +0000 Subject: [openstack-dev] [Ironic] thoughts on the midcycle In-Reply-To: References: Message-ID: On Dec 29, 2014, at 2:45 PM, Devananda van der Veen > wrote: That being said, I'd also like to put forth this idea: if we had a second gathering (with the same focus on writing code) the following week (let's say, Feb 11 - 13) in the SF Bay area -- who would attend? Would we be able to get the "other half" of the core team together and get more work done? Is this a good idea? +1 I'd be willing and able to attend this. - Jay Faulkner -------------- next part -------------- An HTML attachment was scrubbed... URL: From dguryanov at parallels.com Tue Dec 30 16:53:21 2014 From: dguryanov at parallels.com (Dmitry Guryanov) Date: Tue, 30 Dec 2014 19:53:21 +0300 Subject: [openstack-dev] [Nova] Why nova mounts FS for LXC container instead of libvirt? In-Reply-To: <4412597.XcIlrXLFmH@dblinov.sw.ru> References: <4412597.XcIlrXLFmH@dblinov.sw.ru> Message-ID: <2170693.4W2S2TJls9@dblinov.sw.ru> On Tuesday 30 December 2014 17:18:19 Dmitry Guryanov wrote: > Hello, > > Libvirt can create loop or nbd device for LXC container and mount it by > itself, for instance, you can add something like this to xml config: > > > > > > > > But nova mounts filesystem for container by itself. Is this because rhel-6 > doesn't support filesystems with type='file' or there are some other > reasons? Sorry, forgot to add [Nova] prefix in the first message.
-- Dmitry Guryanov From blk at acm.org Tue Dec 30 16:54:24 2014 From: blk at acm.org (Brant Knudson) Date: Tue, 30 Dec 2014 10:54:24 -0600 Subject: [openstack-dev] [all] Proper use of 'git review -R' In-Reply-To: <54A2BACB.7040005@redhat.com> References: <54A2BACB.7040005@redhat.com> Message-ID: On Tue, Dec 30, 2014 at 8:46 AM, David Kranz wrote: > Many times when I review a revision of an existing patch, I can't see just > the change from the previous version due to other rebases. I've gotten used to this. Typically when I review a new patch set I look for my comments and make sure they were addressed. Then I go back to compare with the base revision and look through the patch again. It's quicker this time since I remember what it was about. The git-review documentation mentions this issue and suggests using -R to > make life easier for reviewers when submitting new revisions. Can some one > explain when we should *not* use -R after doing 'git commit --amend'? A developer updating a patch is going to want to test the change using the latest master and not an old buggy version, so before you make your changes you rebase and -R isn't going to make a difference. You could be really considerate and re-rebase on the original parent. (Or you could be lazy and not test your changes locally with the latest master.) When you have a chain of commits rebasing gets more complicated since gerrit shows a dependent review as out of date if the parent review is changed in any way, and if there's a merge conflict far down the chain you have to rebase the whole chain. Rebasing the patch on master does make one thing easier -- if you want to download the patch and try it out and your newer environment (tox or devstack for example) doesn't work with the patch repo's old environment you'll need to rebase first to get it to work. Or is using -R just something that should be done but many folks don't know > about it? 
> > -David > > From git-review doc: > > -R, --no-rebase > Do not automatically perform a rebase before submitting the > change to Gerrit. > > When submitting a change for review, you will usually want it to > be based on the tip of upstream branch in order to avoid possible > conflicts. When amending a change and rebasing the new patchset, > the Gerrit web interface will show a difference between the two > patchsets which contains all commits in between. This may confuse > many reviewers that would expect to see a much simpler difference. > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkranz at redhat.com Tue Dec 30 17:31:35 2014 From: dkranz at redhat.com (David Kranz) Date: Tue, 30 Dec 2014 12:31:35 -0500 Subject: [openstack-dev] [all] Proper use of 'git review -R' In-Reply-To: <20141230163725.GG2497@yuggoth.org> References: <54A2BACB.7040005@redhat.com> <20141230163725.GG2497@yuggoth.org> Message-ID: <54A2E177.40403@redhat.com> On 12/30/2014 11:37 AM, Jeremy Stanley wrote: > On 2014-12-30 09:46:35 -0500 (-0500), David Kranz wrote: > [...] >> Can someone explain when we should *not* use -R after doing 'git >> commit --amend'? > [...] > > In the standard workflow this should never be necessary. The default > behavior in git-review is to attempt a rebase and then undo it > before submitting. If the rebase shows merge conflicts, the push > will be averted and the user instructed to deal with those > conflicts. Using -R will skip this check and allow you to push > changes which can't merge due to conflicts. > >> From git-review doc: >> >> -R, --no-rebase >> Do not automatically perform a rebase before submitting the >> change to Gerrit.
>> >> When submitting a change for review, you will usually want it to >> be based on the tip of upstream branch in order to avoid possible >> conflicts. When amending a change and rebasing the new patchset, >> the Gerrit web interface will show a difference between the two >> patchsets which contains all commits in between. This may confuse >> many reviewers that would expect to see a much simpler difference. > While not entirely incorrect, it could stand to be updated with > slightly more clarification around the fact that git-review (since > around 1.16 a few years ago) does not push an automatically rebased > change for you unless you are using -F/--force-rebase. > > If you are finding changes which are gratuitously rebased, this is > likely either from a contributor who does not use the recommended > change update workflow, has modified their rebase settings or > perhaps is running a very, very old git-review version. Thanks for the replies. The rebases I was referring to are not gratuitous, they just make it harder for the reviewer. I take a few things away from this. 1. This is really a UI issue, and one that is experienced by many. What is desired is an option to look at different revisions of the patch that show only what the author actually changed, unless there was a conflict. 2. Using -R is dangerous unless you really know what you are doing. The doc string makes it sound like an innocuous way to help reviewers. -David From fungi at yuggoth.org Tue Dec 30 18:24:02 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 30 Dec 2014 18:24:02 +0000 Subject: [openstack-dev] [all] Proper use of 'git review -R' In-Reply-To: <54A2E177.40403@redhat.com> References: <54A2BACB.7040005@redhat.com> <20141230163725.GG2497@yuggoth.org> <54A2E177.40403@redhat.com> Message-ID: <20141230182401.GI2497@yuggoth.org> On 2014-12-30 12:31:35 -0500 (-0500), David Kranz wrote: [...] > 1. This is really a UI issue, and one that is experienced by many.
> What is desired is an option to look at different revisions of the > patch that show only what the author actually changed, unless > there was a conflict. I'm not sure it's entirely a UI issue. It runs deeper. There simply isn't enough metadata in Git to separate intentional edits from edits made to solve merge conflicts. Using merge commits instead of rebases mostly solves this particular problem but at the expense of introducing all sorts of new ones. A rebase-oriented workflow makes it easier for merge conflicts to be resolved along the way, instead of potentially nullifying valuable review effort at the very end when it comes time to approve the change and it's no longer relevant to the target branch. There is a potential work-around, though it currently involves some manual effort (not sure whether it would be sane to automate as a feature of git-review). When you notice your change conflicts and will need a rebase, first reset and stash your change, then reset --hard to the previous patchset already in Gerrit, then rebase that and push it (solving the merge conflicts if any), then pop your stashed edits (solving any subsequent merge conflicts) and finally push that as yet another patchset. This separates the rebase from your intentional modifications though at the cost of rather a lot of extra work. Alternatively you could push your edits with git review -R and _then_ follow up with another patchset rebasing on the target branch and resolving the merge conflicts. Possibly slightly easier? > 2. Using -R is dangerous unless you really know what you are > doing. The doc string makes it sound like an innocuous way to help > reviewers. Not necessarily dangerous, but it does allow you to push changes which are just going to flat fail all jobs because they can't be merged to the target branch to get tested. 
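[Editor's note: the manual work-around described above can be sketched as a short command sequence. This is a hedged illustration, not git-review documentation: the change and patchset numbers are made-up examples, and the conflict-resolution steps are elided.

```shell
# Working tree holds your new (unpushed) edits; patchset 3 of change
# 135616 is the previously reviewed patchset (hypothetical numbers).
git stash                                  # set the intentional edits aside
git fetch origin refs/changes/16/135616/3  # fetch the prior patchset
git reset --hard FETCH_HEAD                # back to what reviewers last saw
git rebase master                          # pure rebase; fix only conflicts
git review -R                              # push the rebase-only patchset
git stash pop                              # re-apply the intentional edits
git commit -a --amend                      # fold them into the change
git review -R                              # push the edits-only patchset
```

Reviewers can then diff the last two patchsets in Gerrit and see only the intentional edits, with the rebase noise isolated in the patchset before it.]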
-- Jeremy Stanley From me at clifhouck.com Tue Dec 30 21:58:13 2014 From: me at clifhouck.com (Clif Houck) Date: Tue, 30 Dec 2014 15:58:13 -0600 Subject: [openstack-dev] [Ironic] thoughts on the midcycle In-Reply-To: References: Message-ID: <54A31FF5.5070207@clifhouck.com> I'll attend. Whether it's in-person or remote is up in the air though. Clif On 12/30/2014 10:51 AM, Jay Faulkner wrote: > >> On Dec 29, 2014, at 2:45 PM, Devananda van der Veen >> > wrote: >> >> That being said, I'd also like to put forth this idea: if we had a >> second gathering (with the same focus on writing code) the following >> week (let's say, Feb 11 - 13) in the SF Bay area -- who would attend? >> Would we be able to get the "other half" of the core team together and >> get more work done? Is this a good idea? >> > > +1 I?d be willing and able to attend this. > > > - > Jay Faulkner > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From kurt.r.taylor at gmail.com Tue Dec 30 22:14:09 2014 From: kurt.r.taylor at gmail.com (Kurt Taylor) Date: Tue, 30 Dec 2014 16:14:09 -0600 Subject: [openstack-dev] [third-party] Third-party CI meeting change Message-ID: There will be a new working group meeting for improving the consumability of infra CI components and documentation. The vote has been tallied and the meeting times will be Wednesdays at 1500/0400 UTC alternating. The existing meeting at 1800 UTC on Monday January 5th (and all future meetings at that time) will be cancelled and we will start the new time on January 7th with 1500 UTC. The following week, January 14th, will be at the alternating time 0400 UTC. The other times on Monday and Tuesday will remain unchanged for helping new CI operators in understanding the infra tools and processes. 
Refer to the meetings page for more details: https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting Thanks, Kurt Taylor (krtaylor) -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian.otto at rackspace.com Tue Dec 30 22:34:33 2014 From: adrian.otto at rackspace.com (Adrian Otto) Date: Tue, 30 Dec 2014 22:34:33 +0000 Subject: [openstack-dev] [Magnum] Proposed Changes to Magnum Core In-Reply-To: <61EAB960-3A1B-401E-9C1B-670DCCEB95A3@rackspace.com> References: <61EAB960-3A1B-401E-9C1B-670DCCEB95A3@rackspace.com> Message-ID: <564BCD0C-9CDD-407E-ABA2-B1FAE1289995@rackspace.com> Thanks for your votes. Changes have been applied. Welcome Motohiro/Yuanying Otsuka to the core group. Regards, Adrian On Dec 26, 2014, at 12:44 PM, Adrian Otto wrote: > Magnum Cores, > > I propose the following addition to the Magnum Core group[1]: > > + Motohiro/Yuanying Otsuka (ootsuka) > > Please let me know your votes by replying to this message. > > Thanks, > > Adrian > > [1] https://review.openstack.org/#/admin/groups/473,members Current Members > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dannchoi at cisco.com Tue Dec 30 22:41:59 2014 From: dannchoi at cisco.com (Danny Choi (dannchoi)) Date: Tue, 30 Dec 2014 22:41:59 +0000 Subject: [openstack-dev] [qa] What does it mean when a network's admin_state_up = false? Message-ID: Hi, I have a VM with an interface attached to network "provider-net-1" and assigned IP 66.0.0.8.
localadmin at qa4:~/devstack$ nova list +--------------------------------------+------+--------+------------+-------------+--------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+------+--------+------------+-------------+--------------------------+ | d4815a38-ea64-4189-95b2-fefe82a07b72 | vm-1 | ACTIVE | - | Running | provider_net-1=66.0.0.8 | +--------------------------------------+------+--------+------------+-------------+--------------------------+ Verify ping 66.0.0.8 from the router namespace is successful. Then I set the admin_state_up = false for the network. localadmin at qa4:~/devstack$ neutron net-update --admin_state_up=false provider_net-1 Updated network: provider_net-1 localadmin at qa4:~/devstack$ neutron net-show provider_net-1 +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | False | <<<<<<< | id | 9532b759-68a2-4dc0-bcd4-b372fccabe3c | | name | provider_net-1 | | provider:network_type | vlan | | provider:physical_network | physnet1 | | provider:segmentation_id | 399 | | router:external | False | | shared | False | | status | ACTIVE | | subnets | 8e75c110-9b31-4268-ba5c-e130fa139d32 | | tenant_id | e217fbc20a3b4f4fab49ec580e9b6a15 | +---------------------------+--------------------------------------+ Afterwards, the ping is still successful. I expect the ping to fail since the network admin_state_up= false. What is the expected behavior? What does it mean when a network's admin_state_up = false? Thanks, Danny -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pierre.padrixe at gmail.com Tue Dec 30 23:58:21 2014 From: pierre.padrixe at gmail.com (Pierre Padrixe) Date: Wed, 31 Dec 2014 00:58:21 +0100 Subject: [openstack-dev] [Solum] Addition to solum core In-Reply-To: <2C3448F15339494A8F56D64C997C071477989EDE@ORD1EXD04.RACKSPACE.CORP> References: <11D5ED2E-A3E7-426F-9F87-8C2757506932@rackspace.com> <2C3448F15339494A8F56D64C997C071477989EDE@ORD1EXD04.RACKSPACE.CORP> Message-ID: +1 2014-12-27 19:02 GMT+01:00 Devdatta Kulkarni < devdatta.kulkarni at rackspace.com>: > +1 > > ------------------------------ > *From:* James Y. Li [yueli.m at gmail.com] > *Sent:* Saturday, December 27, 2014 9:03 AM > *To:* OpenStack Development Mailing List > *Subject:* Re: [openstack-dev] [Solum] Addition to solum core > > +1! > > -James Li > On Dec 27, 2014 2:02 AM, "Adrian Otto" wrote: > >> Solum cores, >> >> I propose the following addition to the solum-core group[1]: >> >> + Ed Cranford (ed--cranford) >> >> Please reply to this email to indicate your votes. >> >> Thanks, >> >> Adrian Otto >> >> [1] https://review.openstack.org/#/admin/groups/229,members Current >> Members >> >> _______________________________________________ >> OpenStack-dev mailing list >> OpenStack-dev at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Wed Dec 31 06:20:18 2014 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 31 Dec 2014 14:20:18 +0800 Subject: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host? 
In-Reply-To: <1419957256.3402.2.camel@einstein.kev> References: <54A1567E.2080400@gmail.com> <1419957256.3402.2.camel@einstein.kev> Message-ID: Thanks Kevin for your clarification, which further affirms my belief that ip address should be included in the host info. I will contact Jay Pipes on IRC, to see what can I help towards this effort, soon after the New Year's Day in China. :) 2014-12-31 0:34 GMT+08:00 Kevin L. Mitchell : > On Tue, 2014-12-30 at 14:52 +0800, Lingxian Kong wrote: >> Just as what Jay Lau said, 'nova hypervisor-show ' >> indeed returns host ip address, and there are more other information >> included than 'nova host-describe '. I feel a little >> confused about the 'host' and 'hypervisor', what's the difference >> between them? For cloud operator, maybe 'host' is more usefull and >> intuitive for management than 'hypervisor'. From the implementation >> perspective, both 'compute_nodes' and 'services' database tables are >> used for them. Should them be combined for more common use cases? > > Well, the host and the hypervisor are conceptually distinct objects. > The hypervisor is, obviously, the thing on which all the VMs run. The > host, though, is the node running the corresponding nova-compute > service, which may be separate from the hypervisor. For instance, on > Xen-based setups, the host runs in a VM on the hypervisor. There has > also been discussion of allowing one host to be responsible for multiple > hypervisors, which would be useful for providers with large numbers of > hypervisors. > -- > Kevin L. Mitchell > Rackspace > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Regards! 
----------------------------------- Lingxian Kong From nikhil at manchanda.me Wed Dec 31 09:37:29 2014 From: nikhil at manchanda.me (Nikhil Manchanda) Date: Wed, 31 Dec 2014 01:37:29 -0800 Subject: [openstack-dev] [Trove] No Weekly Trove Meeting on Wednesday, Dec 31st Message-ID: Hey folks: Just a quick reminder that there will be no Weekly Trove Meeting on Wednesday, Dec 31st. We will resume the weekly meeting next year on January 7th. See you in the new year! Thanks, Nikhil -------------- next part -------------- An HTML attachment was scrubbed... URL: From smelikyan at mirantis.com Wed Dec 31 11:02:58 2014 From: smelikyan at mirantis.com (Serg Melikyan) Date: Wed, 31 Dec 2014 15:02:58 +0400 Subject: [openstack-dev] [Murano] Meeting on 01/06 is canceled Message-ID: Hi folks, We agreed to cancel next meeting scheduled on 01/06 due to extended holidays in Russia. Next meeting is scheduled on 01/13. -- Serg Melikyan, Senior Software Engineer at Mirantis, Inc. http://mirantis.com | smelikyan at mirantis.com +7 (495) 640-4904, 0261 +7 (903) 156-0836 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucasagomes at gmail.com Wed Dec 31 13:11:30 2014 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Wed, 31 Dec 2014 13:11:30 +0000 Subject: [openstack-dev] [Ironic] thoughts on the midcycle In-Reply-To: <54A31FF5.5070207@clifhouck.com> References: <54A31FF5.5070207@clifhouck.com> Message-ID: Hi I probably won't be able to make it to the SF Bay Area, but I think it's a good idea for those who can't go to Grenoble. Lucas On Tue, Dec 30, 2014 at 9:58 PM, Clif Houck wrote: > I'll attend. Whether it's in-person or remote is up in the air though. 
> > Clif > > On 12/30/2014 10:51 AM, Jay Faulkner wrote: > > > >> On Dec 29, 2014, at 2:45 PM, Devananda van der Veen > >> > wrote: > >> > >> That being said, I'd also like to put forth this idea: if we had a > >> second gathering (with the same focus on writing code) the following > >> week (let's say, Feb 11 - 13) in the SF Bay area -- who would attend? > >> Would we be able to get the "other half" of the core team together and > >> get more work done? Is this a good idea? > >> > > > > +1 I'd be willing and able to attend this. > > > > > > - > > Jay Faulkner > > > > > > _______________________________________________ > > OpenStack-dev mailing list > > OpenStack-dev at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcm at cisco.com Wed Dec 31 15:33:42 2014 From: pcm at cisco.com (Paul Michali (pcm)) Date: Wed, 31 Dec 2014 15:33:42 +0000 Subject: [openstack-dev] [neutron] Need help getting DevStack setup working for VPN testing Message-ID: I've been playing a bit with trying to get VPNaaS working post-repo split, and haven't been successful. I'm trying it a few ways with DevStack, and I'm not sure whether I have a config error, setup issue, or there is something due to the split. In the past (and it's been a few months since I verified VPN operation), I used two bare metal machines and an external switch connecting them. With a DevStack cloud running on each. That configuration is currently setup for a vendor VPN solution, so I wanted to try different methods to test the reference VPN implementation. I've got two ideas to do this: A) Run DevStack and create two routers with a shared "public"
network, and two private networks, setting up a VPN connection between the private nets. B) Run two DevStack instances (on two VMs) and try to setup a provider network between them. I'm starting with A (though I did try B quickly, but it didn't work), and I spun up the stack, added a second router (all under the same tenant), created another private network, and booted a Cirros VM in each private net. Before even trying VPN, I checked pings. From the first private net VM (10.1.0.4), I could ping on the public net, including the public IP of the second private net's public interface for its router. I cannot ping the VM from the host. That seems all expected to me. What seems wrong is the other VM (this is on the post-stack net I created). Like the other VM, I can ping public net IPs. However, I can also ping the private net address of the first network's router (10.1.0.1)! Shouldn't that have failed (at least that was what I was expecting)? I can't ping the VM on that side though. Another curiosity is that the VM got the second IP on the subnet (10.2.0.2), unlike the other private net, where DHCP and a compute probe got the 2nd and 3rd IPs. There is DHCP enabled on this private network. When I tried VPN, both connections show as DOWN, and all I see are phase 1 ident packets. I cannot ping from VM to VM. I don't see any logging for the OpenSwan processes, so I'm not sure how to debug. Maybe I can try some ipsec show command? I'm not too sure what is wrong with this setup. For a comparison, I decided to do the same thing, using stable/juno. So, I fired up a VM and cloned DevStack with stable/juno and stacked. This time, things are even worse! When I try to boot a VM, and then check the status, the VM is in PAUSED power state. I can't seem to unpause (nor do I know why it is in this state).
Verified this with both Cirros 3.3, 3.2, and Ubuntu cloud images: +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | juno | | OS-EXT-SRV-ATTR:hypervisor_hostname | juno | | OS-EXT-SRV-ATTR:instance_name | instance-00000001 | | OS-EXT-STS:power_state | 3 | | OS-EXT-STS:task_state | - | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2014-12-31T15:15:33.000000 | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2014-12-31T15:15:24Z | | flavor | m1.tiny (1) | | hostId | 5b0c48250ccc0ac3fca8a821e29e4b154ec0b101f9cc0a0b27071a3f | | id | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6 | | image | cirros-0.3.3-x86_64-uec (797e4dee-8c03-497f-8dac-a44b9351dfa3) | | key_name | - | | metadata | {} | | name | peter | | os-extended-volumes:volumes_attached | [] | | private network | 10.0.0.4 | | progress | 0 | | security_groups | default | | status | ACTIVE | | tenant_id | 7afb5bc1d88d462c8d57178437d3c277 | | updated | 2014-12-31T15:15:34Z | | user_id | 4ff18bdbeb4d436ea4ff1bcd29e269a9 | +--------------------------------------+????????????????????????????????+ +--------------------------------------+-------+--------+------------+-------------+------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------+--------+------------+-------------+------------------+ | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6 | peter | ACTIVE | - | Paused | private=10.0.0.4 | +--------------------------------------+-------+--------+------------+-------------+?????????+ Any ideas why the VM won?t start up correctly? I didn?t see anything on a google search. 
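[Editor's note: for anyone else who hits an instance paused at boot like this, some generic triage steps may help. These are standard nova and libvirt commands, offered as a hedged suggestion rather than a known fix for this report; the instance names come from the output above.

```shell
# Try resuming through nova first; it may refuse if nova still considers
# the instance ACTIVE, in which case the libvirt log is the next stop.
nova unpause peter

# Check the domain state and the per-instance libvirt/qemu log for the
# reason (qemu errors, nested-virtualization problems, etc.).
sudo virsh list --all
sudo virsh domstate instance-00000001
sudo tail -n 50 /var/log/libvirt/qemu/instance-00000001.log
```
]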
For reference, here is my local.conf currently: [[local|localrc]] GIT_BASE=https://github.com DEST=/opt/stack disable_service n-net enable_service q-svc enable_service q-agt enable_service q-dhcp enable_service q-l3 enable_service q-meta enable_service neutron enable_service q-vpn # FIXED_RANGE=10.1.0.0/24 # FIXED_NETWORK_SIZE=256 # NETWORK_GATEWAY=10.1.0.1 # PRIVATE_SUBNET_NAME=privateA PUBLIC_SUBNET_NAME=public-subnet # FLOATING_RANGE=172.24.4.0/24 # PUBLIC_NETWORK_GATEWAY=172.24.4.10 # Q_FLOATING_ALLOCATION_POOL="start=172.24.4.11,end=172.24.4.29" # Q_USE_SECGROUP=True # was False # VIRT_DRIVER=libvirt IMAGE_URLS="http://cloud-images.ubuntu.com/releases/14.04.1/release/ubuntu-14.04-server-cloudimg-amd64.tar.gz,http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz" SCREEN_LOGDIR=/opt/stack/screen-logs SYSLOG=True LOGFILE=~/devstack/stack.sh.log ADMIN_PASSWORD=password MYSQL_PASSWORD=password RABBIT_PASSWORD=password SERVICE_PASSWORD=password SERVICE_TOKEN=tokentoken Q_USE_DEBUG_COMMAND=True RECLONE=No # RECLONE=yes OFFLINE=False Originally, I had floating pool lines and net names, but even with all these commented out, I have the same issue with the VM (didn?t think they were related). For this stable/juno, Devstack is using commit 817e9b6, and Neutron is using 57e8ea8. I?ll try to play with option B some more as well, though I need to figure out how to setup the provider network correctly. If I can get time, I?ll reconfigure the bare metal setup I have in the lab to try stable/juno and then kilo reference VPN as well. If anyone has done this with a VM (either one or two), using juno or kilo, please pass along your local.conf, so I can compare. PCM (Paul Michali) MAIL ?..?. pcm at cisco.com IRC ??..? pc_m (irc.freenode.com) TW ???... @pmichali GPG Key ? 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pcm at cisco.com Wed Dec 31 15:41:12 2014 From: pcm at cisco.com (Paul Michali (pcm)) Date: Wed, 31 Dec 2014 15:41:12 +0000 Subject: [openstack-dev] [nova] boot images in power state PAUSED for stable/juno Message-ID: Not sure if I?m going crazy or what. I?m using DevStack and, after stacking I tried booting a Cirros 3.2, 3.3, and Ubuntu cloud 14.04 image. Each time, the image ends up in PAUSED power state: ubuntu at juno:/opt/stack/neutron$ nova show peter +--------------------------------------+----------------------------------------------------------------+ | Property | Value | +--------------------------------------+----------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | juno | | OS-EXT-SRV-ATTR:hypervisor_hostname | juno | | OS-EXT-SRV-ATTR:instance_name | instance-00000001 | | OS-EXT-STS:power_state | 3 | | OS-EXT-STS:task_state | - | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2014-12-31T15:15:33.000000 | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | config_drive | | | created | 2014-12-31T15:15:24Z | | flavor | m1.tiny (1) | | hostId | 5b0c48250ccc0ac3fca8a821e29e4b154ec0b101f9cc0a0b27071a3f | | id | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6 | | image | cirros-0.3.3-x86_64-uec (797e4dee-8c03-497f-8dac-a44b9351dfa3) | | key_name | - | | metadata | {} | | name | peter | | os-extended-volumes:volumes_attached | [] | | private network | 10.0.0.4 | | progress | 0 | | security_groups | default | | status | ACTIVE | | tenant_id | 7afb5bc1d88d462c8d57178437d3c277 | | updated | 2014-12-31T15:15:34Z | | user_id | 4ff18bdbeb4d436ea4ff1bcd29e269a9 | 
+--------------------------------------+----------------------------------------------------------------+ ubuntu at juno:/opt/stack/neutron$ nova list +--------------------------------------+-------+--------+------------+-------------+------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------+--------+------------+-------------+------------------+ | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6 | peter | ACTIVE | - | Paused | private=10.0.0.4 | I don?t see this with Kilo latest images. Any idea what I may be doing wrong, or if there is an issue (I didn?t see anything on Google search)? IMAGE_ID=`nova image-list | grep 'cloudimg-amd64 ' | cut -d' ' -f 2` PRIVATE_NET=`neutron net-list | grep 'private ' | cut -f 2 -d' ?` nova boot peter --flavor 3 --image $IMAGE_ID --user-data ~/devstack/user_data.txt --nic net-id=$PRIVATE_NET nova boot --flavor 1 --image cirros-0.3.3-x86_64-uec --nic net-id=$PRIVATE_NET paul Thanks. PCM (Paul Michali) MAIL ?..?. pcm at cisco.com IRC ??..? pc_m (irc.freenode.com) TW ???... @pmichali GPG Key ? 4525ECC253E31A83 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From harlowja at outlook.com Wed Dec 31 17:24:10 2014 From: harlowja at outlook.com (Joshua Harlow) Date: Wed, 31 Dec 2014 09:24:10 -0800 Subject: [openstack-dev] [Ironic] thoughts on the midcycle In-Reply-To: References: Message-ID: I'm not a core but I'll most definitely show up to the SF bay area one since it's close by for me. Elsewhere would be unlikely but since it's nearby it would be very easy to just go to wherever it is in the area and help out... My 2 cents. 
-Josh Devananda van der Veen wrote: > I'm sending the details of the midcycle in a separate email. Before you > reply that you won't be able to make it, I'd like to share some thoughts > / concerns. > > In the last few weeks, several people who I previously thought would > attend told me that they can't. By my informal count, it looks like we > will have at most 5 of our 10 core reviewers in attendance. I don't > think we should cancel based on that, but it does mean that we need to > set our expectations accordingly. > > Assuming that we will be lacking about half the core team, I think it > will be more practical as a focused sprint, rather than a planning & > design meeting. While that's a break from precedent, planning should be > happening via the spec review process *anyway*. Also, we already have a > larger back log of specs and work than we had this time last cycle, but > with the same size review team. Rather than adding to our backlog, I > would like us to use this gathering to burn through some specs and land > some code. > > That being said, I'd also like to put forth this idea: if we had a > second gathering (with the same focus on writing code) the following > week (let's say, Feb 11 - 13) in the SF Bay area -- who would attend? > Would we be able to get the "other half" of the core team together and > get more work done? Is this a good idea? > > OK. That's enough of my musing for now... > > Once again, if you will be attending the midcycle sprint in Grenoble the > week of Feb 3rd, please sign up HERE > . 
> > > > Regards, > Devananda > > _______________________________________________ > OpenStack-dev mailing list > OpenStack-dev at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From hongbin034 at gmail.com Wed Dec 31 17:54:20 2014 From: hongbin034 at gmail.com (Hongbin Lu) Date: Wed, 31 Dec 2014 12:54:20 -0500 Subject: [openstack-dev] [Containers][Magnum] Questions on dbapi Message-ID: Hi all, I am writing tests for the Magnum dbapi. I have several questions about its implementation and would appreciate it if someone could comment on them. * Exceptions: The exceptions below were ported from Ironic but don't seem to make sense in Magnum. I think we should purge them from the code except InstanceAssociated and NodeAssociated. Does everyone agree? *class InstanceAssociated(Conflict):* * message = _("Instance %(instance_uuid)s is already associated with a node,"* * " it cannot be associated with this other node %(node)s")* *class BayAssociated(InvalidState):* * message = _("Bay %(bay)s is associated with instance %(instance)s.")* *class ContainerAssociated(InvalidState):* * message = _("Container %(container)s is associated with "* * "instance %(instance)s.")* *class PodAssociated(InvalidState):* * message = _("Pod %(pod)s is associated with instance %(instance)s.")* *class ServiceAssociated(InvalidState):* * message = _("Service %(service)s is associated with "* * "instance %(instance)s.")* *NodeAssociated: it is used but its definition is missing* *BayModelAssociated: it is used but its definition is missing* * APIs: the APIs below seem to be ported from Ironic Node, but it seems we won't need them all. Again, I think we should purge the ones that do not make sense. In addition, these APIs are defined without being called anywhere. Does it make sense to remove them for now, and add them back one by one later when they are actually needed?
*def reserve_bay(self, tag, bay_id):* * """Reserve a bay.* *def release_bay(self, tag, bay_id):* * """Release the reservation on a bay.* *def reserve_baymodel(self, tag, baymodel_id):* * """Reserve a baymodel.* *def release_baymodel(self, tag, baymodel_id):* * """Release the reservation on a baymodel.* *def reserve_container(self, tag, container_id):* * """Reserve a container.* *def reserve_node(self, tag, node_id):* * """Reserve a node.* *def release_node(self, tag, node_id):* * """Release the reservation on a node.* *def reserve_pod(self, tag, pod_id):* * """Reserve a pod.* *def release_pod(self, tag, pod_id):* * """Release the reservation on a pod.* *def reserve_service(self, tag, service_id):* * """Reserve a service.* *def release_service(self, tag, service_id):* * """Release the reservation on a service.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From shrewsbury.dave at gmail.com Wed Dec 31 19:29:59 2014 From: shrewsbury.dave at gmail.com (David Shrewsbury) Date: Wed, 31 Dec 2014 14:29:59 -0500 Subject: [openstack-dev] [Ironic] thoughts on the midcycle In-Reply-To: References: Message-ID: <356B5333-F198-43D0-A855-560A267FDACD@gmail.com> > On Dec 29, 2014, at 5:45 PM, Devananda van der Veen wrote: > > I'm sending the details of the midcycle in a separate email. Before you reply that you won't be able to make it, I'd like to share some thoughts / concerns. > > In the last few weeks, several people who I previously thought would attend told me that they can't. By my informal count, it looks like we will have at most 5 of our 10 core reviewers in attendance. I don't think we should cancel based on that, but it does mean that we need to set our expectations accordingly. > > Assuming that we will be lacking about half the core team, I think it will be more practical as a focused sprint, rather than a planning & design meeting. While that's a break from precedent, planning should be happening via the spec review process *anyway*. 
> Also, we already have a larger backlog of specs and work than we had this time last cycle, but with the same size review team. Rather than adding to our backlog, I would like us to use this gathering to burn through some specs and land some code.
>
> That being said, I'd also like to put forth this idea: if we had a second gathering (with the same focus on writing code) the following week (let's say, Feb 11 - 13) in the SF Bay area -- who would attend? Would we be able to get the "other half" of the core team together and get more work done? Is this a good idea?

I could (and likely would) attend the Bay area one.

-Dave

From pcm at cisco.com Wed Dec 31 19:35:45 2014
From: pcm at cisco.com (Paul Michali (pcm))
Date: Wed, 31 Dec 2014 19:35:45 +0000
Subject: [openstack-dev] [neutron] Need help getting DevStack setup working for VPN testing
In-Reply-To: 
References: 
Message-ID: 

Just more data: I keep consistently seeing that on the private subnet, the VM can only access its router (as expected), but on the privateB subnet, the VM can access the private interface of router1 on the private subnet. From the router's namespace, I cannot ping the local VM (why not?). Oddly, I can ping router1's private IP from the router2 namespace!

I tried these commands to create security group rules (are they wrong?):

# There are two default groups created by DevStack
group=`neutron security-group-list | grep default | cut -f 2 -d' ' | head -1`
neutron security-group-rule-create --protocol ICMP $group
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 $group
group=`neutron security-group-list | grep default | cut -f 2 -d' ' | tail -1`
neutron security-group-rule-create --protocol ICMP $group
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 $group

The only change that happens, when I do these commands, is that the VM in the privateB subnet can now ping the VM from the private subnet, but not vice versa.
From the router1 namespace, it can then access local VMs. From the router2 namespace it can access local VMs and VMs in the private subnet (all access). It seems like I have some issue with security groups, and I need to square that away before I can test VPN out. Am I creating the security group rules correctly? My goal is that the private nets can access the public net, but not each other (until the VPN connection is established).

Lastly, in this latest try, I set OVS_PHYSICAL_BRIDGE=br-ex. In earlier runs w/o that, there were QVO interfaces, but no QVB or QBR interfaces at all. It didn't seem to change connectivity, however.

Ideas?

PCM (Paul Michali)

MAIL ..... pcm at cisco.com
IRC ...... pc_m (irc.freenode.com)
TW ....... @pmichali
GPG Key .. 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83

On Dec 31, 2014, at 10:33 AM, Paul Michali (pcm) wrote:

> I've been playing a bit with trying to get VPNaaS working post-repo split, and haven't been successful. I'm trying it a few ways with DevStack, and I'm not sure whether I have a config error, a setup issue, or there is something due to the split.
>
> In the past (and it's been a few months since I verified VPN operation), I used two bare metal machines and an external switch connecting them, with a DevStack cloud running on each. That configuration is currently set up for a vendor VPN solution, so I wanted to try different methods to test the reference VPN implementation. I've got two ideas to do this:
>
> A) Run DevStack and create two routers with a shared "public" network, and two private networks, setting up a VPN connection between the private nets.
> B) Run two DevStack instances (on two VMs) and try to set up a provider network between them.
>
> I'm starting with A (though I did try B quickly, but it didn't work), and I spun up the stack, added a second router (all under the same tenant), created another private network, and booted a Cirros VM in each private net.
>
> Before even trying VPN, I checked pings. From the first private net VM (10.1.0.4), I could ping on the public net, including the public IP of the second private net's public interface for its router. I cannot ping the VM from the host. That all seems expected to me.
>
> What seems wrong is the other VM (this is on the post-stack net I created). Like the other VM, I can ping public net IPs. However, I can also ping the private net address of the first network's router (10.1.0.1)! Shouldn't that have failed (at least that was what I was expecting)? I can't ping the VM on that side, though. Another curiosity is that the VM got the second IP on the subnet (10.2.0.2), unlike the other private net, where DHCP and a compute probe got the 2nd and 3rd IPs. There is DHCP enabled on this private network.
>
> When I tried VPN, both connections show as DOWN, and all I see are phase 1 ident packets. I cannot ping from VM to VM. I don't see any logging for the OpenSwan processes, so I'm not too sure how to debug. Maybe I can try some ipsec show command?
>
> I'm not too sure what is wrong with this setup.
>
> For a comparison, I decided to do the same thing using stable/juno. So, I fired up a VM, cloned DevStack with stable/juno, and stacked. This time, things are even worse! When I try to boot a VM and then check the status, the VM is in the PAUSED power state. I can't seem to unpause it (nor do I know why it is in this state).
> Verified this with Cirros 3.3 and 3.2, and Ubuntu cloud images:
>
> +--------------------------------------+----------------------------------------------------------------+
> | Property | Value |
> +--------------------------------------+----------------------------------------------------------------+
> | OS-DCF:diskConfig | MANUAL |
> | OS-EXT-AZ:availability_zone | nova |
> | OS-EXT-SRV-ATTR:host | juno |
> | OS-EXT-SRV-ATTR:hypervisor_hostname | juno |
> | OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
> | OS-EXT-STS:power_state | 3 |
> | OS-EXT-STS:task_state | - |
> | OS-EXT-STS:vm_state | active |
> | OS-SRV-USG:launched_at | 2014-12-31T15:15:33.000000 |
> | OS-SRV-USG:terminated_at | - |
> | accessIPv4 | |
> | accessIPv6 | |
> | config_drive | |
> | created | 2014-12-31T15:15:24Z |
> | flavor | m1.tiny (1) |
> | hostId | 5b0c48250ccc0ac3fca8a821e29e4b154ec0b101f9cc0a0b27071a3f |
> | id | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6 |
> | image | cirros-0.3.3-x86_64-uec (797e4dee-8c03-497f-8dac-a44b9351dfa3) |
> | key_name | - |
> | metadata | {} |
> | name | peter |
> | os-extended-volumes:volumes_attached | [] |
> | private network | 10.0.0.4 |
> | progress | 0 |
> | security_groups | default |
> | status | ACTIVE |
> | tenant_id | 7afb5bc1d88d462c8d57178437d3c277 |
> | updated | 2014-12-31T15:15:34Z |
> | user_id | 4ff18bdbeb4d436ea4ff1bcd29e269a9 |
> +--------------------------------------+----------------------------------------------------------------+
>
> +--------------------------------------+-------+--------+------------+-------------+------------------+
> | ID | Name | Status | Task State | Power State | Networks |
> +--------------------------------------+-------+--------+------------+-------------+------------------+
> | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6 | peter | ACTIVE | - | Paused | private=10.0.0.4 |
> +--------------------------------------+-------+--------+------------+-------------+------------------+
>
> Any ideas why the VM won't start up correctly?
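A note on the listing above: `OS-EXT-STS:power_state | 3` is the numeric PAUSED state, while `vm_state` is tracked separately, which is why ACTIVE and Paused appear side by side. A minimal sketch of the decoding; the numeric values follow nova's `power_state` module as commonly documented, so treat the table as an assumption to verify against your tree:

```python
# Numeric power states as reported in OS-EXT-STS:power_state.
# Assumption: values mirror nova.compute.power_state (which follows
# libvirt's domain states); verify against your nova version.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

def describe(power_state, vm_state):
    """Explain the combination shown by `nova show`."""
    name = POWER_STATES.get(power_state, "UNKNOWN")
    return "vm_state=%s, power_state=%s(%d)" % (vm_state, name, power_state)

print(describe(3, "active"))  # vm_state=active, power_state=PAUSED(3)
```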
> I didn't see anything in a Google search.
>
> For reference, here is my local.conf currently:
>
> [[local|localrc]]
> GIT_BASE=https://github.com
> DEST=/opt/stack
>
> disable_service n-net
> enable_service q-svc
> enable_service q-agt
> enable_service q-dhcp
> enable_service q-l3
> enable_service q-meta
> enable_service neutron
> enable_service q-vpn
>
> # FIXED_RANGE=10.1.0.0/24
> # FIXED_NETWORK_SIZE=256
> # NETWORK_GATEWAY=10.1.0.1
> # PRIVATE_SUBNET_NAME=privateA
>
> PUBLIC_SUBNET_NAME=public-subnet
> # FLOATING_RANGE=172.24.4.0/24
> # PUBLIC_NETWORK_GATEWAY=172.24.4.10
> # Q_FLOATING_ALLOCATION_POOL="start=172.24.4.11,end=172.24.4.29"
> # Q_USE_SECGROUP=True # was False
>
> # VIRT_DRIVER=libvirt
> IMAGE_URLS="http://cloud-images.ubuntu.com/releases/14.04.1/release/ubuntu-14.04-server-cloudimg-amd64.tar.gz,http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz"
>
> SCREEN_LOGDIR=/opt/stack/screen-logs
> SYSLOG=True
> LOGFILE=~/devstack/stack.sh.log
>
> ADMIN_PASSWORD=password
> MYSQL_PASSWORD=password
> RABBIT_PASSWORD=password
> SERVICE_PASSWORD=password
> SERVICE_TOKEN=tokentoken
>
> Q_USE_DEBUG_COMMAND=True
>
> RECLONE=No
> # RECLONE=yes
> OFFLINE=False
>
> Originally, I had floating pool lines and net names, but even with all of these commented out, I have the same issue with the VM (I didn't think they were related).
>
> For this stable/juno run, DevStack is using commit 817e9b6, and Neutron is using 57e8ea8.
>
> I'll try to play with option B some more as well, though I need to figure out how to set up the provider network correctly. If I can get time, I'll reconfigure the bare metal setup I have in the lab to try stable/juno and then the Kilo reference VPN as well.
>
> If anyone has done this with a VM (either one or two), using juno or kilo, please pass along your local.conf, so I can compare.
>
> PCM (Paul Michali)
>
> MAIL ..... pcm at cisco.com
> IRC ...... pc_m (irc.freenode.com)
> TW ....... @pmichali
> GPG Key .. 4525ECC253E31A83
> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From sdake at redhat.com Wed Dec 31 19:43:33 2014
From: sdake at redhat.com (Steven Dake)
Date: Wed, 31 Dec 2014 12:43:33 -0700
Subject: [openstack-dev] [Containers][Magnum] Questions on dbapi
In-Reply-To: 
References: 
Message-ID: <54A451E5.2030003@redhat.com>

On 12/31/2014 10:54 AM, Hongbin Lu wrote:
> Hi all,
>
> I am writing tests for the Magnum dbapi. I have several questions about its implementation and would appreciate it if someone could comment on them.
>
> * Exceptions: The exceptions below were ported from Ironic but don't seem to make sense in Magnum. I think we should purge them from the code except InstanceAssociated and NodeAssociated. Does everyone agree?

Hongbin,

Agree we should remove any exceptions that were from Ironic that don't make any sense in Magnum. The only reason I copied a lot of the Ironic code base was to pull in the versioned objects support, which should be heading to oslo at some point.
> class InstanceAssociated(Conflict):
>     message = _("Instance %(instance_uuid)s is already associated with a node,"
>                 " it cannot be associated with this other node %(node)s")
>
> class BayAssociated(InvalidState):
>     message = _("Bay %(bay)s is associated with instance %(instance)s.")
>
> class ContainerAssociated(InvalidState):
>     message = _("Container %(container)s is associated with "
>                 "instance %(instance)s.")
>
> class PodAssociated(InvalidState):
>     message = _("Pod %(pod)s is associated with instance %(instance)s.")
>
> class ServiceAssociated(InvalidState):
>     message = _("Service %(service)s is associated with "
>                 "instance %(instance)s.")
>
> NodeAssociated: it is used but its definition is missing
>
> BayModelAssociated: it is used but its definition is missing
>
> * APIs: the APIs below seem to be ported from the Ironic Node, but it seems we won't need them all. Again, I think we should purge some of them that do not make sense. In addition, these APIs are defined without being called. Does it make sense to remove them for now, and add them one by one later when they are actually needed?

Agree they should be removed now and added as needed later.
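For background on the reserve/release APIs being discussed, the Ironic-style reservation is essentially a tag-based compare-and-swap on a `reservation` field: reserve succeeds only when the resource is free (or already held by the same tag), and release succeeds only for the current holder. A minimal in-memory sketch; the class and field names here are illustrative, not Magnum's actual schema:

```python
class Conflict(Exception):
    """Raised when a resource is already reserved by another holder."""

class FakeDbApi:
    """In-memory stand-in for the reserve/release calls quoted above."""

    def __init__(self):
        # resource id -> tag of the current holder (None if free)
        self._reservations = {}

    def reserve_bay(self, tag, bay_id):
        holder = self._reservations.get(bay_id)
        if holder is not None and holder != tag:
            raise Conflict("bay %s held by %s" % (bay_id, holder))
        self._reservations[bay_id] = tag

    def release_bay(self, tag, bay_id):
        if self._reservations.get(bay_id) != tag:
            raise Conflict("bay %s not held by %s" % (bay_id, tag))
        self._reservations[bay_id] = None

db = FakeDbApi()
db.reserve_bay("conductor-1", "bay-a")
try:
    db.reserve_bay("conductor-2", "bay-a")  # held by conductor-1
except Conflict as e:
    print("conflict:", e)
db.release_bay("conductor-1", "bay-a")
db.reserve_bay("conductor-2", "bay-a")  # now succeeds
```

In the real database-backed version the check-and-set is a single atomic UPDATE, not two Python statements; the sketch only shows the semantics.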
> def reserve_bay(self, tag, bay_id):
>     """Reserve a bay.
>
> def release_bay(self, tag, bay_id):
>     """Release the reservation on a bay.
>
> def reserve_baymodel(self, tag, baymodel_id):
>     """Reserve a baymodel.
>
> def release_baymodel(self, tag, baymodel_id):
>     """Release the reservation on a baymodel.
>
> def reserve_container(self, tag, container_id):
>     """Reserve a container.
>
> def reserve_node(self, tag, node_id):
>     """Reserve a node.
>
> def release_node(self, tag, node_id):
>     """Release the reservation on a node.
>
> def reserve_pod(self, tag, pod_id):
>     """Reserve a pod.
>
> def release_pod(self, tag, pod_id):
>     """Release the reservation on a pod.
>
> def reserve_service(self, tag, service_id):
>     """Reserve a service.
>
> def release_service(self, tag, service_id):
>     """Release the reservation on a service.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From jay at jvf.cc Wed Dec 31 19:54:35 2014
From: jay at jvf.cc (Jay Faulkner)
Date: Wed, 31 Dec 2014 19:54:35 +0000
Subject: [openstack-dev] [Ironic] [Agent] Breaking HardwareManager API Change proposed
Message-ID: 

Hi all,

I proposed https://review.openstack.org/#/c/143193 to ironic-python-agent, in an attempt to make HardwareManager loading more sane. As it works today, the most specific hardware manager is the only one chosen. This means that in order to use a mix of hardware managers, you have to compose a custom interface. This is not the way I originally thought it worked, and not the way Josh and I presented it at the summit [1]. This change makes it so we will try each method, in priority order (from most specific to least specific hardware manager).
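The priority-ordered fallback proposed here can be sketched in plain Python. This is an illustrative reconstruction of the dispatch idea, not the actual ironic-python-agent code; the manager classes and the `erase_block_device` method are made-up examples:

```python
class GenericHardwareManager:
    """Least specific: safe fallbacks for any hardware."""
    def erase_block_device(self, dev):
        return "generic-erase:%s" % dev

class OnMetalHardwareManager(GenericHardwareManager):
    """More specific: overrides only what it supports."""
    def erase_block_device(self, dev):
        if not dev.startswith("/dev/sd"):
            # Defer to the next, more generic manager.
            raise NotImplementedError("unsupported device")
        return "lsi-secure-erase:%s" % dev

def dispatch(managers, method, *args):
    """Try each manager in priority order (most specific first).

    AttributeError/NotImplementedError means "ask the next one";
    any other exception bubbles up to the caller.
    """
    for mgr in managers:
        try:
            return getattr(mgr, method)(*args)
        except (AttributeError, NotImplementedError):
            continue
    raise RuntimeError("no manager implements %s" % method)

# Most specific first, as in the proposed change.
managers = [OnMetalHardwareManager(), GenericHardwareManager()]
print(dispatch(managers, "erase_block_device", "/dev/sda"))   # lsi-secure-erase:/dev/sda
print(dispatch(managers, "erase_block_device", "/dev/nvme0")) # generic-erase:/dev/nvme0
```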
If the method exists and doesn't throw NotImplementedError, it will be allowed to complete and errors bubble up. If an AttributeError or NotImplementedError is thrown, the next most generic method is called, until all methods have been attempted (in which case we fail) or a method does not raise the exceptions above.

The downside to this is that it will change behavior for anyone using hardware managers downstream. As of today, the only hardware manager that I know of external to Ironic is the one we use at Rackspace for OnMetal [2]. I'm sending this email to check and see if anyone has an objection to this interface changing in this way, and generally asking for comment.

Thanks,
Jay Faulkner

1: https://www.youtube.com/watch?v=2Oi2T2pSGDU
2: https://github.com/rackerlabs/onmetal-ironic-hardware-manager

From jichenjc at cn.ibm.com Wed Dec 31 19:56:25 2014
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Wed, 31 Dec 2014 20:56:25 +0100
Subject: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host?
In-Reply-To: 
References: <54A1567E.2080400@gmail.com> <1419957256.3402.2.camel@einstein.kev>
Message-ID: 

Hi,

Sorry if I didn't understand clearly about it, but it looks to me that the hypervisor itself hosts the instances and it should have an IP with it (like Linux hosting KVM instances: Linux is the hypervisor, the PC is the host), while the host is the physical node and is only used by the 'hypervisor' concept. So I think maybe we don't need an IP for the 'host'? Thanks a lot.

Best Regards!

Kevin (Chen) Ji
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN
Internet: jichenjc at cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: Lingxian Kong
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 12/31/2014 07:22 AM
Subject: Re: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host?

Thanks Kevin for your clarification, which further affirms my belief that the ip address should be included in the host info. I will contact Jay Pipes on IRC to see how I can help with this effort, soon after the New Year's Day in China. :)

2014-12-31 0:34 GMT+08:00 Kevin L. Mitchell:
> On Tue, 2014-12-30 at 14:52 +0800, Lingxian Kong wrote:
>> Just as what Jay Lau said, 'nova hypervisor-show ' indeed returns the
>> host ip address, and there is more information included than in 'nova
>> host-describe '. I feel a little confused about 'host' and
>> 'hypervisor': what's the difference between them? For a cloud operator,
>> maybe 'host' is more useful and intuitive for management than
>> 'hypervisor'. From the implementation perspective, both the
>> 'compute_nodes' and 'services' database tables are used for them.
>> Should they be combined for more common use cases?
>
> Well, the host and the hypervisor are conceptually distinct objects.
> The hypervisor is, obviously, the thing on which all the VMs run. The
> host, though, is the node running the corresponding nova-compute
> service, which may be separate from the hypervisor. For instance, on
> Xen-based setups, the host runs in a VM on the hypervisor. There has
> also been discussion of allowing one host to be responsible for multiple
> hypervisors, which would be useful for providers with large numbers of
> hypervisors.
> --
> Kevin L. Mitchell
> Rackspace
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Regards!
-----------------------------------
Lingxian Kong

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From kevin.mitchell at rackspace.com Wed Dec 31 20:35:33 2014
From: kevin.mitchell at rackspace.com (Kevin L. Mitchell)
Date: Wed, 31 Dec 2014 14:35:33 -0600
Subject: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host?
In-Reply-To: 
References: <54A1567E.2080400@gmail.com> <1419957256.3402.2.camel@einstein.kev>
Message-ID: <1420058133.5135.7.camel@einstein.kev>

On Wed, 2014-12-31 at 20:56 +0100, Chen CH Ji wrote:
> Sorry if I didn't understand clearly about it, but it looks to me that
> the hypervisor itself hosts the instances and it should have an IP with
> it (like Linux hosting KVM instances: Linux is the hypervisor, the PC is
> the host), while the host is the physical node and is only used by the
> 'hypervisor' concept. So I think maybe we don't need an IP for the
> 'host'? Thanks a lot.

The hypervisor hosts the VMs, yes, but the component that sits between the hypervisor and the rest of nova (that is, nova-compute) does not necessarily reside on the hypervisor. It is the nova-compute node (which may be either a VM or a physical host) that is referred to by the nova term "host". For KVM, I believe the host is often the same as the hypervisor, meaning that nova-compute runs directly on the hypervisor, but this is not necessarily the case for all virt drivers. For example, the host for Xen-based installations is often a separate VM on the same hypervisor, which would have its own distinct IP address.
--
Kevin L. Mitchell
Rackspace

From jichenjc at cn.ibm.com Wed Dec 31 20:40:52 2014
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Wed, 31 Dec 2014 21:40:52 +0100
Subject: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host?
In-Reply-To: <1420058133.5135.7.camel@einstein.kev>
References: <54A1567E.2080400@gmail.com> <1419957256.3402.2.camel@einstein.kev> <1420058133.5135.7.camel@einstein.kev>
Message-ID: 

OK, that makes sense to me, thanks a lot.

Best Regards!

Kevin (Chen) Ji
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN
Internet: jichenjc at cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: "Kevin L. Mitchell"
To: 
Date: 12/31/2014 09:37 PM
Subject: Re: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host?

On Wed, 2014-12-31 at 20:56 +0100, Chen CH Ji wrote:
> Sorry if I didn't understand clearly about it, but it looks to me that
> the hypervisor itself hosts the instances and it should have an IP with
> it (like Linux hosting KVM instances: Linux is the hypervisor, the PC is
> the host), while the host is the physical node and is only used by the
> 'hypervisor' concept. So I think maybe we don't need an IP for the
> 'host'? Thanks a lot.

The hypervisor hosts the VMs, yes, but the component that sits between the hypervisor and the rest of nova (that is, nova-compute) does not necessarily reside on the hypervisor. It is the nova-compute node (which may be either a VM or a physical host) that is referred to by the nova term "host". For KVM, I believe the host is often the same as the hypervisor, meaning that nova-compute runs directly on the hypervisor, but this is not necessarily the case for all virt drivers.
For example, the host for Xen-based installations is often a separate VM on the same hypervisor, which would have its own distinct IP address.
--
Kevin L. Mitchell
Rackspace

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From adrian.otto at rackspace.com Wed Dec 31 21:10:56 2014
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Wed, 31 Dec 2014 21:10:56 +0000
Subject: [openstack-dev] [Containers][Magnum] Questions on dbapi
In-Reply-To: <54A451E5.2030003@redhat.com>
References: , <54A451E5.2030003@redhat.com>
Message-ID: <3f5qjeqr1uswjjc2gs4jg80i.1420060247679@email.android.com>

I would welcome any patches to true this code up to be more appropriate for our needs. We might as well trim the cruft out now if we notice it. Our milestone-2 will add a lot of tests, so it would be great to get a clean start.

Adrian

-------- Original message --------
From: Steven Dake
Date: 12/31/2014 11:46 AM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Containers][Magnum] Questions on dbapi

On 12/31/2014 10:54 AM, Hongbin Lu wrote:

Hi all,

I am writing tests for the Magnum dbapi. I have several questions about its implementation and would appreciate it if someone could comment on them.

* Exceptions: The exceptions below were ported from Ironic but don't seem to make sense in Magnum. I think we should purge them from the code except InstanceAssociated and NodeAssociated. Does everyone agree?

Hongbin,

Agree we should remove any exceptions that were from Ironic that don't make any sense in Magnum.
The only reason I copied a lot of the Ironic code base was to pull in the versioned objects support, which should be heading to oslo at some point.

class InstanceAssociated(Conflict):
    message = _("Instance %(instance_uuid)s is already associated with a node,"
                " it cannot be associated with this other node %(node)s")

class BayAssociated(InvalidState):
    message = _("Bay %(bay)s is associated with instance %(instance)s.")

class ContainerAssociated(InvalidState):
    message = _("Container %(container)s is associated with "
                "instance %(instance)s.")

class PodAssociated(InvalidState):
    message = _("Pod %(pod)s is associated with instance %(instance)s.")

class ServiceAssociated(InvalidState):
    message = _("Service %(service)s is associated with "
                "instance %(instance)s.")

NodeAssociated: it is used but its definition is missing

BayModelAssociated: it is used but its definition is missing

* APIs: the APIs below seem to be ported from the Ironic Node, but it seems we won't need them all. Again, I think we should purge some of them that do not make sense. In addition, these APIs are defined without being called. Does it make sense to remove them for now, and add them one by one later when they are actually needed?

Agree they should be removed now and added as needed later.

def reserve_bay(self, tag, bay_id):
    """Reserve a bay.

def release_bay(self, tag, bay_id):
    """Release the reservation on a bay.

def reserve_baymodel(self, tag, baymodel_id):
    """Reserve a baymodel.

def release_baymodel(self, tag, baymodel_id):
    """Release the reservation on a baymodel.

def reserve_container(self, tag, container_id):
    """Reserve a container.

def reserve_node(self, tag, node_id):
    """Reserve a node.

def release_node(self, tag, node_id):
    """Release the reservation on a node.

def reserve_pod(self, tag, pod_id):
    """Reserve a pod.

def release_pod(self, tag, pod_id):
    """Release the reservation on a pod.

def reserve_service(self, tag, service_id):
    """Reserve a service.
def release_service(self, tag, service_id):
    """Release the reservation on a service.

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From egon at egon.cc Wed Dec 31 22:03:44 2014
From: egon at egon.cc (James Downs)
Date: Wed, 31 Dec 2014 14:03:44 -0800
Subject: [openstack-dev] [Nova] should 'ip address' be retrived when decribe host?
In-Reply-To: <1420058133.5135.7.camel@einstein.kev>
References: <54A1567E.2080400@gmail.com> <1419957256.3402.2.camel@einstein.kev> <1420058133.5135.7.camel@einstein.kev>
Message-ID: <910A7831-C8B9-48A7-8F3E-34616C531FB0@egon.cc>

On Dec 31, 2014, at 12:35 PM, Kevin L. Mitchell wrote:
> but this is not necessarily the case for all virt drivers. For example,
> the host for Xen-based installations is often a separate VM on the same
> hypervisor, which would have its own distinct IP address.

This is quite similar to how OpenStack / xCAT / z/VM would work together.

Cheers,
-j